# Exchange interactions and intermolecular hybridization in a spin-1/2 nanographene dimer

N. Krane, E. Turco, A. Bernhardt, D. Jacob, G. Gandus, D. Passerone, M. Luisier, M. Juríček, R. Fasel, J. Fernández-Rossier, P. Ruffieux

Published 2023-07-19 | arXiv: http://arxiv.org/abs/2307.09930v1
###### Abstract
**Phenalenyl is a radical nanographene of triangular shape that hosts an unpaired electron with spin \(S=1/2\). The open-shell nature of phenalenyl is expected to be retained in covalently bonded networks. Here, we study a first step in that direction and report the synthesis of the phenalenyl dimer by combining in-solution synthesis and on-surface activation, and its characterization both on Au(111) and on a monolayer of NaCl on top of Au(111) by means of inelastic electron tunneling spectroscopy (IETS). IETS shows inelastic steps that, together with a thorough theoretical analysis, are identified as the singlet-triplet excitation arising from interphenalenyl exchange. Two prominent features of our data shed light on the nature of spin interactions in this system. First, the excitation energies with and without the NaCl decoupling layer are 48 and 41 meV, respectively, indicating a significant renormalization of the spin excitation energies due to exchange with the Au(111) electrons. Second, a position-dependent bias asymmetry of the height of the inelastic steps is accounted for by an interphenalenyl hybridization of the singly occupied phenalenyl orbitals that is only possible via third-neighbor hopping. This hybridization is also essential to activate kinetic interphenalenyl exchange. Our results set the stage for future work on the bottom-up synthesis of \(S=1/2\) spin lattices with large exchange interaction.**
The preparation of diphenalenyl is achieved through a combined solution and on-surface synthesis approach. In the solution phase, the 2_H_-diphenalenyl precursor is synthesized in a sequence of nine steps starting from naphthalene (for details, see the SI). The use of hydro precursors has the advantage that activation of the target open-shell compound can be achieved by atom manipulation with a scanning tunneling microscopy (STM) tip[8] and does not necessarily require the catalytic action of metal substrates. We have recently shown that phenalenyl and triangulene can be obtained through selective activation of the corresponding hydro precursors using controlled voltage pulses from the STM tip[2]. Here, we follow a similar
Figure 1: **(a)** Tip-induced activation of diphenalenyl on Au(111). STM (top) and nc-AFM (bottom) images before and after dehydrogenation via voltage pulses (black crosses). The images were taken with a CO tip at closed feedback with -100 mV/50 pA (STM) and opened feedback on Au(111) with -5 mV/100 pA, \(\Delta\)z = 1.8 Å (AFM). (**b**) dI/dV spectroscopy taken with a CO tip on diphenalenyl (green) and Au(111) (grey), revealing the SOMOs and SUMOs at -0.6 V and +1.1 V, respectively, as well as the onsets of the HOMO-1 and LUMO+1 at -1.9 V and +1.9 V. The inset displays the positions where the spectra were taken. Feedback loop opened at -2 V/350 pA, \(V_{\mathrm{rms}}\) = 20 mV. Dashed lines mark the energies of the dI/dV maps displayed in (**d**). (**c**) MFH energy diagram of the single-particle states. (**d**,**e**) Constant-current dI/dV maps and MFH-calculated projected density of states of the corresponding orbitals. HOMO-1 and LUMO+1 taken at 250 pA and \(V_{\mathrm{rms}}\) = 30 mV, SOMOs and SUMOs at 200 pA and \(V_{\mathrm{rms}}\) = 14 mV. Scale bars: 0.5 nm (a, d).
approach to sequentially activate the two hydro-phenalenyl subunits of the precursor. The substrate was prepared by sublimation of NaCl onto a clean Au(111) surface held at room temperature, which leads to a sub-monolayer coverage of NaCl organized into 1ML and 2ML islands. The molecular 2_H_-diphenalenyl precursor was deposited by sublimation onto the previously prepared sample, which was kept at a temperature of about 100 K during deposition and rapidly transferred into the STM. An overview STM image of the so-obtained sample is reported in Figure S2, where the molecular precursors can be found in sub-monolayer coverage adsorbed both on the Au(111) surface and on NaCl islands. Sequential tip-induced cleaving of the hydrogen atoms from the \(sp^{3}\) carbon atoms[2, 8] yields the target diphenalenyl diradical, as proven by constant-height nc-AFM measurements of the precursor and the target compound (Figure 1a, bottom), both adsorbed on Au(111). The precursor molecules adsorbed on 1ML NaCl were similarly manipulated into the diphenalenyl diradical, as shown in Figure S2 (b,c). The change in the electronic structure can also be observed in the STM images (Figure 1a), which show distinct lobes and nodal planes for the activated molecule. Constant-height d\(I\)/d\(V\) spectroscopy of the activated molecule adsorbed on Au(111) is shown in Figure 1b, revealing the presence of two distinct resonances at -0.6 eV and 1.1 eV, and the onset of two conductance peaks at \(\pm\)1.9 eV. For a first assignment of the observed conductance peaks to the respective molecular orbitals, we used a tight-binding (TB) level of theory, taking into account the electron-electron Coulomb repulsion within the mean-field Hubbard (MFH) approximation. The calculated energy diagram reported in Figure 1c features two frontier states, commonly denoted as singly occupied and singly unoccupied molecular orbitals, SOMOs and SUMOs, respectively[16].
A comparison of the calculated local density of states (LDOS) and the experimental d\(l\)/d\(V\) maps of the molecule's electronic resonances allows a clear assignment of the experimentally observed resonances (Figure 1d,e).
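The MFH scheme behind this assignment can be sketched in a few lines. The toy below is illustrative only: it uses a 5-site tight-binding chain rather than the actual diphenalenyl geometry, and the values of \(t\) and \(U\) are assumed for demonstration. An odd-membered bipartite lattice hosts a zero mode, and the mean-field Hubbard self-consistency spin-polarizes it into a net \(S=1/2\) moment concentrated on one sublattice:

```python
import numpy as np

# Toy mean-field Hubbard (MFH) sketch: a 5-site chain, NOT the actual
# diphenalenyl geometry; t and U are assumed illustrative values (eV).
N, t, U = 5, 2.7, 3.0
H0 = -t * (np.eye(N, k=1) + np.eye(N, k=-1))   # tight-binding part

# Symmetry-broken starting guess for the spin-resolved site densities
n_up = 0.5 + 0.1 * (-1.0) ** np.arange(N)
n_dn = 1.0 - n_up
Ne_up, Ne_dn = 3, 2                            # one excess up electron

for _ in range(200):                           # self-consistency loop
    _, v_up = np.linalg.eigh(H0 + U * np.diag(n_dn))
    _, v_dn = np.linalg.eigh(H0 + U * np.diag(n_up))
    new_up = (v_up[:, :Ne_up] ** 2).sum(axis=1)
    new_dn = (v_dn[:, :Ne_dn] ** 2).sum(axis=1)
    n_up, n_dn = 0.5 * (n_up + new_up), 0.5 * (n_dn + new_dn)

m = n_up - n_dn                                # local magnetization
print(np.round(m, 3))                          # weight mostly on one sublattice
print(m.sum())                                 # total moment of 1 electron
```

The same loop, with the diphenalenyl adjacency matrix in place of the chain, is the workhorse behind energy diagrams such as Figure 1c.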
In order to probe the magnetic properties of diphenalenyl, d\(I\)/d\(V\) spectroscopy at low bias voltages was conducted both on Au(111) and on monolayer NaCl. As displayed in Figure 2a, the spectra show steps in the differential conductance at \(\pm\)41 mV and \(\pm\)48 mV for the molecules on Au(111) and NaCl, respectively. Ovchinnikov's rule[17] and Lieb's theorem[18] predict the single spins of the two phenalenyl units to form an \(S=0\) ground state. For two spins with \(S=1/2\) coupled by a Heisenberg term \(JS_{1}\cdot S_{2}\), the excited state is \(S=1\), with excitation energy \(E=J\).
Therefore, the observed steps in differential conductance can be assigned to spin excitations from the singlet ground state to the triplet excited state.
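This assignment can be checked with a minimal exact-diagonalization sketch (illustrative, not part of the paper's analysis): build \(H = J\,S_{1}\cdot S_{2}\) for two spin-1/2 sites and verify that the excitation from the singlet ground state to the triplet costs exactly \(J\):

```python
import numpy as np

# Spin-1/2 operators (hbar = 1)
sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
sy = 0.5 * np.array([[0, -1j], [1j, 0]])
sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)

def heisenberg_dimer(J):
    """H = J * S1 . S2 on the 4-dimensional two-spin Hilbert space."""
    return J * sum(np.kron(s, s) for s in (sx, sy, sz))

J = 0.048  # eV; the 48 meV excitation measured on NaCl
E = np.linalg.eigvalsh(heisenberg_dimer(J))
# One singlet at -3J/4 and a threefold-degenerate triplet at +J/4:
gap = E[1] - E[0]
print(gap)  # equals J
```

The threefold degeneracy of the upper level is what identifies it as the \(S=1\) triplet.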
Figure 2: Inelastic spectroscopy of the spin excitations of diphenalenyl on Au(111) and 1ML-NaCl/Au(111). (**a**) dI/dV spectroscopy (solid lines) and second derivative (circles) of diphenalenyl adsorbed on Au(111). Tip positions are marked in the image above; the reference spectrum (bottom) is taken on Au(111). The dotted vertical lines mark the dip/peak positions of the second derivative at \(\pm\)41 mV. Feedback opened at -100 mV/750 pA; \(V_{\mathrm{rms}}\) = 2 mV. (**b**) Constant-dI/dV images (10 nS, \(V_{\mathrm{rms}}\) = 3.5 mV) taken at energies above the spin excitation. The red arrows in the bottom panel highlight the asymmetry for different bias polarities. (**c**) dI/dV spectroscopy (solid lines) and second derivative (circles) of diphenalenyl on monolayer NaCl. Tip positions are marked in the image above; the reference spectrum (bottom) is taken on NaCl. Dotted vertical lines mark the dip/peak of the second derivative at \(\pm\)48 mV. Feedback opened at -100 mV/250 pA; \(V_{\mathrm{rms}}\) = 2.8 mV. (**d**) Constant-current STM images (20 pA) of diphenalenyl on NaCl/Au(111) taken at the onset energies of the frontier orbitals, displaying the asymmetry of the SOMOs and SUMOs. Scale bars: 0.5 nm (b, d).
A specific feature of our system is the marked asymmetry of the height of the conductance steps in Figure 2a. The d\(I\)/d\(V\) spectra taken between the phenalenyl units (blue lines) show a higher step at negative bias polarity than at positive bias polarity. This effect can be seen for diphenalenyl on Au(111) as well as on NaCl islands. Figure 2b displays two iso-d\(I\)/d\(V\) maps[19], recorded using the d\(I\)/d\(V\) signal as feedback, taken at energies just outside of the spin excitation gap. The asymmetry with bias polarity is clearly visible. Therefore, the asymmetry of the spectra taken in the center of the dimer is an intrinsic property of the diphenalenyl molecules and independent of the underlying substrate. On the other hand, for the molecule deposited on Au(111), the inelastic steps are broadened and show pronounced triangular overshoots. For the spectra taken at the sides of the molecule, these overshoots are significantly larger at positive bias (Figure 2a). This asymmetry is not present when the molecule is adsorbed on NaCl, and thus is non-generic. As discussed below, our calculations account for both asymmetries.
Importantly, there are three main differences between the spectra acquired for diphenalenyl on Au(111) and those taken on NaCl. First, the singlet-triplet excitation energy is significantly higher on NaCl, 48 meV compared to 41 meV on Au(111). Second, the peak-like features at the excitation steps are not present in the spectra taken on NaCl. Third, the width of the steps is much broader on Au(111) than on NaCl; the latter becomes apparent in the numerical derivative of the d\(I\)/d\(V\) signal in Figure 2a.
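The way the step energies are read off can be mimicked numerically. The sketch below builds a synthetic d\(I\)/d\(V\) trace (a toy line shape, not the measured data) with broadened conductance steps at \(\pm 41\) mV, and recovers the step positions from the peak and dip of the numerically differentiated signal, as done for the dotted lines in Figure 2:

```python
import numpy as np

# Toy dI/dV: broadened conductance steps at +/- Delta, mimicking the
# inelastic spin-excitation signal (synthetic, not the measured data).
Delta = 41.0                     # mV, step position on Au(111)
width = 3.0                      # mV, assumed effective broadening
V = np.linspace(-80, 80, 1601)   # bias grid in mV

def step(x):
    return 0.5 * (1.0 + np.tanh(x / width))

dIdV = 1.0 + 0.4 * (step(V - Delta) + step(-V - Delta))

# Numerical derivative of dI/dV, i.e. d2I/dV2: the step positions show
# up as a peak (positive bias) and a dip (negative bias).
d2IdV2 = np.gradient(dIdV, V)
V_peak = V[np.argmax(d2IdV2)]
V_dip = V[np.argmin(d2IdV2)]
print(V_peak, V_dip)  # close to +41 and -41 mV
```

The same extremum criterion applied to the measured curves yields the \(\pm 41\) mV and \(\pm 48\) mV values quoted above.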
We now resort to theory to rationalize the main properties of the observed inelastic excitations by addressing i) their different energies and broadening, depending on the presence or absence of a NaCl decoupling layer, and ii) the bias asymmetry of the excitation line shapes. First, we provide evidence that the inelastic excitations observed at around 40 meV are associated with interphenalenyl exchange. For that matter, following our previous works[4, 20, 21], we build a generalized Hubbard model that also includes long-range Coulomb interactions (see the SI for computational methods). Our calculations show that diphenalenyl remains open-shell, possessing an \(S\) = 0 ground state and an \(S\) = 1 excited state with energy in
the range of the experimental observations. Thus, diphenalenyl hosts two antiferromagnetically coupled unpaired spins.
There are two mechanisms for intermolecular exchange in this type of system, as discussed by two of us recently [20]. One is driven by intermolecular hybridization of the zero modes of the phenalenyls, the other by Coulomb-mediated virtual occupation of higher-energy extended molecular orbitals. Based on a model with first-neighbor hopping only, one could
Figure 3: Calculations for the extended Hubbard model of diphenalenyl (see the SI and Ref. [20] for details). (**a**) Zero-mode distribution of diphenalenyl. Due to the missing weight at the binding site, the intermolecular hybridization between the two phenalenyl units is driven by third-neighbor hopping \(t_{3}\) instead of first-neighbor hopping \(t_{1}\). (**b**) Simplified two-site model of diphenalenyl with effective hopping parameter \(t_{\text{eff}}\) and effective Coulomb repulsion \(U_{\text{eff}}\). (**c**) Molecular orbitals in the single-particle picture. Hybridization of the SOMOs (blue, red) leads to bonding (HOMO, purple) and anti-bonding (LUMO, green) frontier orbitals. (**d**) Singlet–triplet splitting of diphenalenyl as a function of the Coulomb repulsion for kinetic exchange only (KE, purple squares) and Coulomb-driven exchange (CDE, green circles). Considering both exchange mechanisms (light blue triangles) predicts a splitting close to the experimentally observed energies (horizontal dashed line) for a wide range of values of the Coulomb repulsion.
expect intermolecular hybridization to vanish, as the zero modes have null weight on the binding sites (as depicted in Figure 3a). Our DFT calculations show that this description is not complete and that the hybridization does not vanish (see the SI). This automatically implies that third-neighbor hopping is non-zero (see Figure 3a and also Ref. [15]). Therefore, intermolecular hybridization is present, so that both mechanisms for intermolecular exchange are active; they can be accounted for by means of exact diagonalization of the model in a restricted set of multi-electronic states (see the SI). The predictions of this model for the total exchange, and the relative contributions of the two mechanisms, are shown in Figure 3d. They show that, for a wide range of values of the Coulomb interactions, the predicted singlet-triplet excitation energy is close to the experimental value. Importantly, the ratio of intermolecular hybridization to intramolecular addition energies shows that the dimer has a strong diradical character.
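The kinetic-exchange mechanism enabled by this hybridization can be illustrated with the simplified two-site model of Figure 3b. The sketch below (parameter values are illustrative assumptions, not fitted values from this work; \(U_{\text{eff}}\) is taken as the 5.4 eV quoted below for the NaCl case) diagonalizes the half-filled two-site Hubbard Hamiltonian exactly and compares the singlet-triplet splitting with the perturbative kinetic-exchange estimate \(4t_{\text{eff}}^{2}/U_{\text{eff}}\):

```python
import numpy as np

def singlet_triplet_gap(t, U):
    """Exact diagonalization of the half-filled two-site Hubbard model
    in the Sz = 0 sector. Basis: |up,dn>, |dn,up>, |updn,0>, |0,updn>.
    The Sz = 0 triplet component sits at E = 0; the lowest eigenvalue
    is the singlet, so the splitting is J = -E_min."""
    H = np.array([[0.0, 0.0, -t, -t],
                  [0.0, 0.0, +t, +t],
                  [-t, +t, U, 0.0],
                  [-t, +t, 0.0, U]])
    return -np.linalg.eigvalsh(H)[0]

# Illustrative parameters (assumptions, not values from the paper's fit)
t_eff, U_eff = 0.26, 5.4  # eV
J = singlet_triplet_gap(t_eff, U_eff)
print(J)                       # exact splitting, ~0.05 eV
print(4 * t_eff**2 / U_eff)    # perturbative kinetic exchange 4t^2/U
```

In the strong-coupling regime \(U_{\text{eff}} \gg t_{\text{eff}}\) the two numbers agree closely, which is why a hybridization of only a few hundred meV suffices to produce an exchange of tens of meV.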
We now address the substrate-dependent energy and linewidth of the excitations. To do so, we include in our Hamiltonian the hybridization of the molecular orbitals (MOs) with the conduction electrons in the substrate. Our ab initio calculations show that this hybridization strongly depends on both the substrate and the MO (see the SI and Refs. [21, 22]). The interacting MOs coupled to the conduction electrons in the substrate constitute a multi-orbital Anderson model that we solve in the one-crossing approximation (OCA) [23].
The key quantities that relate the model calculations to the experimental results are the spectral functions of the electrons in the molecule, which are directly connected to the d\(I\)/d\(V\) in the tunneling regime [24] and include both the many-body interactions and the influence of the substrate. We compute the spectral functions \(A_{k}(\omega)\) projected onto the MOs within OCA, taking into account the MOs shown in Figure 3c, i.e. the HOMO-1, HOMO, LUMO and LUMO+1 (see the SI and Ref. [21] for details). The results are shown in Figure 4. The coupling to the substrate has two major effects on the system: on the one hand, it leads to screening of the Coulomb tensor, generally lowering the Hubbard \(U\) and therefore modifying the bare excitation energies. On the other hand, the coupling to the substrate gives rise to finite linewidths of the spectral features and, relatedly, to Kondo exchange, which renormalizes the excitation energies [21, 25]. In our calculations, the screening of the Coulomb tensor is taken into account by a screening parameter in our model Hamiltonian (see the SI and Ref. [20]). Finite
linewidths and Kondo exchange, on the other hand, are a consequence of solving the Anderson model (by OCA), which includes the single-particle broadening of the MOs due to coupling to the substrate, obtained from realistic DFT calculations of the molecule on both surfaces (see the SI and Refs. [21, 22]).
Our DFT calculations (see the SI) show that in the presence of the NaCl monolayer, the coupling of the molecule to the substrate is very weak. Therefore, renormalization of the excitation energy by Kondo exchange coupling is negligible, and we can account for the experimentally observed excitation energy of \(\sim 48\) meV in our model if we take \(U\sim 5.4\) eV (cf. Figure 3d). In contrast, for the Au(111) surface, DFT calculations show that hybridization with the molecule is appreciable, leading to a substantial renormalization (see Figure 4f and the SI). At the same time, we expect the Coulomb interaction in the molecule to be smaller on Au(111) than on the NaCl monolayer due to screening by the conduction electrons. We find good agreement with the experimental value of \(\sim 41\) meV for Au(111) for a Coulomb interaction tensor corresponding to a Hubbard \(U\) of 2.5 eV. Therefore, we conclude that the observed energy difference is due to a combination of enhanced substrate-induced renormalization and screening of the interactions when the molecules are deposited on Au(111). As expected from previous work [21], the linewidth of the calculated spin-excitation steps is larger for the molecule on Au(111), in agreement with the experiments. It is important to note, however, that the calculated linewidth is somewhat smaller than the experimental value, which probably reflects the limitations of the OCA method.
Finally, we address the origin of the bias asymmetries: both the generic one seen on both substrates in spectra taken at the central part of the molecule (blue lines in Figure 2a,c), and the bias asymmetry observed in the spectra taken at the outer parts of the molecule deposited on Au(111). First, we note that the contributions of the HOMO and the LUMO to the step heights (Figure 4a,d) already result in pronounced and opposite asymmetries on both substrates: the HOMO has a significantly larger step for negative bias than for positive bias, while for the LUMO it is exactly the opposite. This asymmetry is ultimately caused by the proximity and height of the ion resonance closest to the step, i.e., the one with the same bias polarity. Additionally, in the case of Au(111) the steps show the characteristic triangular overshoots induced by Kondo exchange with the conduction electrons [21, 24, 25, 26, 27, 28, 29]. The overshoot
is especially pronounced for the positive-bias step in the LUMO spectral function. The reason is the enhancement of the product of the conduction electron density of states \(N(E_{F})\) and Kondo exchange \(J_{K}\), given by \(\Gamma_{\text{LUMO}}/E_{\text{A}}\), due to the proximity of the positive ion resonance and thus low electron-addition energy \(E_{\text{A}}\).
The bias and position dependence of the d\(I\)/d\(V\) are related to the LDOS, which in our many-body picture is expressed in terms of the orbital-resolved spectral functions \(A_{k}(\omega)\) as:
\[\rho(\vec{r};\omega)=\sum_{k}|\psi_{k}(\vec{r})|^{2}A_{k}(\omega)\]
where the index \(k\) labels the MOs and \(|\psi_{k}(\vec{r})|^{2}\) is the squared modulus of the MO wave function. Figures 4c and 4f show the LDOS computed at three different points over the molecule, indicated by the circles of the same color in Figures 4b and 4e, respectively. These show the same bias asymmetries observed in the experiment. First, over the center of the molecule (blue line) the LDOS shows the generic asymmetry common to both substrates, where the negative-bias step is significantly larger than the positive-bias step. The reason is the vanishing of the LUMO wave function at the center of the molecule (cf. Figure 3c); the LDOS there is therefore dominated by the HOMO spectral function (purple lines in Figure 4a,d). In contrast, both HOMO and LUMO contribute to the LDOS at the outer parts of the molecule (red and black lines in Figure 4c,f). In the case of the NaCl monolayer, this leads to almost symmetric steps, in agreement with experiment (cf. Figure 2c). On the other hand, for the gold substrate, the dominant height of the positive-bias step in the LUMO also leads to a predominance of the positive-bias step in the LDOS over the side units, with the characteristic Kondo-exchange-induced overshoot, again in agreement with experiment.
Figure 4b,e shows LDOS maps \(\rho(\vec{r};\omega)\) computed at the energies \(\omega\) of the highest Hubbard peaks in the HOMO and LUMO spectral functions for both substrates. These correspond to constant-height maps of the d\(I\)/d\(V\) at fixed voltage. Comparison with Figure 3c clearly shows that the LDOS maps at negative energies resemble the density map of the HOMO, while those at positive energies resemble the density map of the LUMO. Naturally, the correspondence is not exact, since the other orbitals also contribute with some weight.
Figure 4: (**a,d**) Spectral functions of the HOMO (purple) and LUMO (green) for diphenalenyl on ML-NaCl/Au(111) and on Au(111), showing significant spectral weight at both bias polarities. The inset in (a) displays a zoom of the inelastic step features. (**b,e**) Constant-height local density of states (LDOS) maps at the energies of the highest Hubbard peaks of the HOMO and LUMO spectral functions. (**c,f**) Calculated d\(I\)/d\(V\) spectra for different positions, marked by circles in (**b,e**). Vertical dotted lines mark the singlet–triplet excitation energies. (**g**) Measured d\(I\)/d\(V\) spectroscopy taken with a CO tip on diphenalenyl on Au(111). The vertical dotted lines indicate the positions of the resonances corresponding to the HOMO (purple) and LUMO (green), for both polarities. Positions of the spectra are marked by circles in (h). Feedback loop opened at -1 V/350 pA, \(V_{\rm rms}\) = 14 mV. (**h**) Measured d\(I\)/d\(V\) maps at negative bias voltages, matching the LDOS of the LUMO (top) and HOMO (bottom). Maps taken at 200 pA and \(V_{\rm rms}\) = 14 mV.
Further validation of our model, and additional evidence for intermolecular hybridization, comes from the prediction, for the molecule on Au(111), of a splitting of the negative ion resonance of about 100 meV, which arises from the HOMO-LUMO hybridization splitting. In the many-body picture, both MOs contribute to the negative ion resonance, at slightly different energies (Figure 4a,b). This prediction is confirmed by STS with a CO-functionalized tip, which, depending on the position of the STM tip over the molecule (shown in the inset), exhibits peaks at different voltages. For the position over one of the lobes at the center of the dimer (blue), where only the HOMO contributes (see Figure 4h, bottom), there is a pronounced peak at around -0.6 V, while at the position away from the center (red), where the LUMO contribution is strongest (see Figure 4h, top), a broader peak around -0.7 V is found. Constant-current d\(I\)/d\(V\) maps taken at these voltages (Figure 4h) further confirm the assignment of these peaks to the HOMO and LUMO.
In summary, thorough spectroscopy studies combined with theory portray diphenalenyl as an open-shell molecule, where strong interphenalenyl antiferromagnetic exchange leads to an \(S\) = 0 ground state and an \(S\) = 1 excited state. Additionally, we have provided strong evidence for the existence of intermolecular hybridization. The peculiar nature of the zero modes in this class of systems makes it possible to unveil the role of third-neighbor hopping, dominant for these states and very frequently ignored in the modeling of graphene. By comparing the spectra for the same molecule on two different surfaces, we also show the relevant role of coupling to the substrate, which changes not only the lifetimes, but also the energies of the inelastic excitations. These findings need to be taken into account for the design of platforms that exploit phenalenyl and other planar nanographene radicals as molecular building blocks for quantum technologies such as quantum computing, quantum simulation or quantum sensing.
## Associated content
Experimental and computational methods, supporting STM and STS data, additional calculations, and a detailed synthetic description and characterization of chemical compounds reported in this study (PDF).
## Author Contributions
P.R., R.F. and M.J. conceived the experiments. A.B. synthesized and characterized the precursors in solution. N.K. and E.T. performed the on-surface synthesis and scanning probe measurements. N.K. and E.T. performed the TB calculations and analyzed the data. D.J., J.F.R. and G.G. simulated the system using different levels of theory. All authors discussed the results and contributed to the writing of the manuscript.
## Funding Sources
This research was supported by the Swiss National Science Foundation (SNSF; Grants No. CRSII5_205987, 200020_18201, PP00P2_170534 and PP00P2_198900), by CarboQuant, funded by the Werner Siemens Foundation, by the EU Horizon 2020 research and innovation program (Marie Sklodowska-Curie grant no. 813036), and by an ERC Starting Grant (INSPIRAL, grant no. 716139). The work was also financially supported by Grant PID2020-112811GB-I00 funded by MCIN/AEI/10.13039/501100011033 and by Grant No. IT1453-22 from the Basque Government. The research was also supported by NCCR MARVEL, a National Centre of Competence in Research funded by the Swiss National Science Foundation (grant number 205602), and by a grant from the Swiss National Supercomputing Centre (CSCS) under project ID s1142. J.F.R. further acknowledges financial support from FCT (Grant No. PTDC/FIS-MAC/2045/2021), Generalitat Valenciana funding Prometeo2021/017 and MFA/2022/045, and MICIN-Spain (Grant No. PID2019-109539GB-C41).
For the purpose of Open Access, the authors have applied a CC BY public copyright license to any Author Accepted Manuscript version arising from this submission.
## Acknowledgement
We thank Oliver Groning, Kristjan Eimre and Carlo Antonio Pignedoli for the fruitful scientific discussions. Skillful technical assistance by Lukas Rotach is gratefully acknowledged.
## References
* [1] Uchida, K.; Kubo, T. Recent Advances in the Chemistry of Phenalenyl. _J. Synth. Org. Chem. Jpn._**2016**, _74_ (11), 1069-1077. [https://doi.org/10.5059/yukigoseikyokaishi.74.1069](https://doi.org/10.5059/yukigoseikyokaishi.74.1069).
* [2] Turco, E.; Bernhardt, A.; Krane, N.; Valenta, L.; Fasel, R.; Juricek, M.; Ruffieux, P. Observation of the Magnetic Ground State of the Two Smallest Triangular Nanographenes. _JACS Au_**2023**, jacsau.2c00666. [https://doi.org/10.1021/jacsau.2c00666](https://doi.org/10.1021/jacsau.2c00666).
* [3] Hirjibehedin, C. F.; Lutz, C. P.; Heinrich, A. J. Spin Coupling in Engineered Atomic Structures. _Science_**2006**, _312_ (5776), 1021. [https://doi.org/10.1126/science.1125398](https://doi.org/10.1126/science.1125398).
* [4] Mishra, S.; Catarina, G.; Wu, F.; Ortiz, R.; Jacob, D.; Eimre, K.; Ma, J.; Pignedoli, C. A.; Feng, X.; Ruffieux, P.; Fernandez-Rossier, J.; Fasel, R. Observation of Fractional Edge Excitations in Nanographene Spin Chains. _Nature_**2021**, _598_ (7880), 287-292. [https://doi.org/10.1038/s41586-021-03842-3](https://doi.org/10.1038/s41586-021-03842-3).
* [5] Hieulle, J.; Castro, S.; Friedrich, N.; Vegliante, A.; Lara, F. R.; Sanz, S.; Rey, D.; Corso, M.; Frederiksen, T.; Pascual, J. I.; Pena, D. On-Surface Synthesis and Collective Spin Excitations of a Triangulene-Based Nanostar. _Angew. Chem. Int. Ed._**2021**, _60_ (48), 25224-25229. [https://doi.org/10.1002/anie.202108301](https://doi.org/10.1002/anie.202108301).
* [6] Mishra, S.; Beyer, D.; Eimre, K.; Ortiz, R.; Fernandez-Rossier, J.; Berger, R.; Groning, O.; Pignedoli, C. A.; Fasel, R.; Feng, X.; Ruffieux, P. Collective All-Carbon Magnetism in Triangulene Dimers. _Angew. Chem. Int. Ed._**2020**, _59_ (29), 12041-12047. [https://doi.org/10.1002/anie.202002687](https://doi.org/10.1002/anie.202002687).
* [7] Cheng, S.; Xue, Z.; Li, C.; Liu, Y.; Xiang, L.; Ke, Y.; Yan, K.; Wang, S.; Yu, P. On-Surface Synthesis of Triangulene Trimers via Dehydration Reaction. _Nat. Commun._**2022**, _13_ (1), 1705. [https://doi.org/10.1038/s41467-022-29371-9](https://doi.org/10.1038/s41467-022-29371-9).
* [8] Pavlicek, N.; Mistry, A.; Majzik, Z.; Moll, N.; Meyer, G.; Fox, D. J.; Gross, L. Synthesis and Characterization of Triangulene. _Nat. Nanotechnol._**2017**, _12_ (4), 308-311. [https://doi.org/10.1038/nnano.2016.305](https://doi.org/10.1038/nnano.2016.305).
* [9] Li, J.; Sanz, S.; Castro-Esteban, J.; Vilas-Varela, M.; Friedrich, N.; Frederiksen, T.; Pena, D.; Pascual, J. I. Uncovering the Triplet Ground State of Triangular Graphene Nanoflakes Engineered with Atomic Precision on a Metal Surface. _Phys. Rev. Lett._**2020**, _124_ (17), 177201. [https://doi.org/10.1103/PhysRevLett.124.177201](https://doi.org/10.1103/PhysRevLett.124.177201).
* [10] Mishra, S.; Beyer, D.; Eimre, K.; Kezilebieke, S.; Berger, R.; Groning, O.; Pignedoli, C. A.; Mullen, K.; Liljeroth, P.; Ruffieux, P.; Feng, X.; Fasel, R. Topological Frustration Induces Unconventional Magnetism in a Nanographene. _Nat. Nanotechnol._**2020**, _15_ (1), 22-28. [https://doi.org/10.1038/s41565-019-0577-9](https://doi.org/10.1038/s41565-019-0577-9).
* [11] Turco, E.; Mishra, S.; Melidonie, J.; Eimre, K.; Obermann, S.; Pignedoli, C. A.; Fasel, R.; Feng, X.; Ruffieux, P. On-Surface Synthesis and Characterization of Super-Nonazethrene. _J. Phys. Chem. Lett._**2021**, _12_ (34), 8314-8319. [https://doi.org/10.1021/acs.jpclett.1c02381](https://doi.org/10.1021/acs.jpclett.1c02381).
* [12] Biswas, K.; Urgel, J. I.; Ajayakumar, M. R.; Ma, J.; Sanchez-Grande, A.; Edalatmanesh, S.; Lauwaet, K.; Mutombo, P.; Gallego, J. M.; Miranda, R.; Jelinek, P.; Feng, X.; Ecija, D. Synthesis and Characterization of Peri-Heptacene on a Metallic Surface. _Angew. Chem._**2022**, _134_ (23), e202114983. [https://doi.org/10.1002/ange.202114983](https://doi.org/10.1002/ange.202114983).
* [13] de Oteyza, D. G.; Frederiksen, T. Carbon-Based Nanostructures as a Versatile Platform for Tunable \(\pi\)-Magnetism. _J. Phys._**2022**, 42.
* [14] Mishra, S.; Yao, X.; Chen, Q.; Eimre, K.; Groning, O.; Ortiz, R.; Di Giovannantonio, M.; Sancho-Garcia, J. C.; Fernandez-Rossier, J.; Pignedoli, C. A.; Mullen, K.; Ruffieux, P.; Narita, A.; Fasel, R. Large Magnetic Exchange Coupling in Rhombus-Shaped Nanographenes with Zigzag Periphery. _Nat. Chem._**2021**. [https://doi.org/10.1038/s41557-021-00678-2](https://doi.org/10.1038/s41557-021-00678-2).
* [15] Ortiz, R.; Catarina, G.; Fernandez-Rossier, J. Theory of Triangulene Two-Dimensional Crystals. _2D Mater._**2022**, _10_ (1), 015015. [https://doi.org/10.1088/2053-1583/aca4e2](https://doi.org/10.1088/2053-1583/aca4e2).
* [16] Repp, J.; Meyer, G.; Paavilainen, S.; Olsson, F. E.; Persson, M. Imaging Bond Formation Between a Gold Atom and Pentacene on an Insulating Surface. _Science_**2006**, _312_ (5777), 1196-1199. [https://doi.org/10.1126/science.1126073](https://doi.org/10.1126/science.1126073).
* [17] Ovchinnikov, A. A. Multiplicity of the Ground State of Large Alternant Organic Molecules with Conjugated Bonds. _Theor. Chim. Acta_**1978**, _47_ (4), 297-304. [https://doi.org/10.1007/BF00549259](https://doi.org/10.1007/BF00549259).
* [18] Lieb, E. H. Two Theorems on the Hubbard Model. _Phys. Rev. Lett._**1989**, _62_ (10), 1201-1204. [https://doi.org/10.1103/PhysRevLett.62.1201](https://doi.org/10.1103/PhysRevLett.62.1201).
* [19] Reecht, G.; Heinrich, B. W.; Bulou, H.; Scheurer, F.; Limot, L.; Schull, G. Imaging Isodensity Contours of Molecular States with STM. _New J. Phys._**2017**, _19_ (11), 113033. [https://doi.org/10.1088/1367-2630/aa969a](https://doi.org/10.1088/1367-2630/aa969a).
* [20] Jacob, D.; Fernandez-Rossier, J. Theory of Intermolecular Exchange in Coupled Spin-1/2 Nanographenes. _Phys. Rev. B_**2022**, _106_ (20), 205405. [https://doi.org/10.1103/PhysRevB.106.205405](https://doi.org/10.1103/PhysRevB.106.205405).
* [21] Jacob, D.; Ortiz, R.; Fernandez-Rossier, J. Renormalization of Spin Excitations and Kondo Effect in Open-Shell Nanographenes. _Phys. Rev. B_**2021**, _104_ (7), 075404. [https://doi.org/10.1103/PhysRevB.104.075404](https://doi.org/10.1103/PhysRevB.104.075404).
* [22] Gandus, G.; Passerone, D.; Stadler, R.; Luisier, M.; Valli, A. Strongly Correlated Physics in Organic Open-Shell Quantum Systems. arXiv, December 31, 2022. [https://doi.org/10.48550/arXiv.2301.00282](https://doi.org/10.48550/arXiv.2301.00282).
* [23] Haule, K.; Kirchner, S.; Kroha, J.; Wolfle, P. Anderson Impurity Model at Finite Coulomb Interaction U: Generalized Noncrossing Approximation. _Phys. Rev. B_**2001**, _64_ (15), 155111. [https://doi.org/10.1103/PhysRevB.64.155111](https://doi.org/10.1103/PhysRevB.64.155111).
* [24] Jacob, D. Simulation of Inelastic Spin Flip Excitations and Kondo Effect in STM Spectroscopy of Magnetic Molecules on Metal Substrates. _J. Phys. Condens. Matter_**2018**, _30_ (35), 354003. [https://doi.org/10.1088/1361-648X/aad523](https://doi.org/10.1088/1361-648X/aad523).
* [25] Oberg, J. C.; Calvo, M. R.; Delgado, F.; Moro-Lagares, M.; Serrate, D.; Jacob, D.; Fernandez-Rossier, J.; Hirjibehedin, C. F. Control of Single-Spin Magnetic Anisotropy by Exchange Coupling. _Nat. Nanotechnol._**2014**, \(9\) (1), 64-68. [https://doi.org/10.1038/nnano.2013.264](https://doi.org/10.1038/nnano.2013.264).
* [26] Zitko, R.; Pruschke, T. Many-Particle Effects in Adsorbed Magnetic Atoms with Easy-Axis Anisotropy: The Case of Fe on the CuN/Cu(100) Surface. _New J. Phys._**2010**, _12_ (6), 063040. [https://doi.org/10.1088/1367-2630/12/6/063040](https://doi.org/10.1088/1367-2630/12/6/063040).
* Korytar et al. 2012 Korytar, R.; Lorente, N.; Gauyacq, J.-P. Many-Body Effects in Magnetic Inelastic Electron Tunneling Spectroscopy. _Phys. Rev. B_**2012**, _85_ (12), 125434. [https://doi.org/10.1103/PhysRevB.85.125434](https://doi.org/10.1103/PhysRevB.85.125434).
* Ternes 2015 Ternes, M. Spin Excitations and Correlations in Scanning Tunneling Spectroscopy. _New J. Phys._**2015**, _17_ (6), 063016. [https://doi.org/10.1088/1367-2630/17/6/063016](https://doi.org/10.1088/1367-2630/17/6/063016).
* Jacob and Fernandez-Rossier 2016 Jacob, D.; Fernandez-Rossier, J. Competition between Quantum Spin Tunneling and Kondo Effect. _Eur. Phys. J. B_**2016**, _89_ (10), 210. [https://doi.org/10.1140/epjb/e2016-70402-2](https://doi.org/10.1140/epjb/e2016-70402-2).
* Mugarza et al. 2016 Mugarza, A.; Robles, R.; Krull, C.; Korytar, R.; Lorente, N.; Gambardella, P. Electronic and Magnetic Properties of Molecule-Metal Interfaces: Transition-Metal
Phthalocyanines Adsorbed on Ag(100). _Phys. Rev. B_**2012**, _85_ (15), 155437. [https://doi.org/10.1103/PhysRevB.85.155437](https://doi.org/10.1103/PhysRevB.85.155437).
* [31] Ternes, M.; Heinrich, A. J.; Schneider, W.-D. Spectroscopic Manifestations of the Kondo Effect on Single Adatoms. _J. Phys. Condens. Matter_**2008**, _21_ (5), 053001. [https://doi.org/10.1088/0953-8984/21/5/053001](https://doi.org/10.1088/0953-8984/21/5/053001).
Supplementary Information
Exchange interactions and intermolecular hybridization
in a spin-1/2 nanographene dimer
N. Krane, E. Turco, A. Bernhardt, D. Jacob, G. Gandus, D. Passerone, M. Luisier, M. Juríček, R. Fasel, J. Fernández-Rossier, and P. Ruffieux
###### Contents
* 1 Synthetic procedures
* 2 Additional STM/STS data
* 3 Experimental methods
* 4 Tight-binding (TB) and mean-field Hubbard (MFH) calculations
* 5 DFT of diphenalenyl on Au(111) and NaCl
* 6 Many-body calculations for diphenalenyl model
* 7 Copies of NMR and HRMS spectra
_The raw NMR data is available free of charge on a public repository Zenodo under the link [https://zenodo.org/record/8128962](https://zenodo.org/record/8128962) (DOI: 10.5281/zenodo.8128962)._
## 1 Synthetic procedures
### Chemicals
All chemicals and solvents were used without further purification if not specified otherwise. Chemicals and solvents were purchased from abcr GmbH, Acros Organics, Alfa Aesar, Merck/Sigma-Aldrich, and Fluorochem. Dry solvents were purchased from Acros Organics and used without further treatment if not described otherwise.
### Remarks for reaction procedures and analytical methods
For reactions that were carried out under inert conditions, nitrogen was used as the inert gas. Glassware was oven-dried at 150 \({}^{\circ}\)C, cooled down in vacuum (oil pump, \(\sim 10^{-3}\) mbar), and flushed with nitrogen. After adding solid reagents and a magnetic stirring bar under ambient conditions, the closed vessels were evacuated and flushed with nitrogen three times. Dry solvents and liquid reagents were added afterwards under a flow of nitrogen. If an air- or moisture-sensitive solid was used, it was transferred into the reaction vessel in a nitrogen-filled glovebox. Solvents and liquid reagents were deoxygenated by bubbling with nitrogen under sonication in an ultrasonic bath for 15-30 min (method A) or by the freeze-pump-thaw technique in three cycles under a nitrogen atmosphere (method B). Reactions that were carried out at room temperature proceeded between 20 and 25 \({}^{\circ}\)C, depending on fluctuating external factors such as the season and ventilation. Custom-made glassware was used for column chromatography under inert conditions (Figure S1a) as well as for photochemical brominations (Figure S1b). Reactions that were carried out in sealed tubes were heated in a custom-made aluminum heating block (Figure S5).
Figure S1. (a) Custom-made glassware for column chromatography under inert conditions and aluminum heating block. (b) Custom-made reaction vessel for photobrominations.
For inert chromatography, the column was equipped with a connector for gas supply on its top, the bottom was connected to a Schlenk flask to evacuate the system. Before use, the column was washed with deoxygenated solvent. The eluent and the crude mixtures were inserted through the septum.
The photoreactor consists of a glass tube that is constructed similarly to a cold trap and is circulated with coolant (_i_-PrOH cooled with a chiller) during the reaction. This tube was inserted into the reaction vessel and immersed into the reaction solution. The reaction vessel was wrapped with a commercial flexible LED stripe for the photoirradiation. Two more joints at the upper rim of the reaction vessel allow connecting a dropping funnel with bromine and a gas washing bottle for the neutralization of the bromine gas.
Thin-layer chromatography (TLC) was performed using analytical silica gel aluminum plates 60 F\({}_{254}\) by Merck. Flash column chromatography was performed by standard technique using silica gel 60 mesh or Florisil®.
Proton and proton-decoupled carbon NMR spectra were recorded with Bruker Avance III 400 and Avance 600 FT-NMR spectrometers at 298 K unless indicated otherwise. The frequency and the solvent are given separately for each substance. Chemical shifts are given in units of the \(\delta\)-scale in ppm. Shifts for \({}^{1}\)H and \({}^{13}\)C NMR spectra are given relative to the residual proton and carbon signal of the indicated solvent, respectively: CDCl\({}_{3}\) (7.26 and 77.16), CD\({}_{2}\)Cl\({}_{2}\) (5.32 and 53.84), DMSO-\(d_{6}\) (2.50 and 39.52), acetone-\(d_{6}\) (2.05 and 29.84), THF-\(d_{8}\) (1.72 and 25.31), toluene-\(d_{8}\) (2.08 and 20.43) and benzene-\(d_{6}\) (7.16 and 128.06).\({}^{[1]}\) Coupling constants are given in Hertz (Hz). Processing and interpretation were performed using MestReNova 14.1.
High-resolution mass spectra were measured as EI (ThermoFischer Scientifics DFS), ESI or APCI (ThermoFischer Scientifics QExactive MS).
### Solution synthesis of 2_H_-diphenalenyl
The target molecule 2_H_-diphenalenyl (**10**) was prepared from naphthalene in a sequence of nine synthetic steps, involving the preparation of functionalized phenalenone derivatives. The dimeric structure was constructed via a Suzuki cross-coupling. The subsequent reduction yielded the target compound 2_H_-diphenalenyl, which was subjected to experiments on a Au(111) surface (Scheme 1).
Scheme 1. Synthetic route to 2_H_-diphenalenyl (**10**) starting from naphthalene (**1**).[2, 3, 4, 5, 6, 7]
**1,2,3,4-Tetrabromo-1,2,3,4-tetrahydronaphthalene (2).** The reaction was carried out using a modified literature procedure[2] in a custom-made photoreactor, where a tube with cooling medium (isopropanol cooled by a chiller to −2 \({}^{\circ}\)C) was immersed into the reaction vessel equipped with a stirring bar. A commercial LED stripe emitting white light was fitted around the vessel. A solution of naphthalene (**1**; 9.00 g, 70.2 mmol) in carbon tetrachloride (140 mL) was cooled to 0 \({}^{\circ}\)C and irradiated. A solution of bromine (14.4 mL, 281 mmol) in carbon tetrachloride (66 mL) was added dropwise. After the addition of bromine was complete, the irradiation was continued at 0 \({}^{\circ}\)C. The reaction was monitored by TLC (SiO\({}_{2}\), cyclohexane). After 1.5 h, the starting material was consumed and some of the target compound had precipitated. The reaction mixture was quenched with a solution of NaHSO\({}_{3}\) (36.5 g, 351 mmol) in water (150 mL). After phase separation, the organic layer was washed with water followed by saturated aqueous NaHCO\({}_{3}\) before carbon tetrachloride was removed under reduced pressure and recovered. The aqueous layer obtained after quenching was extracted again with dichloromethane. This organic layer was likewise washed with water, followed by saturated aqueous NaHCO\({}_{3}\) and then brine. The combined organic layers were united with the remaining solids and the solvent was removed under reduced pressure. The crude product was recrystallized from _n_-hexane and dichloromethane in a refrigerator. The desired compound was obtained as a colorless crystalline solid (22.3 g, 49.8 mmol, 72%). The obtained NMR spectra are in accord with the previously reported data.[8] \({}^{1}\)**H NMR (400 MHz, CDCl\({}_{3}\), ppm):**\(\delta\) 7.61 (dd, \(J\) = 5.7, 3.4 Hz, 2H), 7.42 (dd, \(J\) = 5.8, 3.3 Hz, 2H), 5.77-5.67 (m, 2H), 5.06-4.95 (m, 2H). **\({}^{13}\)C NMR (101 MHz, CDCl\({}_{3}\), ppm):**\(\delta\) 133.0, 130.2, 129.8, 54.2, 50.3.
**1,3-Dibromonaphthalene (3).[2]** The reaction was carried out under inert conditions. To a solution of 1,2,3,4-tetrabromo-1,2,3,4-tetrahydronaphthalene (**2**; 20.4 g, 45.6 mmol) in THF (200 mL), a solution of _t_-BuOK (11.8 g, 105 mmol) in THF (110 mL) was added dropwise at room temperature. After the addition was completed, the reaction mixture was stirred overnight before it was quenched with brine and extracted with methyl _tert_-butyl ether. The combined organic layers were washed with water, followed by brine, and dried over MgSO\({}_{4}\). After evaporation of the solvent under reduced pressure, the crude product was purified over a short plug (SiO\({}_{2}\), cyclohexane). The desired compound was obtained as a colorless solid in almost quantitative yield (12.9 g, 45.0 mmol, 99%). The obtained NMR spectra are in accord with the previously reported data.[8] \({}^{1}\)**H NMR (400 MHz, CDCl\({}_{3}\), ppm):**\(\delta\) 8.19 (d, \(J\) = 8.3 Hz, 1H), 7.97 (d, \(J\) = 2.3 Hz, 1H), 7.89 (d, \(J\) = 1.6 Hz, 1H), 7.74 (dd, \(J\) = 8.1, 1.7 Hz, 1H), 7.60 (ddd, \(J\) = 8.6, 6.9, 1.4 Hz, 1H), 7.55 (ddd, \(J\) = 8.1, 6.7, 1.2 Hz, 1H). **\({}^{13}\)C NMR (101 MHz, CDCl\({}_{3}\), ppm):**\(\delta\) 135.3, 132.7, 130.8, 130.1, 127.9, 127.8, 127.6, 127.4, 123.7, 119.0.
**3-Bromo-1-naphthaldehyde (4).** The reaction was carried out under inert conditions. To a cooled (−78 \({}^{\circ}\)C) solution of 1,3-dibromonaphthalene (**3**; 2.12 g, 7.42 mmol) in dry THF (40 mL), cooled _n_-BuLi (2.82 mL, 7.05 mmol, 2.5 M in _n_-hexane) was added dropwise while the temperature of the reaction mixture was kept below −70 \({}^{\circ}\)C. The mixture was stirred for 0.5 h before DMF (861 \(\mu\)L, 11.1 mmol) was added. After the addition, the reaction mixture was stirred for 1 h at the same temperature before it was quenched with brine. The layers were separated and the aqueous layer was extracted with methyl _tert_-butyl ether. The combined organic layers were dried over MgSO\({}_{4}\) and the solvent was removed under reduced pressure. The crude product was purified by flash column chromatography (SiO\({}_{2}\), cyclohexane/EtOAc 5:1 v/v) to afford the desired compound as a colorless solid (1.21 g, 5.13 mmol, 69%). The yield varied between 32 and 69%; the best result was obtained with freshly distilled DMF and a constant temperature below −70 \({}^{\circ}\)C. The obtained NMR spectra are in accord with the previously reported data.[3] \({}^{1}\)**H NMR (400 MHz, CDCl\({}_{3}\), ppm):**\(\delta\) 10.36 (s, 1H), 9.16 (dd, \(J\) = 8.6, 1.2 Hz, 1H), 8.26 (d, \(J\) = 2.1 Hz, 1H), 8.06 (d, \(J\) = 2.0 Hz, 1H), 7.84 (d, \(J\) = 8.1 Hz, 1H), 7.71 (ddd, \(J\) = 8.5, 6.9, 1.4 Hz, 1H), 7.62 (ddd, \(J\) = 8.2, 7.0, 1.2 Hz, 1H). **\({}^{13}\)C NMR (101 MHz, CDCl\({}_{3}\), ppm):**\(\delta\) 192.1, 138.9, 137.0, 135.3, 133.0, 129.5, 129.2, 128.1, 127.8, 125.1, 118.7.
**Ethyl (_E_)-3-(3-bromonaphthalen-1-yl)acrylate (5).** The reaction was carried out under inert conditions using a modified literature procedure.[4] A suspension of NaH (562 mg, 23.4 mmol) in THF (80 mL) was cooled in an ice bath before triethyl phosphonoacetate (3.87 mL, 19.5 mmol) was added dropwise. The mixture was stirred for 0.5 h at the same temperature before a solution of 3-bromo-1-naphthaldehyde (**4**; 4.09 g, 13.0 mmol) in dry THF (40 mL) was added. The reaction mixture was allowed to warm to room temperature and was stirred overnight. Subsequently, the reaction mixture was quenched with brine and the layers were separated. The aqueous layer was extracted with methyl _tert_-butyl ether and the combined organic layers were dried over MgSO\({}_{4}\). After removal of the solvent under reduced pressure, the crude product was purified by flash column chromatography (SiO\({}_{2}\), cyclohexane/EtOAc 5:1 v/v) to afford the desired compound as a colorless solid (3.09 g, 10.1 mmol, 78%). The obtained NMR spectra are in accord with the previously reported data.[3] **1H NMR (400 MHz,
**CDCl\({}_{3}\), ppm):**\(\delta\) 8.40 (d, \(J\) = 15.7 Hz, 1H), 8.12 (d, \(J\) = 8.1 Hz, 1H), 8.01 (d, \(J\) = 1.9 Hz, 1H), 7.79 (d, \(J\) = 1.9 Hz, 1H), 7.76 (d, \(J\) = 7.9 Hz, 1H), 7.57 (ddd, \(J\) = 8.1, 7.0, 1.6 Hz, 1H), 7.53 (ddd, \(J\) = 7.7, 6.9, 1.4 Hz, 1H), 6.51 (d, \(J\) = 15.7 Hz, 1H), 4.32 (q, \(J\) = 7.1 Hz, 2H), 1.38 (t, \(J\) = 7.1 Hz, 3H). \({}^{13}\)**C NMR (101 MHz, CDCl\({}_{3}\), ppm):**\(\delta\) 166.6, 140.2, 134.9, 134.0, 132.1, 130.0, 128.0, 127.9, 127.4, 127.3, 123.7, 122.5, 119.5, 60.9, 14.5.
**(_E_)-3-(3-Bromonaphthalen-1-yl)acrylic acid (6).** Ethyl (_E_)-3-(3-bromonaphthalen-1-yl)acrylate (**5**; 2.39 g, 7.80 mmol) was dissolved in THF (63 mL), a solution of KOH (875 mg) in water (7 mL) was added, and the reaction mixture was refluxed for 4 h. After cooling to room temperature, most of the solvent was evaporated under reduced pressure. The remaining liquid was acidified with concentrated HCl. The formed precipitate was filtered, washed several times with water and then dissolved in dichloromethane. Evaporation of the solvent under reduced pressure yielded the desired compound as a white solid (1.61 g, 5.81 mmol, 75%). The obtained NMR spectra are in accord with the previously reported data.[3] \({}^{1}\)**H NMR (400 MHz, DMSO-d\({}_{6}\), ppm):**\(\delta\) 12.66 (br s, 1H), 8.30 (d, \(J\) = 15.6 Hz, 1H), 8.29 (s, 1H), 8.18 (d, \(J\) = 7.4 Hz, 1H), 8.04 (s, 1H), 7.98 (d, \(J\) = 7.6 Hz, 1H), 7.70-7.59 (m, 2H), 6.68 (d, \(J\) = 15.7 Hz, 1H). \({}^{13}\)**C NMR (101 MHz, DMSO-d\({}_{6}\), ppm):**\(\delta\) 167.1, 138.6, 134.5, 133.6, 131.7, 129.3, 128.0, 127.61, 127.57, 127.5, 123.7, 123.4, 118.9.
**5-Bromo-1_H_-phenalen-1-one (7). The reaction was carried out under inert conditions using a modified literature procedure.\({}^{[3]}\) (_E_)-3-(3-Bromonaphthalen-1-yl)acrylic acid (6; 1.35 g, 4.87 mmol) was dissolved in oxalyl chloride (12 mL) and the mixture was refluxed for 2 h. After cooling to room temperature, excessive oxalyl chloride was removed under reduced pressure and the remaining yellow solid was dissolved in dichloromethane (35 mL). The solution was cooled in an ice bath before solid AlCl\({}_{3}\) (1.95 g, 14.6 mmol) was added. The ice bath was removed and the reaction mixture was stirred overnight at room temperature. Subsequently, the reaction mixture was poured into water, the layers were separated and the aqueous layer was extracted with dichloromethane. The combined organic layers were washed with aqueous NaHCO\({}_{3}\) and dried over MgSO\({}_{4}\) before the solvent was removed under
reduced pressure. The crude product was purified by flash column chromatography (SiO\({}_{2}\), CH\({}_{2}\)Cl\({}_{2}\)) to afford the desired compound as a yellow solid (917 mg, 3.54 mmol, 73%). The obtained NMR spectra are in accord with the previously reported data.[3] \({}^{1}\)**H NMR (400 MHz, CDCl\({}_{3}\), ppm):**\(\delta\) 8.60 (dd, \(J\) = 7.3, 1.2 Hz, 1H), 8.18 (d, \(J\) = 1.8 Hz, 1H), 8.11 (dd, \(J\) = 8.1, 1.2 Hz, 1H), 7.83 (d, \(J\) = 1.9 Hz, 1H), 7.79 (dd, \(J\) = 8.0, 7.4 Hz, 1H), 7.67 (d, \(J\) = 9.8 Hz, 1H), 6.75 (d, \(J\) = 9.8 Hz, 1H). **\({}^{13}\)C NMR (101 MHz, CDCl\({}_{3}\), ppm):**\(\delta\) 185.3, 140.6, 134.0, 133.8, 133.4, 133.3, 130.6, 130.5, 129.8, 129.6, 128.3, 126.3, 120.6.
**5-(4,4,5,5-Tetramethyl-1,3,2-dioxaborolan-2-yl)-1H-phenalen-1-one (8).** The compound was prepared using a modified literature procedure.[5] The reaction was carried out under inert conditions using deoxygenated solvents (method A). 5-Bromo-1H-phenalen-1-one (**7**; 80 mg, 0.31 mmol), B\({}_{2}\)pin\({}_{2}\) (78 mg, 0.31 mmol), potassium acetate (60 mg, 0.61 mmol) and PdCl\({}_{2}\)(dppf) (11 mg, 0.015 mmol) were suspended in 1,4-dioxane (6 mL) and the mixture was heated for 3 h at 80 \({}^{\circ}\)C. After cooling to room temperature, the reaction mixture was diluted with methyl _tert_-butyl ether and washed with water followed by brine. The phases were separated and the organic layer was dried over MgSO\({}_{4}\). After evaporation of the solvents under reduced pressure, the crude product was passed through a pad of silica with methyl _tert_-butyl ether as an eluent. The desired compound was obtained as a yellow-brown solid (89 mg, 0.29 mmol, 93%). **1H NMR (600 MHz, CDCl\({}_{3}\), ppm):**\(\delta\) 8.66 (dd, \(J\) = 7.3, 1.2 Hz, 1H), 8.53 (s, 1H), 8.25 (d, \(J\) = 8.0 Hz, 1H), 8.13 (s, 1H), 7.79 (dd, \(J\) = 7.6, 7.6 Hz, 1H), 7.79 (d, \(J\) = 9.7 Hz, 1H), 6.74 (d, \(J\) = 9.7 Hz, 1H), 1.42 (s, 12H). **13C NMR (151 MHz, CDCl\({}_{3}\), ppm):**\(\delta\) 185.8, 142.3, 140.2, 136.5, 135.8, 131.8, 131.5, 129.7, 129.4, 129.2, 127.3, 127.2, 84.6, 25.1 (the resonance of C\({}_{\mathrm{q}}\)(B) was not detected, presumably due to long relaxation time). **HRMS (EI) m/z:**\([M]^{+}\) Calcd for C\({}_{19}\)H\({}_{19}\)O\({}_{3}\)B 306.1422; Found 306.1426.
**6H,6'H-[2,2'-Biphenalene]-6,6'-dione (9).** The reaction was carried out under inert conditions with deoxygenated solvents (method A) using a modified literature procedure.[6] 5-Bromo-1H-phenalen-1-one (**7**; 20 mg, 80 \(\upmu\)mol), 5-(4,4,5,5-tetramethyl-1,3,2-dioxaborolan-2-yl)-1H-phenalen-1-one (**8**; 24 mg, 80 \(\upmu\)mol), potassium carbonate (21.3 mg, 150 \(\upmu\)mol) and Pd(PPh\({}_{3}\))\({}_{4}\) (0.89 mg, 10 mol%) were suspended in 1,4-dioxane (1.6 mL) and water (0.4 mL), and the reaction mixture was heated overnight at 85 \({}^{\circ}\)C. After cooling to room temperature, the reaction mixture was diluted with water. The formed yellow precipitate was filtered off and
washed with water, acetone and CH\({}_{2}\)Cl\({}_{2}\) before it was dried under reduced pressure. The desired compound was obtained as a yellow solid (34.2 mg, 75.0 \(\mu\)mol, 98%). \({}^{\textbf{1}}\)**H NMR (600 MHz, CDCl\({}_{3}\), ppm):**\(\delta\) 8.67 (dd, \(J\) = 7.3, 1.2 Hz, 2H), 8.38 (d, \(J\) = 1.7 Hz, 2H), 8.34 (ddd, \(J\) = 8.0, 1.2, 0.6 Hz, 2H), 8.15 (d, \(J\) = 1.7 Hz, 2H), 7.89 (d, \(J\) = 9.7 Hz, 2H), 7.88 (dd, \(J\) = 8.0, 7.3 Hz, 2H), 6.83 (d, \(J\) = 9.7 Hz, 2H). \({}^{\textbf{13}}\)**C NMR (151 MHz, CDCl\({}_{3}\), ppm):**\(\delta\) 185.7, 141.6, 138.5, 135.3, 132.8, 130.8, 130.6, 130.2, 129.9, 129.6, 129.0, 128.1, 127.3. **HRMS (EI) \(m/z\):**\([M]^{+}\) Calcd for C\({}_{26}\)H\({}_{14}\)O\({}_{2}\) 358.0988; Found 358.0989.
**6_H_,6′_H_-[2,2′-Biphenalene] (10, 2_H_-diphenalenyl).** The compound was prepared using a modified literature procedure.[7] The reaction was carried out under inert conditions with deoxygenated solvents (method B). 6_H_,6′_H_-[2,2′-Biphenalene]-6,6′-dione (**9**; 50 mg, 0.14 mmol) was suspended in toluene (5 mL). A solution of DIBAL-H (0.56 mL, 0.56 mmol, 1.0 M in toluene) was added dropwise at room temperature and the reaction mixture was heated at 100 \({}^{\circ}\)C overnight. After cooling to room temperature, the reaction mixture was passed through a pad of Florisil® with toluene as an eluent under the exclusion of air. After removal of the solvent in vacuum, the desired compound was obtained as a light-yellow solid (28 mg, 0.085 mmol, 61%). \({}^{1}\)**H NMR (400 MHz, CD\({}_{2}\)Cl\({}_{2}\)):** Compound **10** is obtained as a mixture of regioisomers, which differ by the positions of the methylene groups that can occupy any \(\alpha\)-position of the phenalenyl subunit. This mixture of regioisomers gives a complex proton NMR spectrum with characteristic singlets for the methylene groups. The ratio between the integrated signal intensity of all methylene groups and all aromatic protons is 2:7, matching the expected value. **HRMS (EI) m/z:**\([M]^{+}\) Calcd for C\({}_{26}\)H\({}_{18}\) 330.1403; Found 330.1390.
## References
* (1) Fulmer, G. R.; Miller, A. J. M.; Sherden, N. H.; Gottlieb, H. E.; Nudelman, A.; Stoltz, B. M.; Bercaw, J. E.; Goldberg, K. I. NMR Chemical Shifts of Trace Impurities: Common Laboratory Solvents, Organics, and Gases in Deuterated Solvents Relevant to the Organometallic Chemist. _Organometallics_**2010**, _29_, 2176-2179. [https://doi.org/10.1021/om100106e](https://doi.org/10.1021/om100106e).
* (2) Cakmak, O. Bromination of Naphthalene. Preparation of 1,3-Dibromonaphthalene. _J. Chem. Res._**1999**, 366-367. [https://doi.org/10.1039/a809375j](https://doi.org/10.1039/a809375j).
* (3) Wang, M.-Z.; Ku, C.-F.; Si, T.-X.; Tsang, S.-W.; Lv, X.-M.; Li, X.-W.; Li, Z.-M.; Zhang, H.-J.; Chan, A. S. C. Concise Synthesis of Natural Phenylphenalenone Phytoalexins and a
Regioisomer. _J. Nat. Prod._**2018**, _81_, 98-105. [https://doi.org/10.1021/acs.jnatprod.7b00709](https://doi.org/10.1021/acs.jnatprod.7b00709).
* [4] He, B.; Mann, M.; Jablons, D. Targeting Gli Proteins in Human Cancer by Small Molecules. United States Patent, US 9840470 B2, December 12, 2017. [https://worldwide.espacenet.com/publicationDetails/biblo?l=0&ND=3&adjacent=true&locale=en_EP&FT=D&date=20151217&CC=US&NR=2015361048A1&KC=A1](https://worldwide.espacenet.com/publicationDetails/biblo?l=0&ND=3&adjacent=true&locale=en_EP&FT=D&date=20151217&CC=US&NR=2015361048A1&KC=A1)
* [5] Ishiyama, T.; Murata, M.; Miyaura, N. Palladium(0)-Catalyzed Cross-Coupling Reaction of Alkoxydiboron with Haloarenes: A Direct Procedure for Arylboronic Esters. _J. Org. Chem._**1995**, _60_, 7508-7510. [https://doi.org/10.1021/jo00128a024](https://doi.org/10.1021/jo00128a024).
* [6] Webber, S. E.; Almassy, R. J. Immune Checkpoint Inhibitors, Compositions and Methods Thereof. United States Patent Application, US 2018/0065917 A1, March 8, 2018. [https://worldwide.espacenet.com/publicationDetails/biblo?CC=US&NR=2018065917&KC=&FT=E&locale=en_EP#](https://worldwide.espacenet.com/publicationDetails/biblo?CC=US&NR=2018065917&KC=&FT=E&locale=en_EP#).
* [7] Boudjouk, P.; Johnson, P. D. Improved Routes to Phenalene and Phenalanone. Alane, Borane, and Silane Reductions of Phenalenone. _J. Org. Chem._**1978**, _43_ (20), 3979-3980. [https://doi.org/10.1021/jo00414a044](https://doi.org/10.1021/jo00414a044).
* [8] Cakmak, O.; Demirtas, I.; Balaydin, H. T. Selective Bromination of 1-Bromonaphthalene: Efficient Synthesis of Bromonaphthalene Derivatives. _Tetrahedron_**2002**, _58_ (28), 5603-5609. [https://doi.org/10.1016/S0040-4020](https://doi.org/10.1016/S0040-4020)(02)00549-5.
## 2 Additional STM/STS data
## 3 Experimental methods
All experiments were conducted on a Au(111) surface after repeated sputtering and annealing cycles until an atomically clean surface was achieved. NaCl was then deposited from a Knudsen cell evaporator at 730 \({}^{\circ}\)C, while the sample temperature was kept around 270 K to promote the growth of monolayer NaCl. The 2_H_-diphenalenyl precursor (for synthesis see SI section 1) was deposited from a Knudsen cell evaporator onto the sample at \(T_{\text{sample}}\) = 110 K. The two additional hydrogen atoms of 2_H_-diphenalenyl passivate the precursor for better handling before the deposition.
On the surface, sequential tip-induced cleaving of the hydrogen atoms from the \(sp^{3}\) carbon atoms[1, 2] yields the target diphenalenyl diradical, as proven by constant-height AFM measurements shown in Fig. 1a. In order to activate a 2_H_-diphenalenyl precursor on monolayer NaCl, the tip was placed above the precursor at 1.5 V bias voltage and a current set point of 20 pA, then retracted by 5-6 Å, and a bias voltage of 4 V was applied for a few seconds[2]. The successful cleavage of the hydrogen atoms and the change in the electronic structure can be observed in the STM images, which show distinct lobes and nodal planes for the activated molecule. A precursor adsorbed on the Au(111) surface can be activated by tunneling currents at bias voltages around −2 V[1].
## References
* [1] Turco, E.; Bernhardt, A.; Krane, N.; Valenta, L.; Fasel, R.; Juricek, M.; Ruffieux, P. Observation of the Magnetic Ground State of the Two Smallest Triangular Nanographenes. _JACS Au_**2023**, jacau.2c00666. [https://doi.org/10.1021/jacsau.2c00666](https://doi.org/10.1021/jacsau.2c00666).
* [2] Pavlicek, N.; Mistry, A.; Majzik, Z.; Moll, N.; Meyer, G.; Fox, D. J.; Gross, L. Synthesis and Characterization of Triangulene. _Nat. Nanotechnol._**2017**, _12_ (4), 308-311. [https://doi.org/10.1038/nnano.2016.305](https://doi.org/10.1038/nnano.2016.305).
## 4 Tight-binding (TB) and mean-field Hubbard (MFH) calculations
TB-MFH calculations were performed by numerically solving the mean-field Hubbard Hamiltonian with third-nearest-neighbor hopping
\[\hat{H}_{MFH}=\sum_{j}\sum_{\langle\alpha,\beta\rangle_{j},\sigma}-t_{j}c_{\alpha,\sigma}^{\dagger}\,c_{\beta,\sigma}+U\sum_{\alpha,\sigma}\langle n_{\alpha,\sigma}\rangle n_{\alpha,\overline{\sigma}}-U\sum_{\alpha}\langle n_{\alpha,\uparrow}\rangle\langle n_{\alpha,\downarrow}\rangle.\] (SE1)
Here, \(c_{\alpha,\sigma}^{\dagger}\) and \(c_{\beta,\sigma}\) denote the spin selective (\(\sigma\in\{\uparrow,\downarrow\}\) with \(\overline{\sigma}\in\{\downarrow,\uparrow\}\)) creation and annihilation operator at sites \(\alpha\) and \(\beta\), \(\langle\alpha,\beta\rangle_{j}\) (\(j=\{\)1,2,3\(\}\)) denotes the nearest-neighbor, second-nearest-neighbor and third-nearest-neighbor sites for \(j\)= 1, 2 and 3, respectively, \(t_{j}\) denotes the corresponding hopping parameters (with \(t_{1}\)= 2.7 eV, \(t_{2}\)= 0.1 eV and \(t_{3}\)= 0.27 eV for nearest-neighbor, second-nearest-neighbor and third-nearest-neighbor hopping[1]), \(U\) denotes the on-site Coulomb repulsion, \(n_{\alpha,\sigma}\) denotes the number operator, and \(\langle n_{\alpha,\sigma}\rangle\) denotes the mean occupation number at site \(\alpha\). Orbital electron densities, \(\rho\), of the \(n^{\rm th}\)-eigenstate with energy \(E_{n}\) have been simulated from the corresponding state vector \(a_{n,i,\sigma}\) by
\[\rho_{n,\sigma}(\vec{r})=\left|\sum_{i}a_{n,i,\sigma}\,\phi_{2p_{z}}(\vec{r}-\vec{r}_{i})\right|^{2},\] (SE2)
where \(i\) denotes the atomic site index and \(\phi_{2p_{z}}\) denotes the Slater \(2p_{z}\) orbital of carbon.
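As a numerical illustration of Eq. (SE2), the sketch below evaluates the density of a toy two-site eigenvector on a real-space grid. The Slater exponent `zeta`, the grid, the evaluation height, and the two-site geometry are illustrative assumptions, not parameters of the calculations reported here.

```python
import numpy as np

def slater_2p(dx, dy, dz, zeta=1.625):
    # Slater-type carbon 2p orbital oriented out of the molecular plane
    # (the exponent zeta is an illustrative choice, not a fitted value)
    r = np.sqrt(dx**2 + dy**2 + dz**2)
    return dz * np.exp(-zeta * r)

def orbital_density(coeffs, sites, X, Y, z=1.0):
    """Eq. (SE2): |sum_i a_i * phi(r - r_i)|^2 evaluated at height z."""
    psi = np.zeros_like(X)
    for a, (x0, y0) in zip(coeffs, sites):
        psi += a * slater_2p(X - x0, Y - y0, z)
    return psi**2

# toy geometry: two carbon sites separated by ~1.42 Angstrom
sites = [(-0.71, 0.0), (0.71, 0.0)]
x = np.linspace(-4.0, 4.0, 81)
y = np.linspace(-3.0, 3.0, 61)
X, Y = np.meshgrid(x, y)
rho_plus = orbital_density(np.array([1.0, 1.0]) / np.sqrt(2), sites, X, Y)
rho_minus = orbital_density(np.array([1.0, -1.0]) / np.sqrt(2), sites, X, Y)
# the antisymmetric combination has a nodal plane at x = 0
```

Feeding the converged MFH eigenvector coefficients \(a_{n,i,\sigma}\) and the carbon positions of the actual molecule into `orbital_density` would produce the simulated orbital density maps in the same way.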
All the TB-MFH calculations presented in the manuscript were done in the third-nearest-neighbor approximation and using an on-site Coulomb term \(U=3.5\) eV.
The TB-MFH software library[2] is implemented in Python. The code is open-source and available at [https://github.com/eimrek/tb-mean-field-hubbard](https://github.com/eimrek/tb-mean-field-hubbard).
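The self-consistency cycle behind Eq. (SE1) fits in a few lines of NumPy. The minimal sketch below keeps only nearest-neighbor hopping and applies it to a three-site toy chain, whose sublattice imbalance enforces a spin-1/2 mean-field solution (in the spirit of Lieb's theorem); the mixing factor, tolerance, and chain geometry are illustrative choices, not settings of the library cited above.

```python
import numpy as np

def mfh_scf(H0, U, n_up, n_dn, mix=0.5, tol=1e-10, max_iter=1000):
    """Self-consistent mean-field Hubbard loop for Eq. (SE1).
    H0 is the spin-independent hopping matrix; the constant double-counting
    term -U sum <n_up><n_dn> only shifts the total energy and is omitted."""
    n = len(H0)
    occ = {"up": np.full(n, n_up / n), "dn": np.full(n, n_dn / n)}
    for _ in range(max_iter):
        new = {}
        for s, sbar, ne in (("up", "dn", n_up), ("dn", "up", n_dn)):
            Hs = H0 + U * np.diag(occ[sbar])       # + U <n_-sigma> n_sigma
            _, C = np.linalg.eigh(Hs)
            new[s] = (C[:, :ne] ** 2).sum(axis=1)  # fill the ne lowest levels
        change = max(np.abs(new[s] - occ[s]).max() for s in new)
        occ = {s: mix * new[s] + (1 - mix) * occ[s] for s in new}
        if change < tol:
            return occ
    raise RuntimeError("SCF did not converge")

# toy system: 3-site chain A-B-A; the sublattice imbalance leaves one
# zero mode, so the half-filled ground state carries spin 1/2
t1 = 2.7
H0 = -t1 * np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)
occ = mfh_scf(H0, U=3.5, n_up=2, n_dn=1)
m = occ["up"] - occ["dn"]   # local magnetic moments
```

For this toy chain the converged moments concentrate on the two outer majority-sublattice sites, the simplest analogue of the spin polarization that the full third-neighbor calculation produces on the phenalenyl units.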
## References
* [1] Tran, V.-T.; Saint-Martin, J.; Dollfus, P.; Volz, S. Third Nearest Neighbor Parameterized Tight Binding Model for Graphene Nano-Ribbons. _AIP Adv._**2017**, 7 (7), 075212. [https://doi.org/10.1063/1.4994771](https://doi.org/10.1063/1.4994771).
* [2] Eimre, K. Eimrek/Tb-Mean-Field-Hubbard: V1.2.0, 2021. [https://doi.org/10.5281/zenodo.4708340](https://doi.org/10.5281/zenodo.4708340).
## 5 DFT results of diphenalenyl on Au(111) and NaCl
The initial setup of the molecular structures was carried out using the Atomic Simulation Environment (ASE) package. [1] The electronic structure calculations in this study were performed using the GPAW software package, [2] employing the Perdew-Burke-Ernzerhof (PBE) exchange-correlation functional and the double-\(\zeta\) polarized (dzp) basis set. To ensure structural accuracy, the geometries were optimized until the maximum force on any atom was below the threshold of 0.05 Hartree Bohr\({}^{-1}\). Single-point energy calculations were carried out after the geometry optimization, followed by non-equilibrium Green's function (NEGF) calculations to simulate the electronic properties of the open system coupled to the semi-infinite Au(111) substrate. Open boundary conditions (OBCs) were incorporated at the bottom of the simulation cell through a self-energy matrix, coupling the part of the Au(111) substrate included in the simulation cell to the semi-infinite bulk Au(111) substrate. The OBCs were computed using the method described in Ref. [3] The bulk Au(111) substrate was modeled by a three-layer thick Au(111) slab and sampled with a 3\(\times\)1\(\times\)1 k-mesh along the transport direction.
The diphenalenyl molecule was investigated when directly adsorbed on top of Au(111) and when separated by a monolayer of NaCl, as shown in Figs. S4 and S5, respectively. The molecule adsorbed roughly 2.67 Angstrom above the substrate when in contact with Au(111), while the adsorption distance increased to approximately 3.31 Angstrom when a layer of NaCl was placed in-between, indicating weaker hybridization of the molecule with the substrate. The projected density of states (PDOS) for the four frontier MOs, ranging from HOMO-1 to LUMO+1, for Au(111) and NaCl are reported in Fig. S6, left and right panels respectively. The MOs hybridize with the bulk states of the substrate, resulting in a finite lifetime for each MO, and the strength of hybridization determines the ease with which the two systems can exchange electrons via the corresponding MO. The PDOS shows that the delta peaks at the positions of the MO eigenenergies broaden into Lorentzians \(L(x)=\frac{a}{\pi}\,\frac{\Gamma/2}{(x-\mu)^{2}+(\Gamma/2)^{2}}\), where the value of \(\Gamma\) reflects the strength of hybridization. The values of \(\Gamma\) for the four frontier MOs, estimated by fitting the PDOS computed using NEGF with the function \(L(x)\), can be found in the legend of Fig. S6, with the HOMO and LUMO showing a hybridization strength more than one order of magnitude stronger when in direct contact with Au(111). The imaginary parts of the hybridization functions [4], which describe the dispersion of the coupling strength as a function of energy, are shown in Fig. S7 (left and right panels) for the same MOs, normalized to the integral of the corresponding PDOS. The hybridization functions exhibit a nearly constant behavior around the MO eigenenergies, resulting in a well-defined and symmetric Lorentzian shape of the corresponding PDOS. The legend in Fig. S7, left and right panels, provides the values of the hybridization functions at the corresponding MO eigenenergies, which give a complementary estimate for \(\Gamma\).
These values agree closely with the values of \(\Gamma\) obtained from the PDOS, supporting the conclusion that the MOs hybridize much more strongly with Au(111) than when decoupled by NaCl.
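The broadening extraction can be illustrated with synthetic data. The snippet below generates a hypothetical Lorentzian resonance with \(\Gamma=81\) meV (the HOMO broadening on Au(111)) and recovers its width from the full width at half maximum, a cruder estimator than the least-squares fit used for Fig. S6:

```python
import numpy as np

def lorentzian(x, a, mu, gamma):
    """L(x) = (a/pi) * (gamma/2) / ((x - mu)^2 + (gamma/2)^2)."""
    return (a / np.pi) * (gamma / 2) / ((x - mu) ** 2 + (gamma / 2) ** 2)

def fwhm(x, y):
    """Full width at half maximum; for a Lorentzian this equals gamma."""
    above = x[y >= y.max() / 2]
    return above[-1] - above[0]

# synthetic resonance: position and amplitude are illustrative
x = np.linspace(-1.0, 1.0, 20001)                 # energy grid (eV)
pdos = lorentzian(x, a=1.0, mu=0.05, gamma=0.081)
gamma_est = fwhm(x, pdos)                         # ~0.081 eV
```

On noisy NEGF data a least-squares fit of \(L(x)\) is more robust, but the FWHM already pins down \(\Gamma\) for a clean, symmetric peak.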
The non-interacting Green's function of the isolated molecule is given by \((G^{0}_{M})^{-1}=zI_{M}-H_{M}\), where \(H_{M}\) is the Hamiltonian block of the molecule in the Hamiltonian matrix \(H_{C}\) of the simulation cell, \(I_{M}\) is an identity matrix, and \(z=E+i0^{+}\) is a complex energy with a small positive imaginary shift. The retarded Green's function \(G_{M}\) projected onto the molecular subspace is given by [4] \(G_{M}(z)=S_{MC}G_{C}(z)S_{CM}\), where \(S_{MC}\) is the overlap matrix between the molecular orbitals and all orbitals, and \(G_{C}\) is the Green's function of the simulation cell, \(G_{C}(z)=\left(zS_{C}-H_{C}-\Sigma_{L}(z)\right)^{-1}\), which takes into account the OBCs through the self-energy \(\Sigma_{L}\). The PDOS can be computed from \(G_{M}\) as \(\mathrm{PDOS}_{i}(E)=-\frac{1}{\pi}\,\mathrm{Im}\,[G_{M}(z)]_{ii}\), where the subscript \(i\) denotes the \(i\)-th diagonal component of the Green's function.
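The PDOS recipe can be sketched numerically. The example below assumes an orthogonal basis (\(S=I\)) and a constant wide-band self-energy \(\Sigma=-i\Gamma/2\) in place of the full NEGF lead self-energy; the two orbital energies are illustrative, while the \(\Gamma\) values are the HOMO/LUMO broadenings found for Au(111):

```python
import numpy as np

def pdos_from_green(H, Sigma, energies, eta=1e-3):
    """PDOS_i(E) = -(1/pi) * Im [ (E + i*eta) I - H - Sigma ]^{-1}_{ii}."""
    N = H.shape[0]
    out = np.empty((len(energies), N))
    for n, E in enumerate(energies):
        G = np.linalg.inv((E + 1j * eta) * np.eye(N) - H - Sigma)
        out[n] = -np.diag(G).imag / np.pi
    return out

# two frontier orbitals in the wide-band limit
H = np.diag([-0.3, 0.4])                    # illustrative HOMO/LUMO energies (eV)
Sigma = -0.5j * np.diag([0.081, 0.035])     # Gamma_HOMO, Gamma_LUMO on Au(111)
E = np.linspace(-4.0, 4.0, 8001)
A = pdos_from_green(H, Sigma, E)
dE = E[1] - E[0]
```

Each column of `A` is a Lorentzian centered at the orbital energy whose integral over energy is (up to finite-window tails) one electron state, which is the sum rule a converged PDOS should satisfy.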
Figure S4: Atomic structure of the diphenalenyl molecule adsorbed on the Au(111) surface. The pink spheres represent the Au atoms, the grey spheres the C atoms, and the white spheres the H atoms. The molecule adsorbs approximately 2.67 Angstrom above the substrate.
Figure S5: Atomic structure of the diphenalenyl molecule adsorbed on the 1ML NaCl/Au(111) surface. The pink spheres represent the Au atoms, the green spheres the Cl atoms, the blue spheres the Na atoms, the grey spheres the C atoms, and the white spheres the H atoms. The molecule adsorbs approximately 3.31 Angstrom above the NaCl layer.
**Figure S7**. Computed hybridization functions of the diphenalenyl molecule adsorbed on Au(111) and on NaCl, obtained using NEGF. The hybridization functions confirm the trend of a stronger coupling between the diphenalenyl molecule and Au(111) compared to NaCl, as demonstrated by the values of \(\Gamma\) at the positions of the MO eigenenergies. These values agree with those obtained from fitting the PDOS to Lorentzians.
The non-vanishing intermolecular coupling is verified using the local orbital (LO) approach [5]. It involves constructing a set of localized wavefunctions tied to specific atoms in the system to describe the KS states in a particular energy window of interest. In this study, the LO technique is employed to construct a minimal model of \(p_z\)-like orbitals, with one orbital for each carbon atom of the adsorbed molecule, to capture the relevant physics around the Fermi level, including the energy and electronic distribution of the DFT frontier molecular orbitals. The LO approach is chosen due to its proven accuracy in yielding minimal models for organic compounds [5]. The Hamiltonian of the minimal model is obtained by downfolding the electronic structure of the open system, i.e., including the coupling to the semi-infinite Au(111) substrate, onto the \(p_z\) local orbitals (\(p_z\)-LOs). In practice, this is achieved by incorporating the static component of the \(p_z\)-LOs' hybridization function, evaluated at the Fermi level, into the Hamiltonian of the \(p_z\)-LOs (see also Eq. (11) of Ref. [6] for more details). The resulting on-site energies and hopping values of the \(p_z\)-LO models for the diphenalenyl molecule on Au(111) and on NaCl are reported in Fig. S8, left and right panel respectively, with a cut-off value of 0.15 eV set for the hoppings. The Hamiltonian matrix elements are very similar in the two models, and the hopping values between first and second neighbors are comparable, with approximate magnitudes \(|t_{1}|\sim 2.8\) eV and \(|t_{2}|\sim 0.2\)-\(0.27\) eV, respectively (see Fig. S8). The computed intermolecular coupling is estimated to be \(|t_{3}|\sim 0.25\) eV. These values are subsequently used to parameterize the tight-binding model.
The non-vanishing \(|t_{3}|\) also corroborates the hypothesis of a non-negligible intermolecular coupling that mixes the zero modes of the two phenalenyl units.
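The static downfolding step can be sketched as follows, assuming an orthogonal LO basis: the bath (here, everything outside the subspace of interest) enters the subspace Hamiltonian through the real part of its hybridization evaluated at the Fermi level. The three-level example is hypothetical, with one "molecular" orbital coupled to two bath levels:

```python
import numpy as np

def downfold_static(H, mol_idx, e_fermi=0.0, eta=1e-9):
    """Loewdin downfolding onto the orbitals mol_idx, keeping only the static
    (real) part of the hybridization evaluated at the Fermi level:
        H_eff = H_MM + Re[ H_MB ((E_F + i*eta) - H_BB)^{-1} H_BM ]."""
    idx = np.asarray(mol_idx)
    rest = np.setdiff1d(np.arange(H.shape[0]), idx)
    H_MM = H[np.ix_(idx, idx)]
    H_MB = H[np.ix_(idx, rest)]
    H_BB = H[np.ix_(rest, rest)]
    g_bath = np.linalg.inv((e_fermi + 1j * eta) * np.eye(len(rest)) - H_BB)
    return H_MM + (H_MB @ g_bath @ H_MB.conj().T).real

# one subspace orbital (index 0) coupled to two bath levels at -1 and +2 eV
H = np.array([[0.0, 0.1, 0.1],
              [0.1, -1.0, 0.0],
              [0.1, 0.0, 2.0]])
H_eff = downfold_static(H, [0])
# static shift: 0.1^2/(0-(-1)) + 0.1^2/(0-2) = 0.005 eV
```

In the actual calculation the subspace is the full set of \(p_z\)-LOs and the bath is the semi-infinite substrate entering through \(\Sigma_{L}\), but the algebra of the static correction is the same.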
## References
* [1] Larsen, Ask Hjorth, et al. "The atomic simulation environment--a Python library for working with atoms." Journal of Physics: Condensed Matter 29.27 (2017): 273002.
* [2] Enkovaara, Jussi, et al. "Electronic structure calculations with GPAW: a real-space implementation of the projector augmented-wave method." Journal of physics: Condensed matter 22.25 (2010): 253202.
* [3] Gandus, Guido, et al. "Efficient partitioning of surface Green's function: toward ab initio contact resistance study." 2020 International Conference on Simulation of Semiconductor Processes and Devices (SISPAD). IEEE, 2020.
* [4] Jacob, David. "Towards a full ab initio theory of strong electronic correlations in nanoscale devices." Journal of Physics: Condensed Matter 27.24 (2015): 245606.
* [5] Gandus, Guido, et al. "Smart local orbitals for efficient calculations within density functional theory and beyond." The Journal of Chemical Physics 153.19 (2020): 194103.
* [6] Gandus, Guido, et al. "Strongly correlated physics in organic open-shell quantum systems." arXiv preprint arXiv:2301.00282 (2022).
## 6 Many-body calculations for Phenalenyl dimer model
### Extended Hubbard model for Phenalenyl dimer
Similar to previous work by two of us [1] we consider an extended Hubbard model, including first and third nearest-neighbor hopping, as well as parts of the long-range (LR) Coulomb interaction in addition to the local (on-site) Hubbard interaction:
\[\hat{H}=\hat{H}_{0}+\hat{W}=\sum_{i,j,\sigma}t_{ij}\left(c_{i\sigma}^{\dagger}c _{j\sigma}+\mathrm{h.c.}\right)+\frac{1}{2}\sum_{i,j,k,l\atop\sigma,\sigma^{ \prime}}W_{ijkl}\,c_{i\sigma}^{\dagger}c_{j\sigma^{\prime}}^{\dagger}c_{l \sigma^{\prime}}c_{k\sigma}\] (S1)
where \(t_{ij}\) is the hopping between carbon sites \(i\) and \(j\) and \(W_{ijkl}\) is the Coulomb matrix in the site basis. We assume nearest-neighbor hopping \(t=-2.7\) eV and third-neighbor hopping \(t_{3}=0.1t\). As before [1], we take into account the Hubbard on-site interaction \(U\equiv W_{iiii}\), the Coulomb repulsion \(W_{ijij}\) and the Coulomb exchange \(W_{ijji}\) between electrons on different sites \(i\) and \(j\). Additionally, here we also take into account the pair hopping \(W_{iijj}\) and the density-assisted hopping \(W_{ijkj}\). Following Ref. [1], we compute the Coulomb matrix elements \(W_{ijkl}\) in the site basis \(\{|i\rangle\}\) with the Gaussian09 quantum chemistry code as follows: first, the matrix elements of the bare Coulomb interaction \(\hat{v}_{c}=1/r\) are computed in Gaussian09 for the \(p_{z}\) orbitals of the carbon sites in the phenalenyl dimer in the LANL2MB minimal basis set, \(V_{ijkl}=\langle i,j|\,\hat{v}_{c}\,|k,l\rangle\). Second, screening effects by the other orbitals in the molecule and by the substrate are simply taken into account by a dielectric constant: \(W_{ijkl}=V_{ijkl}/\epsilon\). The dielectric constant \(\epsilon\) is related to the on-site Hubbard interaction by \(U=W_{iiii}=V_{iiii}/\epsilon\). The parameter \(U\) (or equivalently \(\epsilon\)) is adjusted in the model such that the OCA calculation (see below) yields the (renormalized) spin excitation energy observed in experiment. Thus for the molecule on the NaCl ML we use \(U=5.4\) eV, while for the molecule directly on Au(111) we use \(U=2.5\) eV.
### Complete Active Space (CAS)
Diagonalization of the single-particle part \(\hat{H}_{0}\) of (S1) yields the molecular orbitals \(|\psi_{k}\rangle\):
\[\hat{H}_{0}|\psi_{k}\rangle=\epsilon_{k}|\psi_{k}\rangle\] (S2)
which can be expanded in the site basis \(\{|i\rangle\}\) as \(|\psi_{k}\rangle=\sum_{j}\psi_{k}(j)|j\rangle\). In the absence of third-neighbor hopping (\(t_{3}=0\)), the diagonalization yields two zero modes (\(\epsilon_{k}=0\)) which may be localized on each of the phenalenyl units (see Fig. 3a in main text). Switching on \(t_{3}\), the zero modes hybridize and split in energy, forming the HOMO and LUMO (see Fig. 3c in main text). The interactions between the zero modes via \(t_{3}\) and via Coulomb interactions with the HOMO-1 and LUMO+1 give rise to kinetic and Coulomb-driven exchange interactions, respectively. In the case of the phenalenyl dimer, these are both antiferromagnetic in nature [1]. Thus, for the many-body calculations, we take the four orbitals HOMO-1, HOMO, LUMO and LUMO+1, shown in Fig. 3c in the main text, as the complete active space (CAS). The extended Hubbard Hamiltonian (S1) projected onto the CAS can then be written as
\[\hat{H}_{\mathrm{C}}=\sum_{k\in\mathrm{C}}\epsilon_{k}\,\hat{N}_{k}+\frac{1}{2}\sum_{\substack{k,k^{\prime},q,q^{\prime}\\ \sigma,\sigma^{\prime}}}\mathcal{W}_{kk^{\prime}qq^{\prime}}\,C_{k\sigma}^{\dagger}C_{k^{\prime}\sigma^{\prime}}^{\dagger}C_{q^{\prime}\sigma^{\prime}}C_{q\sigma}\] (S3)
where \(C^{\dagger}_{k\sigma}\) (\(C_{k\sigma}\)) creates (destroys) one electron of spin \(\sigma\) in MO \(\psi_{k}\), \(\hat{N}_{k}=\sum_{\sigma}C^{\dagger}_{k\sigma}C_{k\sigma}\) measures the total occupation of MO \(\psi_{k}\), and the Coulomb interaction tensor between orbitals in the CAS is given by \(\mathcal{W}_{k_{1}k_{2}k_{3}k_{4}}\equiv\langle\psi_{k_{1}},\psi_{k_{2}}| \hat{W}|\psi_{k_{3}},\psi_{k_{4}}\rangle\). Exact diagonalization of this model for different values of \(U\) (corresponding to a dielectric constant \(\epsilon=V_{iiii}/U\), see above) then yields the blue curve in Fig. 3d in the main text. Kinetic exchange can be switched off by setting \(t_{3}=0\) (green curve), while Coulomb-driven exchange can be switched off by restricting to the HOMO-LUMO model (purple curve), see Ref. [1].
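The kinetic-exchange part of this picture can be checked with a toy exact diagonalization: keeping only the two zero modes coupled by \(t_{3}\) (the HOMO-LUMO restriction, i.e. without Coulomb-driven exchange), the half-filled \(S_{z}=0\) sector is a \(4\times 4\) problem whose singlet-triplet gap reduces to the kinetic exchange \(4t_{3}^{2}/U\) for \(t_{3}\ll U\). The parameter values below are illustrative:

```python
import numpy as np

def singlet_triplet_gap(t3, U):
    """Exact diagonalization of two half-filled 'zero modes' coupled by t3,
    S_z = 0 sector, basis {|ud,0>, |0,ud>, |u,d>, |d,u>}."""
    H = np.array([[U,    0.0, -t3, -t3],
                  [0.0,  U,   -t3, -t3],
                  [-t3, -t3,  0.0, 0.0],
                  [-t3, -t3,  0.0, 0.0]])
    E = np.linalg.eigvalsh(H)
    # ground state: singlet at (U - sqrt(U^2 + 16 t3^2))/2; the S_z = 0
    # member of the triplet sits at E = 0
    return 0.0 - E.min()

gap = singlet_triplet_gap(t3=0.27, U=5.4)   # ~0.0535 eV in this toy model
```

The exact gap of this two-orbital model is \((\sqrt{U^{2}+16t_{3}^{2}}-U)/2\), which the perturbative \(4t_{3}^{2}/U\) approximates to within a percent for these parameters; the full CAS calculation additionally includes the Coulomb-driven contribution discussed above.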
### Anderson impurity model
The CAS described by the Hamiltonian (S3) and coupled to the conduction electrons in the substrate defines an Anderson impurity model (AIM):
\[\hat{H}_{\rm AIM}=\hat{H}^{\prime}_{\rm C}+\hat{H}_{\rm B}+\hat{V}_{\rm hyb}\] (S4)
where \(\hat{H}^{\prime}_{\rm C}=\hat{H}_{\rm C}+v\,\hat{N}_{\rm C}\), \(\hat{H}_{\rm B}\) is the Hamiltonian of the conduction electron bath in the metallic substrate, and the hybridization term \(\hat{V}_{\rm hyb}\) describes the coupling between the bath and the impurity space. The coupling to the bath leads to charge fluctuations on the impurity and defines a chemical potential, which we set to zero. The single-particle potential \(v\) is set such that the coupled molecule is approximately charge neutral, i.e., \(N_{\rm C}\approx 4\). This is achieved by taking the mean-field approximation of the interaction part \(\hat{W}\) for \(N_{\rm C}=4\), averaging over the four MOs, i.e. \(v=-\langle\hat{W}\rangle_{\rm MFA}\), and fine-tuning until good agreement with the experimental spectra is reached. This yields \(v=-3.65\) eV in the case of NaCl and \(v=-1.91\) eV in the case of Au.
Integrating out the bath degrees of freedom, we obtain the so-called hybridization function for each MO included in the CAS, \(\Delta_{k}(\omega)=\sum_{b}|V_{k,b}|^{2}/(\omega^{+}-\epsilon_{b})\). Its negative imaginary part \(\Gamma_{k}(\omega)\) yields the single-particle broadening of the impurity orbitals due to the coupling to the bath. Here we assume the wide-band limit, i.e. constant broadening \(\Gamma_{k}\) and vanishing real part of the hybridization function. For the Au case, the \(\Gamma_{k}\) are obtained from the _ab initio_ DFT calculations described above. This yields \(\Gamma_{\rm homo-1}\sim 54\,{\rm meV}\), \(\Gamma_{\rm homo}\sim 81\,{\rm meV}\), \(\Gamma_{\rm lumo}\sim 35\,{\rm meV}\) and \(\Gamma_{\rm lumo+1}\sim 49\,{\rm meV}\). As expected, for the NaCl monolayer the hybridization functions obtained from DFT are smaller by roughly an order of magnitude (\(\sim\)5-10 meV). Unfortunately, this leads to numerical problems in the OCA calculations. We thus assume a somewhat larger value, \(\Gamma_{k}\sim 15\,{\rm meV}\), the same for all orbitals. As a result, the OCA spectra for the NaCl case are somewhat more broadened than is to be expected, but the renormalization of the spin excitation energy by Kondo exchange remains negligible.
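The wide-band limit invoked here can be illustrated numerically, assuming a hypothetical flat bath band and a uniform coupling \(V\) tuned to reproduce \(\Gamma\sim 81\) meV; the resulting \(\Gamma_{k}(\omega)\) is then nearly constant across the band:

```python
import numpy as np

def gamma_k(omega, eps_bath, v, eta):
    """Gamma_k(omega) = -Im Delta_k(omega) with
    Delta_k(omega) = sum_b |V|^2 / (omega + i*eta - eps_b)."""
    return -np.sum(np.abs(v) ** 2 / (omega + 1j * eta - eps_bath)).imag

# flat bath band of discrete levels on [-W, W]
n_bath, W = 4001, 5.0
eps_bath = np.linspace(-W, W, n_bath)
rho = n_bath / (2 * W)                 # bath density of states
# in the wide-band limit Gamma ~ pi * |V|^2 * rho; tune V for Gamma ~ 81 meV
v = np.sqrt(0.081 / (np.pi * rho))
g0 = gamma_k(0.0, eps_bath, v, eta=0.05)
g1 = gamma_k(0.5, eps_bath, v, eta=0.05)
```

The small artificial broadening `eta` smooths the discrete bath into a continuum; away from the band edges, \(\Gamma\) is flat to within a fraction of a percent, which is what justifies treating \(\Gamma_{k}\) as a single constant per orbital.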
### One-crossing approximation
In order to solve the AIM, we use the one-crossing approximation (OCA), which expands the propagators corresponding to the eigenstates of the _isolated_ impurity (i.e. the CAS) in the coupling to the bath [2]. The first step is an exact diagonalization of the impurity Hamiltonian, i.e. \(\hat{H}^{\prime}_{\rm C}|\Psi_{m}\rangle=E_{m}|\Psi_{m}\rangle\). The eigenstates are simultaneous eigenstates of the total number of electrons \(N\) and the total spin \(S^{2}\). Here we consider the system at half-filling (\(N_{\rm C}=4\)). Coupling to the substrate leads to fluctuations of electrons in the impurity, i.e. to excitations to the charged sectors with \(N_{\rm C}\pm 1\) electrons. OCA consists in a diagrammatic expansion of the propagators \(G_{m}(\omega)\) associated with the many-body eigenstates \(|\Psi_{m}\rangle\) of the isolated impurity Hamiltonian \(\hat{H}^{\prime}_{\rm C}\) in terms of the hybridization functions \(\Delta_{k}(\omega)\), summing only the
first and second order diagrams (only those where conduction electron lines cross at most twice) but to infinite order (i.e. self-consistently). The spectral functions \(A_{k}(\omega)\) projected on individual molecular orbitals \(|\psi_{k}\rangle\) are obtained from convolutions of the propagators \(G_{m}(\omega)\) with the hybridization functions \(\Delta_{k}(\omega)\). Further details of the OCA method and its application to the description of spin excitations in adatoms and nanographenes can be found in previous works [3, 4, 5].
## References
* [1] Jacob, D.; Fernandez-Rossier, J. Phys. Rev. B **2022**, 106 (20), 205405.
* [2] Haule, K.; Kirchner, S.; Kroha, J.; Wolfle, P. Phys. Rev. B **2001**, 64 (15), 155111.
* [3] Jacob, D.; Fernandez-Rossier, J. Eur. Phys. J. B **2016**, 89 (10), 210.
* [4] Jacob, D. Phys. Rev. B **2018**, 97 (7), 075428.
* [5] Jacob, D.; Ortiz, R.; Fernandez-Rossier, J. Phys. Rev. B **2021**, 104 (7), 075404.
## 8 Copies of NMR and HRMS spectra
**1,3-Dibromonaphthalene (3).** \({}^{1}\)H NMR / 400 MHz / CDCl\({}_{3}\) / 25 \({}^{\circ}\)C
**5-Bromo-1\(H\)-phenalen-1-one (7).**
\({}^{1}\)H NMR / 600 MHz / CDCl\({}_{3}\) / 25 \({}^{\circ}\)C
**5-(4,4,5,5-Tetramethyl-1,3,2-dioxaborolan-2-yl)-1\(H\)-phenalen-1-one (8).**
\({}^{1}\)H NMR / 600 MHz / CDCl\({}_{3}\) / 25 \({}^{\circ}\)C
\({}^{13}\)C NMR / 151 MHz / CDCl\({}_{3}\) / 25 \({}^{\circ}\)C
\({}^{1}\)H NMR / 600 MHz / CDCl\({}_{3}\) / 25 \({}^{\circ}\)C
\({}^{13}\)C NMR / 151 MHz / CDCl\({}_{3}\) / 25 \({}^{\circ}\)C
\({}^{1}\)H-\({}^{13}\)C HSQC NMR / 600 MHz / CDCl\({}_{3}\) / 25 \({}^{\circ}\)C
# A long-range contact process in a random environment

Benedikt Jahnel, Anh Duc Vu

arXiv:2310.12061v1 (2023-10-18), [http://arxiv.org/abs/2310.12061v1](http://arxiv.org/abs/2310.12061v1)
###### Abstract
We study survival and extinction of a long-range infection process on a diluted one-dimensional lattice in discrete time. The infection can spread to distant vertices according to a Pareto distribution, however spreading is also prohibited at random times. We prove a phase transition in the recovery parameter via block arguments. This contributes to a line of research on directed percolation with long-range correlations in nonstabilizing random environments.
## Introduction
The contact process is a classical model for the spread of an infection through a spatially distributed population, where individuals may spontaneously lose the infection and become susceptible again. First introduced in [1], the model and its multiple generalisations still attract a tremendous amount of interest coming from a great variety of fields, see e.g., [11, 12, 13] for rather recent contributions and, important in view of this manuscript, [14, 15, 16], where random environments are considered. Focussing on the discrete-time version on lattices, the contact process is equivalent to certain models in oriented percolation. In particular, the key question of survival and extinction of the infection in the contact process is in one-to-one correspondence to the existence and absence of an infinite directed path in the associated percolation model.
The arguably simplest nontrivial undirected percolation model is the \(\mathbb{Z}^{2}\)-lattice with either vertices or edges being open with some probability \(p\) independently from each other. The models are then called site (respectively bond) percolation models, and the modeling idea is usually that of water flowing through open connected components, i.e., clusters. Now the standard question is whether water can flow all the way through, i.e., whether the origin lies in an infinite cluster with positive probability. If so, we are in the so-called supercritical percolation phase, and in the subcritical phase otherwise. In the particular example just mentioned, the percolation phase transition for the bond model happens at \(p_{c}=1/2\) [13].
However, water can only flow in the direction of gravity, so it is natural to consider directed edges. A simple directed model is the north-east model on \(\mathbb{Z}^{2}\) where connections only form in the north and east direction introduced in [14]. As pointed out in [15], the directed models may have to be handled quite differently compared to their undirected counterparts. While results are often similar, the proofs differ greatly.
As mentioned before, we want to consider contact processes, i.e., infections in space-time rather than the flow of water under gravity. As seen in the past pandemic, a multitude of different factors
influence this evolution. We want to focus on the following three aspects: range of infection, sparse environments and lockdowns. More precisely, in our model we assume that the infection can spread to distant vertices with polynomial decay in the probability. Additionally, we permanently remove lattice points via iid Bernoulli random variables, thereby diluting the lattice. Similarly but now on the time axis, we independently mark time points at which the transmission of the infection to other vertices is prohibited. Based on this random environment, we build our directed bond-site percolation model.
Let us mention that spatial stretches have already been considered in [1]. There, a vertex \((t,x)\) is only open with probability \(p(x)\in\{p_{\mathrm{bad}},p_{\mathrm{good}}\}\) where \(p(x)\) does not depend on time. It is shown that survival occurs if \(p_{\mathrm{good}}\) occurs sufficiently often and \(p_{\mathrm{good}}\) is sufficiently large. On the other hand, in [13], the case of temporal stretches (on the bonds) has been studied. Here, survival holds even for any \(p_{\mathrm{good}}>p_{c}\) given that \(p_{\mathrm{good}}\) occurs sufficiently often, where \(p_{c}\) is the critical parameter for directed bond percolation. The strategy behind both results is to consider environment groupings and employ a multiscale analysis, i.e., \(\mathbb{Z}^{2}\) is grouped into boxes at different levels and boxes are combined to form boxes on higher levels. We will follow this general idea as well and base our construction on [15] - which we have already used in [11] and further extend in this paper - where percolation of the randomly stretched (undirected) lattice on \(\mathbb{Z}^{2}\) has been proven. Let us note that this result has recently been refined in [13] all the way to the critical parameter \(p_{\mathrm{good}}>p_{c}=1/2\).
Simultaneously considering temporal and spatial stretches has its own challenges. For example, in [14], the authors were able to link the existence of a nontrivial phase transition on the (undirected) \(\mathbb{Z}^{2}\)-lattice to the moments of the stretches. As mentioned there, their current method only works with one-dimensional stretches. The problem in our setting is that spreading in space takes time - time which might not be available due to lockdowns. We alleviate this issue by allowing long-range infections. Let us note that considering a discrete-time process is not a restriction, as a simple discretisation scheme also yields the continuous-time case.
The paper is organised as follows:
* In Section 1, we introduce the model as well as the main result, that is, the phase transition of survival and extinction. We also give the general idea of the proof in Section 1.3.
* Section 2 introduces the core definitions and lemmas which allow us to prove the main theorem. Details and their proofs are given in Section 3 and 4.
* Section 3 deals solely with the environment grouping framework, while Section 4 applies said framework. In particular, Section 4 deals with so-called "drilling" (Section 4.5) for the multiscale-renormalisation argument.
## 1 A long-range contact process (LoRaC)
The model is given as a bond-site percolation model. We consider a very long street \(\mathbb{Z}\) where each \(x\in\mathbb{Z}\) represents a location. Normally, \(x\) contains a house with residents (probability \(1-q^{(x)}\)), i.e., a potential host for infections. On the other hand, \(x\) might also just be empty (with probability \(q^{(x)}\)). Now, assume that there is an infection starting in house \(y\). During the day, the infection might spread to other houses due to people travelling to other houses. While trips to far-away destinations are rare, they still happen considerably often via e.g. airplanes (probability \((1+|y-x|)^{-\alpha}\)). Each night, all residents of a house recover with probability \(1-p\). In this setting, the survival of an infection corresponds to a bond-site percolation problem on \(\mathbb{Z}\times\mathbb{Z}\) (with vertices \((t,x)\)).
During the pandemic, governments have enforced lockdowns during which people cannot leave their houses. Therefore, no new infections occur during that time. We mimic this in our model as well:
Each morning, a global lockdown is imposed with probability \(q^{(t)}\). An illustration of the model is given in Figure 1.
### Constructing the LoRaC model
After this verbal description, let us now give a proper definition of our model. We highlight that, as mentioned already in the introduction, contact processes are closely linked to certain directed percolation problems where the directionality reflects the passing of time.
**Definition 1** (The LoRaC).: Let \(q^{(t)},q^{(x)},p\in(0,1)\) as well as \(\alpha>1\) be given. We consider sequences of iid Bernoulli random variables \((\mathrm{T}_{t})_{t\in\mathbb{Z}}\) and \((\mathrm{X}_{x})_{x\in\mathbb{Z}}\) with parameters \(1-q^{(t)}\) and \(1-q^{(x)}\), respectively. We call \(t\) good if \(\mathrm{T}_{t}=1\) and bad otherwise. Analogously, we call \(x\) good if \(\mathrm{X}_{x}=1\).
Consider the graph \(G=(\mathbb{Z}\times\mathbb{Z},E)\) where \(E\) consists of directed edges of the form \((t,x)\to(t+1,y)\) with \(t,x,y\in\mathbb{Z}\). We study a mixed bond-site-percolation model on \(G\) where all vertices and edges are open (respectively closed) independently from each other with probability
\[\mathbb{P}\{(t,x)\text{ is open}\,|\,x\text{ is good}\}=p\qquad\text{and} \qquad\mathbb{P}\{(t,x)\text{ is open}\,|\,x\text{ is bad}\}=0\,,\]
and for an edge \(e=\big{(}(t,x)\to(t+1,y)\big{)}\)
\[\mathbb{P}\{e\text{ is open}\,|\,t\text{ is good}\}=(1+|y-x|)^{-\alpha}\qquad \text{and}\qquad\mathbb{P}\{e\text{ is open}\,|\,t\text{ is bad}\}=\delta_{xy}\]
where \(\delta_{xy}=1\) iff \(x=y\) and \(0\) otherwise. We call the model LoRaC for _long-range contact process_.
**Definition 2** (Percolation).: We say that the model **percolates** if there exists an infinite sequence of open vertices and edges such that
\[(t_{0},x_{0})\to(t_{0}+1,x_{1})\to(t_{0}+2,x_{2})\to\ldots\]
almost surely. In this setting, an infection starting in \(x_{0}\) at time \(t_{0}\) will spread through open edges and vertices and therefore survive forever.
If \(\alpha\leq 1\), then each vertex has infinitely many outgoing edges and therefore we already have an infinite number of infected houses in the first step as well as all subsequent steps. Therefore this case is trivial. If \(\alpha>1\) however, the infection may die out in certain regimes.
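Definition 1 can be explored with a Monte Carlo sketch on a finite window (an approximation, since the true model lives on all of \(\mathbb{Z}\)). Sites are good with probability \(1-q^{(x)}\) and lockdowns occur with probability \(q^{(t)}\), following the verbal description above; all parameter values below are illustrative:

```python
import random

def simulate_lorac(steps, width, p, q_t, q_x, alpha, seed=0):
    """Simulate the LoRaC on the window {-width, ..., width}.
    Returns the list of infected sets, one per time step
    (an empty set means the infection has died out)."""
    rng = random.Random(seed)
    window = range(-width, width + 1)
    good = {x for x in window if rng.random() >= q_x}    # good sites
    infected = {0} & good                 # seed the infection at the origin
    history = [set(infected)]
    for _ in range(steps):
        lockdown = rng.random() < q_t     # bad t: only x -> x edges stay open
        open_site = {x for x in good if rng.random() < p}
        nxt = set()
        for x in infected:
            for y in window:
                if lockdown and y != x:
                    continue
                # edge (t, x) -> (t+1, y) open with prob (1 + |y - x|)^(-alpha)
                if rng.random() < (1 + abs(y - x)) ** (-alpha) and y in open_site:
                    nxt.add(y)
        infected = nxt
        history.append(set(infected))
    return history

runs = simulate_lorac(steps=15, width=25, p=0.9, q_t=0.2, q_x=0.1, alpha=2.0, seed=1)
```

Two sanity checks are immediate: for \(p=0\) every vertex is closed and the infection dies after one step, and infected sites always form a subset of the good sites inside the window.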
Figure 1: We start with an infection in the origin which starts infecting other houses – preferably close ones. Lockdowns happen at \(t=4\) and \(5\), so no spreading occurs during this time. Infections still recover at any time with probability \(1-p\). Note that, here and elsewhere, we always assume time to flow **from top to bottom**.
**Proposition 3** (Extinction).:
1. _Let_ \(q^{(t)},q^{(x)}\in(0,1)\) _and_ \(\alpha>1\) _be given. Then, there exists_ \(p_{c}\in(0,1)\) _such that for every_ \(p<p_{c}\)_, the model does not percolate._
2. _Let_ \(q^{(t)},q^{(x)},p\in(0,1)\) _be given. Then, there exists_ \(\alpha_{c}>1\) _such that for every_ \(\alpha>\alpha_{c}\)_, the model does not percolate._
Proof.: Points 1 and 2 follow from a simple branching process argument. In these cases, we completely ignore the environment since it benefits extinction. \(\alpha>1\) implies that the number of potential offspring has expectation at most \(2\zeta(\alpha)-1\), where
\[\zeta(\alpha):=\sum_{k=1}^{\infty}k^{-\alpha}\,.\]
Since each offspring only survives with probability \(p\), the expected number of offspring is \((2\zeta(\alpha)-1)\cdot p\), so the process dies out if we choose \(p<(2\zeta(\alpha)-1)^{-1}\). (Note that \(\zeta(\alpha)\to 1\) as \(\alpha\to\infty\).)
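The expectation bound in the proof can be checked numerically; the cutoff `n_max` is an arbitrary truncation of the infinite sums:

```python
def truncated_zeta(alpha, n_max=200000):
    """Truncation of zeta(alpha) = sum_{k >= 1} k^(-alpha)."""
    return sum(k ** (-alpha) for k in range(1, n_max + 1))

def mean_offspring(alpha, n_max=200000):
    """sum over y in Z of (1 + |y - x|)^(-alpha)
       = 1 + 2 * sum_{k >= 1} (1 + k)^(-alpha) = 2*zeta(alpha) - 1."""
    return 1.0 + 2.0 * sum((1 + k) ** (-alpha) for k in range(1, n_max))

alpha = 2.0
m = mean_offspring(alpha)   # ~2.29 for alpha = 2, since zeta(2) = pi^2/6
```

With \(\alpha=2\) the mean number of potential offspring is about \(2.29\), so any \(p\) below roughly \(0.44\) makes the dominating branching process subcritical.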
The question then becomes whether survival is actually possible. We prove a phase transition in the \(p\) parameter:
**Theorem 4** (Survival via low recovery).: _Let \(q^{(t)},q^{(x)}\in(0,1)\) and \(\alpha>1\) be given. Then, there exists \(p_{c}\in(0,1)\) such that for all \(p>p_{c}\), the LoRaC percolates._
_Remark_ (Continuous time).: Let us note that this result also holds for the continuous-time analogue of our model and the proof can be performed via discretisation arguments.
All results also apply in higher dimensions: survival in \(\mathbb{Z}\times\mathbb{Z}\) implies survival in \(\mathbb{Z}\times\mathbb{Z}^{d}\). The proof for extinction works analogously as well, with \(\alpha>d\).
### Open questions
Our main theorem is essentially a phase transition in the recovery of single infections. However, we may also ask ourselves if the process can survive not by houses staying sick long enough, but rather just infecting many houses instead. Maybe some clever renormalisation argument would already do the trick?
**Conjecture 5** (Survival via long spread).: _Given \(q^{(t)},q^{(x)},p\in(0,1)\), there exists \(\alpha_{c}>1\) such that for every \(\alpha\in(1,\alpha_{c})\), the LoRaC percolates._
A different epidemiological concern is the effectiveness of lockdowns and sparse environments. The comparison of the LoRaC to a Galton-Watson process with time-dependent offspring distribution tells us that sufficiently long lockdowns (i.e. \(q^{(t)}\) close to 1) will kill off the infections in the long run. Unfortunately, the effect of the sparse environment is more complicated to handle.
**Conjecture 6** (Extinction due to sparse environment).: _Given \(q^{(t)},p\in(0,1)\) and \(\alpha>1\), there exists \(q^{(x)}_{c}\in(0,1)\) such that for every \(q^{(x)}>q^{(x)}_{c}\), the LoRaC does not percolate._
We see that infinitely long edges are definitely required for the model to percolate. If the length of the edges was bounded, then the whole infection would be confined to a finite region since the infection is not able to cross over large gaps. However, the exact asymptotic decay of the edges is crucial and we are currently unable to deal with the case of exponential decay.
**Conjecture 7** (Fewer edges).: _The LoRaC has a phase transition even if edges are only present with probability_
\[\exp(-\alpha|y-x|)\,.\]
This case would be related to the actual "randomly stretched directed lattice" with stretches in both the temporal [10] and spatial component [1].
Unfortunately, both ideas cannot be directly combined to prove percolation. In [1], one considers extremely thin boxes whose height is exponential in their width. While the multiscale estimates would still work, the frameworks in [11, 10] restrict us to boxes which do not permit the same extreme scaling.
### Idea of proof
The setup for the proof of Theorem 4 is quite long, and it is easy to get lost in details. While, as always, the main difficulty lies in those details, they add little insight into the general idea and have already been dealt with in other works. We will not reinvent the wheel, but building a cart from it has merit in itself. The procedure is as follows:
1. We move away from Bernoulli random variables in the LoRaC and use geometric ones instead. Both model formulations are equivalent in terms of percolation, but the latter is much more convenient to use.
2. The next step lies in dividing both the time and space random environments into bands.
3. From there, we will use these bands to define \(n\) boxes: rectangular subsets of \(\mathbb{Z}\times\mathbb{Z}\). These boxes are roughly exponentially large in \(n\), and each \(n\) box is built from \(n-1\) boxes.
4. Each \(n\) box has some special vertices on the boundary which we will call (horizontal/vertical) inputs and outputs. There are exponentially many of those vertices.
Figure 2: The environment is divided into boxes at different levels. An orange vertex starts infecting everything on its way down. Boxes are well connected since \(p\) is large. The infection uses special vertices (outputs/inputs depicted as circles) to spread to other neighbouring boxes. The environment between boxes is hostile, so usually only few connections are found.
5. With high probability, \(n\) boxes are "good" which means that the aforementioned inputs and outputs are well connected. Also with high probability, the output of an \(n\) box will connect to the input of a neighbouring \(n\) box (restricted by directionality). This is graphically represented in Figure 2.
6. As \(n\to\infty\), the \(n\) boxes around the starting point will eventually all be good, which yields an infinite cluster.
We make this procedure rigorous in the next section.
## 2 Proof skeleton
In the following, we will give the bare proof skeleton leading up to the main result of phase transition. We try to keep the main ideas while omitting most details and proofs.
### Alternative model construction and coupling
We use an alternative, more convenient description of the model. Instead of considering Bernoulli random variables with parameters \(q^{(t)}\) and \(q^{(x)}\), we directly condense consecutive Bernoulli failures into geometric random variables. Therefore, we will look at the total duration of consecutive lockdowns instead of their existence at a given time. Similarly, we consider distances between houses. The transition from \(\mathbb{X}_{x}\) to \(N_{x}^{(\mathbb{X})}\) is sketched in Figure 3. In terms of percolation, both constructions are equivalent. One just loses information at which time step exactly a house recovers.
**Definition 8** (Alternative construction).: Let \(q^{(t)},q^{(x)},p\in(0,1)\) as well as \(\alpha>1\) be given. We consider independent sequences of independent geometric random variables \(N^{(\mathbb{T})}:=(N_{t}^{(\mathbb{T})})_{t\in\mathbb{Z}}\) and \(N^{(\mathbb{X})}:=(N_{x}^{(\mathbb{X})})_{x\in\mathbb{Z}}\) with parameters \(q^{(t)}\) and \(q^{(x)}\), respectively.
Consider the graph \(G=(\mathbb{Z}\times\mathbb{Z},E)\) where \(E\) consists of directed edges of the form \((t,x)\to(t+1,y)\) with \(t,x,y\in\mathbb{Z}\). We consider a mixed bond-site-percolation model on \(G\) where - given \(N^{(\mathbb{T})}\) and \(N^{(\mathbb{X})}\) - all vertices and edges are open (respectively closed) independently from each other with probability
\[\mathbb{P}\{(t,x)\text{ is open}\,|\,N^{(\mathbb{T})}\}=p^{N^{(\mathbb{T})}_{t}} \tag{1}\]
and
\[\mathbb{P}\{(t,x)\to(t+1,y)\text{ is open}\,|\,N^{(\mathbb{X})}\}=(1+d[x,y,N ^{(\mathbb{X})}])^{-\alpha}\]
Figure 3: We fix the first existing house starting from \(0\) as the new \(x=0\) and line up all subsequent houses. Only the distance between houses matters. The same can be analogously done for the lockdowns where we only care about the total duration.
where \(d[x,y,N^{(\mathbb{X})}]\) is the distance between the \(x\)-th and \(y\)-th house
\[d[x,y,N^{(\mathbb{X})}]:=\sum_{i=\min(x,y)}^{\max(x,y)-1}N_{i}^{(\mathbb{X})}\,.\]
One realisation of the condensed model is given in Figure 4.
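For concreteness, here is a minimal simulation sketch of the condensed construction, in the spirit of Figure 4. All function and parameter names are ours; the finite window is a truncation of the model, so edges leaving it are simply ignored.

```python
import math
import random

def geom(rng, q):
    """Geometric stretch N >= 1 with P(N >= l+1) = q**l, via inverse transform."""
    return 1 + int(math.log(1 - rng.random()) / math.log(q))

def simulate(T=60, W=40, q_t=0.4, q_x=0.4, alpha=3.0, p=0.95, seed=1):
    """Spread the condensed LoRaC of Definition 8 from the origin on a window
    of T time steps and 2W+1 houses; returns the set of infected (t, x)."""
    rng = random.Random(seed)
    Nt = [geom(rng, q_t) for _ in range(T)]          # total lockdown durations
    Nx = [geom(rng, q_x) for _ in range(2 * W)]      # gaps between houses
    pos = [0]                                        # physical house positions
    for g in Nx:
        pos.append(pos[-1] + g)
    # vertex openness as in Equation (1)
    is_open = [[rng.random() < p ** Nt[t] for _ in range(2 * W + 1)]
               for t in range(T)]
    current = {W} if is_open[0][W] else set()        # origin = house index W
    infected = {(0, x) for x in current}
    for t in range(T - 1):
        nxt = set()
        for x in current:
            for y in range(2 * W + 1):
                d = abs(pos[y] - pos[x])             # distance d[x, y, N]
                if is_open[t + 1][y] and rng.random() < (1 + d) ** (-alpha):
                    nxt.add(y)
        current = nxt
        infected |= {(t + 1, y) for y in nxt}
    return infected
```

With \(p=0\) every vertex is closed and the infection never starts; with the Figure 4 parameters one typically sees the same vertical and horizontal gaps carved out by large stretches.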
_Remark_ (Beyond geometric random variables).: Note that for the alternative construction to make sense, we do not actually need \(N^{(\mathbb{X})},N^{(\mathbb{T})}\) to be geometric random variables or even to be \(\mathbb{N}-\)valued. In fact, it is perfectly reasonable to assume \(N^{(\mathbb{X})},N^{(\mathbb{T})}\in\mathbb{R}_{>0}^{\mathbb{Z}}\) (which we will actually do in the following rescaling lemmas).
The following two coupling lemmas allow us to freely choose the values \(q^{(t)}\) and \(q^{(x)}\). We will be able to handle arbitrary \(\alpha\) by choosing \(p\) sufficiently large, so out of the four parameters \(q^{(t)},q^{(x)},\alpha,p\), we only need to focus on \(p\).
**Lemma 9** (Compensate \(q^{(t)}\) by \(p\)).: _Let \(\gamma>0\). Then, the LoRaC with parameters \(\gamma N^{(\mathbb{T})}\) and \(p^{1/\gamma}\) (with all other values being unchanged) has the same distribution as the one with parameters \(N^{(\mathbb{T})},p\). In particular, we may assume that \(q^{(t)}\) is arbitrarily small by choosing \(p\) accordingly close to \(1\)._
Proof.: This follows immediately from Equation (1).
**Lemma 10** (Compensate \(q^{(x)}\) via \(\alpha\)).: _Let \(\gamma\geq 1\) and consider some finite index set \(J\subset\mathbb{Z}\). Then,_
\[\Big{(}1+\sum_{i\in J}N_{i}^{(\mathbb{X})}\Big{)}^{-\alpha}\geq\Big{(}1+\sum_{i\in J}\lceil\gamma^{-1}N_{i}^{(\mathbb{X})}\rceil\Big{)}^{-\gamma\alpha}\,,\]
Figure 4: Simulation for \(q^{(x)}=q^{(t)}=0.4\), \(\alpha=3\) and \(p=0.95\) starting with an infected vertex in the origin. One can see distant infections emerging due to long edges. Areas with large \(N_{x}^{(\mathbb{X})}\) are easily visible by the vertical gaps, but one can also see thin horizontal gaps where \(N_{t}^{(\mathbb{T})}\) is large. As the infection spreads in space, it also seems to accelerate albeit often getting stuck at spatial barriers.
i.e., the LoRaC with parameters \(N^{(\mathbb{X})},\alpha\) stochastically dominates the process with \(\lceil\gamma^{-1}N^{(\mathbb{X})}\rceil,\gamma\alpha\). In particular, we may assume that \(q^{(x)}\) is arbitrarily small by taking \(\alpha\) correspondingly large in order to show percolation._
Proof.: For every \(a\geq 0\), we prove \((1+a)^{\gamma}\geq 1+\gamma a\). The statement is true for \(\gamma=1\). Differentiating in \(\gamma\) at \(\gamma\geq 1\) yields
\[(1+a)^{\gamma}\cdot\log(1+a)\geq(1+a)\cdot a/(1+a)=a\,,\]
so the statement holds for all \(\gamma\geq 1\). Finally,
\[\left(1+\sum_{i\in J}\lceil\gamma^{-1}N_{i}^{(\mathbb{X})}\rceil\right)^{ \gamma}\geq 1+\gamma\cdot\sum_{i\in J}\lceil\gamma^{-1}N_{i}^{(\mathbb{X})} \rceil\geq 1+\sum_{i\in J}N_{i}^{(\mathbb{X})}\]
which shows the claim after taking both sides to the power \(-\alpha\) (reversing the inequality).
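The direction of the domination can be spot-checked numerically: since \((1+a)^{\gamma}\geq 1+\gamma a\), the rescaled edge probability \((1+\sum\lceil\gamma^{-1}N_i\rceil)^{-\gamma\alpha}\) is the smaller one. A quick check of our own, with arbitrary sample values:

```python
import math

def edge_prob(Ns, alpha):
    """Edge probability (1 + sum of stretches)^(-alpha)."""
    return (1 + sum(Ns)) ** (-alpha)

def rescaled_edge_prob(Ns, alpha, gamma):
    """Edge probability after replacing N_i by ceil(N_i/gamma) and alpha by gamma*alpha."""
    return (1 + sum(math.ceil(n / gamma) for n in Ns)) ** (-gamma * alpha)

# the original edge probabilities dominate the rescaled ones (Lemma 10)
for Ns in ([1], [3, 5], [2, 2, 7], [10, 10, 10, 10]):
    for alpha in (1.5, 3.0):
        for gamma in (1, 2, 3.5):
            assert edge_prob(Ns, alpha) >= rescaled_edge_prob(Ns, alpha, gamma)
```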
### Environment grouping scheme
Next up is the grouping scheme for the random time and space environments. Due to familiarity, we use the framework of [10] rather than [11]. We extend it for more general values \(\mathfrak{s},\mathfrak{d}\) and add extra details to the existing procedure.
We fix two parameters
\[\mathfrak{s}\geq 32\qquad\text{and}\qquad\mathfrak{d}<1/11\,.\]
Consider stretches \(N:=(N_{i})_{i\in\mathbb{Z}}\) with \(N_{i}\in\mathbb{N}_{\geq 1}\cup\{\infty\}\) where \(N_{i}=\infty\) for at most one \(i\).
The bottom line is that, if the \(N_{i}\) are generated by extremely light-tailed iid geometric random variables, then the grouping scheme terminates almost surely. As a reference, in [10] we have \(\mathbb{P}(N_{i}\geq l+1)=(2^{-1000})^{l}\).
Notation.: From now on, \([m,n]\) will be an interval of integers, i.e.,
\[[m,n] :=\{m,\,m+1,\ldots,\,n-1,\,n\}\,,\] \[(m,n) :=[m,n]\backslash\{m,n\}\,.\]
We group indices into bands depending on how "bad" they are. An index \(i\in\mathbb{Z}\) is bad if \(N_{i}\) is large. These merge into bands which are even "worse". We do so in a way such that bad bands end up exponentially far apart. Unfortunately, a discount (depending on the distance between far apart bands) has to be introduced for the merging scheme to locally terminate almost surely for geometric \(N_{i}\).
We will consecutively define the \(k\) bands of \(N\), see Figure 5 for a rough illustration.
**Definition 11** (\(k\) bands and \(k\) labels).: The \(k\) bands and \(k\) labels are defined inductively. A \(1\)**band** is \(\{i\}\) for \(i\in\mathbb{Z}\). The \(1\)**label** of \(\{i\}\) is
\[f_{1}(i):=N_{i}\,.\]
For indices \(i,j\in\mathbb{Z}\), we set
\[D_{k}(i,j):=\#\{k\text{ bands between }i\text{ and }j\text{ not containing either}\},\]
e.g., at the current step \(k=1\), we have \(1+D_{1}(i,j)=|i-j|\).
Given a partition of \(\mathbb{Z}\) into \(k\) bands together with their \(k\) labels, the \(k+1\) bands and \(k+1\) labels are defined in the following way: First, we pick specific _merging indices_\(i,j\) satisfying
\[\min\left(f_{k}(i),f_{k}(j)\right)-\log_{\mathfrak{s}}\left(1+D_{k}(i,j)\right) >1\,. \tag{2}\]
The exact procedure for picking these is given in Algorithm 12. If no such pair exists, we terminate the merging scheme and set all \(k+1\) bands and labels to be the same as their \(k\) counterpart. Otherwise, using these \(i,j\), we update as follows:
1. Let \([m_{i},n_{i}]\) be the \(k\) band containing \(i\) and \([m_{j},n_{j}]\) the \(k\) band containing \(j\). Then, \([\tilde{m},\tilde{n}]\) is a \(k+1\) band with \(\tilde{m}:=\min\{m_{i},m_{j}\}\) and \(\tilde{n}:=\max\{n_{i},n_{j}\}\). In this case, all \(s\in[\tilde{m},\tilde{n}]\) have the \(k+1\)**label** \[f_{k+1}(s):=f_{k}(i)+f_{k}(j)-\left\lfloor\mathfrak{d}\log_{\mathfrak{s}} \left(1+D_{k}(i,j)\right)\right\rfloor\,.\] (3) Note that \(f_{k+1}(s)\geq\max\{f_{k}(i),f_{k}(j)\}+2\).
2. Let \([\tilde{m},\tilde{n}]\) as above. If \([m,n]\) is a \(k\) band with \([m,n]\cap[\tilde{m},\tilde{n}]=\emptyset\), then it is also a \(k+1\) band. In this case, all \(s\in[m,n]\) retain their label \(f_{k+1}(s):=f_{k}(s)\). Note that this condition is equivalent to \([m,n]\not\subset[\tilde{m},\tilde{n}]\).
_Remark_ (Short summary).: Each \(k\) band is an interval of integers. At each step, two \(k\) bands and everything in between merge into a bigger \(k+1\) band of larger label. In Algorithm 12, we see that \(k\) bands close to the origin are preferred. For iid geometric \(N_{i}\), the merging procedure never terminates globally since there is always something to merge.
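The merging scheme can be tried out on a finite window with the following toy implementation (ours). As a simplification, a naive left-to-right search replaces the selection rule of Algorithm 12, which only changes the order of merges; condition (2) and the label update (3) are as in Definition 11.

```python
import math

S, DISC = 32, 1 / 12   # merging parameters s and d (one admissible choice)

def merge_once(bands):
    """One merging step on a list of (m, n, label) bands sorted by position.
    Returns the new band list, or None if no pair satisfies condition (2)."""
    for a in range(len(bands)):
        for b in range(a + 1, len(bands)):
            gap = b - a - 1                                  # D_k(i, j)
            lo = min(bands[a][2], bands[b][2])
            if lo - math.log(1 + gap, S) > 1:                # condition (2)
                label = (bands[a][2] + bands[b][2]
                         - math.floor(DISC * math.log(1 + gap, S)))  # label (3)
                return bands[:a] + [(bands[a][0], bands[b][1], label)] + bands[b + 1:]
    return None

def bands_of(N):
    """Iterate merging to local termination, starting from the 1 bands."""
    bands = [(i, n_i, ) + () for i, n_i in enumerate(N)]
    bands = [(i, i, n_i) for i, n_i in enumerate(N)]
    while (nxt := merge_once(bands)) is not None:
        bands = nxt
    return bands
```

For example, two label-\(3\) bands with three label-\(1\) bands in between merge into one band of label \(6\), while the label-\(1\) bands around it stay untouched, matching the guarantee \(f_{k+1}(s)\geq\max\{f_{k}(i),f_{k}(j)\}+2\).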
Now, let us specify how exactly the merging indices in Definition 11 are chosen.
**Algorithm 12** (Finding merging indices).: _Consider candidates \(i,j\in\mathbb{Z}\) not belonging to the same \(k\) band and satisfying Equation 2._
1. First, look for the smallest candidate pair \(i,j\), that is, the \(i\in\mathbb{Z}\) with the smallest \(|i+0.1|\) (i.e., \(-|i|\) is preferred over \(|i|\)) such that \(|j|\leq|i|\).
2. If \(1+D_{k}(i,j)<(12\mathfrak{s})^{2}\), we choose \(i,j\) as our merging indices.
3. If not, we try to look for "better" candidates that are close to \(i,j\):
Figure 5: \(k\) bands and labels for \(k=1,2,5,6\). The base height for labels in the diagrams is \(1\). Curly brackets show the merging order of the \(k\) bands. After \(k=6\), the merging stops locally.
* Search for candidates with \(i^{\prime},j^{\prime}\) satisfying \(1+D_{k}(i^{\prime},j^{\prime})<(12\mathfrak{s})^{2}\) as well as \[1+D_{k}(i,j^{\prime})<(12\mathfrak{s})^{2}\qquad\text{or}\qquad 1+D_{k}(j,j^{\prime}) <(12\mathfrak{s})^{2}\,,\] i.e. \(j^{\prime}\) is not too far away from \(i\) or \(j\), then continue with \(i^{\prime},j^{\prime}\) instead of \(i,j\). (Note that \(j^{\prime}\) may coincide with \(i\) or \(j\).)
* If there are multiple candidates in the previous Step (a), take the \(j^{\prime}\) minimizing \(|j^{\prime}+0.1|\) and then the \(i^{\prime}\) minimizing \(D_{k}(i^{\prime},j^{\prime})\). These are our merging indices.
* If no such pair \(i^{\prime},j^{\prime}\in\mathbb{Z}\) exists, take \(i,j\) as the merging indices.
_Remark_ (Better candidates).: The "finding better candidates"-part is new compared to [14] and changes the order of merges. It is relevant for the proof of Theorem 24 Point 3 in the base case of simple bands (Definition 37).
Two things are worth mentioning: First, if two \(k\) bands with label \(\geq l\) are not at least \(\mathfrak{s}^{l-1}\) apart, then they will merge at some point. Second, the size of a \(k\) band (in terms of the indices it contains) is limited by its label as seen in the following.
**Lemma 13** (Band size limit, [14, Lemma 3.1]).: _If \([m,n]\) is a \(k\) band with \(f_{k}(m)=l\), then \(n-m+1\leq(\mathfrak{s}/2)^{l-1}\)._
The key result already indicated above is the local termination of the merging scheme for light-tailed \(N_{i}\).
**Lemma 14** (Exponential decay of band labels, [14, Lemma 3.4]).: _For any decay \(\mathfrak{p}\in(0,1)\), there exists a geometric parameter \(\mathfrak{q}:=\mathfrak{q}(\mathfrak{s},\mathfrak{d},\mathfrak{p})\in(0,1)\) such that, if \(N=(N_{i})_{i\in\mathbb{Z}}\) is a sequence of iid geometric random variables with \(\mathbb{P}(N_{1}\geq l+1)=\mathfrak{q}^{l}\), then for all \(J\in\mathbb{Z}\) and \(l\in\mathbb{N}\) we have_
\[\mathbb{P}\big{(}\exists k\text{ s.t. }J\text{ lies in a $k$ band with label }\geq l\big{)}\leq\mathfrak{p}^{l-1}\,.\]
_In particular, the following holds almost surely: For each \(J\in\mathbb{Z}\), there exists a \(K\in\mathbb{N}\) such that for all \(k\geq K\), the \(k\) bands containing \(J\) are identical._
Since the \(k\) bands are static at some point, we may now define the "\(k=\infty\)" bands.
**Definition 15** (Bands and labels).:
1. An (integer) interval \([m,n]\) is called a **band** (without \(k\) in front) if there exists some \(K\in\mathbb{N}\) such that \([m,n]\) is a \(k\) band for all \(k\geq K\). For \(j\in\mathbb{Z}\), the label of \(j\) is \(f(j):=\lim_{k}f_{k}(j)\). The label of a band \([m,n]\) is \(f(m)\).
2. If \(N=(N_{i})_{i\in\mathbb{Z}}\) is such that \(\mathbb{Z}\) decomposes into bands that are finite, then we call \(N\)**good**.
Note that bands and their labels are always finite, i.e., \(f(m)<\infty\), except for the (potential) band containing \(N_{i}=\infty\).
From now on, we only deal with good \(N=(N_{i})_{i\in\mathbb{Z}}\).
**Corollary 16**.: _In the setting of Lemma 14, with positive probability we may set \(N_{0}=\infty\) without changing the bands of \(N\) and only changing the label of the band containing \(0\) to \(\infty\)._
Setting \(N_{0}^{(\mathbb{T})}=\infty\) means that we consider all vertices of the form \((0,x)\) to be closed. In this way, Corollary 16 allows us to fix \(0\) as a "base height" and therefore restrict ourselves to a half space.
**Definition 17** (Neighbouring bands and regularity).: We enumerate bands as \(B_{m}^{N},m\in\mathbb{Z}\) where \(B_{0}^{N}\) is the band containing \(0\) and \(B_{1}^{N}\) is the band to the right of \(B_{0}^{N}\).
* Two bands \(B_{m}^{N}\) and \(B_{m^{\prime}}^{N}\) are called **neighbouring bands with labels**\(\geq l\) if they both have labels \(\geq l\) and there is no band with label \(\geq l\) inbetween.
* The good sequence \(N=(N_{i})_{i\in\mathbb{Z}}\) is called **regular** if for all \(l\) and all neighbouring bands \(B_{m}^{N}\) and \(B_{m^{\prime}}^{N}\) with labels \(\geq l\), we have \(|m-m^{\prime}|\in[\mathfrak{s}^{l-1},\,12\cdot\mathfrak{s}^{l-1})\), i.e., there are at least \(\mathfrak{s}^{l-1}-1\) and at most \(12\mathfrak{s}^{l-1}-2\) bands strictly between \(B_{m}^{N}\) and \(B_{m^{\prime}}^{N}\).
A regular sequence is "regular" in the sense that bands with certain labels show up regularly and are not spread too far apart. A good sequence \(N=(N_{i})_{i\in\mathbb{Z}}\) can always be made regular by artificially raising individual \(N_{i}\) (Lemma 33). We omit further details here since they are not needed to phrase the general proof skeleton. The condition of \(|m-m^{\prime}|\geq\mathfrak{s}^{l-1}\) is automatically satisfied:
**Lemma 18** ([11, Lemma 3.6]).: _If \(B_{m}^{N}\) and \(B_{m^{\prime}}^{N}\) have label \(\geq l\), \(m\neq m^{\prime}\), then \(|m-m^{\prime}|\geq\mathfrak{s}^{l-1}\)._
Proof.: If not, these bands would have merged before.
Our next object of interest is "the space between neighbouring bands" since this is where our model will build up its "bulk" before percolating through bands.
**Definition 19** (\(l\) segments).: Let \(N\) be good and \([i_{1},\,i_{2}],\,[i_{3},\,i_{4}]\) be two neighbouring bands of label \(\geq l\) (for \(N\)). Then we call \((i_{2},i_{3})\) an \(l\)**segment**. We refer to Figure 6 for an illustration of bands and segments for regular \(N\).
**Lemma 20** (Number of \(l\) segments between neighbouring bands).: _Let \(N\) be regular and \(B_{m}^{N}\), \(B_{m^{\prime}}^{N}\) be neighbouring bands of label \(\geq l+1\). Let \(\{m_{0},\ldots,m_{k}\}=\{\tilde{m}\in[m,m^{\prime}]\,|\,B_{\tilde{m}}^{N}\) has label \(\geq l\}\). Then, \(\lceil\mathfrak{s}/12\rceil\leq k<12\cdot\mathfrak{s}\). In particular, there are between \(\lceil\mathfrak{s}/12\rceil\) and \(12\cdot\mathfrak{s}\) many \(l\) segments separated by bands of label \(l\) between two neighbouring bands of label \(\geq l+1\)._
Proof.: Since \(m_{i}-m_{i-1}<12\cdot\mathfrak{s}^{l-1}\) by regularity and \(m^{\prime}-m=m_{k}-m_{0}\geq\mathfrak{s}^{l}\), we have \(k\cdot 12\cdot\mathfrak{s}^{l-1}\geq\mathfrak{s}^{l}\) which shows the first inequality. The second follows from \(m_{i}-m_{i-1}\geq\mathfrak{s}^{l-1}\) and \(m^{\prime}-m=m_{k}-m_{0}<12\cdot\mathfrak{s}^{l}\) by the same reasoning.
Apart from the termination of the merging scheme (Lemma 14), the above Lemma 20 is this section's important take-away. It tells us that we always find a minimal amount of segments between two bands. Regularity gives an upper bound.
Figure 6: Bands (green bars) and segments (curly brackets) for regular \(N\). In this picture, there are always at least four \(l\) segments between two neighbouring bands of label \(l\).
### \(n\) boxes in \(\mathbb{Z}\times\mathbb{Z}\)
The framework for the environment grouping has been established. We use it on the temporal environment with parameter \(\mathfrak{s}_{t}\) and the spatial one with \(\mathfrak{s}_{x}\). Moving along our rough proof outline of Section 1.3, we now use this grouping to build boxes. These boxes will be connected using "inputs" and "outputs" which are just vertices in special locations.
**Definition 21** (\(n\) boxes, \((m,n)\) strips and \(n\) gaps).:
* If \([t_{1},t_{2}]\) is a temporal 2 segment and if \(\{x\}\) or \(\{x-1\}\) is a spatial band of label 1, then any rectangle \([t_{1},t_{2}]\times\{x\}\) is a 1 **box**. (Equivalently: if for every spatial band \([x_{1},x_{2}]\), we have that \(x\notin(x_{1},x_{2}]\).)
* Let \(n\in\mathbb{N}_{\geq 2}\). Let \([t_{1},t_{2}]\) be a temporal \(n+1\) segment, i.e. the interval between two neighbouring bands with label \(\geq n+1\) (see Definition 19), and \((x_{1},x_{2})\) be a spatial \(n\) segment. Then, we call \[[t_{1},t_{2}]\times(x_{1},x_{2}]\] an \(n\) **box**. (Yes, \(x_{2}\) included!)
* Let \(n\in\mathbb{N}_{\geq 1}\). Let \((t_{2},t_{1}^{\prime})\) be a temporal \(n+1\) band and \((x_{1},x_{2})\) be a spatial \(m\) segment. Then, we call \[(t_{2},t_{1}^{\prime})\times(x_{1},x_{2}]\] an \((n+1,m)\) **strip**. In other words: an \((n+1,n)\) strip is the temporal interruption separating two vertically neighbouring \(n\) boxes.
* Let \(n\in\mathbb{N}_{\geq 1}\). Let \([t_{1},t_{2}]\) be a temporal \(n+1\) segment and \([x_{2},x_{1}^{\prime}]\) be a spatial \(n\) band. Then, we call \[[t_{1},t_{2}]\times[x_{2},x_{1}^{\prime}]\] an \(n\) **gap**. In other words: An \(n\) gap is the spatial interruption separating two horizontally neighbouring \(n\) boxes (starting at the right-most border of the left box).
An illustration of an \(n+1\) box is given in Figure 7.
_Remark_ (Renormalisation).: We have to use \(n+1\) rather than \(n\) bands in the temporal part because we essentially inserted a renormalisation step there. This unfortunately also introduces a lot of bloat in notation. Lemma 20 tells us that an \(n+1\) box consists of between \(\lceil\mathfrak{s}_{x}/12\rceil+1\) and \(12\mathfrak{s}_{x}+1\) many columns as well as between \(\lceil\mathfrak{s}_{t}/12\rceil\) and \(12\mathfrak{s}_{t}\) many rows of \(n\) boxes. These are separated by \(n\) gaps respectively \((n,n)\) strips.
Next, we want to formally define good boxes as well as their inputs and outputs. The directed case makes things a bit more complicated, but the multiscale arguments still work in a nice way. We will often need to connect sets of vertices with each other, so it makes sense to first introduce the following notion (slightly different from [1]):
_Notation_ ((Fully) connected sets).: Let \(A,B\subset\mathbb{Z}^{2}\) be two sets of vertices. We write
\[A\leadsto B\]
if there are \(v\in A,w\in B\) such that \(v\leadsto w\), i.e. there exists an open directed path from \(v\) to \(w\). We write
\[A\leadsto_{\text{ffc}}B\]
if for every \(v\in A\) and every \(w\in B\), we have \(v\leadsto w\). Note that
\[A\leadsto_{\text{ffc}}B\leadsto C\leadsto_{\text{ffc}}D\implies A\leadsto_{ \text{ffc}}D\,.\]
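Both relations are plain reachability statements and can be phrased as breadth-first searches. The following sketch is ours (adjacency dicts as graph representation; a vertex is counted as reaching itself):

```python
from collections import deque

def reachable(adj, v):
    """All vertices w with v ⇝ w in the directed graph adj (v included)."""
    seen, queue = {v}, deque([v])
    while queue:
        u = queue.popleft()
        for w in adj.get(u, ()):
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return seen

def connects(adj, A, B):
    """A ⇝ B: some vertex of A reaches some vertex of B."""
    return any(B & reachable(adj, a) for a in A)

def fully_connects(adj, A, B):
    """A ⇝_ffc B: every vertex of A reaches every vertex of B."""
    return all(B <= reachable(adj, a) for a in A)
```

The composition rule \(A\leadsto_{\text{ffc}}B\leadsto C\leadsto_{\text{ffc}}D\implies A\leadsto_{\text{ffc}}D\) is what lets good boxes be chained together later.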
_Remark_.: Before directly moving on to the definition of inputs and outputs, let us recall the basic idea first. Each "good" \(n\) box \(B_{n}\) will have four sets of vertices on its boundary: vertical inputs and outputs \(\mathtt{In}^{[\upharpoonright]}(B_{n})\) and \(\mathtt{Out}^{[\upharpoonright]}(B_{n})\) on its top respectively bottom side, and horizontal inputs and outputs \(\mathtt{In}^{[=]}(B_{n})\) and \(\mathtt{Out}^{[=]}(B_{n})\) on its left and right sides. Inside a good box, inputs are well connected to outputs, and the outputs of a good box connect to the inputs of neighbouring good boxes. The precise definition is inductive in \(n\).
**Definition 22** (Good boxes, inputs and outputs).:
* We call an \(n+1\) box \(B_{n+1}\)**good** (and otherwise **bad**) if the sum of the number of the following bad objects is at most \(1\): 1. \(n\) boxes inside \(B_{n+1}\), 2. \((n+1,n)\) strips between two \(n\) boxes inside \(B_{n+1}\), 3. \(n\) gaps between two \(n\) boxes inside \(B_{n+1}\).
* In the case of \(B_{n+1}\) being good, we first number its \(n\) boxes \((B_{i,j})_{1\leq i\leq l_{t},1\leq j\leq l_{x}}\) by their location (with \(i=j=1\) being top-left), where \(l_{t}\in[\lceil\mathfrak{s}_{t}/12\rceil,\,12\mathfrak{s}_{t}]\) and analogously \(l_{x}-1\in[\lceil\mathfrak{s}_{x}/12\rceil,\,12\mathfrak{s}_{x}]\). Next, we set (for some \(\kappa[=]\in\mathbb{N}\) specified in Equation (4)) \[I:=[0,\kappa[=]+4)+12\mathfrak{s}_{x}+2\,.\] Then, we can finally define the **inputs** and **outputs** of the \(n+1\) box. The vertical inputs/outputs are as follows: For \(j\in\{1,\ldots,l_{x}\}\), we set \[\mathtt{In}^{[\upharpoonright]}(B_{n+1}) :=\left\{v\in\mathtt{In}^{[\upharpoonright]}(B_{1,j})\,|\,B_{1,j}\text{ is good}\right\}\] \[\mathtt{Out}^{[\upharpoonright]}(B_{n+1}) :=\left\{v\in\mathtt{Out}^{[\upharpoonright]}(B_{l_{t},j})\,|\,B_{l_{t},j}\text{ is good}\right\}\,.\] Let \(\partial B_{n+1}\subset\mathbb{Z}\times\mathbb{Z}\) be the boundary, i.e. the set of all vertices in \(B_{n+1}\) having a neighbour outside of it. Then, the horizontal inputs/outputs are \[\mathtt{In}^{[=]}(B_{n+1}) :=\left\{v\in\mathtt{In}^{[=]}(B_{i,j})\cap\partial B_{n+1}\,|\,B_{i,j},B_{i+1,j}\text{ are valid, }j\in\{1,l_{x}\},\,i\in I\right\}\] \[\mathtt{Out}^{[=]}(B_{n+1}) :=\left\{v\in\mathtt{Out}^{[=]}(B_{i,j})\cap\partial B_{n+1}\,|\,B_{i-1,j},B_{i,j}\text{ are valid, }j\in\{1,l_{x}\},\,i\in I\right\}\] where we say that \(n\) boxes \(B_{n},B_{n}^{\prime}\) are **valid** if both are good and \(\mathtt{Out}^{[\upharpoonright]}(B_{n})\leadsto\mathtt{In}^{[\upharpoonright]}(B_{n}^{\prime})\).
We refer to Figure 8 for an illustration.
The parameters \(\mathfrak{s}_{x}\) and \(\mathfrak{s}_{t}\) roughly correspond to the width respectively height of the given boxes. Thus, they also influence the number of connectors between boxes: the larger \(\mathfrak{s}_{x}\), the larger the number of vertical connectors between vertically neighbouring boxes (since the boxes are wider). The same holds for \(\mathfrak{s}_{t}\). We will capture the minimal number of vertical (respectively horizontal) connectors via the parameters \(\kappa[\upharpoonright]\) and \(\kappa[=]\).
We set the following parameters:
\[\kappa[\upharpoonright]:=\lceil\mathfrak{s}_{x}/12\rceil-2\qquad\text{and}\qquad\kappa[=]:=\lceil\mathfrak{s}_{t}/12\rceil-2\cdot(12\mathfrak{s}_{x}+1)-4 \tag{4}\]
and assume \(\kappa[\upharpoonright]\geq 64\) (additional conditions on \(\kappa[=]\) are specified later).
_Remark_.: The spatial parameter \(\mathfrak{s}_{x}\) can simply be fixed to \(12\cdot 66\) to ensure \(\kappa[\upharpoonright]\geq 64\). The value of \(\mathfrak{s}_{t}\) (equivalently \(\kappa[=]\)) will however depend on \(\alpha\) and on a small parameter \(\mathrm{p}\) governing the probability of bad boxes, introduced later in Lemma 24. For a rough estimate on the values: we already have
\[\mathfrak{s}_{x}\geq 11\cdot 66+1\geq 700\qquad\text{and}\qquad\mathfrak{s}_{t}\geq 17\,000\,,\]
so this is quantitatively infeasible.
### Towards proving percolation
The parameters \(\kappa[\upharpoonright]\) and \(\kappa[=]\) had to be set in such a convoluted way to ensure the following connectivity inside good \(n\) boxes:
**Lemma 23** (Connecting inputs and outputs inside).: _Let \(n\in\mathbb{N}\). Let \(B_{n}\) be a good \(n\) box. Then,_
\[\mathtt{In}^{[\upharpoonright]}(B_{n})\leadsto_{\mathtt{ffc}}\mathtt{Out}^{[\upharpoonright]}(B_{n})\] \[\mathtt{In}^{[\upharpoonright]}(B_{n})\leadsto_{\mathtt{ffc}}\mathtt{Out}^{[=]}(B_{n})\] \[\mathtt{In}^{[=]}(B_{n})\leadsto_{\mathtt{ffc}}\mathtt{Out}^{[\upharpoonright]}(B_{n})\,.\]
_In particular, if \(B^{\prime}_{n}\) is a horizontally neighbouring good \(n\) box and the \(n\) gap in between is good as well, then_
\[\mathtt{In}^{[\upharpoonright]}(B_{n})\leadsto_{\mathtt{ffc}} \mathtt{Out}^{[\upharpoonright]}(B^{\prime}_{n})\,.\]
The application of this lemma can be retrospectively seen in Figure 2.
We may finally state the main auxiliary lemma for the multiscale argument. Using the probability of good \(n\) boxes, we will then be in a position to prove the main theorem on the survival of the infection (Theorem 4).
**Lemma 24** (Main auxiliary lemma, [15, Lemma 4.3]).: _Let \(\mathrm{p}\in(0,1)\) and \(\kappa[\upharpoonright]\geq 64\). For all sufficiently large \(\kappa[=]\in\mathbb{N}\) (depending on \(\mathrm{p},\kappa[\upharpoonright]\)), there exists \(p_{c}\in(0,1)\) such that in the LoRaC model for any \(p\geq p_{c}\):_
1. \(\mathbb{P}(B_{n}\text{ is good})\geq 1-\mathrm{p}^{\,n+1}\) _for any_ \(n\) _box_ \(B_{n}\)_._
2. _Let_ \(G_{n}\) _be a temporal_ \(n\) _gap (between two neighbouring_ \(n\) _boxes). Then,_ \[\mathbb{P}(G_{n}\text{ is good})\geq 1-\mathrm{p}^{\,n+1}\,.\]
3. _For an_ \((n+1,n)\) _strip_ \(\bar{S}\) _between two_ \(n\) _boxes_ \(B_{n},B_{n}^{\prime}\)_, we have_ \[\mathbb{P}(\bar{S}\text{ is good})\geq 1-\mathrm{p}^{\,n+1}\,.\]
Proof outline.: For the reader's convenience, we give a brief overview of the main steps. The complete proof will be given in Section 4.
* Point 1 follows from combinatorial estimates (Lemma 45) and induction after proving Point 2 and 3.
* \(n\) gaps are exponentially large in \(n\) (Lemma 39). Since we have long-range edges, we can guarantee Point 2 by choosing \(\kappa[=]\) large depending on \(\alpha\) (Lemma 47). We do so by crossing the whole \(n\) gap in a single jump.
* Point 3 follows from the main difficulty of the whole procedure: the "drilling" (Proposition 48). Luckily, the proof in [11] still works here.
Taking Lemma 24 Point 1, we can finally prove the existence of an infinite directed path and in particular the phase transition of the LoRaC in the parameter \(p\).
Proof of Theorem 4.: It remains to puzzle all the pieces together.
1. We first take \(\mathfrak{s}_{x}=12\cdot 66\), \(\mathrm{p}=1/4\).
2. Using Lemma 10, we may assume at the cost of \(\alpha\) that \(q^{(x)}\) is sufficiently small such that Lemma 14 holds for \(\mathfrak{d}=1/12\), in particular we may use the whole framework of Section 3.
3. Lemma 24 gives us some \(\mathfrak{s}_{t}\) and \(p_{c}\) for which it holds.
4. Using Lemma 9, we may assume at the cost of \(p\) that \(q^{(t)}\) is sufficiently small such that Section 3 can be used for that \(\mathfrak{s}_{t}\) and \(\mathfrak{d}\).
5. Corollary 16 lets us fix base height \(0\) for a positive fraction of temporal environments, i.e. \(N_{0}^{(\mathbb{T})}=\infty\).
6. Next, choose \(u=(1,42)\in\mathbb{Z}_{>0}\times\mathbb{Z}\). This lies in some \(n\) box for \(n\) large enough. By Lemma 24 Point 1 and Borel-Cantelli, there exists some \(N_{0}\) such that all the \(n\) boxes \(B_{n}\) with \(n\geq N_{0}\) containing \(u\) are good.
7. Now, take any \(v\in\texttt{In}^{[\,n\,]}(B_{N_{0}})\). Since \(N_{0}^{(\mathbb{T})}=\infty\), we have \(v\in\texttt{In}^{[\,n\,]}(B_{n})\) for every \(n\geq N_{0}\), in particular \(v\leadsto_{\mathtt{ffc}}\mathtt{Out}(B_{n})\). Therefore, \(v\leadsto w\) for infinitely many \(w\). This already yields an infinite directed path: Set \(v_{0}:=v\). Since \(v_{0}\) only has finitely many direct successors, we may choose a successor \(v_{1}\) with \(v_{0}\to v_{1}\) such that \(v_{1}\leadsto w\) for infinitely many \(w\). Continuing this scheme inductively, we obtain an infinite path \(v_{0}\to v_{1}\to v_{2}\to\dots\).
8. \(N^{(\mathbb{T})}=(N_{i}^{(\mathbb{T})})_{i\in\mathbb{Z}}\) is an iid sequence, in particular ergodic. So \[\mathbb{P}\{N^{(\mathbb{T})}\operatorname{s.t.}\mathbb{P}(\exists v\in \mathbb{Z}^{2},\,v\leadsto\infty\,|N^{(\mathbb{T})})=1\}\in\{0,1\}\,.\] Since we have proven percolation on a positive fraction of environments, it has to hold for almost all of them.
## 3 Details: environment grouping
Now that the rigorous roadmap has been laid out in Section 2, it is time to flesh it out. The main goals of the current section are:
* Showing Lemma 14, i.e., local termination of the merging scheme.
* Showing how good sequences can always be made regular and even "very regular" (Lemmas 33, 36).
* Introducing the notion of simple bands (Definition 37) and how they are well-behaved (Lemma 39).
* Splitting up very regular bands (Lemma 40).
### Local termination of merging scheme
We start by quantifying the maximal "size" of bands, i.e., giving the proof for Lemma 13.
Proof of Lemma 13.: The statement is true for \(l\leq 3\) since it implies \(m=n\). Now suppose \([m,n]\) is a \(k\) band with \(m\neq n\) and label \(f_{k}(m)=l>3\). Then, there must be some \(k^{\prime}<k\) and \(m^{\prime},n^{\prime}\) such that the \(k^{\prime}\) bands \([m,m^{\prime}]\) and \([n^{\prime},n]\) merge into \([m,n]\). We denote \(\underline{l}:=f_{k^{\prime}-1}(m)\), \(\bar{l}:=f_{k^{\prime}-1}(n)\) and \(\mathbf{N}:=D_{k^{\prime}}(m^{\prime},n^{\prime})\). Then, there are at most \(\mathbf{N}/\mathfrak{s}^{L-1}\) many \(k^{\prime}\) bands with label \(L\) between \([m,m^{\prime}]\) and \([n^{\prime},n]\) (otherwise some would have merged). Using the induction hypothesis on the \(k^{\prime}\) bands of label \(L\),
\[n-m+1\leq (m^{\prime}-m+1)+(n^{\prime}-m^{\prime}-1)+(n-n^{\prime}+1)\] \[\leq(\tfrac{\mathfrak{s}}{2})^{\underline{l}-1}+(\tfrac{\mathfrak{s}}{2})^{\bar{l}-1}+\sum_{L\geq 1}\sum\big{\{}b^{\prime}-b+1\,\big{|}\,[b,b^{\prime}]\text{ is a }k^{\prime}\text{ band with label }L\text{ in }(m^{\prime},n^{\prime})\big{\}}\] \[\leq(\tfrac{\mathfrak{s}}{2})^{\underline{l}-1}+(\tfrac{\mathfrak{s}}{2})^{\bar{l}-1}+\sum_{L\geq 1}\frac{\mathbf{N}}{\mathfrak{s}^{L-1}}\cdot(\tfrac{\mathfrak{s}}{2})^{L-1}\leq(\tfrac{\mathfrak{s}}{2})^{l-1}\,,\]
which completes the induction.
A similar computation bounds the space between two merging bands; this will be used in the proof of Lemma 27.
**Corollary 25** (Space between merging bands).: _Let the \(k\) bands \([m,m^{\prime}]\) and \([n^{\prime},n]\) merge and let \(\mathbf{N}:=D_{k}(m^{\prime},n^{\prime})\) be the number of \(k\) bands in \((m^{\prime},n^{\prime})\). Then,_
\[n^{\prime}-m^{\prime}-1\leq\tfrac{3}{2}\mathbf{N}\,.\]
Proof.: Let \(\mathbf{N}_{l}\) be the number of \(k\) bands in \((m^{\prime},n^{\prime})\) with label \(l\). Then \(\mathbf{N}_{l}\leq\mathbf{N}/(\mathfrak{s}^{l-1})\) since otherwise some would have merged first. Furthermore \(\mathbf{N}=\sum_{l}\mathbf{N}_{l}\), so
\[n^{\prime}-m^{\prime}-1=\sum\big{\{}b^{\prime}-b+1\,|\,[b,b^{ \prime}]\text{ is a $k$ band in $(m^{\prime},n^{\prime})$}\big{\}}\] \[\qquad\leq\mathbf{N}_{1}+\mathbf{N}_{2}+\sum_{l\geq 3}\mathbf{N}/( \mathfrak{s}^{l-1})(\mathfrak{s}/2)^{l-1}\leq\mathbf{N}_{1}+\mathbf{N}_{2}+ \mathbf{N}/2\leq\tfrac{3}{2}\mathbf{N},\]
which shows the claim.
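The key step above is the geometric sum over labels: the factor \((\mathfrak{s}/2)^{l-1}\) from Lemma 13 cancels against \(\mathbf{N}_{l}\leq\mathbf{N}/\mathfrak{s}^{l-1}\), so the value of \(\mathfrak{s}\) drops out entirely. A quick numerical sketch (the values of \(\mathbf{N}\) are illustrative, not from the paper):

```python
# Sketch of the geometric-sum step in the proof above:
# sum_{l >= 3} (N / s^(l-1)) * (s/2)^(l-1) = N * sum_{l >= 3} 2^-(l-1) = N / 2,
# independently of the scale parameter s.
def label_sum(N, max_label=60):
    return sum(N / 2 ** (l - 1) for l in range(3, max_label + 1))

for N in (1, 10, 1000):
    assert label_sum(N) <= N / 2            # the bound used in the proof
    assert abs(label_sum(N) - N / 2) < 1e-9 * N  # the series sums to exactly N/2
```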
Next we consider generators. They are relevant in this subsection as well as in Section 3.2. Generators of a \(k\) band are, loosely speaking, the boundary points of the \(<k\) bands that were merged to form the final \(k\) band.
**Definition 26** (Generators of a \(k\) band).: Let \([m,n]\) be a \(k\) band.
* The \(k\)**generators** of \([m,n]\) are \(m\) and \(n\).
* For \(k^{\prime}<k\), the \(k^{\prime}\)**generators** of \([m,n]\) are the \(k^{\prime}\) generators of the \(k^{\prime}\) bands containing a \(k^{\prime}+1\) generator of \([m,n]\).
* For \(1\) generators, we will omit the \(1\) and just call them **generators**.
* We call a generator \(g\) a **maximal generator** if it satisfies the following: If the \(k^{\prime}<k\) bands \([m_{1},n_{1}]\), \([m_{2},n_{2}]\) with \(g\in[m_{1},n_{1}]\) combine, then \(f_{k^{\prime}}(m_{1})\geq f_{k^{\prime}}(m_{2})\).
* One verifies that a \(k\) band \(B\) always has a maximal generator. We denote the smallest one by \(\mathfrak{g}_{k}(B)\).
The next lemma limits the possibilities of generators to be spread apart.
**Lemma 27** ([11, Lemma 3.3]).: _Let \(i_{1}<i_{2}<\dots<i_{n}\) be the generators of a \(k\) band with_
\[\sum_{j=1}^{n}f_{1}(i_{j})=m\,.\]
_Then, there exists \(C(\mathfrak{s},\mathfrak{d})>0\) such that_
\[\sum_{j=2}^{n}\lfloor\log_{2}(i_{j}-i_{j-1}+1)\rfloor\leq C(\mathfrak{s}, \mathfrak{d})m\,.\]
Proof.: If \(n=1\), we are done. For each \(j\in[2,n]\), let \(k_{j}\) be the value such that there exist \(a\) and \(b\) such that the \(k_{j}\) bands \([i_{a},i_{j-1}]\) and \([i_{j},i_{b}]\) merge into \([i_{a},i_{b}]\). Let \(\mathbf{N}_{j}\) be the number of \(k_{j}\) bands between \([i_{a},i_{j-1}]\) and \([i_{j},i_{b}]\). By the previous Corollary 25,
\[\tfrac{2}{3}(i_{j}-i_{j-1}+1)\leq 1+\mathbf{N}_{j}.\]
We have that \(n\leq m/2\) as well as
\[\sum_{j=2}^{n}\lfloor\mathfrak{d}\log_{\mathfrak{s}}(1+\mathbf{N}_{j})\rfloor \leq m\,, \tag{5}\]
whose proof will be given right after. The elementary estimate \(\lfloor xy\rfloor\geq x\lfloor y\rfloor-1\) yields the following chain of implications:
\[m\geq\sum_{j=2}^{n}\lfloor\mathfrak{d}\log_{\mathfrak{s}}(\tfrac{2}{3}(i_{j}-i_{ j-1}+1))\rfloor\geq\sum_{j=2}^{n}\mathfrak{d}\log_{\mathfrak{s}}2\cdot\lfloor\log_{2}( \tfrac{2}{3}(i_{j}-i_{j-1}+1))\rfloor-n\]
\[m\frac{3\log_{2}\mathfrak{s}}{2\mathfrak{d}}\geq\sum_{j=2}^{n}\lfloor\log_{2}( \tfrac{2}{3}(i_{j}-i_{j-1}+1))\rfloor\geq\sum_{j=2}^{n}\lfloor\log_{2}(i_{j}-i_ {j-1}+1)\rfloor-n\]
\[m\big{(}\frac{3\log_{2}\mathfrak{s}}{2\mathfrak{d}}+\frac{1}{2}\big{)}\geq\sum _{j=2}^{n}\lfloor\log_{2}(i_{j}-i_{j-1}+1)\rfloor\,.\]
The claim follows from choosing \(C(\mathfrak{s},\mathfrak{d}):=\big{(}\frac{3\log_{2}\mathfrak{s}}{2\mathfrak{ d}}+\frac{1}{2}\big{)}\).
Proof of Inequality (5).: By the assumption \(\sum_{j=1}^{n}f_{1}(i_{j})=m\), we first see that \(n\leq m/2\) since \(f_{1}(i_{j})\geq 2\) for generators. Equation (3) gives
\[f_{k_{j}+1}(i_{j})=f_{k_{j}}(i_{j-1})+f_{k_{j}}(i_{j})-\lfloor\mathfrak{d}\log _{\mathfrak{s}}(1+\mathbf{N}_{j})\rfloor\,.\]
If for example \(k_{j^{\prime}}=\min_{j}k_{j}\) is the smallest, i.e., the bands containing \(i_{j^{\prime}-1}\) and \(i_{j^{\prime}}\) combine first, this would yield
\[m=\sum_{j=1}^{n}f_{1}(i_{j})=\sum_{j=1}^{n}f_{k_{j^{\prime}}}(i_{j})=\sum_{\substack{j=1\\ j\neq j^{\prime}-1}}^{n}f_{k_{j^{\prime}}+1}(i_{j})+\lfloor\mathfrak{d}\log_{\mathfrak{s}}(1+\mathbf{N}_{j^{\prime}})\rfloor\,.\]
Continuing iteratively with \(f_{k_{j^{\prime}}+1}\) now instead of \(f_{1}\) yields
\[m=f_{k^{\prime}}(i_{j^{\prime}})+\sum_{j=2}^{n}\lfloor\mathfrak{d}\log_{ \mathfrak{s}}(1+\mathbf{N}_{j})\rfloor\geq\sum_{j=2}^{n}\lfloor\mathfrak{d} \log_{\mathfrak{s}}(1+\mathbf{N}_{j})\rfloor\,,\]
which finishes the calculation. (Even \(f_{k^{\prime}}(i_{j^{\prime}})\geq 2n\) since merges raise labels by at least \(2\).)
We are finally in a position to prove the first milestone: local termination of the merging scheme.
Proof of Lemma 14.: Let \(\mathfrak{q}^{1/2}\leq\mathfrak{p}\cdot\mathfrak{s}^{-3C(\mathfrak{s}, \mathfrak{d})}/2\) and \(l>1\). Assume that \(J\) actually lies in a \(k\) band with label \(\geq\)l and generators \(i_{1}<\dots<i_{n}\). In the case that \(i_{1}=i_{n}=J\), the claim follows
\[\mathbb{P}(f_{1}(J)\geq l)=\mathbb{P}(N_{J}\geq l)=\mathfrak{q}^{l-1}\leq \mathfrak{p}^{l-1}\,.\]
Otherwise, we continue. By the label updating procedure in Definition 11, the \(i_{j}\) satisfy
\[m:=\sum_{j=1}^{n}f_{1}(i_{j})\geq l\,.\]
By Lemma 27 above and Lemma 28 below, we have at most \(2^{C(\mathfrak{s},\mathfrak{d})m}\) choices for \(\lfloor\log_{2}(i_{2}-i_{1}+1)\rfloor,\dots,\lfloor\log_{2}(i_{n}-i_{n-1}+1)\rfloor\).
Given one such choice, we yet again have \(2^{(C(\mathfrak{s},\mathfrak{d})+\frac{1}{2})m}\) choices for \((i_{2}-i_{1}),\dots(i_{n}-i_{n-1})\):
Set \(a_{j}:=\lfloor\log_{2}(i_{j}-i_{j-1}+1)\rfloor\), in particular \(i_{j}-i_{j-1}\leq 2^{a_{j}+1}\). There are \(2^{a_{j}+1}\) possibilities for each individual \(j\), so in total for the whole ensemble
\[\prod_{j=2}^{n}2^{a_{j}+1}\leq 2^{C(\mathfrak{s},\mathfrak{d})m+n}\leq 2^{(C(\mathfrak{s},\mathfrak{d})+1/2)m}\,.\]
Furthermore, there are at most \((\mathfrak{s}/2)^{l-1}\) possible starting locations for \(i_{1}\) since by Lemma 13
\[i_{1}\leq J\leq i_{1}+(\mathfrak{s}/2)^{l-1}-1\,.\]
So in total, we have at most \((\mathfrak{s}/2)^{l-1}\cdot 2^{(2C(\mathfrak{s},\mathfrak{d})+1)m}\) choices for \(i_{1},\ldots,i_{n}\). For each choice of \(i_{1},\ldots,i_{n}\), there are at most \(2^{m}\) choices for \(f_{1}(i_{1}),\ldots,f_{1}(i_{n})\) (by Lemma 28 below), so we have at most
\[(\mathfrak{s}/2)^{l-1}\cdot 2^{(2C(\mathfrak{s},\mathfrak{d})+1)m}\cdot 2^{m} \leq\mathfrak{s}^{3C(\mathfrak{s},\mathfrak{d})m}\]
choices for the combined \(i_{j}\) and \(f_{1}(i_{j})\). Each such choice has probability \(\mathfrak{q}^{m-n}\leq\mathfrak{q}^{m/2}\) since \(\mathbb{P}\{f_{1}(i_{j})=s\}\leq\mathfrak{q}^{s-1}\). Therefore, for \(\mathfrak{q}^{1/2}\leq\mathfrak{p}\cdot\mathfrak{s}^{-3C(\mathfrak{s}, \mathfrak{d})}/2\) (in particular \(\mathfrak{q}\leq\mathfrak{p}/2\))
\[\mathbb{P}\big{(}\exists k\text{ s.t. }J\text{ lies in a }k\text{ band with label }\geq l\big{)}\leq\sum_{m\geq l}\big{[}\mathfrak{s}^{3C(\mathfrak{s},\mathfrak{d})}\mathfrak{q}^{1/2}\big{]}^{m}+\mathfrak{q}^{l-1}\] \[\leq\sum_{m\geq l}(\mathfrak{p}/2)^{m}+(\mathfrak{p}/2)^{l-1}=\mathfrak{p}^{l}\frac{1}{2^{l}(1-\mathfrak{p}/2)}+(\mathfrak{p}/2)^{l-1}\leq 2^{-(l-1)}\cdot\big{[}\mathfrak{p}^{l}+\mathfrak{p}^{l-1}\big{]}\leq\mathfrak{p}^{l-1},\]
as desired.
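The closing chain of estimates can be checked numerically; the following sketch uses the illustrative value \(\mathfrak{p}=1/4\) (the choice later made in the proof of Theorem 4):

```python
# Check the final estimate of Lemma 14 for p = 1/4 and l > 1:
#   sum_{m >= l} (p/2)^m + (p/2)^(l-1) <= p^(l-1).
p = 0.25
for l in range(2, 40):
    tail = sum((p / 2) ** m for m in range(l, 400))  # truncated geometric tail
    assert tail + (p / 2) ** (l - 1) <= p ** (l - 1)
```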
Here is the auxiliary lemma we previously used.
**Lemma 28** (Combinations of sums).: _Let \(S\in\mathbb{N}\). Then_
\[N(S) :=\#\big{\{}(a_{1},\ldots,a_{k})\,|\,a_{j}\geq 1,\sum a_{j}=S \big{\}}=2^{S-1}\,,\] \[\tilde{N}(S) :=\#\big{\{}(a_{1},\ldots,a_{k})\,|\,a_{j}\geq 1,\sum a_{j}\leq S \big{\}}\leq 2^{S}-1\,.\]
Proof.: For \(S=1\), we have \(N(1)=\tilde{N}(1)=1\). Assume both claims hold up to \(S\). Splitting a composition of \(S+1\) by its last entry gives
\[N(S+1) =\#\big{\{}(a_{1},\ldots,a_{k})\,|\,a_{j}\geq 1,\sum a_{j}=S+1\big{\}}\] \[=\#\Big{(}\bigcup_{R\leq S}\big{\{}(a_{1},\ldots,a_{k},S+1-R)\,|\,a_{j}\geq 1,\sum a_{j}=R\big{\}}\cup\{(S+1)\}\Big{)}\] \[(\text{induction}) =1+\sum_{R\leq S}2^{R-1}=2^{S}\,.\]
On the other hand,
\[\tilde{N}(S+1)=\tilde{N}(S)+N(S+1)\leq 2^{S}-1+2^{S}=2^{S+1}-1\]
proves the claim.
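Lemma 28 counts compositions; a brute-force enumeration (a sanity check, not part of the proof) confirms both formulas for small \(S\):

```python
from itertools import product

def compositions(S):
    """All tuples of positive integers with sum exactly S, via the S-1 gap choices."""
    result = []
    for bars in product((0, 1), repeat=S - 1):
        parts, current = [], 1
        for bar in bars:
            if bar:             # a bar closes the current part
                parts.append(current)
                current = 1
            else:               # no bar: the current part grows
                current += 1
        parts.append(current)
        result.append(tuple(parts))
    return result

for S in range(1, 11):
    N = len(compositions(S))                                      # sum exactly S
    N_tilde = sum(len(compositions(R)) for R in range(1, S + 1))  # sum at most S
    assert N == 2 ** (S - 1)
    assert N_tilde == 2 ** S - 1
```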
We have seen in Lemma 13 that the "size" of a band is limited by its label \(l\). To cross \(n\) gaps in our percolation model, we are more interested in the actual consecutive stretch. It turns out that this is also just an exponential in \(l\).
**Lemma 29** (Total weight of a band).: _Let \([a,b]\) be a band of label \(l\). Then,_
\[\sum_{i=a}^{b}f_{1}(i)\leq\mathfrak{s}^{l-1}\,.\]
Proof.: We have \(f_{1}(i)\leq l\) for every \(i\in[a,b]\). Using Lemma 13, we have \(|b-a|\leq(\frac{\mathfrak{s}}{2})^{l-1}\), so
\[\sum_{i=a}^{b}f_{1}(i)\leq l\cdot(\tfrac{\mathfrak{s}}{2})^{l-1}\leq\mathfrak{ s}^{l-1}\]
since \(l\leq 2^{l-1}\).
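The two ingredients, \(f_{1}(i)\leq l\) pointwise and the length bound from Lemma 13, reduce Lemma 29 to the elementary inequality \(l\leq 2^{l-1}\); a quick check over illustrative ranges of the parameters:

```python
# Lemma 29 reduces to l * (s/2)^(l-1) <= s^(l-1), i.e. to l <= 2^(l-1).
for l in range(1, 64):
    assert l <= 2 ** (l - 1)

for s in (72, 144, 700):  # illustrative even values of the scale parameter s
    for l in range(1, 30):
        assert l * (s // 2) ** (l - 1) <= s ** (l - 1)  # exact integer arithmetic
```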
Recall from Definition 17 that we may always enumerate the \((k)\) bands. The exponential decay in Lemma 14 shows that it is quite rare to encounter bands with high labels close to the origin. This is the reason why we may set \(N_{0}:=\infty\) for a positive fraction of environments \(N=(N_{i})_{i\in\mathbb{Z}}\).
**Lemma 30** (High labels near origin).: _Consider the parameter regime of Lemma 14 for \(\mathfrak{p}\) small enough such that \(24\sum_{l\geq 1}(\mathfrak{s}\mathfrak{p})^{l}<1\). Consider the event_
\[A_{l}:=\left\{\forall\text{bands }B_{m}^{N}\text{ with }1\leq|m|\leq 12\cdot \mathfrak{s}^{l},\text{ their labels are }\leq l\right\}.\]
_Then,_
\[\mathbb{P}(A_{l})\geq 1-24\cdot(\mathfrak{s}\mathfrak{p})^{l}\qquad\text{and}\qquad\mathbb{P}(\cap_{l}A_{l})>0\,,\]
_in particular, almost surely \(A_{l}\) happens infinitely often._
Proof.: By Lemma 14, we have
\[\mathbb{P}\big{(}A_{l}\big{)}\geq 1-\sum_{|m|=1}^{12\cdot\mathfrak{s}^{l}}\mathbb{P}(B_{m}^{N}\text{ has label }>l)\geq 1-24\cdot(\mathfrak{s}\mathfrak{p})^{l}\quad\text{and}\quad\mathbb{P}\big{(}\cap_{l}A_{l}\big{)}\geq 1-24\sum_{l=1}^{\infty}(\mathfrak{s}\mathfrak{p})^{l}>0\,.\]
The last statement follows from the Borel-Cantelli lemma.
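The smallness condition in Lemma 30 amounts to \(\mathfrak{s}\mathfrak{p}<1/25\), since the geometric series sums to \(\mathfrak{s}\mathfrak{p}/(1-\mathfrak{s}\mathfrak{p})\). A numerical sketch with hypothetical values of the product \(x=\mathfrak{s}\mathfrak{p}\):

```python
# The condition 24 * sum_{l >= 1} x^l < 1 (for 0 < x < 1) holds iff x < 1/25,
# since the series equals 24 * x / (1 - x).
def series(x, terms=5000):
    return 24 * sum(x ** l for l in range(1, terms + 1))

assert series(0.03) < 1   # x below the threshold 1/25: condition satisfied
assert series(0.05) > 1   # x above the threshold: condition violated
```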
Proof of Corollary 16.: This follows from Lemma 30 and noting that all other bands are sufficiently far away from \(0\) so that they do not merge.
### Regular bands
The next point on the bucket list is making \(N\) regular. \(N\) being unbounded guarantees the existence of bands of labels \(\geq l\) for all \(l\in\mathbb{N}\) and that each such band has exactly \(2\) neighbours. We omit most proofs since they are identical to the ones in [10].
**Lemma 31** (Raising labels of maximal generators, [10, Lemma 3.7]).: _Let \(N=(N_{i})_{i\in\mathbb{Z}}\) be good. Let \(B_{m}^{N}\) be a band of label \(l\) and \(i^{\prime}\in\mathbb{Z}\) be a maximal generator of \(B_{m}^{N}\). If for all bands \(B_{m^{\prime}}^{N}\) of label \(>l\), we have that \(|m-m^{\prime}|\geq\mathfrak{s}^{l}\), then the sequence_
\[\bar{N}_{i}=\begin{cases}N_{i}&i\neq i^{\prime}\\ N_{i}+1&i=i^{\prime}\end{cases}\]
_satisfies the following properties:_
1. \(B_{n,k}^{N}=B_{n,k}^{\bar{N}}\,\forall n\in\mathbb{Z},k\in\mathbb{N}\)_, i.e. all_ \(k\) _bands are identical and_ \(\bar{N}\) _is also good._
2. _If the_ \(k\) _label of_ \(B_{n,k}^{N}\) _is_ \(t\)_, then the_ \(k\) _label of_ \(B_{n,k}^{\bar{N}}\) _is_ \(t+\mathds{1}\{i^{\prime}\in B_{n,k}^{N}\}\)_._
_In particular, \(i^{\prime}\) is still a maximal generator of \(B_{m}^{\bar{N}}\)._
**Lemma 32** (Making \(N\) more regular, [10, Lemma 3.8]).: _Let \(N\) be good. For each \(L\geq 1\), there exists \(N^{L}=(N_{i}^{L})_{i\in\mathbb{Z}}\) such that_
1. \(N\leq N^{L}\leq N^{L+1}\)_,_
2. \(B_{m,k}^{N}=B_{m,k}^{N^{L}}\) _for all_ \(m\in\mathbb{Z},k\in\mathbb{N}\)_, and_
3. _if_ \(B_{m}^{N^{L}}\) _and_ \(B_{m^{\prime}}^{N^{L}}\) _are neighbouring bands with label_ \(\geq l\) _and if_ \(l\leq L\)_, then_ \[|m-m^{\prime}|\in[\mathfrak{s}^{l-1},\,3\cdot\mathfrak{s}^{l-1})\,.\]
_Furthermore, \(N^{L}\) can be chosen such that \((N^{L}_{i})_{L\in\mathbb{N}}\) is unbounded for at most one \(i\)._
**Lemma 33** (Making sequences regular, [10, Lemma 3.9]).: _Let \(N\) be good._
1. _There exists a sequence_ \(\tilde{N}\geq N\) _such that all the_ \(k\) _bands for_ \(\tilde{N}\) _are identical to the_ \(k\) _bands for_ \(N\) _and such that for neighbouring bands_ \(B_{m},\,B_{m^{\prime}}\) _of label_ \(\geq l\)_, we have_ \[|m-m^{\prime}|\in[\mathfrak{s}^{l-1},\,3\cdot\mathfrak{s}^{l-1})\,,\] _in particular,_ \(\tilde{N}\) _is regular. In this case, we have_ \(\tilde{N}=(\tilde{N}_{i})_{i\in\mathbb{Z}}\) _with_ \(\tilde{N}_{i}\in\mathbb{N}\cup\{\infty\}\) _with at most one_ \(\tilde{N}_{i}=\infty\)_. (The labels may differ between_ \(N\) _and_ \(\tilde{N}\)_.)_
2. _There exists a sequence_ \(\tilde{N}\geq N\) _such that all the_ \(k\) _bands for_ \(\tilde{N}\) _are identical to the_ \(k\) _bands for_ \(N\) _and such that for neighbouring bands_ \(B_{m},\,B_{m^{\prime}}\) _of label_ \(\geq l\)_, we have_ \[|m-m^{\prime}|\in[\mathfrak{s}^{l-1},\,6\cdot\mathfrak{s}^{l-1})\,,\] _in particular,_ \(\tilde{N}\) _is regular. In this case, we have_ \(\tilde{N}=(\tilde{N}_{i})_{i\in\mathbb{Z}}\) _with_ \(\tilde{N}_{i}\in\mathbb{N}\)_. (The labels may differ between_ \(N\) _and_ \(\tilde{N}\)_.)_
Proof.: With \(N^{L}\) from Lemma 32, we consider
\[N^{\infty}_{i}:=\lim_{L\to\infty}N^{L}_{i}\in\mathbb{N}\cup\{\infty\}\,.\]
We make the following observations:
1. If \(N^{\infty}_{i}=\infty\), then \(i\) must be the maximal generator of some band \(B^{N}_{m}\).
2. \(N^{\infty}_{i}=\infty\) for at most one \(i\). Otherwise, we would find two separate bands \(B^{N}_{m}\ni i\) and \(B^{N}_{m^{\prime}}\ni i^{\prime}\). The label of \(B^{N^{L}}_{m}\) is bounded from below by \(N^{L}_{i}\), respectively \(N^{L}_{i^{\prime}}\) for \(B^{N^{L}}_{m^{\prime}}\). So for \(l>0\) such that \(|m-m^{\prime}|<\mathfrak{s}^{l}\) and \(L\) such that \(\min(N^{L}_{i},\,N^{L}_{i^{\prime}})\geq l\), we would violate Lemma 32 Condition 3, on the minimal distance between bands.
Let \(i^{\infty}\) be the value with \(N^{\infty}_{i^{\infty}}=\infty\). We set
\[\tilde{N}_{i}=\begin{cases}\lim_{L\to\infty}N^{L}_{i}&i\neq i^{\infty}\\ \infty&i=i^{\infty}\end{cases}\,.\]
By construction, we have that neighbouring bands \(B^{\tilde{N}}_{m},\,B^{\tilde{N}}_{m^{\prime}}\) always satisfy
\[|m-m^{\prime}|\in[\mathfrak{s}^{l-1},3\cdot\mathfrak{s}^{l-1})\,,\]
showing the first statement. The second claim follows from choosing \(\tilde{N}_{i^{\infty}}:=N_{i^{\infty}}\) instead of \(\infty\).
_Remark_ (Manipulations).: The explicit construction to make bands regular as well as Lemma 30 allow us various manipulations on the environment and locations of bands as well as segments.
* In [11], we tweak the construction such that the origin lies not on one of the "border segments", but rather on the actual inside with at least two \(l\) segments distance to the bands of label \(\geq l\). Later, this ensures the existence of a circuit around the origin. This is why we always use \(12\mathfrak{s}\) for compatibility rather than just \(6\mathfrak{s}\).
* In our case here, we will do quite the opposite: On a positive fraction of environments \(N^{(\mathrm{T})}\), we may set \(N^{(\mathrm{T})}_{0}:=\infty\) without changing any bands (Corollary 16), effectively considering percolation on the half-plane \(\mathbb{Z}_{>0}\times\mathbb{Z}\). Ergodicity then yields the almost-sure existence of an infinite cluster on \(\mathbb{Z}\times\mathbb{Z}\).
### Very regular bands and simple bands
Lastly, we need a bit more information about the internal structure of bands. This is needed to obtain crossing probabilities for strips since we will break bands apart again. The short summary of being very regular is: if two \(k\) bands combine, then the space between them had to be regular. The parameter \(q\) measures the distance between those bands and will play quite an important role.
_Remark_ (\(k\) bands and \(n\) segments).: Short reminder that \(k\) band refers to the \(k\)-th merging step while \(n\) segment refers to the segment between two neighbouring (\(k\)) bands of label \(n\).
**Definition 34** (\(l\) segments (2)).: In addition to Definition 17, we will also call \((i_{2},i_{3})\) an \(l\)**segment** if there is a good sequence \(M=(M_{i})_{i\in\mathbb{Z}}\) with
\[M_{i}=N_{i}\quad\forall i\in(i_{2},i_{3})\]
and \((i_{2},i_{3})\) is an \(l\) segment for \(M\). We call the segment **regular** if it is generated by a regular sequence \(M\).
_Remark_.: The situation of the following Definition 35 is similar to Figure 6. But since the "neighbouring" \(n\) bands combine, the segments and bands in between do not have "level" \(n-1\) but rather \(q\) with \(q<n-1\).
**Definition 35** (Very regular \(k\) bands and \(n\) segments).: Let a regular sequence \(N\) be given.
1. Any \(k\) band that is a singleton \([i,i]\) is **very regular**.
2. The \(1\) segment \(\emptyset\) is **very regular**.
3. Let \([a,d]\) be a \(k\) band with label \(l\) which was formed by combining the \(\tilde{k}\) bands \([a,\,b]\) and \([c,\,d]\) into the \(\tilde{k}+1\) band \([a,\,d]\). \([a,\,d]\) is called **very regular** if there are \(b_{1}=b,b_{2},\ldots,b_{m}\) as well as \(c_{1},c_{2}\ldots,c_{m-1},c_{m}=c\) with \(m\leq 12\mathfrak{s}\) as well as a \(q\geq 1\) such that 1. All \(\tilde{k}\) bands inside the interval \([a,\,d]\) are very regular \(\tilde{k}\) bands. 2. For all \(s\), we have that \([b_{s},\,c_{s}]\) is a very regular \(q\) segment. 3. For all \(s<m\), we have that \([c_{s},\,b_{s+1}-1]\) is a very regular \(\tilde{k}\) band with label \(q\).
4. An \(n\) segment \(\mathcal{S}\) is called **very regular** if 1. \(\mathcal{S}\) is a regular \(n\) segment. (For \(n=2\) and \(\mathcal{S}=[a,\,b]\), this implies \(\mathfrak{s}\leq(b-a)+2<12\mathfrak{s}\).) 2. All \(k\) bands with labels \(n-1\) inside \(\mathcal{S}\) are very regular. 3. All \(n-1\) segments inside \(\mathcal{S}\) are very regular.
5. A band is called very regular if it is a **very regular**\(k\) band for some \(k\).
6. A regular sequence \(N\) is called **very regular** if all the bands generated by \(N\) are very regular.
The notion of "very regular" allows us to split bands into smaller parts - enabling the induction step in Proposition 48. As in Lemma 33, we make sequences very regular without changing the final band structure.
**Lemma 36** (Very regular sequences, [14, Lemma 3.12]).: _Let \(N\) be good and regular. Then, there exists \(\overline{N}\geq N\) such that \(\overline{N}\) is very regular and all bands and labels are identical under both \(\overline{N}\) and \(N\). In particular, we may always replace a regular sequence with a very regular sequence without changing its band structure nor labels._
Proof.: This is an analogue of Lemma 32 and is proven similarly (by establishing a variant of Lemma 31). The labels of the final bands being unchanged follows from the construction: to make bands very regular, one only needs to change the labels of the \(k\) bands on the "inside". But these labels do not contribute to the label of the final combined band.
There is one edge case that we have to worry about due to technical issues: We want to combine bands that are close to each other first. This led to the quite cumbersome merging scheme in Definition 11/Algorithm 12 as well as the following:
**Definition 37** (Simple \(k\) bands).:
1. Any \(k\) band that is a singleton \([i,i]\) is **simple**.
2. Let \([a,d]\) be a \(k\) band with label \(l\) which was formed by combining the \(\tilde{k}\) bands \([a,\,b]\) and \([c,\,d]\) into the \(\tilde{k}+1\) band \([a,\,d]\). \([a,\,d]\) is called **simple** if both \([a,b]\) and \([c,d]\) are simple \(\tilde{k}\) bands as well as \[1+D_{\tilde{k}}(b,c)<(12\mathfrak{s})^{2}\] (see Definition 11, Algorithm 12).
_Remark_ (\(q\) in simple bands).: By Definition 37 and Algorithm 12, we see that simple bands satisfy \(q\leq 2\) with \(q\) as in Definition 35 above. Furthermore, if
\[\mathfrak{s}\geq 72=(12)^{2}/2\,,\]
then this is even an equivalence since for \(q=3\), we would automatically have
\[1+D_{\tilde{k}}(b,c)\geq 2\cdot\mathfrak{s}^{3}>(12\mathfrak{s})^{2}\,.\]
(Using that the minimal size of a \(3\) segment is \(\mathfrak{s}^{2}\).) This allows for an easy characterisation.
**Lemma 38** (Sufficient criterion for simple bands).: _Let \([a,d]\) be a \(k\) band with label \(l\) which was formed by combining the \(\tilde{k}\) bands \([a,\,b]\) and \([c,\,d]\) into the \(\tilde{k}+1\) band \([a,\,d]\). If_
\[1+D_{\tilde{k}}(b,c)<(12\mathfrak{s})^{2}\,,\]
_then \([a,b]\) and \([c,d]\) also had to be simple \(\tilde{k}\) bands. In particular, if \(\mathfrak{s}\geq 72\) and \(q\leq 2\), then \([a,d]\) is simple._
Proof.: One checks that if either \([a,b]\) or \([c,d]\) had been non-simple, it would contradict Step 2 in the construction of \(k\) bands in Definition 11. The last statement follows from the previous remark.
The nice thing about simple bands - and the sole reason we need to look at them - is that their "stretch" grows at most linearly in \(l\) (rather than the extremely crude exponential estimate in Lemma 29):
**Lemma 39** (Maximal stretch of simple bands).: _Let \(\mathfrak{s}\geq 72\). Let \([a,d]\) be a simple \(k\) band with label \(l\geq 2\). Then,_
\[\sum_{i\in[a,d]}f_{k}(i)\leq l+(13\mathfrak{s})^{2}\cdot(l-2)/2\,.\]
Proof.: In the case of a singleton \([a,d]=[a,a]\), this is true since \(f_{k}(a)=l\). Now, assume that the claim is true for all \(k\) bands with labels \(<l\). If the \(k\) band is not a singleton, we split it up into the simple \(\tilde{k}\) bands \([a,\,b]\) and \([c,\,d]\) as before with labels \(l_{1}\) respectively \(l_{2}\), where \(l_{1}+l_{2}=l\). Since \([a,d]\) is simple and \(\mathfrak{s}\geq 72\), it is very regular with \(q\leq 2\). Therefore, there can at most be \(12\mathfrak{s}\) bands of label \(2\) between \([a,b]\) and \([c,d]\) with the rest being bands of label \(1\). Now by the induction hypothesis
\[\sum_{i\in[a,d]}f_{k}(i)= \sum_{i\in[a,b]}f_{\tilde{k}}(i)+\sum_{i\in[c,d]}f_{\tilde{k}}(i )+\sum_{i\in(b,c)}f_{k}(i)\] \[\leq\left\{l_{1}+(13\mathfrak{s})^{2}\cdot(l_{1}-2)/2\right\}+ \left\{l_{2}+(13\mathfrak{s})^{2}\cdot(l_{2}-2)/2\right\}+\left\{2\cdot 12 \mathfrak{s}+(12\mathfrak{s})^{2}\right\}\] \[\leq l+(13\mathfrak{s})^{2}\cdot(l-4)/2+(13\mathfrak{s})^{2}=l+( 13\mathfrak{s})^{2}\cdot(l-2)/2\,,\]
which shows the claim.
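The arithmetic of the induction step boils down to \((13\mathfrak{s})^{2}-(12\mathfrak{s})^{2}-24\mathfrak{s}=25\mathfrak{s}^{2}-24\mathfrak{s}\geq 0\); a check over illustrative parameter ranges (the label ranges are hypothetical):

```python
# Induction step of Lemma 39: with c = (13*s)^2 and l = l1 + l2,
#   [l1 + c*(l1-2)/2] + [l2 + c*(l2-2)/2] + [2*12*s + (12*s)^2] <= l + c*(l-2)/2.
# Everything is doubled below to stay in exact integer arithmetic.
for s in (72, 100, 700):
    c = (13 * s) ** 2
    for l1 in range(2, 16):
        for l2 in range(2, 16):
            l = l1 + l2
            lhs = (2 * l1 + c * (l1 - 2)) + (2 * l2 + c * (l2 - 2)) \
                + 2 * (24 * s + (12 * s) ** 2)
            rhs = 2 * l + c * (l - 2)
            assert lhs <= rhs
```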
We conclude the section with parameter estimates on very regular bands. These turn out to be quite crucial, in particular the upper bound for \(q\).
**Lemma 40** (Estimates for \(m,r,q\) on very regular bands).: _Assume that we have split a very regular band of label \(n\) into bands with labels \(m,r\), with the space in between having parameter \(q\). Then,_
\[m+r=n\qquad\text{for }q\leq 8 \tag{6}\]
_as well as_
\[m+r-\lfloor\mathfrak{d}q\rfloor=n+\sigma \tag{7}\]
_with \(\sigma\in\{-1,0,1\}\). Furthermore,_
\[q\leq\lfloor(2-\mathfrak{d})^{-1}n\rfloor=:\flat(n)\,. \tag{8}\]
_Note that since \(\mathfrak{d}<1/11\), we have \(\flat(n)\leq\lfloor\frac{11}{21}n\rfloor\)._
Proof.: We get to return to the label generation again (Definition 11):
\[n=m+r-\lfloor\mathfrak{d}\log_{\mathfrak{s}}(1+D)\rfloor\]
where \(D\) is the number of bands between the bands of label \(m,r\) right before combining. Since the bands are very regular, we have at most \(12\cdot\mathfrak{s}-1\) many bands of label \(q\) between them with corresponding \(q\) segments. Each \(q\) segment contains at least \(\mathfrak{s}^{q-1}\) and at most \(12\cdot\mathfrak{s}^{q-1}\) many bands. Therefore
\[\mathfrak{s}^{q-1} \leq D\leq 12\cdot\mathfrak{s}^{q-1}\cdot 12\mathfrak{s}+(12\mathfrak{s}-1)\] \[\mathfrak{s}^{q-1} \leq 1+D\leq\mathfrak{s}^{q}\cdot 13^{2}\] \[q-1 \leq\log_{\mathfrak{s}}\left(1+D\right)\leq q+2\log_{\mathfrak{s}}13\] \[\mathfrak{d}q-\mathfrak{d} \leq\mathfrak{d}\log_{\mathfrak{s}}\left(1+D\right)\leq\mathfrak{d}q+2\mathfrak{d}\log_{\mathfrak{s}}13\,.\]
If \(q\leq 8\), then \(\mathfrak{d}q+2\mathfrak{d}\log_{\mathfrak{s}}13<1\), in particular \(\lfloor\mathfrak{d}\log_{\mathfrak{s}}(1+D)\rfloor=0\). This proves Equation (6). Furthermore, since \(\mathfrak{d}(2\log_{\mathfrak{s}}13+1)\leq\frac{1}{11}(1+1)<1\), we have
\[\left|\lfloor\mathfrak{d}q\rfloor-\lfloor\mathfrak{d}\log_{\mathfrak{s}}\left( 1+D\right)\rfloor\right|\leq 1\]
which yields Equation (7). Since \(m,r>q\), we have
\[n+\text{either }0\text{ or }1\geq 2q-\lfloor\mathfrak{d}q\rfloor+2\] \[n\geq 2q-\mathfrak{d}q=q(2-\mathfrak{d})\] \[(2-\mathfrak{d})^{-1}n\geq q\,,\]
i.e., Inequality (8) since \(q\) is an integer.
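The floor estimates of Lemma 40 are easy to confirm numerically. The following Python snippet (an illustration only, not part of the proof) checks the remark after Inequality (8) for the choice \(\mathfrak{d}=1/12\) made at the start of Section 4, using exact rational arithmetic:

```python
from fractions import Fraction
from math import floor

d = Fraction(1, 12)  # the parameter 𝔡 = 1/12 fixed at the start of Section 4

def flat(n):
    """♭(n) = ⌊(2 - 𝔡)^{-1} · n⌋, computed exactly with rationals."""
    return floor(Fraction(n) / (2 - d))

# Remark after Lemma 40: since 𝔡 < 1/11, we have ♭(n) ≤ ⌊11n/21⌋.
for n in range(1, 10_000):
    assert flat(n) <= (11 * n) // 21
```

Using `Fraction` avoids any floating-point rounding near the floor boundaries.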
_Remark_ (Final remarks).: As alluded to earlier, we will use the whole "segment-band" framework for both the temporal rows and the spatial columns. In the case of the spatial columns, we will attempt to cross bad bands in a single jump, so not much of the inner structure is needed.
The temporal rows are much harder to handle. We will need to exploit that bands are very regular in order to use induction. Lemma 40 will also play a crucial role throughout Section 4.5, as it limits how thin we can make strips. The notion of simple bands is needed for the base case of \(q\leq 2\).
## 4 Details: proving percolation
We employ the band/segment grouping scheme for the time/space stretches \((N_{t}^{(\mathtt{T})})_{t\in\mathbb{Z}}\), \((N_{x}^{(\mathtt{X})})_{x\in\mathbb{Z}}\) with parameters \(\mathfrak{s}_{t},\mathfrak{s}_{x}\) and \(\mathfrak{d}=1/12\). We may assume without loss of generality that these stretches are very regular (Lemmas 33 and 36).
### Connectivity inside/between good boxes
The usual idea with multiscale/block arguments is to connect boxes of different levels with each other. Directionality adds bloat to the proofs, but the principle behind it is actually simple and graphical:
**Lemma 41** (Reachable boxes).: _Let a rectangular area of \(l_{x}\geq 2\) columns and \(l_{t}\) rows of \(n\) boxes be given, which are separated by \(n\) gaps and \((n+1,n)\) strips. Number them by_
\[(B_{i,j})_{1\leq i\leq l_{t},1\leq j\leq l_{x}}\,.\]
_Assume that Lemma 23 is true for \(n\). If at most one of the \(n\) boxes, \(n\) gaps or \((n+1,n)\) strips is bad, then for any good \(n\) boxes \(B_{i,j}\) and \(B_{i^{\prime},j^{\prime}}\) with \((i^{\prime}-i)\geq l_{x}\), we have_
\[\mathtt{In}^{[^{n}]}(B_{i,j})\rightsquigarrow_{\mathtt{ffc}}\mathtt{Out}^{[^{n }]}(B_{i^{\prime},j^{\prime}})\,.\]
Proof.: Without loss of generality, we assume that \(j\leq j^{\prime}\), otherwise we mirror the whole procedure. We sketch the connecting procedure in Figure 9 where horizontal connections are made via
\[\mathtt{In}^{[^{n}]}(B_{k,l})\rightsquigarrow_{\mathtt{ffc}}\mathtt{Out}^{[^{\Rightarrow}]}(B_{k,l})\rightsquigarrow\mathtt{In}^{[^{\Rightarrow}]}(B_{k,l+1})\rightsquigarrow_{\mathtt{ffc}}\mathtt{Out}^{[^{n}]}(B_{k,l+1})\]
and vertical ones via \(\mathtt{Out}(B_{k,l})\rightsquigarrow_{\mathtt{In}}(B_{k+1,l})\). First of all, it suffices to only look at the case with at most one bad box: If the gap between \(B_{k,l}\) and \(B_{k,l+1}\) is bad, we simply declare \(B_{k,l}\) to be bad (if it is not the starter box, otherwise take \(B_{k,l+1}\)). The same works for strips. We distinguish two cases.
Figure 9: Depicted are the schemes by which we connect boxes. In the best case, we just go to the target column and move straight down. Otherwise, we have to dodge bad boxes/connections.
1. The procedure is straight-forward. If we are currently in \(\mathtt{In}^{[^{n}]}(B_{k,l})\), we move horizontally to the target column \(j^{\prime}\) via the horizontal connections and then straight down via the vertical ones until we reach \(\mathtt{Out}^{[^{n}]}(B_{i^{\prime},j^{\prime}})\).
2. If one box is bad, we dodge it: we detour around the bad box through a neighbouring column and return to the target column afterwards. Since \((i^{\prime}-i)\geq l_{x}\geq 2\), there are enough rows to complete both the detour and the descent.

In order to count vertical connectors, we use trees of potential connection points.

**Definition 42** (\((\kappa,n)\) trees).: A \((\kappa,1)\) tree is a single vertex. For \(n\geq 2\), a \((\kappa,n)\) tree consists of \(\kappa\) many \((\kappa,n-1)\) trees such that
1. the subtrees lie in pairwise distinct good vertically neighboured \(n-1\) boxes, and
2. all of them are contained in a common \(n\) box.

**Lemma 43** (Vertical connector trees).: _Let \(B_{n}\) be a good \(n\) box. Then there exists a \((\kappa[\![i]\!],n)\) tree \(T\subset\mathtt{Out}^{[^{n}]}(B_{n})\)._
Proof.: The proof is by induction. In the case of a 1 box \(B_{1}=[t_{1},t_{2}]\times\{x\}\) we take \(T=\{(t_{2},x)\}\). For general \(n\), each row of \(B_{n}\) contains at least \(\kappa[\![i]\!]+2\) many \(n-1\) boxes, and at most one of all these boxes is bad. Therefore, there are at least \(\kappa[\![i]\!]\) many pairs of good vertically neighboured \(n-1\) boxes. By the induction hypothesis, these define \(\kappa[\![i]\!]\) many \((\kappa[\![i]\!],n-1)\) trees which satisfy Condition 2 of Definition 42 above since they lie in \(B_{n}\). Therefore, we obtain a \((\kappa[\![i]\!],n)\) tree as claimed.
This covers the case of vertical connectors. We set up the analogous framework for horizontal connectors, but here we keep things straight and explicit:
**Lemma 44** (Number of horizontal connectors between good \(n\) boxes).: _Let \(B_{n},B^{\prime}_{n}\) be neighbouring good \(n\) boxes. Then, there are at least \(\kappa[\![=]\!]^{n}\) many edges from \(\mathtt{Out}^{[\![=]\!]}(B_{n})\) to \(\mathtt{In}^{[\![=]\!]}(B^{\prime}_{n})\) crossing exactly over the \(n\) gap in between._
Proof.: In the case of 1 boxes \(B_{1}=[t_{1},t_{2}]\times\{x\}\), \(B^{\prime}_{1}=[t_{1},t_{2}]\times\{x^{\prime}\}\), every \((t,x)\in\mathtt{Out}^{[\![=]\!]}(B_{1})\) has an outgoing edge to \((t+1,x^{\prime})\in\mathtt{In}^{[\![=]\!]}(B^{\prime}_{1})\) for every \(t\in[t_{1},t_{2})\). This makes \(|t_{2}-t_{1}|\geq\lceil\mathfrak{s}_{t}/12\rceil-1\geq\kappa[\![=]\!]\) many different edges.
For the case of the \(n+1\) boxes \(B_{n+1},B^{\prime}_{n+1}\), we see by the definition of inputs/outputs (in Definition 22) that \(\mathtt{Out}^{[\![=]\!]}(B_{n+1})\) and \(\mathtt{In}^{[\![=]\!]}(B^{\prime}_{n+1})\) consist of \(\kappa[\![=]\!]+4\) many opposing \(n\) boxes if they were all valid. Since \(B_{n+1},B^{\prime}_{n+1}\) are good, at most 2 of the boxes in \(\mathtt{Out}^{[\![=]\!]}(B_{n+1})\) might not be valid, and the same holds for \(\mathtt{In}^{[\![=]\!]}(B^{\prime}_{n+1})\). Therefore, we have \(\kappa[\![=]\!]\) many opposing \(n\) boxes that may connect with each other. By the induction hypothesis, each of these contributes at least \(\kappa[\![=]\!]^{n}\) many edges, so we have \(\kappa[\![=]\!]\cdot\kappa[\![=]\!]^{n}=\kappa[\![=]\!]^{n+1}\) in total, which proves the claim.
With this, we have guaranteed that there are exponentially many potential connectors for both the vertical strips as well as horizontal gaps. This is important since we want to use the following estimate:
**Lemma 45** (Combinatorial estimate).: _Assume there is a collection of at most \(C\) "objects" that are each good with probability at least \(P_{n}\) independently from each other. Furthermore, assume that a certain object of level \(n+1\) is good if at most one of the \(C\) prior objects is bad. Then, for any \(\mathtt{p}\in(0,1)\) with \(\mathtt{p}^{n+1}\leq C^{-6}\), if \(n\geq 1\) and_
\[1-P_{n}\leq\mathtt{p}^{n+1}\,,\]
_then also_
\[1-P_{n+1}\leq\mathtt{p}^{n+2}\,.\]
Proof.: We first write \(1+k_{n}:=(1-P_{n})^{-1}\). The level \(n+1\) object is good with probability at least
\[P_{n+1}\geq(P_{n})^{C}+C\cdot(P_{n})^{C-1}\cdot(1-P_{n})\,.\]
Therefore
\[1-P_{n+1}\leq 1-\left[\left(\frac{k_{n}}{1+k_{n}}\right)^{C}+C\cdot\left( \frac{k_{n}}{1+k_{n}}\right)^{C-1}\cdot\frac{1}{1+k_{n}}\right]=\frac{(k_{n}+1 )^{C}-(k_{n})^{C}-C\cdot(k_{n})^{C-1}}{(1+k_{n})^{C}}\,.\]
The subtrahends are exactly the first two terms in this binomial expression. Therefore,
\[1-P_{n+1}=(1+ k_{n})^{-C}\cdot\sum_{i=0}^{C-2}\binom{C}{i}(k_{n})^{i}\leq\frac{ C^{3}\cdot(1+k_{n})^{C-2}}{(1+k_{n})^{C}}\] \[\leq(1+k_{n})^{-1.5}\leq(1-P_{n})^{1.5}\leq\mathtt{p}^{\,(n+1) \cdot 1.5}\leq\mathtt{p}^{\,n+2}\,,\]
where we also used \(1+k_{n}=(1-P_{n})^{-1}\geq\mathtt{p}^{\,-(n+1)}\geq C^{6}\).
_Remark_.: The lemma can be generalised to allow for \(C[n]=a_{1}\mathrm{e}^{a_{2}n}\) instead of a constant \(C\).
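As a concrete sanity check of Lemma 45 (illustrative only, with parameter values chosen for demonstration), one can evaluate the worst case exactly with rational arithmetic and confirm both the binomial identity from the proof and the conclusion \(1-P_{n+1}\leq\mathtt{p}^{n+2}\):

```python
from fractions import Fraction
from math import comb

C, n = 100, 1
p = Fraction(1, 10**6)          # then p^(n+1) = 10^-12 = C^-6, the boundary case
Pn = 1 - p**(n + 1)             # worst case: 1 - P_n equals p^(n+1) exactly

# The level n+1 object is good if at most one of the C objects is bad.
bad_next = 1 - (Pn**C + C * Pn**(C - 1) * (1 - Pn))

# Binomial identity from the proof, with 1 + k_n = (1 - P_n)^{-1} = 10^12:
k = 10**12 - 1
identity = Fraction(sum(comb(C, i) * k**i for i in range(C - 1)), (1 + k)**C)
assert bad_next == identity

# Conclusion of the lemma:
assert bad_next <= p**(n + 2)
```

Exact `Fraction` arithmetic matters here: in floating point, \(1-P_{n+1}\approx 5\cdot 10^{-21}\) would be lost to cancellation against 1.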
In our case, the "level \(n+1\)" object will be an \(n+1\) box containing up to \(C\) many \(n\) boxes, \((n+1,n)\) strips as well as \(n\) gaps in between. By construction, each \(n+1\) box will then contain at most \((12\mathfrak{s}_{x}+1)\cdot 12\mathfrak{s}_{t}\) many \(n\) boxes, so the total number of level \(n\) objects is
\[C\leq(12\mathfrak{s}_{x}+1)\cdot 12\mathfrak{s}_{t}\cdot(1+1+1)\leq 450\cdot \mathfrak{s}_{x}\mathfrak{s}_{t}\,. \tag{9}\]
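The constant in Equation (9) can be verified by a trivial brute-force check (assuming only \(\mathfrak{s}_{x}\geq 2\); the inequality is linear in \(\mathfrak{s}_{t}\), so small samples suffice):

```python
# (12·s_x + 1) · 12·s_t boxes, each counted with its strip and gap (factor 3).
for s_x in range(2, 300):
    for s_t in range(1, 300):
        C = (12 * s_x + 1) * 12 * s_t * 3
        assert C <= 450 * s_x * s_t  # Equation (9)
```

Note that for \(\mathfrak{s}_{x}=2\) the bound is attained with equality, so the constant 450 cannot be lowered without strengthening the assumption on \(\mathfrak{s}_{x}\).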
There is a small technical issue in using Lemma 45: In order to ensure a high probability for \(n\) gap crossings, we need a large number of connectors, i.e., \(\mathfrak{s}_{t}\) to be large. But this also results in a larger constant \(C\), so the gap crossing probability has to grow accordingly. The next two lemmas ensure that this circular dependency is not a problem.
**Lemma 46** (Horizontal strip crossing).: _Let \(B_{n}\) and \(B_{n}^{{}^{\prime}}\) be neighbouring good \(n\) boxes. Then_
\[\mathbb{P}\big{\{}\mathtt{Out}^{[=]}(B_{n})\not\sim\mathtt{In}^{[=]}(B_{n}^{ \prime})\big{\}}\leq\exp\big{(}-\{(1+\mathfrak{s}_{x})^{-\alpha}\kappa[=]\}^{ n}\big{)}\,.\]
Proof.: By Lemma 44, there are at least \(\kappa[=]^{n}\) suitable edges that would connect \(\mathtt{Out}^{[=]}(B_{n})\) with \(\mathtt{In}^{[=]}(B_{n}^{\prime})\) if they were open. By Lemma 29, these edges have length at most \(\mathfrak{s}_{x}^{n}\). Therefore
\[\mathbb{P}\big{\{}\mathtt{Out}^{[=]}(B_{n})\not\sim\mathtt{In}^{[=]}(B_{n}^{ \prime})\big{\}}\leq\big{(}1-\{1+\mathfrak{s}_{x}^{n}\}^{-\alpha}\big{)}^{( \kappa[=]^{n})}\leq\exp\big{(}-\{1+\mathfrak{s}_{x}\}^{-n\alpha}\cdot\kappa[=] ^{n}\big{)}\]
which yields the claim.
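The two elementary estimates behind this proof, \(1-x\leq e^{-x}\) and \((1+\mathfrak{s}_{x}^{n})^{-\alpha}\geq(1+\mathfrak{s}_{x})^{-n\alpha}\), can be sanity-checked numerically in log space (sample parameter values only; this is not part of the argument):

```python
from math import log1p

def lhs_log(s, n, alpha, kappa):
    """log of (1 - (1 + s^n)^(-alpha))^(kappa^n)."""
    return kappa**n * log1p(-(1 + s**n) ** (-alpha))

def rhs_log(s, n, alpha, kappa):
    """log of exp(-((1 + s)^(-alpha) * kappa)^n)."""
    return -(((1 + s) ** (-alpha)) * kappa) ** n

for s in (2, 5, 10):
    for n in (1, 2, 3):
        for alpha in (1.1, 2.0, 3.5):
            for kappa in (10, 100):
                assert lhs_log(s, n, alpha, kappa) <= rhs_log(s, n, alpha, kappa)
```

Comparing logarithms avoids the underflow that the doubly exponentially small probabilities would otherwise cause.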
**Lemma 47** (Ensuring high probability of horizontal strip crossings).: _Given fixed \(\mathtt{p}\), \(\mathfrak{s}_{x}\) and \(\alpha\), the following holds for \(\mathfrak{s}_{t}\) large enough (equivalently \(\kappa[\![=]\!]\) large enough): For any \(n\) gap \(G\), we have_
\[1-\mathbb{P}(G\text{ is good})\leq\min\Big{\{}\mathtt{p}^{n+1},(450\cdot \mathfrak{s}_{x}\mathfrak{s}_{t})^{-6}\Big{\}}\.\]
_In particular, we may ensure that both Lemma 24 Point 2 as well as the requirements of Lemma 45 hold for horizontal gaps._
Proof.: Using the previous lemma, we see that we only need to show
\[2\exp\big{(}-\{(1+\mathfrak{s}_{x})^{-\alpha}\kappa[=]\}^{n}\big{)}\leq\min \Big{\{}\mathtt{p}^{\,n+1},(450\cdot\mathfrak{s}_{x}\mathfrak{s}_{t})^{-6} \Big{\}}\.\]
First, by Equation (4)
\[\kappa[=]\geq\mathfrak{s}_{t}/12-25\mathfrak{s}_{x}=\mathfrak{s}_{t}/12-c\,,\]
so the requirements on horizontal crossings are met if both
\[2\exp\big{(}-\big{\{}(1+\mathfrak{s}_{x})^{-\alpha}(\mathfrak{s}_{t}/12-c) \big{\}}^{n}\big{)}\leq\exp\big{(}-\big{\{}(1+\mathfrak{s}_{x})^{-\alpha} \kappa[=]\big{\}}^{n}\big{)}\leq(450\cdot\mathfrak{s}_{x}\mathfrak{s}_{t})^{ -6}\]
and
\[\exp\big{(}-\big{\{}(1+\mathfrak{s}_{x})^{-\alpha}\kappa[=]\big{\}}^ {n}\big{)} \leq\mathrm{p}^{\,n+1}\] \[\big{\{}(1+\mathfrak{s}_{x})^{-\alpha}\kappa[=]\big{\}}^{n} \geq(n+1)\log\tfrac{1}{\mathrm{p}}\]
are satisfied, which is true for \(\mathfrak{s}_{t}\) (equivalently \(\kappa[=]\)) large enough.
### Proof of Lemma 24
We can now prove Lemma 24 provided that Proposition 48 below holds for \(n+1\). The setting is depicted in Figure 10.
**Proposition 48** (Drilling).: _Let \(S\) be a \((n,\flat(n))\) strip with \(n\geq 1\). Let \(T^{\prime}\) be a collection of \((\kappa[\![i]\!],k)\) trees on top of \(S\) and \(T\) be a \((\kappa[\![i]\!],\flat(n))\) tree on the bottom of \(S\) with \(\pi_{x}(T^{\prime})\subset\pi_{x}(T)\), where \(\pi_{x}\) is the projection onto the \(x\)-coordinate. Then,_
\[\mathbb{P}(\exists\text{ a crossing of $S$ intersecting both $T$ and $T^{\prime}$})\geq\kappa[\![i]\!]^{-\flat(n)}\cdot\#T^{\prime}\,.\]
Proof of Lemma 24.: Using Lemma 47, Point 2 is ensured by fixing some large \(\kappa[\![=]\!]\) (or equivalently \(\mathfrak{s}_{t}\)). WLOG, we assume \(\mathtt{p}\leq(450\cdot\mathfrak{s}_{x}\mathfrak{s}_{t})^{-6}\) (for Lemma 45). Then, we choose \(p\) large enough such that Lemma 24 holds for every \(n\leq\mathcal{N}\), where \(\mathcal{N}\) comes from Equation (10) below. We also require \(p^{100\mathfrak{s}_{t}^{2}}(1-e^{-1})\geq\kappa[\![i]\!]^{-1/2}\) in Equation (11).
1. We show that Point 3 holds for \(n\) given that Proposition 48 holds for \(n+1\). Let \(\mathcal{N}\in\mathbb{N}\) be large enough such that \[\kappa[\![i]\!]^{n-\flat(n)-2}\geq(n+1)\log\tfrac{1}{\mathtt{p}} \tag{10}\] for every \(n\geq\mathcal{N}\). We use Lemma 43 to first get a \((\kappa[\![i]\!],n)\) tree \(\tilde{T}\subset\mathtt{Out}^{[^{n}]}(B_{n})\) with \(\pi_{x}(\tilde{T})\subset\pi_{x}(\mathtt{In}^{[^{n}]}(B_{n}^{\prime}))\). Now, the \((n+1,n)\) strip can be divided into \((2+\kappa[\![i]\!])^{n-\flat(n+1)}\) many \((n+1,\flat(n+1))\) strips. We will choose (exactly) \(\kappa[\![i]\!]^{n-\flat(n+1)}\) disjoint \((n+1,\flat(n+1))\) strips \(S\) such that they have a \((\kappa[\![i]\!],\flat(n+1))\) tree \(T^{\prime}\) on top satisfying \(T^{\prime}\subset\tilde{T}\). By Proposition 48, the probability of crossing \(S\) is at least \(\kappa[\![i]\!]^{-\flat(n+1)}\cdot\#T^{\prime}=1/\kappa[\![i]\!]\). Since all those strips are disjoint, these events are independent. Therefore, we have \[\mathbb{P}\big{\{}\nexists\text{ a crossing of }\bar{S}\text{ intersecting }\mathtt{Out}^{[^{n}]}(B_{n}),\mathtt{In}^{[^{n}]}(B_{n}^{\prime})\big{\}}\leq(1-1/\kappa[\![i]\!])^{\kappa[\![i]\!]^{n-\flat(n+1)}}\leq\exp\big{(}-\kappa[\![i]\!]^{n-\flat(n)-2}\big{)}\leq\mathtt{p}^{\,n+1}\,,\] which shows Point 3.
Figure 10: As always, curly brackets indicate segments with the square brackets indicating bands. Whenever we consider a crossing of a \((n+1,n)\) strip, we actually try to do so in disjoint \((n+1,\flat(n+1))\) strips. Since these are “thin” objects, they may be broken down further, so we pay special attention to \((n,\flat(n))\) strips.
2. Showing Point 1 for \(n+1\) is a straight-forward application of Lemma 45 after using all the estimates on \(n\) boxes, \((n+1,n)\) strips and \(n\) gaps.
Judging by the remaining pages, one can guess that Proposition 48, i.e., **drilling**, is the most difficult part. This is also suggested by the fact that we have yet to use that \(N^{(\mathsf{T})}\) is very regular. The good news is that we can already prove the case of simple bands.
Proof of Proposition 48 for simple bands.: The case of simple bands is equivalent to \(q\leq 2\) (see Lemma 38). We assume that the temporal \(n\) band generating the \((n,k)\) strip is simple with \(k\geq n/2\). We generate crossings by going straight through a column. By Lemma 39 (and using that \(\mathfrak{s}_{t}>17^{\prime}000\)), this probability is at least
\[p^{n+(13\mathfrak{s}_{t})^{2}(n-2)/2}\geq p^{100\mathfrak{s}_{t} ^{2}n}.\]
There are \(\#T^{\prime}\) vertices (or rather columns) which potentially form an appropriate crossing if they were open. Thus, using our assumption of
\[p^{100\mathfrak{s}_{t}^{2}n}(1-e^{-1})\geq\kappa[\![i]\!]^{-n/2}\geq\kappa[\![i]\!]^{-k} \tag{11}\]
as well as Lemma 49 below
\[\mathbb{P}(\exists\text{ a cluster in }S\text{ connecting }T\text{ and }T^{\prime})\geq 1-(1-p^{100\mathfrak{s}_{t}^{2}n})^{\#T^{\prime}}\] \[\geq\min\Big{\{}1-e^{-1},\,\#T^{\prime}\cdot p^{100\mathfrak{s}_{t}^{2}n}(1-e^{-1})\Big{\}}\geq\kappa[\![i]\!]^{-k}\#T^{\prime}\,,\]
which proves the case of simple bands. (Note that \(\#T^{\prime}\leq\kappa[\![i]\!]^{k-1}\).)
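The exponent comparison used at the start of this proof, \(n+(13\mathfrak{s}_{t})^{2}(n-2)/2\leq 100\mathfrak{s}_{t}^{2}n\), in fact holds for every \(n\geq 1\) and every \(\mathfrak{s}_{t}\geq 1\); a quick brute-force confirmation (illustration only):

```python
from fractions import Fraction

def exponent(n, s_t):
    """Exponent n + (13 s_t)^2 (n-2)/2 from Lemma 39, as an exact rational."""
    return n + Fraction((13 * s_t) ** 2 * (n - 2), 2)

for s_t in (1, 100, 17_001):
    for n in range(1, 200):
        # Since 0 < p < 1, p^exponent >= p^(100 s_t^2 n) amounts to:
        assert exponent(n, s_t) <= 100 * s_t**2 * n
```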
Here is the auxiliary lemma which we have already used and will continue to use.
**Lemma 49** ([11, Lemma 4.2]).: _For any \(c,p_{1},\ldots,p_{n}\) with \(0<p_{i}<1\) and \(a:=\sum_{1}^{n}p_{i}\), we have_
\[1-\prod_{i=1}^{n}(1-p_{i})\geq\min\big{\{}1-e^{-c},\,\tfrac{a}{c}(1-e^{-c}) \big{\}}\.\]
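Lemma 49 is easy to test empirically. A randomized check (illustration only) over many tuples \((p_{1},\ldots,p_{n})\) and several values of \(c\), including the values \(c=1\) and \(c=2.31\) used in this section:

```python
import math
import random

def lhs(ps):
    """1 - prod(1 - p_i), the probability that at least one event occurs."""
    out = 1.0
    for p in ps:
        out *= 1.0 - p
    return 1.0 - out

def rhs(ps, c):
    """The lower bound min{1 - e^-c, (a/c)(1 - e^-c)} with a = sum(p_i)."""
    a = sum(ps)
    return min(1.0 - math.exp(-c), (a / c) * (1.0 - math.exp(-c)))

rng = random.Random(0)
for _ in range(2000):
    ps = [rng.uniform(0.001, 0.999) for _ in range(rng.randint(1, 25))]
    for c in (0.5, 1.0, 2.31, 5.0):
        assert lhs(ps) >= rhs(ps, c) - 1e-12  # small slack for float rounding
```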
### Drilling: preparation
Now comes the tough part. Assume that Lemma 24 holds until \(\flat(n)\leq n-2\). We want to see that we can **drill** through arbitrary \((n,\flat(n))\) strips \(S\), i.e., that Proposition 48 holds even for \(q\geq 3\). We will use that the temporal stretches \(N^{(\mathsf{T})}\) are very regular to break up \(S\) into three smaller parts, see Figure 11, with the remaining variables being introduced during the course of this section. On the top, we have a \((m,\flat(n))\) strip \(S_{m}\). On the bottom, we have a \((r,\flat(n))\) strip \(S_{r}\). In the middle, there are up to \(12\mathfrak{s}_{t}\) rows of \(q-1\) boxes separated by \((q,q-1)\) strips. Lemma 40 will be crucial in our endeavour.
The outline of the remaining proof is as follows. If
1. there are "enough" crossings of \(S_{m}\) which intersect \(T^{\prime}\) (Equation (12), Lemma 51),
2. these crossings survive through the column of \(q-1\) boxes to \(S_{r}\) (Lemma 53),
3. one of these survivors connects in \(S_{r}\) to \(T\) (Proposition 48),
then there exists a crossing of \(S\) intersecting \(T^{\prime}\) and \(T\). For Event \(B\), a single crossing survives with probability at least \(0.99\) (Lemma 53). This is a rather simple calculation. As for the rest, the technicalities are more difficult than the actual proof.
In Lemma 50, we pool together small strips and estimate the probability of a crossing happening for at least one of them. Then, we estimate the probability of Event \(A\) in Lemma 51. We do so by pooling several \((m,\flat(m))\) strips together so that each such collection has a sufficiently high probability of crossing \(S_{m}\). Lemma 50 allows us to pool together all survivors from Event \(A\) to obtain a lower bound on the probability of Event \(C\). Finally, Proposition 48 follows from combining all of the previous calculations.
Let us briefly consider a general \((j,J)\) strip \(S^{*}\) with \(j<n\) and \(J\geq\flat(j)\). Let \(\hat{S}:=\cup\hat{S}_{i}\) be a disjoint union of \((j,\flat(j))\) strips in \(S^{*}\). Let \(T^{*}\) (the target) be a \((\kappa[i],J)\)-tree on the bottom of \(S^{*}\) which intersects each \(\hat{S}_{i}\) in a \((\kappa[i],\flat(j))\)-tree. Let \(\hat{T}\) be a union of \(l\)-trees on top of \(\hat{S}\) where \(l\leq\flat(j)\), all lying in the columns of \(T^{*}\).
**Lemma 50** (Pooling together strips for crossings, [10, Lemma 4.4]).: _Suppose Proposition 48 holds for \(j\leq n-2\). Then,_
\[\mathbb{P}(\exists\text{ a crossing of $\hat{S}$ intersecting $\hat{T}$ and $T^{*}$})\geq\min\left\{0.9,\,\tfrac{1}{3}\kappa[i]^{-\flat(j)}\cdot\#\hat{T}\right\}\,.\]
_Each such crossing is confined to its respective \((j,\flat(j))\) strip._
Proof.: \(\hat{T}\) is a union of \((\kappa[i],l)\) trees. Let \(\hat{T}=\cup\hat{T}_{i}\) where \(\hat{T}_{i}\) consists of the \((\kappa[i],l)\) trees belonging to \(\hat{T}\) that lie inside the \((j,\flat(j))\) strip \(\hat{S}_{i}\) (recall \(l\leq\flat(j)\)). By the induction hypothesis, we have
\[\mathbb{P}(\exists\text{ a crossing of $\hat{S}_{i}$ intersecting $\hat{T}_{i}$ and $T^{*}$})\geq\kappa[i]^{-\flat(j)}\cdot\#\hat{T}_{i}\,.\]
Figure 11: Drilling/generating a crossing of \(S\). The Events \(A,B,C\) together yield the crossing (bold black path) with a minimal probability depending on \(\#T^{\prime}\).
These are independent events since all the \(\hat{S}_{i}\) are disjoint. Lemma 49 with \(c=2.31\) yields
\[\mathbb{P}(\exists\text{ a crossing of }\hat{S}\text{ intersecting }\hat{T}\text{ and }T^{*})\geq 1-\prod_{i}\left(1-\mathbb{P}\{\exists\text{ a crossing of }\hat{S}_{i}\text{ intersecting }\hat{T}_{i}\text{ and }T^{*}\}\right)\] \[\geq(1-e^{-2.31})\min\left\{1,\tfrac{1}{2.31}\kappa_{[i]}^{-\flat(j)}\sum_{i}\#\hat{T}_{i}\right\}\geq\min\left\{0.9,\tfrac{1}{3}\kappa_{[i]}^{-\flat(j)}\cdot\#\hat{T}\right\}\]
which shows the claim. Furthermore, the crossing happens in one of the \(\hat{S}_{i}\).
Let us return to our \((n,\flat(n))\) strip \(S\). On the bottom of it, there is a target \((\kappa_{[i]},\flat(n))\) tree \(T\), while on top of it, there is a union of \((\kappa_{[i]},k)\) trees \(T^{\prime}\) with \(\pi_{x}(T^{\prime})\subset\pi_{x}(T)\). We also recall the parameters \(q,m\) and \(k\). Let
\[M:=\max\left\{\flat(m),\,q-1\right\}\qquad k^{\prime}:=\min\left\{k,\flat(m) \right\}.\]
and \(\underline{T}\) be a \((\kappa_{[i]},\flat(n))\) tree on the bottom of \(S_{m}\) with \(\pi_{x}(\underline{T})=\pi_{x}(T)\). This tree will act as the target for the survivors of Event \(A\). Next, we have to count the survivors.
Define \(\tilde{T}\) to be the union of \((\kappa_{[i]},q-1)\) trees in \(\underline{T}\) satisfying the following: Let \(\tilde{T}_{i}\) be a \((\kappa_{[i]},q-1)\) tree inside a \((m,M)\) strip. Then \(\tilde{T}_{i}\subset\tilde{T}\) if there are \(v_{i}^{\prime}\in T^{\prime}\) and \(\tilde{v}_{i}\in\tilde{T}_{i}\) such that \(v_{i}^{\prime}\sim\tilde{v}_{i}\) inside \(S_{m}\). Define the event
\[\mathfrak{X}:=\left\{\#\tilde{T}\geq\max\left\{\frac{\kappa_{[i]}^{q-2}\cdot\#T^{\prime}}{8\cdot\kappa_{[i]}^{M}},\,\kappa_{[i]}^{q-2}\right\}\right\}. \tag{12}\]
_Remark_ (On \(M,k^{\prime}\)).: We have to consider \((m,M)\) strips rather than \((m,\flat(m))\) strips because multiple \((m,\flat(m))\) strips might connect to the same \(q-1\) box in the case of \(q-1>\flat(m)\). This would result in double counting for \(\tilde{T}\). On the other hand, introducing \(k^{\prime}\) basically just means that we break up \((\kappa_{[i]},k)\) trees into smaller \((\kappa_{[i]},k^{\prime})=(\kappa_{[i]},\flat(m))\) trees so that they act as proper inputs for the \((m,\flat(m))\) strips.
We only count hits of \((\kappa_{[i]},q-1)\) trees since each (single) connection will yield a full tree after passing through a \(q-1\) box (or rather a \(q-1\) column in Event B later).
**Lemma 51** (Probability of "sufficiently many" crossings, [10, Lemma 4.5]).: _Suppose Proposition 48 holds for \(j\leq n-2\). Then_
\[\mathbb{P}(\mathfrak{X})\geq\min\left\{0.9,\tfrac{1}{8}\kappa_{[i]}^{-\flat(m)}\cdot\#T^{\prime}\right\}\,.\]
Proof.: Since \(\tilde{T}\) consists of \((\kappa_{[i]},q-1)\) trees and each such tree has \(\kappa_{[i]}^{q-2}\) many vertices, we have \(\#\tilde{T}\geq\kappa_{[i]}^{q-2}\) if and only if \(\tilde{T}\neq\emptyset\). In order to show \(\#\tilde{T}\geq\kappa_{[i]}^{q-2}\), it therefore suffices to show \(T^{\prime}\leadsto\underline{T}\). The proof is broken up into cases based on the size of \(\#T^{\prime}\) and the value of \(M\).
1. \(\#T^{\prime}\leq 8\cdot\kappa_{[i]}^{\flat(m)}\) and \(M=\flat(m)\). In particular, \(\flat(m)\geq q-1\). Therefore, by Lemma 50 with \(S^{*}=S_{m}\), \(\hat{S}\) a union of \((m,\flat(m))\) strips, \(T^{*}=\underline{T}\) and \(\hat{T}=T^{\prime}\), \[\mathbb{P}(\mathfrak{X})=\mathbb{P}(\#\tilde{T}\geq\kappa_{[i]}^{q-2})\geq\mathbb{P}(\exists\text{ a crossing }T^{\prime}\leadsto\underline{T}\text{ inside }S_{m})\geq\min\left\{0.9,\tfrac{1}{3}\kappa_{[i]}^{-\flat(m)}\cdot\#T^{\prime}\right\}.\]
2. \(\#T^{\prime}\leq 8\cdot\kappa_{[i]}^{M}\) and \(M=q-1\). Again \[\mathbb{P}(\mathfrak{X})=\mathbb{P}(\#\tilde{T}\geq\kappa_{[i]}^{M-1})=\mathbb{P}(\#\tilde{T}\geq\kappa_{[i]}^{q-2})\,.\]
Write \(T^{\prime}=\cup_{i=1}^{N}T_{i}\) where each \(T_{i}\) is a union of \((\kappa,k^{\prime})\) trees in a \((m,\flat(m))\) strip. Then, for all \(i\) by Lemma 50 \[\mathbb{P}\big{\{}T_{i}\leadsto\underline{T}\text{ inside a }(m,\flat(m))\text{ strip}\big{\}}\geq\min\Big{\{}0.9,\,\tfrac{1}{3} \kappa[[i]]^{-\flat(m)}\cdot\#T_{i}\Big{\}}\.\] We are done if the minimum for one of the \(i\) is \(0.9\). Otherwise, Lemma 50 concludes \[\mathbb{P}\big{\{}T^{\prime}\leadsto\underline{T}\text{ inside some }(m,\flat(m))\text{ strip}\big{\}}\geq\min\Big{\{}0.9,\,\tfrac{1}{3} \kappa[[i]]^{-\flat(m)}\cdot\#T^{\prime}\Big{\}}\.\]
3. \(\#T^{\prime}>8\cdot\kappa[[i]]^{M}\). This is the case where we actually have to establish multiple crossings in disjoint regions. Write \(T^{\prime}=\cup_{i=1}^{N^{\prime}}T^{\prime}_{i}\) where each \(T^{\prime}_{i}\) is now a union of \(k^{\prime}\) trees that belong to a union of \((m,M)\) strips \(\tilde{S}_{i}\). Do this in a way such that for each \(i\) \[3\cdot\kappa[[i]]^{M}\leq\#T^{\prime}_{i}\leq 4\cdot\kappa[[i]]^{M}\] and such that if \(i\neq j\), then the corresponding unions of \((m,M)\) strips \(\tilde{S}_{i}\) and \(\tilde{S}_{j}\) are disjoint. This is possible since each \(k^{\prime}\) tree has \(\kappa[[i]]^{k^{\prime}-1}\) vertices and \(M\geq\flat(m)\geq k^{\prime}\). Thus, \(N^{\prime}\) satisfies \[N^{\prime}\geq\frac{\#T^{\prime}}{4\cdot\kappa[[i]]^{M}}\geq\frac{8\cdot\kappa [[i]]^{M}}{4\cdot\kappa[[i]]^{M}}=2\,.\] By Lemma 50, we have with \(\#T^{\prime}_{i}\geq 3\cdot\kappa[[i]]^{M}\) \[\mathbb{P}\big{\{}T^{\prime}_{i}\leadsto\bar{T}\text{ inside some }(m,M)\text{ strip}\big{\}}\geq\min\Big{\{}0.9,\,\tfrac{1}{3} \kappa[[i]]^{-\flat(m)}\cdot\#T^{\prime}_{i}\Big{\}}=0.9\,.\] Therefore, we have \(N^{\prime}\) independent events with probability greater or equal to \(0.9\). The probability of at least \(\lceil N^{\prime}/2\rceil\) of these happening is \(\geq 0.9\). Each such event gives us a contribution of \(\kappa[[i]]^{q-2}\) to \(\#\tilde{T}\), so we see that under the event of at least \(\lceil N^{\prime}/2\rceil\) crossings happening \[\#\tilde{T}\geq\frac{N^{\prime}}{2}\cdot\kappa[[i]]^{q-2}\geq\frac{\#T^{\prime }\cdot\kappa[[i]]^{q-2}}{8\cdot\kappa[[i]]^{M}}\,.\] Therefore \[\mathbb{P}(\mathfrak{X})\geq\mathbb{P}\Big{(}\#\tilde{T}\geq\frac{\#T^{\prime }\cdot\kappa[[i]]^{q-2}}{8\cdot\kappa[[i]]^{M}}\Big{)}\geq 0.9=\min\Big{\{}0.9,\,\tfrac{1}{8} \kappa[[i]]^{-\flat(m)}\cdot\#T^{\prime}\Big{\}}\.\]
With this, all cases have been covered.
This covers Event \(A\). Next up is Event \(B\). Take a column of \(q-1\) boxes including the \((q,q-1)\) strips in between. Let us fix a survivor \(v\in\underline{T}\) from Event \(A\), that is, \(v\) satisfies \(T^{\prime}\leadsto v\). We now formalise what is meant by Event \(B\):
**Definition 52** (Good \(q-1\) columns ).: Let a column of up to \(12\mathfrak{s}_{t}\) many \(q-1\) boxes be given, including their \((q,q-1)\) strips in between. We call it a \(q-1\) **column** \(G\), and we call it **good for \(v,w\in G\)** if \(v\leadsto w\) inside \(G\).
**Lemma 53** (Probability of good \(q-1\) columns [11, Lemma 4.6]).: _Suppose Lemma 24 holds for \(q-1\leq n-2\). Consider a \(q-1\) column \(G\) and \(v,w\in G\) where \(v\) is a vertex on the top and \(w\) on the bottom of \(G\). Then,_
\[\mathbb{P}(G\text{ is good for }v,w)\geq 0.99\,.\]
Proof.: First, we see that \(G\) is good for \(v\) and \(w\) if
1. all the corresponding \(q-1\) boxes and \((q,q-1)\) strips are good and
2. \(v\in\mathtt{In}(\bar{B}_{q-1})\) with \(\bar{B}_{q-1}\) being the topmost \(q-1\) box in \(G\).
3. \(w\in\mathtt{Out}(\underline{B}_{q-1})\) with \(\underline{B}_{q-1}\) being the bottommost \(q-1\) box in \(G\).
By the induction hypothesis
\[\mathbb{P}(\text{all of the $q-1$ boxes are good})\geq(1-\mathtt{p})^{12\mathfrak{ s}_{t}}\geq 1-12\mathfrak{s}_{t}\cdot\mathtt{p}\,,\]
and
\[\mathbb{P}(\text{all of the $(q,q-1)$ strips are good})\geq(1-\mathtt{p})^{12 \mathfrak{s}_{t}}\geq 1-12\mathfrak{s}_{t}\cdot\mathtt{p}\,.\]
Next, \(v\in\mathtt{In}(\bar{B}_{q-1})\) if \(v\) lies in good \(j\) boxes for all \(j\leq q-1\). The probability of this happening is at least
\[\mathbb{P}(v\in\mathtt{In}(\bar{B}_{q-1}))\geq 1-\sum_{j\geq 1}\mathtt{p}^{j}= \frac{1-2\mathtt{p}}{1-\mathtt{p}}\geq 1-2\mathtt{p}\,.\]
The same holds for \(w\). Using \(\mathtt{p}\leq(450\mathfrak{s}_{t}\cdot\mathfrak{s}_{x})^{-6}\) yields
\[\mathbb{P}(G\text{ is good for }v,w)\geq 1-25\mathfrak{s}_{t}\cdot\mathtt{p} \geq 0.99\,,\]
which finishes the proof.
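The final numeric step, that \(\mathtt{p}\leq(450\mathfrak{s}_{t}\mathfrak{s}_{x})^{-6}\) forces the product of the four lower bounds above \(0.99\), can be confirmed exactly (sample parameter values, illustration only):

```python
from fractions import Fraction

for s_x in (2, 10, 100):
    for s_t in (4, 17_001):
        p = Fraction(1, (450 * s_t * s_x) ** 6)       # largest admissible 𝚙
        # Product of the four bounds from the proof, computed exactly:
        good = (1 - 12 * s_t * p) ** 2 * (1 - 2 * p) ** 2
        assert good >= 1 - 25 * s_t * p               # union bound (uses s_t >= 4)
        assert 1 - 25 * s_t * p >= Fraction(99, 100)
```

The slack is enormous: \(25\mathfrak{s}_{t}\mathtt{p}\) is far below \(10^{-2}\) for any admissible parameters, so the constant \(0.99\) is nowhere near tight.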
Event \(C\) corresponds to Lemma 50.
### Drilling: proof of Proposition 48
We have gathered all the parts, so it is time to combine them. Unfortunately, we have to deal with quite a lot of case distinctions.
Proof of Proposition 48.: We have already shown the case of \(q\leq 2\), which also includes the case of \(\min\{m,r\}\leq 3\). Now, we may always assume that \(m\geq 4\) as well as \(q\geq 3\). We employ our strategy of linking together the Events \(A\), \(B\) and \(C\), that is,
1. \(\mathfrak{X}\) happens on \(S_{m}\). This gives us a collection of \((\kappa[\![i]\!],q-1)\) trees \(\tilde{T}\subset\underline{T}\) on the bottom of \(S_{m}\). Each such tree has some \(v\in\tilde{T}\) with \(T^{\prime}\leadsto v\).
2. Consider \(T^{*}\) on the top of \(S_{r}\) with \(\pi_{x}(\tilde{T})=\pi_{x}(T^{*})\). There exists a crossing of \(S_{r}\) intersecting \(T^{*}\) and \(T\), i.e., some \(T^{*}\ni w\leadsto T\).
3. The \(q-1\) column of \(w\in T^{*},v[w]\in\tilde{T}\) is good.
If all these events hold, then there exists a crossing of \(S\) from \(T^{\prime}\) to \(T\) via
\[T^{\prime}\leadsto^{A}\tilde{T}\ni v[w]\leadsto^{B}w\in T^{*}\leadsto^{C}T\,.\]
By Lemma 51
\[\mathbb{P}(A)=\mathbb{P}(\mathfrak{X})\geq\min\left\{0.9,\,\tfrac{1}{8}\kappa[\![i]\!]^{-\flat(m)}\#T^{\prime}\right\}\,.\]
Under \(\mathfrak{X}\), we have
\[\#\tilde{T}\geq\max\left\{\kappa[\![i]\!]^{q-2},\,\frac{\kappa[\![i]\!]^{q-2}\cdot\#T^{\prime}}{8\cdot\kappa[\![i]\!]^{M}}\right\}\,.\]
* If now \(\#T^{\prime}\leq 8\cdot\kappa_{[i]}{}^{M}\), then \(\#T^{*}=\#T\geq\kappa_{[i]}{}^{q-2}\) and by the Lemmas 51, 53 \[\mathbb{P}(B,C\,|\,A) \geq\mathbb{P}(\exists\text{ a crossing of }S_{2}\text{ intersecting }T^{*}\text{ and }T\,|\,\#T^{*}=\kappa_{[i]}{}^{q-2})\cdot 0.99\] \[\geq 0.99\cdot\min\left\{0.9,\,\tfrac{1}{3}\kappa_{[i]}{}^{-\flat(r) }\kappa_{[i]}{}^{q-2}\right\}\,.\] If \(M=\flat(m)\), then using Equation (13) from Lemma 54 below yields \[\mathbb{P}(\exists\text{ a cluster in }S\text{ connecting }T\text{ and }T^{\prime})\geq\mathbb{P}(A)\cdot\mathbb{P}(B,C\,|\,A)\] \[\geq 0.9\cdot\tfrac{1}{8}\kappa_{[i]}{}^{-\flat(m)}\#T^{\prime}\, \cdot\,0.99\cdot\min\left\{0.9,\,\tfrac{1}{3}\kappa_{[i]}{}^{-\flat(r)} \kappa_{[i]}{}^{q-2}\right\}\] \[\geq \#T^{\prime}\cdot\tfrac{1}{27}\kappa_{[i]}{}^{-\flat(n)}+{}^{ \flat(2)}\geq\kappa_{[i]}{}^{-\flat(n)}\cdot\#T^{\prime}\,.\] For the case of \(M=q-1\), i.e. \(8\cdot\kappa_{[i]}{}^{\flat(m)}\leq\#T^{\prime}\leq 8\cdot\kappa_{[i]}{}^{M}\), using Equation (13) of Lemma 54 and \(\flat(m)>m/2\geq\lceil q/2\rceil\) yields \[\mathbb{P}(\exists\text{ a cluster in }S\text{ connecting }T\text{ and }T^{\prime})\geq\mathbb{P}(A)\cdot\mathbb{P}(B,C\,|\,A)\] \[\geq 0.9\,\cdot\,0.99\cdot\min\left\{0.9,\,\tfrac{1}{3}\kappa_{[i]}{ }^{q-2\to(r)}\right\}\geq\min\left\{0.5,\,\tfrac{1}{4}\tfrac{\kappa_{[i]}{}^{ M-1}\cdot\kappa_{[i]}{}^{\flat(m)}}{\kappa_{[i]}{}^{\flat(m)}+\flat(r)}\right\}\] \[\geq \min\left\{0.5,\,\tfrac{1}{4}\tfrac{\kappa_{[i]}{}^{M-1}\cdot \kappa_{[i]}{}^{\flat(n)}}{\kappa_{[i]}{}^{\flat(n)}-\lceil q/2\rceil-2} \right\}\geq\min\left\{0.5,\,\tfrac{\kappa_{[i]}{}^{M}\cdot 8}{\kappa_{[i]}{}^{ \flat(n)}}\right\}\geq\frac{\#T^{\prime}}{\kappa_{[i]}{}^{\flat(n)}}\,.\]
* If instead \(\#T^{\prime}\geq 8\cdot\kappa_{[i]}{}^{M}\), then using \[\#\tilde{T}=\#T^{*}\geq\frac{\#T^{\prime}\cdot\kappa_{[i]}{}^{q-2}}{8\cdot \kappa_{[i]}{}^{M}}\] and Lemma 50 and Equation (14) gives \[\mathbb{P}(C\,|\,A) \geq\mathbb{P}\Big{\{}\exists\text{ a crossing of }S_{2}\text{ intersecting }T^{*}\text{ and }T\,|\,\#T^{*}\geq\frac{\#T^{\prime}\cdot\kappa_{[i]}{}^{q-2}}{8\cdot \kappa_{[i]}{}^{M}}\Big{\}}\] \[\geq\min\left\{0.9,\,\frac{\#T^{\prime}\cdot\kappa_{[i]}{}^{q-2} }{8\cdot\kappa_{[i]}{}^{M}}\cdot\frac{1}{3\cdot\kappa_{[i]}{}^{\flat(r)}} \right\}\geq\min\left\{0.9,\,\frac{\#T^{\prime}}{24\cdot\kappa_{[i]}{}^{\flat (n)}-1}\right\}\geq 2\frac{\#T^{\prime}}{\kappa_{[i]}{}^{\flat(n)}}\,,\] where the minimum disappears again from \(\#T^{\prime}\leq\#T\leq\kappa_{[i]}{}^{\flat(n)-1}\). Lemma 53 yields \[\mathbb{P}(B\,|\,A,C) \geq 0.99\,.\] Putting everything together, we conclude the \(\#T^{\prime}\geq 8\cdot\kappa_{[i]}{}^{M}\) case: \[\mathbb{P}(\exists\text{ a cluster in }S\text{ connecting }T\text{ and }T^{\prime})\geq\mathbb{P}(A)\cdot\mathbb{P}(C\,|\,A)\cdot\mathbb{P}(B\,|\,A,C)\] \[\geq 0.9\cdot 2\cdot\frac{\#T^{\prime}}{\kappa_{[i]}{}^{\flat(n)}} \cdot 0.99\geq\frac{\#T^{\prime}}{\kappa_{[i]}{}^{\flat(n)}}\,.\]
This finishes the proof of Proposition 48.
**Lemma 54** (Extra estimates for final proof).: _Let \(m\geq 4,q\geq 3\) and \(M=\max(\flat(m),q-1)\). We have_
\[\flat(m)+\flat(r)-\lceil q/2\rceil\leq\flat(n)-2\,. \tag{13}\]
_Furthermore, we have_
\[M+\flat(r)-q\leq\flat(n)-3\,. \tag{14}\]
Proof.: If \(3\leq q\leq 8\), then \(m+r=n\) by Equation (6) in Lemma 40. In particular,
\[\flat(m)+\flat(r)\leq\flat(n)\implies\flat(m)+\flat(r)-\lceil q/2\rceil\leq \flat(n)-2\,.\]
If \(q\geq 9\), then we use
\[\lceil(2-\mathfrak{d})^{-1}(\lfloor\mathfrak{d}q\rfloor+1)\rceil\leq q/21+2 \leq\lceil q/2\rceil-2\]
to also obtain Equation (13) via Equation (7) in Lemma 40
\[m+r-\lfloor\mathfrak{d}q\rfloor \leq n+1\] \[\flat(m)+\flat(r)-\lceil(2-\mathfrak{d})^{-1}(\lfloor\mathfrak{d }q\rfloor+1)\rceil \leq\flat(n)\] \[\flat(m)+\flat(r)-\lceil q/2\rceil \leq\flat(n)-2\,.\]
For Equation (14), we need another case distinction: If \(M=\flat(m)\), then
\[M+\flat(r)-q=\left\{\flat(m)+\flat(r)-\lfloor q/2\rfloor\right\}-\lceil q/2 \rceil\leq\flat(n)-2-1\]
Else, we have \(M=q-1\), which yields
\[M+\flat(r)-q=\flat(r)-1=\flat(n)-2-\left\{\flat(m)-\lfloor q/2\rfloor\right\}.\]
Since \(\flat(m)>m/2>\lfloor(m-1)/2\rfloor\geq\lfloor q/2\rfloor\) and \(M+\flat(r)-q\) is an integer, this case also implies \(M+\flat(r)-q\leq\flat(n)-3\), i.e., Equation (14).
_Acknowledgement_.: This work was supported by the German Research Foundation under Germany's Excellence Strategy MATH+: The Berlin Mathematics Research Center, EXC-2046/1 project ID: 390685689, and the Leibniz Association within the Leibniz Junior Research Group on Probabilistic Methods for Dynamic Communication Networks as part of the Leibniz Competition.
2303.15636 | Completely realisable groups | Georgiana Fasolă, Marius Tărnăuceanu | 2023-03-27T23:33:01Z | http://arxiv.org/abs/2303.15636v1

# Completely realisable groups
###### Abstract
Given a construction \(f\) on groups, we say that a group \(G\) is \(f\)_-realisable_ if there is a group \(H\) such that \(G\cong f(H)\), and _completely \(f\)-realisable_ if there is a group \(H\) such that \(G\cong f(H)\) and every subgroup of \(G\) is isomorphic to \(f(H_{1})\) for some subgroup \(H_{1}\) of \(H\) and vice versa.
In this paper, we determine completely Aut-realisable groups. We also study \(f\)-realisable groups for \(f=Z,F,M,D,\Phi\), where \(Z(H)\), \(F(H)\), \(M(H)\), \(D(H)\) and \(\Phi(H)\) denote the center, the Fitting subgroup, the Chermak-Delgado subgroup, the derived subgroup and the Frattini subgroup of the group \(H\), respectively.
**MSC2020 :** Primary 20D30; Secondary 20D45, 20D25.
**Key words :** inverse group theory, (completely) \(f\)-realisable groups, automorphism groups, integrals of groups.
## 1 Introduction
In group theory, there are many constructions \(f\) which start from a group \(H\) and produce another group \(f(H)\). Examples of such group-theoretical constructions are: center, central quotient, derived quotient, Frattini subgroup, Fitting subgroup, Chermak-Delgado subgroup, automorphism group, Schur multiplier, other cohomology groups, and various constructions from permutation groups. For each of these constructions, there is an inverse problem:
Given a group \(G\), is there a group \(H\) such that \(G\cong f(H)\)? (1)
Several new results related to this problem have been obtained in [1, 2, 6, 9, 10] for \(f(H)=D(H)\), the derived subgroup of \(H\). Note that in these papers the group \(H\) with the property \(G\cong D(H)\) has been called an _integral_ of \(G\) by analogy with calculus. Moreover, we recall Problem 10.19 in [1] that asks to classify the groups in which all subgroups are integrable. It constitutes the starting point for our discussion.
Other results of the same type are given by [7, 12, 20, 21, 22] for \(f(H)=\Phi(H)\), the Frattini subgroup of \(H\). In this case, there is a precise characterization of finite groups \(G\) for which (1) has solutions, namely
\[G\cong\Phi(H)\mbox{ for some group }H\mbox{ if and only if }\mbox{Inn}(G)\subseteq\Phi(\mbox{Aut}(G))\]
(see [7]).
The same problem for \(f(H)=\mbox{Aut}(H)\), the automorphism group of \(H\), has been studied in [16, 17]. We also recall the well-known class of _capable groups_, i.e. the groups \(G\) such that (1) has solutions for \(f(H)=\mbox{Inn}(H)\), the inner automorphism group of \(H\). Their study was initiated by R. Baer [3] and continued in many other papers (see e.g. [5, 8]).
Inspired by these studies, we introduce the following two notions.
Given a construction \(f\) on groups, we say that a group \(G\) is
* \(f\)_-realisable_ if there is a group \(H\) such that \(G\cong f(H)\)
and
* _completely \(f\)-realisable_ if there is a group \(H\) such that:
* \(G\cong f(H)\);
* \(\forall\,G_{1}\leq G,\exists\,H_{1}\leq H\) such that \(G_{1}\cong f(H_{1})\);
* \(\forall\,H_{1}\leq H,\exists\,G_{1}\leq G\) such that \(f(H_{1})\cong G_{1}\).
Clearly, if \(f\) is monotone, then i) follows from ii) and iii). Note that, by ii), any subgroup of a completely \(f\)-realisable group is itself \(f\)-realisable; this holds, for example, for \(f=D\). Also, we observe that completely \(D\)-realisable groups1 are solutions of Problem 10.19 in [1].
Footnote 1: Another suitable name for these groups would be _completely integrable groups_. For such a group \(G\), a group \(H\) satisfying i)-iii) is called a _complete integral_ of \(G\).
Throughout this paper, we assume that the above groups \(G\) and \(H\) are both finite. In Section 2 we will determine completely \(\mathrm{Aut}\)-realisable groups, while in Section 3 we will present several results concerning completely \(f\)-realisable groups for \(f=Z,F,M,D,\Phi\), where \(Z(H)\), \(F(H)\), \(M(H)\), \(D(H)\) and \(\Phi(H)\) denote the center, the Fitting subgroup, the Chermak-Delgado subgroup, the derived subgroup and the Frattini subgroup of the group \(H\), respectively.
Most of our notation is standard and will usually not be repeated here. Elementary notions and results on groups can be found in [13, 19]. For subgroup lattice concepts we refer the reader to [18].
## 2 Completely \(\mathrm{Aut}\)-realisable groups
First of all, we recall some results of MacHale [16, 17] concerning groups \(H\) with a particular automorphism group.
**Theorem 2.1**.: _The following hold:_
1. _There is no group_ \(H\) _such that_ \(\mathrm{Aut}(H)\cong\mathbb{Z}_{m}\) _for any odd number_ \(m>1\)_,_
2. _There exists a group_ \(H\) _such that_ \(\mathrm{Aut}(H)\cong\mathbb{Z}_{p^{2}}\) _for a prime_ \(p\) _if and only if_ \(p=2\) _and_ \(H\cong\mathbb{Z}_{5}\) _or_ \(H\cong\mathbb{Z}_{10}\)_,_
3. _There exists a group_ \(H\) _such that_ \(\mathrm{Aut}(H)\cong\mathbb{Z}_{p}^{3}\) _for a prime_ \(p\) _if and only if_ \(p=2\) _and_ \(H\cong\mathbb{Z}_{24}\)_,_
4. _There exists a group_ \(H\) _such that_ \(\mathrm{Aut}(H)\cong\mathbb{Z}_{p}\times\mathbb{Z}_{p^{2}}\) _for a prime_ \(p\) _if and only if_ \(p=2\) _and_ \(H\cong\mathbb{Z}_{15}\) _or_ \(H\cong\mathbb{Z}_{20}\) _or_ \(H\cong\mathbb{Z}_{30}\)_,_
5. _There is no group_ \(H\) _such that_ \(\mathrm{Aut}(H)\cong\mathbb{Z}_{2}^{4}\)_._
Proof.: Parts (a) and (b), respectively, follow from parts (i) and (iv)(c) of Theorem 1 in [16]. Parts (c) and (d) are given by parts (iii) and (iv) of Theorem 2 in [16]. Part (e) is a summarized version of Theorem 2 from [17].
In [14], the solutions \(H\) of \(\mathrm{Aut}(H)\cong G\) have been also determined for other important classes of groups \(G\) (see e.g. Theorem 4.2 for \(G=A_{n}\), Theorem 4.4 for \(G=S_{n}\) or Theorem 6.3 for \(G=D_{2n}\)). Also, we point out another interesting result of Ledermann and Neumann [15] which states that for every \(n>0\), there exists a bound \(f(n)\) such that if \(G\) is a finite group with \(|G|\geq f(n)\), then \(|\mathrm{Aut}(G)|\geq n\).
We are now able to give a description of completely \(\mathrm{Aut}\)-realisable groups.
**Theorem 2.2**.: _A group is completely \(\operatorname{Aut}\)-realisable if and only if it is an elementary abelian \(2\)-group of rank at most \(3\)._
Proof.: Let \(G\) be a completely \(\operatorname{Aut}\)-realisable group. Then there is a group \(H\) such that:
* \(G\cong\operatorname{Aut}(H)\);
* \(\forall\,G_{1}\leq G,\exists\,H_{1}\leq H\) such that \(G_{1}\cong\operatorname{Aut}(H_{1})\);
* \(\forall\,H_{1}\leq H,\exists\,G_{1}\leq G\) such that \(\operatorname{Aut}(H_{1})\cong G_{1}\).
Since the groups \(\mathbb{Z}_{p}\) with \(p\) an odd prime are not \(\operatorname{Aut}\)-realisable by Theorem 2.1 (a), it follows that \(G\cong\operatorname{Aut}(H)\) is a \(2\)-group. Then so is \(\operatorname{Inn}(H)\cong H/Z(H)\). Let
\[|H|=p_{1}^{n_{1}}\cdots p_{k}^{n_{k}},\]
where \(p_{1}=2\) and \(p_{2}\),..., \(p_{k}\) are odd primes, and denote by \(P_{i}\) a Sylow \(p_{i}\)-subgroup of \(H\), \(\forall\,i=1,\ldots,k\). Then \(P_{2},\ldots,P_{k}\subseteq Z(H)\) and therefore
\[H=A\rtimes P_{1},\text{ where }A=\prod_{i=2}^{k}P_{i}.\]
On the other hand, we have \(n_{2}=\cdots=n_{k}=1\) because \(p^{2}\mid|H|\) implies \(p\mid|\operatorname{Aut}(H)|\) for any prime \(p\) (see e.g. [11]). Consequently,
\[A\cong\mathbb{Z}_{p_{2}\cdots p_{k}}.\]
Assume that \(H\) is not the direct product of \(A\) and \(P_{1}\). Then \(H\) contains a subgroup \(H_{1}\cong D_{2p_{j}}\) for some \(j=2,\ldots,k\). By iii), there exists \(G_{1}\leq G\) such that \(\operatorname{Aut}(H_{1})\cong G_{1}\). This implies that \(|\operatorname{Aut}(H_{1})|=p_{j}(p_{j}-1)\) divides \(|G|\) and so \(p_{j}\) divides \(|G|\), a contradiction. Thus we have \(H=A\times P_{1}\).
Since \(A\) and \(P_{1}\) are of coprime orders, we get
\[G\cong\operatorname{Aut}(H)\cong\operatorname{Aut}(A)\times\operatorname{Aut} (P_{1})\cong\left(\prod_{i=2}^{k}\mathbb{Z}_{p_{i}}^{\times}\right)\times \operatorname{Aut}(P_{1}),\]
which shows that
\[|G|=\prod_{i=2}^{k}(p_{i}-1)|\operatorname{Aut}(P_{1})|.\]
Since \(G\) is a 2-group, \(p_{2},\ldots,p_{k}\) are Fermat primes2 and \(P_{1}\) is a 2-group whose automorphism group is also a 2-group.
Footnote 2: Recall that a Fermat prime is a prime number of the form \(2^{2^{n}}+1\), \(n\in\mathbb{N}\).
Let \(K\) be an abelian subgroup of \(P_{1}\). If \(K\) is not cyclic, then it contains a subgroup \(K_{1}\cong\mathbb{Z}_{2}\times\mathbb{Z}_{2}\). It follows that \(\operatorname{Aut}(K_{1})\) is isomorphic to a subgroup of \(G\) and therefore \(|\operatorname{Aut}(K_{1})|=6\) is a power of 2, a contradiction. Thus all abelian subgroups of \(P_{1}\) are cyclic, implying that \(P_{1}\) is either cyclic or a generalized quaternion 2-group (see e.g. (4.4) of [19], II). Since \(Q_{2^{n_{1}}}\) possesses subgroups isomorphic to \(Q_{8}\) and \(|\operatorname{Aut}(Q_{8})|=24\) is not a power of 2, we infer that \(P_{1}\) is cyclic, i.e. \(P_{1}\cong\mathbb{Z}_{2^{n_{1}}}\). Then
\[H\cong\mathbb{Z}_{p_{2}\cdots p_{k}}\times\mathbb{Z}_{2^{n_{1}}}\cong \mathbb{Z}_{2^{n_{1}}p_{2}\cdots p_{k}}\]
and so
\[G\cong\mathbb{Z}_{2^{n_{1}}p_{2}\cdots p_{k}}^{\times}.\]
Note that \(\mathbb{Z}_{2^{n_{1}}}^{\times}\) is trivial for \(n_{1}=0,1\) and \(\mathbb{Z}_{2^{n_{1}}}^{\times}\cong\mathbb{Z}_{2}\times\mathbb{Z}_{2^{n_{1}- 2}}\) for \(n_{1}\geq 2\).
Assume first that \(k=1\), i.e. \(H\cong\mathbb{Z}_{2^{n_{1}}}\) and \(G\cong\mathbb{Z}_{2^{n_{1}}}^{\times}\). If \(n_{1}\geq 4\), then \(G\) contains a subgroup \(G_{1}\cong\mathbb{Z}_{2^{2}}\) and so, by Theorem 2.1 (b), we know that \(G_{1}\cong\operatorname{Aut}(H_{1})\) where \(H_{1}\cong\mathbb{Z}_{5}\) or \(H_{1}\cong\mathbb{Z}_{10}\). In both these cases, we have that 5 divides the order of \(H_{1}\) and thus 5 divides the order of \(H\), which is impossible because \(H\) is a 2-group. Thus \(n_{1}\leq 3\) and we get \(G\cong\mathbb{Z}_{2}^{r}\), where \(r=0,1,2\).
Assume now that \(k\geq 3\). Then \(G\) contains a subgroup \(G_{1}\cong\mathbb{Z}_{2}^{4}\). It follows that \(G_{1}\cong\operatorname{Aut}(H_{1})\) for some subgroup \(H_{1}\leq H\), contradicting Theorem 2.1 (e). Thus \(k=2\) and by Theorem 2.1 (c) we have that 3 divides \(|H|\) and so \(p_{2}=3\). If \(n_{1}\geq 4\), then \(G\) contains a subgroup \(G_{1}\cong\mathbb{Z}_{2}\times\mathbb{Z}_{2^{2}}\) and Theorem 2.1 (d) implies that 5 divides \(|H|\), contradicting the fact that \(H\) is a 2-group. Hence \(n_{1}\leq 3\), leading to \(G\cong\mathbb{Z}_{2}^{r}\) with \(r=1,2,3\).
Conversely, if \(G\cong\mathbb{Z}_{2}^{r}\) with \(r\leq 3\), then it suffices to take \(H=\mathbb{Z}_{2}\) for \(r=0\), \(H=\mathbb{Z}_{4}\) for \(r=1\), \(H=\mathbb{Z}_{8}\) for \(r=2\) and \(H=\mathbb{Z}_{8}\times\mathbb{Z}_{3}\) for \(r=3\). This completes the proof.
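The concluding cases of the proof can be checked directly: for a cyclic group \(H=\mathbb{Z}_{m}\) we have \(\operatorname{Aut}(H)\cong(\mathbb{Z}/m\mathbb{Z})^{\times}\), so the claim amounts to every unit mod \(m\) squaring to 1. The following brute-force check is our own Python sketch, not part of the paper:

```python
from math import gcd

def units(m):
    """The unit group (Z/mZ)^x, which is Aut(Z_m) for the cyclic group Z_m."""
    return [a for a in range(1, m) if gcd(a, m) == 1]

# H = Z_2, Z_4, Z_8, Z_8 x Z_3 = Z_24  give  Aut(H) = Z_2^r for r = 0, 1, 2, 3
for m, rank in [(2, 0), (4, 1), (8, 2), (24, 3)]:
    U = units(m)
    assert len(U) == 2 ** rank             # correct group order
    assert all(a * a % m == 1 for a in U)  # every element has order at most 2
```

Since an abelian group of order \(2^{r}\) and exponent at most 2 is elementary abelian, these two assertions verify \(\operatorname{Aut}(H)\cong\mathbb{Z}_{2}^{r}\) in each case.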
## 3 Completely \(f\)-realisable groups, where \(f=z,f,m,d,\Phi\)
The problem of determining completely \(f\)-realisable groups for \(f=Z\) and \(f=F\) is trivial, namely:
* A group is completely \(Z\)-realisable if and only if it is abelian.
* A group is completely \(F\)-realisable if and only if it is nilpotent.
The same thing can be also said for \(f=M\). Recall that, given a finite group \(H\), the _Chermak-Delgado measure_ of a subgroup \(K\) of \(H\) is defined by
\[m_{H}(K)=|K||C_{H}(K)|.\]
Let
\[m^{*}(H)=\max\{m_{H}(K)\mid K\leq H\}\text{ and }\mathcal{CD}(H)=\{K\leq H \mid m_{H}(K)=m^{*}(H)\}.\]
Then the set \(\mathcal{CD}(H)\) forms a modular, self-dual sublattice of the subgroup lattice of \(H\), which is called the _Chermak-Delgado lattice_ of \(H\) (see Theorem 1.44 in [13]). The minimal member \(M(H)\) of \(\mathcal{CD}(H)\) is called the _Chermak-Delgado subgroup_ of \(H\). Note that \(M(H)\) is characteristic, abelian and contains \(Z(H)\) by Corollary 1.45 in [13]. So, a (completely) \(M\)-realisable group is abelian. Conversely, it is clear that every subgroup of an abelian group is its own Chermak-Delgado subgroup. Thus we have:
* A group is completely \(M\)-realisable if and only if it is abelian.
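To illustrate these notions on the smallest non-abelian group, the following brute-force computation (our own Python sketch, with \(S_{3}\) stored as permutation tuples) finds \(m^{*}(S_{3})=9\), attained only by \(A_{3}\), so \(\mathcal{CD}(S_{3})=\{A_{3}\}\) and \(M(S_{3})=A_{3}\) is abelian and contains the (trivial) center, as Corollary 1.45 in [13] predicts:

```python
from itertools import permutations, product

def compose(p, q):
    """(p o q)(i) = p[q[i]] for permutations stored as image tuples."""
    return tuple(p[q[i]] for i in range(len(q)))

G = list(permutations(range(3)))  # the symmetric group S_3

def closure(gens):
    """Subgroup generated by gens (the identity is included automatically)."""
    S = {tuple(range(3))} | set(gens)
    while True:
        new = {compose(a, b) for a, b in product(S, S)} - S
        if not new:
            return frozenset(S)
        S |= new

# every subgroup of S_3 is generated by at most two elements
subgroups = {closure((a, b)) for a in G for b in G}

def centralizer(K):
    return [g for g in G if all(compose(g, k) == compose(k, g) for k in K)]

measure = {K: len(K) * len(centralizer(K)) for K in subgroups}  # m_H(K)
m_star = max(measure.values())
cd_lattice = [K for K, m in measure.items() if m == m_star]
```

Here \(m_{H}(K)\) is 6 for the trivial subgroup and \(S_{3}\) itself, 4 for each subgroup of order 2, and \(9=3\cdot 3\) for \(A_{3}\), whose centralizer is itself.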
In what follows we will focus on the other two cases. Recall that a group \(G\) is completely \(D\)-realisable if there is a group \(H\) such that, up to isomorphism, we have
\[L(G)=\{H_{1}^{\prime}\mid H_{1}\leq H\}, \tag{2}\]
where \(L(G)\) denotes the subgroup lattice of \(G\). An important class of completely \(D\)-realisable groups is given by the following theorem.
**Theorem 3.1**.: _All abelian groups are completely \(D\)-realisable._
Proof.: Guralnick [10] showed that if \(A\) is an abelian group of order \(n\), then the group \(H=A\wr\mathbb{Z}_{2}\) is an integral of \(A\). Clearly, we have \(H_{1}^{\prime}\leq A\), for all \(H_{1}\leq H\). Conversely, we observe that if \(A_{1}\leq A\), then \(H\) contains a subgroup \(H_{1}\cong A_{1}\wr\mathbb{Z}_{2}\) which is an integral of \(A_{1}\).
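Guralnick's construction can be verified directly in small cases. The sketch below (our own Python, for \(A=\mathbb{Z}_{3}\)) builds \(H=\mathbb{Z}_{3}\wr\mathbb{Z}_{2}\) as pairs \(((a,b),s)\) with the swap action, generates the derived subgroup as the closure of all commutators, and checks that it is \(\{((c,-c),0)\mid c\in\mathbb{Z}_{3}\}\cong\mathbb{Z}_{3}\cong A\):

```python
from itertools import product

def mul(g, h):
    """Multiplication in Z_3 wr Z_2: the top Z_2 component swaps the base pair."""
    (a, b), s = g
    (c, d), t = h
    if s:
        c, d = d, c
    return (((a + c) % 3, (b + d) % 3), (s + t) % 2)

H = [((a, b), s) for a in range(3) for b in range(3) for s in range(2)]
e = ((0, 0), 0)

def inv(g):
    return next(x for x in H if mul(g, x) == e)

# D(H) = subgroup generated by all commutators [g, h] = g^-1 h^-1 g h
derived = {mul(mul(inv(g), inv(h)), mul(g, h)) for g, h in product(H, H)}
while True:
    new = {mul(a, b) for a, b in product(derived, derived)} - derived
    if not new:
        break
    derived |= new
```

The resulting subgroup has order 3, hence is cyclic, confirming \(D(\mathbb{Z}_{3}\wr\mathbb{Z}_{2})\cong\mathbb{Z}_{3}\).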
Note that there exist completely \(D\)-realisable non-abelian groups, e.g. \(G=A_{4}\) for which it suffices to take \(H=S_{4}\), and even completely \(D\)-realisable non-abelian \(p\)-groups, e.g. \(G=\mathrm{He}_{3}\), the Heisenberg group of order \(3^{3}\), for which it suffices to take \(H=SmallGroup(216,36)\).
**Remark.** We conjecture that \(A_{4}\) is the smallest completely \(D\)-realisable non-abelian group. The dihedral groups \(D_{2n}\) with \(n=3,4,5,6\) and the dicyclic group \(\operatorname{Dic}_{3}\) are not \(D\)-realisable because each of them has a characteristic cyclic subgroup which is not contained in the center (see Proposition 3.1 of [1]). The quaternion group \(Q_{8}\) is \(D\)-realisable, a group \(H\) with \(Q_{8}\cong D(H)\) being necessarily a proper semidirect product \(P\rtimes A\), where \(P\) is a 2-group containing a normal subgroup isomorphic to \(Q_{8}\) and having \(D(P)\) cyclic of order 2 or 4, and \(A\) is an abelian group of odd order3.
Footnote 3: The smallest integral of \(Q_{8}\) is \(\operatorname{SL}(2,3)\cong Q_{8}\rtimes\mathbb{Z}_{3}\) (see [2]).
Indeed, if \(H\) is a finite group with \(Q_{8}\cong D(H)\) and \(P\) is a Sylow 2-subgroup of \(H\) including \(D(H)\), then \(P\) is normal in \(H\) and \(H/P\) is abelian. Since \(P\) and \(H/P\) are of coprime orders, the Schur-Zassenhaus theorem leads to \(H\cong P\rtimes A\), where \(A\cong H/P\) is abelian of odd order. Moreover, \(D(P)\) is a proper subgroup of \(D(H)\) because \(Q_{8}\) is not \(p\)-integrable (see Theorem 4.2 of [2]) and so it is cyclic of order 2 or 4.
We note that a GAP search over all groups of order less than or equal to 500 gives no complete integral of \(Q_{8}\).
We also note that the class of completely \(D\)-realisable groups is properly contained in the class of \(D\)-realisable groups: \(A_{5}\) is \(D\)-realisable (we have \(A_{5}=D(S_{5})\)), but not completely \(D\)-realisable (it has a subgroup isomorphic to \(D_{10}\) which is not \(D\)-realisable). Since \(A_{n}\) has subgroups of type \(A_{5}\) for all \(n\geq 5\), we get:
**Theorem 3.2**.: _Alternating groups \(A_{n}\) with \(n\geq 5\) are \(D\)-realisable, but not completely \(D\)-realisable._
Next we will focus on completely \(\Phi\)-realisable groups. Recall that such a group \(G\) is nilpotent and satisfies \(\operatorname{Inn}(G)\subseteq\Phi(\operatorname{Aut}(G))\). Again, we have a result similar to Theorem 3.1.
**Theorem 3.3**.: _All abelian groups are completely \(\Phi\)-realisable._
Proof.: Given an abelian group \(G\), we have to prove that there exists a group \(H\) such that:
1. \(G\cong\Phi(H)\);
2. \(\forall\,G_{1}\leq G,\exists\,H_{1}\leq H\) such that \(G_{1}\cong\Phi(H_{1})\);
3. \(\forall\,H_{1}\leq H,\exists\,G_{1}\leq G\) such that \(\Phi(H_{1})\cong G_{1}\).
Since \(\Phi\) is completely multiplicative, that is
\[\Phi(\prod_{i=1}^{m}G_{i})\cong\prod_{i=1}^{m}\Phi(G_{i})\text{ for all groups }G_{i},\,i=1,...,m,\]
it suffices to assume that \(G\) is an abelian \(p\)-group and to prove that there exists a \(p\)-group \(H\) with the above properties. This is clear because for \(G\cong\prod_{i=1}^{k}\mathbb{Z}_{p^{n_{i}}}\) we can choose \(H\cong\prod_{i=1}^{k}\mathbb{Z}_{p^{n_{i}+1}}\).
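The choice \(H\cong\prod_{i=1}^{k}\mathbb{Z}_{p^{n_{i}+1}}\) works because the Frattini subgroup of a cyclic \(p\)-group is its unique maximal subgroup: \(\Phi(\mathbb{Z}_{p^{n+1}})=p\mathbb{Z}_{p^{n+1}}\cong\mathbb{Z}_{p^{n}}\). A small sanity check (our own Python sketch, computing \(\Phi\) of a cyclic group as the intersection of its maximal subgroups):

```python
def frattini_cyclic(m):
    """Phi(Z_m): intersect the maximal subgroups qZ_m over primes q dividing m."""
    primes = [q for q in range(2, m + 1)
              if m % q == 0 and all(q % r != 0 for r in range(2, q))]
    phi = set(range(m))
    for q in primes:
        phi &= set(range(0, m, q))  # the maximal subgroup qZ_m, written additively
    return sorted(phi)

# Phi(Z_{p^{n+1}}) is cyclic of order p^n, generated by p
for p, n in [(2, 2), (3, 2), (5, 1)]:
    phi = frattini_cyclic(p ** (n + 1))
    assert len(phi) == p ** n
    assert phi == list(range(0, p ** (n + 1), p))
```

The same routine confirms, for instance, that \(\Phi(\mathbb{Z}_{p})\) is trivial for a prime \(p\), consistent with elementary abelian groups having trivial Frattini subgroup.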
Note that all \(\Phi\)-realisable \(p\)-groups of order \(p^{3}\) are abelian (see e.g. Lemma 1 of [12]). The same thing can be also said about \(\Phi\)-realisable \(2\)-groups of order \(2^{4}\) (see e.g. Theorem 1 of [20]). An example of a \(\Phi\)-realisable non-abelian \(5\)-group of order \(5^{4}\) is presented in Remark 5 of [4]:
\[G=\langle x,y,z,t\mid x^{5}=y^{5}=z^{5}=t^{5}=1,[z,t]=x,[x,t]=[y,t]=1\rangle\]
for which we can choose
\[H= \langle u,v,w,x,y,z\mid u^{5}=v^{5}=w^{5}=x^{5}=y^{5}=z^{5}=1,[v, w]=[v,x]=[v,z]\] \[=[x,y]=1,[v,y]=[x,w]=[w,y]=u,[w,z]=v,[x,z]=w,[y,z]=x\rangle.\]
Finally, we remark that there exist \(\Phi\)-realisable groups that are not completely \(\Phi\)-realisable, for example \(G=\mathbb{Z}_{2}\times Q_{8}\): we have \(G=\Phi(H)\), where \(H=SmallGroup(64,9)\), but \(G\) contains a subgroup isomorphic to \(Q_{8}\) which is the Frattini subgroup of no group.
We end this paper by proposing the following two open problems:
**Problem 1.** Classify completely \(D\)-realisable and completely \(\Phi\)-realisable groups.
**Problem 2.** Study (completely) \(f\)-realisable groups for other constructions \(f\) on groups, e.g. when \(f(H)\) is the Carter subgroup of the finite solvable group \(H\).
**Acknowledgement.** The authors are grateful to the reviewer for remarks which improve the previous version of the paper. |
2310.02514 | Closed-Loop Until Further Notice: Comparing Predictive Control Methods in Closed-Loop | J. Fowler, M. A. M. van Kooten, R. Jensen-Clem | 2023-10-04T01:21:29Z | http://arxiv.org/abs/2310.02514v1

# Closed-Loop Until Further Notice: Comparing Predictive Control Methods in Closed-Loop
###### Abstract
For future extremely large telescopes, error in extreme adaptive optics systems at small angular separations will be highly impacted by the lag time of the correction, which is typically on millisecond timescales; one solution is to apply a predictive correction to catch up with the system delay. Predictive control leads to significant RMS error reductions in simulation (on the order of 5-10x improvement in RMS error compared with a standard integral controller), but shows only modest improvement on-sky (less than 2x in RMS error). This performance limitation is likely impacted by elements of pseudo open loop (POL) reconstruction, which requires assumptions about the response of the deformable mirror and accuracy of the wavefront measurements that are difficult to verify in practice. In this work, we explore a closed-loop method for data-driven prediction using a reformulated empirical orthogonal functions (EOF). We examine the performance of the open and closed-loop methods in simulation on perfect systems and systems with an inaccurate understanding of the DM response.
predictive control, extreme adaptive optics (exAO), empirical orthogonal functions (EOF) Further author information: (Send correspondence to J.F.) J.F.: E-mail: [email protected]
## 1 Introduction
Dessenne, 1998 [3] was one of the first papers outlining predictive control for adaptive optics (AO) systems. This predictive method was demonstrated on-sky in 1999 [2] at the 1.52 meter observatory in Haute Provence; they cite a relative Strehl increase over a classic integrator of 30% (a max Strehl performance of \(\sim 14\%\)) at their visible central wavelength of 650 nm. Since then, a plethora of predictive control methods have come into the literature, including linear estimators [5, 6, 9, 22], linear quadratic Gaussian controllers [13, 19, 18, 14], model-based updates to a linear quadratic Gaussian [16, 17, 4], subspace control methods [8], and non-linear neural network solvers [11, 23, 10, 20, 12, 7]. Despite nearly 25 years of predictive methods, Empirical Orthogonal Functions (EOF)[5] is the only method to have been run on-sky as the controller for all spatial frequencies on an 8-10 meter class telescope; demonstrated on Subaru/SCExAO[6] and on Keck/NIRC2 [22].
As opposed to more classic methods (e.g., an integral controller or a linear quadratic Gaussian controller) a classic EOF [5] learns a linear relationship in the evolution of the full scale of input turbulence, which means that EOF trains on (and predicts) the open loop wavefront. Because no single conjugate AO system runs in open loop, in practice these methods train on pseudo-open loop (POL) data, where deformable mirror (DM) commands are added to wavefront sensor (WFS) measurements to reconstruct the full state of turbulence. However, this leaves room for error in the reconstruction. Any spatially or temporally evolving mismatches in the calibration (e.g., non-linearities in the sensing and actuator pokes or DM to WFS misregistrations) that are not as impactful in an integral closed-loop controller could lead to incorrect performance prediction in open loop.
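As a sanity check on this bookkeeping (a toy Python sketch of ours; the random signals, the 0.9 control law, and the 5% gain error are invented for illustration), POL reconstruction simply adds the applied DM commands back onto the measured residuals, so any miscalibration of the assumed DM response leaks directly into the "open loop" turbulence the predictor trains on:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.standard_normal(1000)   # true turbulence in DM space (toy)
y = 0.9 * w                     # DM commands actually applied (toy control law)
eps = w - y                     # closed-loop WFS residual measurements

w_pol = eps + y                 # ideal POL reconstruction: recovers w exactly
w_pol_bad = eps + 1.05 * y      # same reconstruction with a 5% DM gain error
```

With a perfect DM model the reconstruction is exact; with a 5% gain error, the training data are biased by 5% of the (large) DM command, which is the kind of model mismatch examined later in this work.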
However, upon closer inspection of Dessenne's adaptive predictor [3] and Guyon's EOF [5], it becomes clear that they are both methods that build predictive controllers from the same pieces of information. The major departure between the two methods is that Guyon's EOF runs in open loop and reconstructs full turbulence states, while Dessenne's predictive controller runs in closed loop and encapsulates the evolution of both the DM commands and WFS measurements. With this work, we aim to revisit Dessenne's method, update it for comparison with Guyon's EOF, perform preliminary simulations of its feasibility within the context of a modern system, and ultimately begin to compare what is essentially an open-loop and a closed-loop implementation of empirical orthogonal functions. Henceforth, we will refer to both methods as EOF and make the distinction between open and closed-loop implementations.
In Section 2.1, we describe the pseudo-open loop EOF method, in Section 2.2 we describe the closed-loop update for this work, and in Section 2.3 we discuss training conditions for the closed-loop update. In Section 3 we present the results of preliminary performance simulations and the impact of model-mismatch errors on that performance.
## 2 Empirical Orthogonal Functions
Empirical orthogonal functions predicts a future state of a wavefront by building a predictive filter that linearly combines previous states of the wavefront. Figure 1 shows a visual representation. In the following sections we describe the math that represents this process for open and closed-loop implementations.
Figure 1: The future state of the wavefront is predicted as a linear combination of previous states, \(w\) at some time \(n\). The collection of previous states is called the history vector, \(\mathbf{h}\). Phase screens evolve in time (in sea foam) and a weight is applied to each one (the predictive filter \(\mathbf{F}=[f_{1},f_{2},...]\), where each \(f_{n}\) contains m weights for m modes) to estimate the final wavefront prediction in teal. Note that mathematically a history vector is a single flat vector that contains the appended information of all 3 screens to predict a final vector that contains a flattened version of the predicted wavefront.
### Previous Open Loop Implementations
Starting with Guyon, 2017 [5] and a follow up from Jensen-Clem in 2019 [9], we outline our implementation of EOF in (pseudo-)open loop. Given some state of the wavefront (i.e., full scale of the uncorrected turbulence as considered in pseudo open loop), \(w\), with m variables representing wavefront sensor measurements (which for a zonal approach corresponds to the number of deformable mirror actuators) and n associated frames, we build a history vector for each subaperture:
\[\mathbf{h}_{m}(t)=\begin{bmatrix}w(t)\\ w(t-dt)\\ \vdots\\ w(t-(n-1)dt)\end{bmatrix} \tag{1}\]
We build a predictive filter for each mode \(\mathbf{F}_{m}\) that will predict the full phase at a given point with:
\[\mathbf{F}_{m}\mathbf{h}_{m}(t)=w(t+dt) \tag{2}\]
Both the filter and history vector can be written to include information across multiple modes, i.e., \(\mathbf{h}=[w_{0}(t),w_{1}(t)...w_{0}(t-dt),w_{1}(t-dt)...]\), but for ease of comparison with the closed-loop implementation we leave these as single wavefront sensor measurement/DM actuator filters and predictions.
To find the predictive filter \(\mathbf{F}\) we minimize an error term that consists of the difference between the output predicted wavefront (in DM space) and the true phase at that time. We collect training data \(\mathbf{D}\), which contains history vectors and \(\mathbf{P}\) which holds the true state (or "future") for each history vector. (I.e., mapping \(t\) to its future state at time \(t+dt\).)
\[\mathrm{min}_{\mathbf{F}}||\mathbf{D}^{T}\mathbf{F}^{T}-\mathbf{P}^{T}||^{2} \tag{3}\]
Solving equation (3) requires a pseudo-inverse; Guyon, 2017 [5] solved this problem with an SVD (singular value decomposition) inversion, but we use a least-squares inversion [9], with regularization constant \(\alpha\) (\(\alpha\) may be set to 1 for simulations, but is found empirically on-sky [22].)
\[\mathbf{F} = ((\mathbf{D}^{T})^{\dagger}\mathbf{P}^{T})^{T} \tag{4}\] \[\mathbf{F} = \mathbf{P}\mathbf{D}^{T}(\mathbf{D}\mathbf{D}^{T}+\alpha\mathbf{ I})^{-1} \tag{5}\]
Finally, our predictive filter \(\mathbf{F}\) holds a coefficient for each previous state, as expressed by a pseudo open loop (POL) reconstruction of the telemetry projected into DM space.
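The pipeline of Eqs. (1)-(5) can be sketched in a few lines for a single mode (Python/NumPy; the sine-wave "turbulence", history length \(n=3\), and regularization value are illustrative choices of ours, not values from the cited papers):

```python
import numpy as np

def build_training(w, n):
    """History matrix D (columns are h(t), Eq. 1) and their futures P (Eq. 2)."""
    D, P = [], []
    for t in range(n - 1, len(w) - 1):
        D.append(w[t::-1][:n])   # [w(t), w(t - dt), ..., w(t - (n-1)dt)]
        P.append(w[t + 1])
    return np.array(D).T, np.array(P)[None, :]

def solve_filter(D, P, alpha=1e-8):
    """F = P D^T (D D^T + alpha I)^{-1}, the regularized inverse of Eq. 5."""
    return P @ D.T @ np.linalg.inv(D @ D.T + alpha * np.eye(D.shape[0]))

w = np.sin(0.1 * np.arange(500))   # toy single-actuator open-loop turbulence
D, P = build_training(w, n=3)
F = solve_filter(D, P)
prediction_error = np.max(np.abs(F @ D - P))
```

On this toy signal the filter is essentially exact, since a sinusoid obeys a two-term linear recurrence; on real telemetry, \(\alpha\) must instead be found empirically, as noted above.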
### Our Development of a Closed-Loop Update
Inspired by Dessenne, 1998 [3] and Haffert, 2021 [8], we explore a closed-loop reformulation of the classic empirical orthogonal functions [5]. The fullest realization of this method could:
1. Improve controller stability by running in closed-loop.
2. Avoid error introduced by non-linear wavefront sensing or DM model mismatch when wavefront sensor residuals are converted to pseudo open loop.
3. Track drifts that may impact the DM and wavefront sensor separately by allowing each to evolve with a different set of coefficients.
4. Do robust timekeeping by tracking when each piece of information enters the system.
Following [3], an updated history vector (we refer to this as \(\mathbf{\phi}\) to distinguish it from the open loop history vector, as is standard in control theory conventions) contains both wavefront sensor residuals \(\epsilon(t)\) and DM commands \(y(t)\). (While the original work opted to use KL-modes, we work in the DM zonal basis, i.e., one point of information per DM actuator/WFS subaperture.) We consider only a single mode at a time:
\[\mathbf{\phi}(n)=\begin{bmatrix}y(n-1)\\ y(n-2)\\...\\ y(n-p)\\ \epsilon(n-2)\\ \epsilon(n-3)\\...\\ \epsilon(n-p-1)\end{bmatrix} \tag{6}\]
In this way, the new history vector and the corresponding predictive filter have twice as many values per mode pair as previous open loop derivations. However, the output of the filter applied to the history vector \(\mathbf{\theta}\cdot\mathbf{\phi}\) predicts the same piece of information as an open loop implementation: the full turbulence in DM space, which is the DM command needed at a given iteration, in our notation \(y(n)\).
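As a concrete sketch (our own variable names), Eq. (6) can be assembled from buffers of past commands and residuals; with the classic two-step delay, the residual indices start at \(n-2\):

```python
import numpy as np

def closed_loop_history(y_hist, eps_hist, n, p):
    # phi(n) for one mode, per Eq. (6): the p most recent DM commands
    # y(n-1)..y(n-p), followed by the p most recent sensed residuals
    # eps(n-2)..eps(n-p-1).
    y_part = [y_hist[n - k] for k in range(1, p + 1)]
    e_part = [eps_hist[n - k] for k in range(2, p + 2)]
    return np.array(y_part + e_part)
```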
Dessenne's original formulation suggests reconstructing open loop turbulence from the full transfer function of the AO control loop to train the predictive filter, and solving for a steady state recursive least-squares solution [3]. In this simulation, we instead apply the same minimization technique used in the open loop implementation. Future work, inspired by that of van Kooten, 2019 [21], will explore a recursive least-squares implementation that updates the filter with each control iteration. We build a new minimization problem:
\[\text{min}_{\mathbf{\theta}}||\mathbf{D}^{T}\mathbf{\theta}^{T}-\mathbf{P}^{T}||^{2} \tag{7}\]
where \(\mathbf{D}\) contains collections of the history vector \(\mathbf{\phi}\), and \(\mathbf{P}\) contains the future state of the full turbulence.
In practice, the distinction between the open and closed-loop implementation is twofold: (1) the ability of the correction to apply different coefficients for the DM commands and the wavefront sensor information (whereas POL is one command applied to a summed value) and (2) the ability to robustly encapsulate a time delay into the reconstruction. POL reconstruction adds wavefront sensor measurements to DM commands with a single static delay, while closed-loop methods can account for when each piece of information enters the system (with two-step delays or delays with non-uniform steps). See Appendix A for additional information on how timekeeping impacts the construction of \(\mathbf{\phi}\) and \(\mathbf{P}\).
Assuming a classic two-step delay and \(p\) associated frames of information in a history vector, the weighted estimate for a single mode under the closed-loop formulation (\(y_{CL}\)) vs. the original open loop formulation (\(y_{OL}\)) takes the form:
\[y_{CL}(n) = b_{1}y(n-1)+...+b_{p}y(n-p)+a_{0}\epsilon(n-2)+...+a_{p-1} \epsilon(n-p-1) \tag{8}\] \[y_{OL}(n) = c_{0}[y(n-1)+\epsilon(n-1)]+...+c_{p}[y(n-p-1)+\epsilon(n-p-1)] \tag{9}\]
### Selecting a Truth Condition for Training Data
The truth condition \(\mathbf{P}\), against which the predictive filter trains, must be the full uncorrected state of the wavefront (i.e., open loop turbulence) at a given iteration. The original work from Dessenne [3] estimated the open loop wavefront from the transfer function of the AO control loop, essentially a higher fidelity pseudo open loop reconstruction than that of Guyon, 2017 [5]. For these simulations, we give the predictor perfect knowledge, using the full scale of turbulence from simulation. We note that this training condition is not realistic to on-sky operation, but acts as a laboratory to test the perfect performance of a closed-loop implementation, and future work will explore more realistic \(\mathbf{P}\) generation. Figure 2 shows the example data used as the truth condition for training the closed-loop implementation.
## 3 Preliminary Implementation of Closed-Loop Predictive Control
We simulate atmospheric phase screens and an idealized AO control system with HCIPy[15]. We estimate the root-mean-square (RMS) residual error across the pupil for 1) the full state of the uncorrected turbulence, 2) a pseudo-integrator, in which we apply a perfect correction with a 2 ms time delay (for Keck, the time-delay is \(\sim\)1.5 ms[1]), 3) open loop EOF (well optimized in previous work [9, 4]), and 4) a preliminary implementation of closed-loop EOF. These simulations use perfect wavefront sensing and correction, and are meant to provide comparative estimates only for the bandwidth error. The parameters for the simulation are shown in Figure 2.
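The RMS residual metric itself is simple: we take it to be the piston-removed standard deviation of the residual phase over the illuminated pupil (an assumption on our part about the exact metric):

```python
import numpy as np

def rms_residual(phase, pupil_mask):
    # Piston-removed RMS of the residual phase over the pupil,
    # in the same units as `phase` (nm here).
    vals = phase[pupil_mask]
    return float(np.sqrt(np.mean((vals - vals.mean()) ** 2)))
```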
### Performance in an Idealized AO System
Figure 3 (left) shows the results of our initial implementation; we find that closed-loop EOF improves RMS error by a factor of \(\sim\) 2.5 over a standard integrator, under-performing compared to the standard pseudo
Figure 3: Left: Two predictive methods alongside a pseudo-integrator (a perfect correction 2 timesteps behind), all applied with no measurement or fitting error (i.e., perfect wavefront sensing and control) running at 1 kHz. Full uncorrected turbulence RMS median: 1534.21 nm, pseudo-integrator: 24.30 nm, closed-loop predictor: 9.49 nm, open loop predictor: 1.03 nm. Right: Impacts of a DM-model mismatch. Closed-loop EOF correction goes from a median of 9.49 (for the nominal run) to 16.66 nm RMS when a DM model error is introduced, but still shows better performance than the pseudo-integrator. The open loop EOF error increases by an order of magnitude and is not displayed on the figure for scale purposes.
Figure 2: Left: Piston, tip, and tilt subtracted simulated phase screens from an 8 meter telescope with a single wind layer. Right: Table of parameters for generating the turbulence and AO simulations.
open loop. In practice, we should be able to achieve similar (if not better) performance with a closed-loop implementation as with an open loop one; however, the closed-loop minimization problem contains twice as many variables (treating the WFS and DM independently), i.e., twice as many regressors, and will require more training data to converge to an optimal solution, as shown in van Kooten, 2019 [21]. Future work will explore optimizing filter length and training length for a closed-loop implementation, enabling a fairer comparison with open loop.
### Robustness to Model Errors
Figure 3 (right) shows the performance of both predictors on a second set of data, in which we simulate a DM model mismatch in our control system: a static factor of 2 between what the DM is expected to apply and what it actually applies. We see that the performance of closed-loop EOF is less sensitive to errors in the system model. While the closed-loop performance drops when a DM model issue is introduced, the closed-loop predictor still outperforms a typical integrator. However, introducing the same issue into an open loop EOF increases the residual error by more than an order of magnitude. We note that in theory a closed-loop EOF should exactly learn and reconstruct the model error, and future work to optimize the filter and training length will likely make a closed-loop EOF even more robust to model errors.
## 4 Conclusion
In conclusion, we revisit the work of Dessenne, 1998 [3] to compare open and closed-loop implementations of empirical orthogonal function (EOF) in simulation. A preliminary simulation of closed-loop EOF does not perform as well as an optimized implementation of open loop EOF, but still provides improvement over a classic integrator. A closed-loop implementation also proves to be more robust to the introduction of DM model errors.
We speculate that future optimization of the closed-loop predictive filter (e.g., exploring history vector length, training data length, etc.) will likely close the gap between the open and closed-loop implementations and provide a controller that is even more robust to model errors. We also note that for our preliminary simulations we used perfect knowledge of the system to train our closed-loop predictive filter, which is not realistic for on-sky implementation; if performance comparisons prove promising, a more realistic training method could be devised, for example using the transfer function as in Dessenne's original work [3] or training on open and closed-loop data at the beginning of the night. Future work could also examine the comparative stability of open and closed-loop methods, as well as the impact of a more robust consideration of time-delay.
We revisit this closed-loop predictive controller not only as an exploratory method for AO control, but also as a laboratory to explore model-mismatch and improve performance of the open loop implementations that are operating on-sky. For extreme adaptive optics on extremely large telescopes, novel ways to account for bandwidth error are worth pursuing.
## Appendix A Time Reference Frames and Turbulence Reconstruction
One intriguing issue when comparing and reworking control methods is a robust understanding of time delays over the course of a control loop and how those delays play out with different methods. Pseudo-open loop implementations gloss over this issue by recreating a single point of information at a single point in time, but closed-loop implementations provide the opportunity to represent each piece of information at the time it enters the system; some work even accounts for fractional time-delays [17]. The following is a minimal proof of information flow in a control loop, switching between time reference frames.
We consider this system in two reference frames: (1) the time represented by the physical information flowing through the system (i.e., what time the turbulence the system is analyzing occurred); this is physically intuitive for timekeeping and residual comparison and we call it a physical clock and (2) the time represented by the control system and when information could be sampled from various sensors or correctors, which we call a control clock.
Dessenne, 1998 [3] uses a control clock framework. If, for example, we wanted to associate 3 states in time, we would build a history vector (used to predict the state at iteration \(n\)) of the form:
\[\mathbf{\phi}(n)=\begin{bmatrix}y(n-1)\\ y(n-2)\\ y(n-3)\\ \epsilon^{\prime}(n-2)\\ \epsilon^{\prime}(n-3)\\ \epsilon^{\prime}(n-4)\end{bmatrix} \tag{10}\]
The goal of this appendix is to show that these time steps from the perspective of the control clock associate logical pieces of information from the perspective of a physical clock. If we consider information moving through a system with a two-step delay, we see the chain of control events outlined in Table 1.
If we now consider the history vector \(\mathbf{\phi}\), we can rewrite it to predict some iteration \(n=7\):
\begin{table}
\begin{tabular}{c|c c c} \hline \(s\) & \(\epsilon\) & \(\epsilon^{\prime}\) & \(y\) \\ \hline \(s_{1}\) & \(\epsilon_{1}=s_{1}\) & - & - \\ \hline \(s_{2}\) & \(\epsilon_{2}=s_{2}\) & \(\epsilon_{2}^{\prime}=\text{WF}(\epsilon_{1})\) & - \\ \hline \(s_{3}\) & \(\epsilon_{3}=s_{3}+y_{3}\) & \(\epsilon_{3}^{\prime}=\text{WF}(\epsilon_{2})\) & \(y_{3}=\text{DM}(\epsilon_{2}^{\prime})\) \\ \hline \(s_{4}\) & \(\epsilon_{4}=s_{4}+y_{4}\) & \(\epsilon_{4}^{\prime}=\text{WF}(\epsilon_{3})\) & \(y_{4}=\text{DM}(\epsilon_{3}^{\prime})\) \\ \hline \(s_{5}\) & \(\epsilon_{5}=s_{5}+y_{5}\) & \(\epsilon_{5}^{\prime}=\text{WF}(\epsilon_{4})\) & \(y_{5}=\text{DM}(\epsilon_{4}^{\prime})\) \\ \hline \(s_{6}\) & \(\epsilon_{6}=s_{6}+y_{6}\) & \(\epsilon_{6}^{\prime}=\text{WF}(\epsilon_{5})\) & \(y_{6}=\text{DM}(\epsilon_{5}^{\prime})\) \\ \hline \(s_{7}\) & \(\epsilon_{7}=s_{7}+y_{7}\) & \(\epsilon_{7}^{\prime}=\text{WF}(\epsilon_{6})\) & \(y_{7}=\text{DM}(\epsilon_{6}^{\prime})\) \\ \hline \(s_{8}\) & \(\epsilon_{8}=s_{8}+y_{8}\) & \(\epsilon_{8}^{\prime}=\text{WF}(\epsilon_{7})\) & \(y_{8}=\text{DM}(\epsilon_{7}^{\prime})\) \\ \hline \end{tabular}
\end{table}
Table 1: How information moves through the control loop, starting from the loop turning on. With a step of delay between each component, we at first have no information sensed and turned into a DM command.
Figure 4: Control diagram of a classic two-step delay. The wavefront (\(s\)) comes into the system, and first meets a sum junction, where we apply deformable mirror (DM) commands (\(y\)), making it a closed-loop system. At this point, our residual is \(\epsilon=s+y\). That signal is sensed by a wavefront sensor; that sensed signal is \(\epsilon^{\prime}=\text{WF}(\epsilon)\), where WF represents a functional form of how the wavefront sensor interprets the signal. Finally, this is fed into a computer that will calculate and apply a correction \(y=DM(\epsilon^{\prime})\), where DM represents a functional form of how that correction is applied and calculated based on \(\epsilon^{\prime}\).
\[\mathbf{\phi}(7)=\begin{bmatrix}y(n-1)=y_{6}=\text{DM}(\epsilon_{5}^{\prime})=\text{ DM}(\text{WF}(\epsilon_{4}))\\ y(n-2)=y_{5}=\text{DM}(\epsilon_{4}^{\prime})=\text{DM}(\text{WF}(\epsilon_{3}))\\ y(n-3)=y_{4}=\text{DM}(\epsilon_{3}^{\prime})=\text{DM}(\text{WF}(\epsilon_{2}) )\\ \epsilon^{\prime}(n-2)=\epsilon_{5}^{\prime}=\text{WF}(\epsilon_{4})\\ \epsilon^{\prime}(n-3)=\epsilon_{4}^{\prime}=\text{WF}(\epsilon_{3})\\ \epsilon^{\prime}(n-4)=\epsilon_{3}^{\prime}=\text{WF}(\epsilon_{2})\end{bmatrix} \tag{11}\]
Notice that though the control clock indexing appears to introduce a one-step offset, the most expanded part of the expression for each iteration shows that the elements of the history vector depend on the same physically timed pieces of information.
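This bookkeeping can also be checked mechanically. The toy loop below replays Table 1 with stand-in sensor and corrector functions (identity WF and sign-flip DM, chosen only to make the chain traceable; they are not models of real hardware) and confirms that each command traces back two physical steps:

```python
WF = lambda x: x      # stand-in wavefront-sensor response
DM = lambda x: -x     # stand-in correction: negate what was sensed

s = [float(k) for k in range(1, 9)]   # turbulence s_1..s_8
eps, epsp, y = {}, {}, {}
for i in range(1, 9):
    if i - 1 in eps:
        epsp[i] = WF(eps[i - 1])                      # eps'_i = WF(eps_{i-1})
    y[i] = DM(epsp[i - 1]) if i - 1 in epsp else 0.0  # y_i = DM(eps'_{i-1})
    eps[i] = s[i - 1] + y[i]                          # residual = s_i + y_i
```

Unrolling gives \(y_{6}=\mathrm{DM}(\mathrm{WF}(\epsilon_{4}))\) and \(y_{7}=\mathrm{DM}(\mathrm{WF}(\epsilon_{5}))\), matching Eq. (11).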
It should be noted that a classic two-step delay is not actually representative of most systems. Cetre, 2018 [1] found that for the Keck pyramid WFS RTC, the time delay over the entire correction (including wavefront sensing time, calculations, and DM latency after the correction is applied) is \(\sim\)1.5 ms, where \(\sim\)1 ms is the wavefront sensing and readout time, and \(\sim\)0.5 ms is the calculation and hardware latency. For this simulation work we have opted to only update the DM once per new piece of wavefront sensor information, essentially forcing this to be a classic two-step delay. However, future work will explore the efficacy of updating the DM more frequently, as we could easily project the correction forward and provide two DM updates per wavefront sensor readout.
|
2307.08390 | Correlation-aware Spatial-Temporal Graph Learning for Multivariate
Time-series Anomaly Detection | Multivariate time-series anomaly detection is critically important in many
applications, including retail, transportation, power grid, and water treatment
plants. Existing approaches for this problem mostly employ either statistical
models which cannot capture the non-linear relations well or conventional deep
learning models (e.g., CNN and LSTM) that do not explicitly learn the pairwise
correlations among variables. To overcome these limitations, we propose a novel
method, correlation-aware spatial-temporal graph learning (termed CST-GL), for
time series anomaly detection. CST-GL explicitly captures the pairwise
correlations via a multivariate time series correlation learning module based
on which a spatial-temporal graph neural network (STGNN) can be developed.
Then, by employing a graph convolution network that exploits one- and multi-hop
neighbor information, our STGNN component can encode rich spatial information
from complex pairwise dependencies between variables. With a temporal module
that consists of dilated convolutional functions, the STGNN can further capture
long-range dependence over time. A novel anomaly scoring component is further
integrated into CST-GL to estimate the degree of an anomaly in a purely
unsupervised manner. Experimental results demonstrate that CST-GL can detect
anomalies effectively in general settings as well as enable early detection
across different time delays. | Yu Zheng, Huan Yee Koh, Ming Jin, Lianhua Chi, Khoa T. Phan, Shirui Pan, Yi-Ping Phoebe Chen, Wei Xiang | 2023-07-17T11:04:27Z | http://arxiv.org/abs/2307.08390v2 | # Correlation-aware Spatial-Temporal Graph Learning for Multivariate Time-series Anomaly Detection
###### Abstract
Multivariate time-series anomaly detection is critically important in many applications, including retail, transportation, power grid, and water treatment plants. Existing approaches for this problem mostly employ either statistical models which cannot capture the non-linear relations well or conventional deep learning models (e.g., CNN and LSTM) that do not explicitly learn the pairwise correlations among variables. To overcome these limitations, we propose a novel method, correlation-aware spatial-temporal graph learning (termed CST-GL), for time-series anomaly detection. CST-GL explicitly captures the pairwise correlations via a multivariate time series correlation learning module based on which a spatial-temporal graph neural network (STGNN) can be developed. Then, by employing a graph convolution network that exploits one- and multi-hop neighbor information, our STGNN component can encode rich spatial information from complex pairwise dependencies between variables. With a temporal module that consists of dilated convolutional functions, the STGNN can further capture long-range dependence over time. A novel anomaly scoring component is further integrated into CST-GL to estimate the degree of an anomaly in a purely unsupervised manner. Experimental results demonstrate that CST-GL can detect anomalies effectively in general settings as well as enable early detection across different time delays.
Multivariate Time Series, Anomaly detection, Graph neural networks.
## I Introduction
Rapid developments in Cyber-Physical Systems (CPS) have resulted in an explosive growth of time-series data collected across industries. In many applications, the CPS implemented generates time-series data from multiple devices or sensors, forming a complex _multivariate time-series_. Importantly, an operator may have thousands to millions of CPS systems, recording a manually unmanageable amount of multivariate time-series data. For example, each server of a cloud infrastructure provider generates multivariate time-series data, and many providers may have millions of servers [1]. A similar scale in the CPS system has also been observed in numerous commercial systems and critical infrastructures including power systems, spacecraft [2], engines, transportation, cyber networks [3], and water treatment plants [4]. Relying on human labour to monitor these operations would thus be not only impractical but also impossible.
To enable effective monitoring and warning of large-scale system operations, multivariate time-series anomaly detection has become an important topic. Successful implementation of multivariate time-series anomaly detection model could bring substantial economic and social benefits. For instance, in a water treatment plant [5], hundreds of sensors are installed to monitor water level, flow rates and water quality. A malicious attack may occur by simply turning on a single motorized valve, causing a disastrous cascading effect on the entire water distribution system. Automatically monitoring and detecting these abnormal behaviours can thus provide a fast response, which helps rectify errors, reduce cost, and save lives.
Among the various implementation approaches for detecting anomaly events, _unsupervised_ anomaly detection is one of which has attracted the most attention due to the difficulty of obtaining ground-truth anomalies over time. Early approaches typically employed either statistical unsupervised models such as ARIMA/VAR [6] or distance-based approaches [7, 8]. Unfortunately, these methods cannot capture the non-linear spatial and temporal relationship from the multivariate time-series data well. More recently, with the flourish of deep learning (DL), significant advances have been made. For instance, Hundman et al. proposed a Long Short-Term Memory (LSTM) network together with a nonparametric thresholding approach [2] and Su et al. proposed a representation learning-based stochastic recurrent neural network [3] approach to improving the current ability to detect multivariate time-series anomaly events. While the proposed DL frameworks can efficiently scale through high-dimensional multivariate time-series data, they did not explicitly model the underlying pairwise inter-dependence among variable pairs, weakening their capacity in detecting complex anomaly events.
The difficulty of detecting anomaly events in multivariate time-series data lies in the fact that the variable pairs are intricately related. Figure 1 shows a real-world inspired example of multivariate time-series data with six variables where A, B and C represents closely related variable groups. A1 variable is not closely related to other variables and the detection of anomaly events can simply be a significant deviation from past behaviours. B1 and B2 are two inter-related variables that should go up and down together, a deviation from this relationship is thus an anomaly event. On the other hand, C1 always increases with a lag after C2 is switched on (upward spike). The exception to the C1-C2 relationship (grey span of Figure 1) is when C3 is also switched on as C3 decreases
C1, creating an offsetting effect on C2. The red span in the C variable groups indicates that an anomaly event has occurred because C1 does not increase despite C2 being switched on and C3 being switched off. While variable pairs that form the multivariate time-series are naturally interdependent, the degree of inter-dependence tells the full story of multivariate time-series data. Further, as shown above, the complexity increases exponentially with the increase in the number of variables. It is thus crucial for an anomaly detection model to not only assume the inter-dependent relationship but to _explicitly learn and capture the pairwise correlations (i.e., degree of spatial dependence) between the variables of a multivariate time-series_.
To explicitly capture the pairwise correlations, a natural way is to model multivariate time series as a graph. For example, by treating each sensor as a node, a sensor graph can be constructed in which the node features are continuously changed over time. With a representative graph, spatial-temporal graph neural networks (STGNNs) can then be employed to tackle the multivariate time series anomaly detection task by explicitly modeling the pairwise correlations via a graph neural network (GNN) module and temporal information via a CNN [9] or RNN [10] module. However, using the generic STGNN models to explicitly model pairwise correlations between variable pairs requires a predefined graph that is often not available in many multivariate time-series data. Consequently, while previous STGNN methods are equipped to capture spatial dependencies, their capacity to learn and construct the relationship between the variables may not always be optimized, especially in cases where a predefined graph is not readily available in many multivariate time-series data.
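As a minimal illustration of making pairwise correlations explicit (a naive empirical-correlation graph, not the learned graph structure used by the STGNN methods discussed here), each variable can be linked to its \(k\) most correlated peers:

```python
import numpy as np

def topk_correlation_graph(X, k):
    # X: (T, N) multivariate time series. Returns a boolean adjacency
    # linking each variable to its k most correlated peers (by |corr|).
    C = np.abs(np.corrcoef(X, rowvar=False))
    np.fill_diagonal(C, 0.0)
    N = C.shape[0]
    A = np.zeros((N, N), dtype=bool)
    for i in range(N):
        A[i, np.argsort(C[i])[-k:]] = True
    return A
```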
To address the limitations of generic STGNNs, Graph Deviation Network (GDN) [11] employs a simple graph learning layer to learn and construct the pairwise correlation relationship between the variable pairs. Then, a graph attention network is used to propagate historical information among the variables to forecast the next observation. Anomaly events are subsequently detected based on the magnitude of deviation between forecast and real observations. Nevertheless, in this preliminary study, GDN only captures the spatial dependency in the direct neighbors of each variable, which may cause it to lose important information from high-order (multi-hop) neighbors. Furthermore, it did not explicitly model temporal relations within each univariate time series, which are crucial for characterizing multivariate time series data [12] and thus further compromises the effectiveness of GDN.
Based on the above observations, we summarize the challenges for multivariate time series anomaly detection from the graph neural network perspective as follows.
* **Multivariate time-series correlation learning (Challenge 1).** The underlying correlations among the time-series variable pairs are important for multivariate time series anomaly detection task. How to explicitly capture the pairwise relations among variables to enable spatial-temporal analysis is the first challenge.
* **Spatial-temporal dependency modeling (Challenge 2).** Multivariate time-series analysis requires a deep understanding of spatial-temporal dependency; how to simultaneously capture spatial and temporal dependency remains a challenge for multivariate time series.
* **Anomaly scoring (Challenge 3).** How to estimate the anomaly score in an unsupervised way is the ultimate challenge for multivariate time series anomaly detection.
To address these challenges, we propose a novel algorithm CST-GL in this paper. Our theme is to model multivariate time series as a graph and design a spatial-temporal graph neural network to perform forecasting. Based on the forecasting results, the anomaly score can be well estimated, and the anomalies can be detected accordingly. To be more specific, we propose a multivariate time series correlation learning module that can automatically infer the underlying correlation among variables (_for Challenge 1_). Then, a well-designed spatial-temporal graph neural network is presented to model both the spatial and temporal dependency (_for Challenge 2_). The spatial dependence is modeled via a graph convolution network based on the gated mix-hop feature propagation that exploits neighbors from both single and multiple hops to better encode spatial information. The temporal dependence is captured via a temporal convolutional network which incorporates a gating mechanism with temporal convolution functions for long dependence modeling. Based on the forecasting results, we propose an anomaly forecast indicator that performs normalization on the most recent window of historical data and estimates the anomaly scores based on the reconstruction from a simple Principal Component Analysis model (_for Challenge 3_). Experimental results on real datasets demonstrate the superb performance of our method.
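The scoring step (normalize a recent window of forecast errors, then score each new observation by its PCA reconstruction error) can be sketched as follows; this is a plain-numpy stand-in, and CST-GL's exact normalization and windowing may differ:

```python
import numpy as np

class PCAErrorScorer:
    # Fit PCA on normalized forecast-error vectors from normal data; score
    # a new error vector by its reconstruction error outside the retained
    # principal subspace (higher = more anomalous).
    def __init__(self, n_components):
        self.k = n_components

    def fit(self, E):                        # E: (samples, N) error vectors
        self.mu = E.mean(axis=0)
        self.sd = E.std(axis=0) + 1e-8
        Z = (E - self.mu) / self.sd
        _, _, Vt = np.linalg.svd(Z, full_matrices=False)
        self.V = Vt[: self.k].T              # (N, k) principal directions
        return self

    def score(self, e):
        z = (e - self.mu) / self.sd
        return float(np.linalg.norm(z - self.V @ (self.V.T @ z)))
```

An error vector resembling the normal training pattern scores near zero, while one that breaks the learned inter-variable structure scores high.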
The main contributions of this paper are as follows:
* We propose an integrated algorithm for multivariate time series data analysis. Our method seamlessly integrates correlation learning into a spatial-temporal network for multivariate time series.
* We propose a novel algorithm for multivariate time series anomaly detection. Our method can automatically detect anomalies from complex time-series via auto-thresholding and enable early detections effectively.
* We compare our method with _eleven_ baselines for both general multivariate time series anomaly detection which
Fig. 1: An example of multivariate time-series data. Red spans represent anomaly events while grey spans represent time event highlights that are non-anomalous but form a reference against which to compare the behaviours of anomaly events. The first three examples showcased are from the Server Machine Dataset (SMD) [3], which contains data from internet servers, while the last three are drawn from WADI, a water treatment plant sensing dataset [5].
aims to evaluate the overall performance in a whole dataset, as well as early detection of anomalies that needs to detect anomaly as early as possible. Our experimental results demonstrated that our method outperforms all baselines in both settings.
* We conduct a case study to demonstrate that our method not only enables effective anomaly detection but also provides interpretability in real-life applications.
The rest of the paper is structured as follows. Section II reviews the related work. Section III gives the definition of the task. Section IV presents the proposed CST-GL. Section V illustrates our experiments and conclusion in Section VI.
## II Related Work
In this section, we introduce the past work on multivariate time-series anomaly detection and graph neural networks.
### _Anomaly Detection in Multivariate time-series_
Detecting anomalies in time-series is a challenging task that has been perennially studied [13, 14, 15]. Historically, statistical models such as ARIMA/VAR [6], PCA [16] and SVM [17] have been applied to detect anomalies in univariate and multivariate time-series. Traditional techniques involving wavelet analysis [18], non-parametric [19], pattern-based [20, 21] and distance-based [7, 8] approaches have also been implemented in combination. More recently, substantial efforts have been made to advance deep learning approaches for anomaly detection in multivariate time-series data across numerous domains [2, 3, 22]. As argued by [23, 24], this phenomenon has arisen because (a) deep learning frameworks are free from stationarity assumptions and can scale through high dimensional temporal data and (b) unlike pattern-based approaches that only detect anomaly events by identifying anomalous sub-sequences, deep learning frameworks can detect anomalous events timestamp-by-timestamp within sequences and are thus well suited for the deployment of real-time streaming anomaly detection systems.
Deep learning models for multivariate time-series anomaly detection are primarily designed using recurrent neural networks (RNNs) that are combined either with convolutional neural networks (CNNs) [22, 25], variational autoencoders (VAEs) [26, 3] or Generative Adversarial Networks (GANs) [27]. The RNN is employed to capture temporal dependencies [28, 29, 2] while the CNN, VAE or GAN is incorporated to capture dependencies among the multivariate variables. Any time-series observations which unexpectedly deviate from the learned temporal and relational dependencies would then be treated as anomalies. However, since CNNs, VAEs and GANs do not explicitly learn the relationship between the multivariate variables and only encapsulate interactions among variables into a global hidden state, they cannot fully exploit the latent dependencies between the variable pairs [30, 31]. For more research on deep learning for time-series anomaly detection, we refer readers to the most recent survey [32].
### _Graph Learning_
Graph learning [33] is a new learning paradigm that enables machine learning for graph data. A key component of this paradigm, graph neural networks, have been widely studied to handle an array of graph-structured data [34]. This includes a well-known subset of methods, namely spatial-temporal graph neural networks, which are typically applied to modeling multivariate time series [35]. In this context, graph structure learning is often involved when prior knowledge of the underlying graph topology is not readily available.
_Generic graph neural networks._ Graph neural networks (GNNs) have recently become de facto models to exploit graph data for graph analytics [36, 37, 38, 39, 40, 41]. The core idea of graph neural networks is to employ a _message passing_ scheme, which iteratively updates the representation (embedding) of a target node by propagating the representations of neighboring nodes. For instance, GCN [36] updates its node embedding by assigning a predefined weight to each message (embedding) from a neighbor. GAT [37] automatically learns the weight of each neighbor and performs a weighted aggregation to update the target node's representation. Due to the outstanding capacity of modeling inter-relationship of different entities in various domains, GNNs have been widely used in domains and applications including traffic [42, 35], recommender systems [43], drug discovery [44], and anomaly detection [45, 46].
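For reference, the GCN update described above can be written in a few lines; this didactic numpy version follows the normalized propagation rule of [36], omitting the nonlinearity:

```python
import numpy as np

def gcn_layer(A, H, W):
    # One GCN propagation step: H' = D^{-1/2} (A + I) D^{-1/2} H W.
    # Each node aggregates degree-normalized messages from its neighbors
    # (and itself), then applies a shared linear transform W.
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return d_inv_sqrt @ A_hat @ d_inv_sqrt @ H @ W
```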
_Graph Structure Learning._ Learning graph neural network models typically requires a predefined graph structure so that _message passing_ can be performed along the topological structure. However, in many applications related to time series, the graph structure may not be available and GNN models are not directly applicable. To overcome this challenge, graph structure learning [47, 48] has recently emerged to automatically learn the graph structure from the data itself. For instance, SUBLIME [48] presents a structure bootstrapping contrastive learning framework to infer the relationships among data. However, these approaches can only be applied to static data; for dynamic data such as the time series considered in this paper, they cannot be directly applied.
_Spatial-Temporal Graph Neural Networks._ To extend GNNs for handling dynamic graph-structured data, recent research has delved into spatial-temporal graph neural networks (STGNNs) [49]. These are especially effective in situations where the underlying graph structure remains static, but the features of the nodes undergo dynamic changes over time. A prime example of STGNNs in action is traffic forecasting, where the physical infrastructure such as subway stations and tracks is constant, but the traffic volume fluctuates continuously. Seo et al. [10] proposed a recurrent STGNN which adopts Long Short-Term Memory networks (LSTMs) [50] and a Graph Convolution Network (GCN) [36] as key components to capture temporal and spatial dependencies. Instead, Li et al. [9] proposed a CNN-based method (1D convolution) to capture temporal dependencies and a GCN to capture spatial dependencies. Wu et al. [23] proposed a joint graph structure learning and forecasting framework for spatial-temporal modeling. However, these methods typically only consider general forecasting tasks and do not address anomaly detection in time-series data.
## III Problem Formulation
A multivariate time-series with \(T\) successive, equally-spaced observations, represented as \(\mathbf{X}=\{\mathbf{x}^{1},\mathbf{x}^{2},\cdots,\mathbf{x}^{T}\},\mathbf{x}^{t}\in\mathbb{R}^{N}\), is composed of \(N\) univariate time-series, with \(\mathbf{x}^{t}=\{\mathbf{x}^{t}_{1},\mathbf{x}^{t}_{2},\cdots,\mathbf{x}^{t}_{N}\}\). In a real-time fashion, the multivariate time-series anomaly detection task requires learning a scoring function, \(A(\cdot)\), that assigns an anomaly score to the current observation at timestamp \(T\) such that \(A(\mathbf{x}^{a})>A(\mathbf{x}^{n})\), where \(\mathbf{x}^{a}\) is an anomalous observation and \(\mathbf{x}^{n}\) is not. Ideally, the proposed framework should also output a binary label indicating whether a timestamp is anomalous, where \(y^{T}\in\{0,1\}\) and \(y^{T}=1\) if the observation \(\mathbf{x}^{T}\) is anomalous.
In this paper, we consider the unsupervised _real-time_ anomaly detection task. Firstly, a model is required to learn the normality of a time-series based on a non-anomalous train set. Then, given streaming time-series observations that consists both normal and anomalous observations, the model should detect anomaly events in real time. Under this setting, models can only rely on past observations to make a decision at every timestamp and cannot reverse its previous decisions.
## IV Methodology
In this section, we present the overall framework of CST-GL and its detailed designs to detect anomaly events in a multivariate time-series. As shown in Figure 2, our method mainly consists of three main constituents: I. _multivariate time-series correlation learning_, II. _spatial-temporal graph neural network_, and III. _anomaly detection and diagnosis_ module.
Given a multivariate time-series, we first propose to exploit the latent associations (i.e., edges) between each pair of univariate time-series (i.e., nodes) explicitly via a pairwise correlation learner, where the learned graph structure, together with the historical observations, is then encoded by a sandwich-structured spatial-temporal graph neural network to produce reliable forecasts. Specifically, we interlace the designed graph and temporal convolutions to capture rich spatial and temporal dependencies, respectively. The underlying considerations are two-fold: (1) potential anomalies in a univariate time-series can be easily identified by referring to its historical observations. For example, a sudden CPU outage is likely to trigger a system alert when compared with long-term historical readings. (2) However, for multivariate time-series data, anomalies in a specific variable may be associated not only with its own historical observations but also with the readings of other variables. A concrete example is traffic networks, where a change of road conditions in one street may cause a serious traffic jam in another. Thus, it is crucial to model the underlying spatial and temporal dependencies in historical observations to perform precise and stable anomaly detection at each time step. To accomplish this goal, we place an _anomaly detection and diagnosis_ module on top of the _multivariate time-series correlation learning_ and _spatial-temporal graph neural network_ modules, where the anomaly score at each time point is derived from the forecasting errors. In other words, we conjecture that time-series anomalies are typically reflected as a mismatch between _anomalous observations_ and the forecasting results given by a spatial-temporal model well-trained on _non-anomalous data_.
Further, we argue that for the root cause of anomaly events to be identified, pairwise correlations between variables have to be learned and captured by the model. This is because univariate variables that deviate significantly from past spatial and temporal behaviour may only be symptoms of the root cause, and the learned pairwise correlations can reveal the root-cause variables. As shown later in the experimental section, CST-GL identifies the root cause of an anomaly event using the well-learned pairwise correlation graph that captures the inter-dependence between variable pairs.
In the rest of this section, we introduce the multivariate time-series correlation learning (MTCL) in Subsection IV-A. Then, in Subsection IV-B1 and IV-B2, we illustrate the detailed designs of the proposed spatial-temporal graph neural network (STGNN) in capturing the underlying spatial and temporal clues for accurate forecasting. Finally, we discuss how the proposed anomaly detection and diagnosis module can compute the real-time anomaly score in Subsection IV-C1 and identify the root cause of an anomaly event in Subsection IV-C2.
### _Multivariate Time-Series Correlation Learning_
To explicitly enable the modeling of pairwise dependencies among variables in a multivariate time series, we design a correlation learning layer and propose to learn the underlying unknown graph adjacency matrix \(\mathbf{A}\) adaptively, where nodes and edges denote variables and their connectivity. Specifically, our detailed formulation is given as follows:
\[\begin{cases}\widetilde{\mathbf{N}}_{1}=tanh(\alpha\mathbf{N}_{1}\mathbf{W}_{1 }),\\ \widetilde{\mathbf{N}}_{2}=tanh(\alpha\mathbf{N}_{2}\mathbf{W}_{2}),\\ \mathbf{A}=ReLU\{tanh(\alpha(\widetilde{\mathbf{N}}_{1}\widetilde{\mathbf{N}} _{2}^{\mathrm{T}}-\widetilde{\mathbf{N}}_{2}\widetilde{\mathbf{N}}_{1}^{ \mathrm{T}}))\},\end{cases} \tag{1}\]
where \(\mathbf{N}_{1},\mathbf{N}_{2}\in\mathbb{R}^{N\times d}\) are two randomly initialized node embedding matrices, and \(\mathbf{W}_{1},\mathbf{W}_{2}\in\mathbb{R}^{d\times d}\) are two sets of trainable parameters. The hyper-parameter \(\alpha\) denotes the non-linear activation saturation rate. In contrast to our approach, many existing works construct such an adjacency matrix by measuring the pairwise distance or similarity between variables in a multivariate time-series, such as Euclidean distance [51] and cosine similarity [52], resulting in a high time and space complexity of \(\mathcal{O}(N^{2})\)[23]. Another significant drawback of existing methods based on distance or similarity metrics is that the learned pairwise dependencies are symmetric, which is not desired when describing the relations between variables in real-world multivariate time series. For example, a traffic jam on one street may cause a jam on another street but not vice versa if there are alternative routes. Thus, we expect the learned time series dependencies to be uni-directional. Let \(\widetilde{\mathbf{N}}_{1}\) and \(\widetilde{\mathbf{N}}_{2}\) be the two transformed node embedding matrices; the uni-directional property is then achieved by the subtraction term \(\widetilde{\mathbf{N}}_{1}\widetilde{\mathbf{N}}_{2}^{\mathrm{T}}-\widetilde{\mathbf{N}}_{2}\widetilde{\mathbf{N}}_{1}^{\mathrm{T}}\) and the two nonlinear activation functions, i.e., if \(A_{ij}\) is a positive number, then \(A_{ji}\) will be zero. The output adjacency matrix \(\mathbf{A}\) has all its elements regularized between 0 and 1. To reduce the required computational cost and ease the optimization, we further mask with zeros all elements in the learned graph adjacency matrix except for the top-\(k\) closest neighbors of each node, making \(\mathbf{A}\) sparse as controlled by the hyper-parameter \(k\). Specifically, for the \(i\)-th row in \(\mathbf{A}\), we have the following post-processing:
\[\begin{cases}\mathbf{topk}=argmax(\mathbf{A}[i,\cdot],k),\\ \mathbf{A}[i,-\mathbf{topk}]=0,\end{cases} \tag{2}\]
where \(argmax(\cdot,k)\) returns the indices of top-\(k\) largest values in the input vector.
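Under the formulation of Equations 1 and 2, the correlation learner can be sketched in NumPy as follows. This is a simplified illustration, not the paper's implementation: the function name and defaults are ours, and the trainable matrices are passed in as plain arrays rather than being optimized end-to-end.

```python
import numpy as np

def learn_adjacency(N1, N2, W1, W2, alpha=3.0, k=2):
    """Directed adjacency from node embeddings (Eq. 1), then keep
    only each node's top-k neighbors (Eq. 2). N1, N2: (N, d) node
    embeddings; W1, W2: (d, d) transformation matrices."""
    M1 = np.tanh(alpha * N1 @ W1)
    M2 = np.tanh(alpha * N2 @ W2)
    # Antisymmetric inner term makes the learned relations uni-directional:
    # A[i, j] > 0 implies A[j, i] == 0 after ReLU.
    A = np.maximum(0, np.tanh(alpha * (M1 @ M2.T - M2 @ M1.T)))
    # Sparsify: zero out everything except the top-k entries per row.
    for i in range(A.shape[0]):
        topk = np.argsort(A[i])[-k:]
        mask = np.zeros_like(A[i], dtype=bool)
        mask[topk] = True
        A[i, ~mask] = 0
    return A
```

Because the inner term is antisymmetric, the ReLU guarantees at most one direction of each edge survives, which the top-\(k\) masking then further sparsifies.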
### _Spatial-Temporal Graph Neural Network_
#### IV-B1 Graph Convolution Network
The spatial correlations between variables play a vital role in reflecting the intrinsic dynamics of multivariate time-series. Towards this, we design a spatial graph convolution layer to effectively pass messages between variables and their neighbors and exploit the underlying spatial patterns, allowing historical observations to be better encoded for more precise and stable predictions, thus benefiting the downstream anomaly detection task. Similar to SGC [53] and the discrete variant of MTGODE [12], given an adjacency matrix \(\mathbf{A}\) and the input (initial) states \(\mathbf{H}_{in}\), we may characterize the graph propagation process as a combination of a feature propagation step and a linear transformation step:
\[\begin{cases}\mathbf{H}^{k+1}=\widetilde{\mathbf{A}}\ \mathbf{H}^{k},\ k\in\{0, \cdots,K\},\\ \mathbf{H}_{out}=\mathbf{H}^{K}\ \ \mathbf{\Theta},\end{cases} \tag{3}\]
where \(K\) denotes the graph propagation depth, and we have \(\mathbf{H}^{0}=\mathbf{H}_{in}\). Specifically, \(\widetilde{\mathbf{A}}\) in the above equation denotes the normalized adjacency matrix, i.e., \(\widetilde{\mathbf{A}}=\widetilde{\mathbf{D}}^{-1}(\mathbf{A}+\mathbf{I})\) and \(\widetilde{\mathbf{D}}_{ii}=1+\sum_{j}\mathbf{A}_{ij}\).
However, Equation 3 suffers from two critical limitations. Firstly, although the above feature propagation design allows latent node states to be recursively propagated along a given graph structure, the node latent states inevitably become indistinguishable, i.e., converge to a single point, as the propagation depth \(K\) increases, a phenomenon known as over-smoothing [12]. Secondly, applying the linear transformation only to the last node latent states \(\mathbf{H}^{K}\) may be prone to errors [54, 23]. For example, if there are no correlations between variables in a multivariate time series, the feature propagation step will introduce noise into the latent node states by blindly aggregating the neighbouring information. Thus, merely considering the linear transformation of the last propagated states hinders accurately modeling the latent spatial dynamics of a multivariate time series. To address these two limitations, we equip the vanilla feature propagation in Equation 3 with a gating mechanism and replace the subsequent linear mapping with an attentive transformation that mixes the information from multiple hops. The proposed graph convolution network is defined as follows:
\[\begin{cases}\mathbf{H}^{k+1}=\beta\ \mathbf{H}_{in}+(1-\beta)\ \widetilde{\mathbf{A}}\ \mathbf{H}^{k},\ k\in\{0,\cdots,K\},\\ \mathbf{H}_{out}=\sum_{k=0}^{K}\mathbf{H}^{k}\ \mathbf{\Theta}^{k},\end{cases} \tag{4}\]
Fig. 2: Overall framework of CST-GL. **I. MTCL** starts with a randomly initialized node embedding for each multivariate variable and learns the underlying graph adjacency matrix, \(\mathbf{A}\), adaptively with the entire model in an end-to-end manner. The adjacency matrix \(\mathbf{A}\) is used by the graph convolution networks in the STGNN module. **II. STGNN**'s \(1\times 1\) convolution layer projects the sliding-window input into the latent space. Then, temporal and graph convolution networks are interlaced to capture rich spatial and temporal dependencies, each pair forming one spatial-temporal layer. Skip connections, \(Z^{0}+Z^{1}+...+Z^{L}\), are incorporated to obtain hidden features that encapsulate the spatial-temporal patterns. Finally, the forecast head projects the hidden features into a one-step forecast output, \(\hat{x}^{T}\). **III. ADD**. a) Real-time Anomaly Indicator: takes the current one-step forecast and all observation-forecast pairs computed prior to timestamp \(T\), computes normalized forecast deviations, and outputs a PCA-based anomaly indicator score in real time. b) Root Cause Anomaly Diagnosis: takes the result from the Real-time Anomaly Indicator and the learned pairwise correlations, \(\mathbf{A}\), from MTCL to enhance CST-GL's interpretability and identify the root causes of anomaly events.
where \(\beta\) controls how much of the original node information is retained to avoid the aforementioned over-smoothing issue. Regarding the attentive transformation in Equation 4, we can easily alleviate the problem mentioned in the above example by assigning a relatively large weight to the initial node states \(\mathbf{H}^{0}\) and small weights to \(\mathbf{H}^{k}\), where \(k\in\{1,\cdots,K\}\).
As mentioned in Subsection IV-A, the learned pairwise dependencies are uni-directional. Thus, we refactor the final output of the graph convolution network as the summation of two transformations described in Equation 4, where the input latent node states are both \(\mathbf{H}_{in}\) but with different adjacency matrices, i.e., \(\mathbf{A}\) and \(\mathbf{A}^{\mathsf{T}}\), to incorporate nodes' inflow and outflow information, respectively.
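A minimal NumPy sketch of the gated propagation and attentive mixing in Equation 4, for one direction of the graph: here `A_norm` plays the role of the normalized adjacency matrix \(\widetilde{\mathbf{A}}\) and `Thetas` supplies the per-hop transformation matrices \(\mathbf{\Theta}^{k}\). Names and the order of operations are illustrative, not the authors' code.

```python
import numpy as np

def gated_gcn(A_norm, H_in, Thetas, beta=0.05):
    """Gated feature propagation (Eq. 4): each hop blends the initial
    states back in to resist over-smoothing, and the output mixes all
    hops via per-hop weight matrices. Thetas has K+1 entries, one per
    hop (including hop 0)."""
    K = len(Thetas) - 1
    H = H_in
    hops = [H]
    for _ in range(K):
        H = beta * H_in + (1 - beta) * (A_norm @ H)
        hops.append(H)
    # Attentive transformation: weighted sum over all hop states.
    return sum(Hk @ Tk for Hk, Tk in zip(hops, Thetas))
```

Running this twice, once with `A_norm` built from \(\mathbf{A}\) and once from \(\mathbf{A}^{\mathrm{T}}\), and summing the outputs reproduces the bidirectional refactoring described above.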
#### IV-B2 Temporal Convolution Network
Solving Equation 4 only allows modeling the spatial dynamics at a certain point in time, where the rich temporal clues in multivariate time-series are neglected. To capture this missing information, we devise a simple yet effective temporal convolution network that works together with our graph convolution network to capture expressive spatial and temporal patterns in historical observations.
We first introduce the composition of the proposed temporal convolution network, which consists of multiple residual dilated temporal convolution layers that extract and aggregate high-level temporal features in a non-recursive manner, avoiding the shortcomings of Recurrent Neural Networks (RNNs) such as time-consuming iteration and gradient explosion [12, 23, 55]. Specifically, given a sequence of historical observations \(\mathbf{X}=\{\mathbf{x}^{1},\mathbf{x}^{2},\cdots,\mathbf{x}^{T-1}\}\), a temporal convolution layer is defined as follows:
\[\mathbf{Z}^{l+1}=\mathcal{T}(\mathbf{Z}^{l},Q^{l+1})+TCN(\mathbf{Z}^{l}, \mathbf{\Phi}^{l}),\ l\in\{0,\cdots,L\}, \tag{5}\]
where the output of the network is \(\mathbf{Z}_{out}=\mathbf{Z}^{L}\), the input states \(\mathbf{Z}^{0}\) are obtained by applying a linear mapping on \(\mathbf{X}\), \(TCN(\cdot,\mathbf{\Phi}^{l})\) is a temporal convolution function parameterized by \(\mathbf{\Phi}^{l}\) at the \(l\)-th layer, and \(\mathcal{T}(\mathbf{Z}^{l},Q^{l+1})\) denotes a truncation function that takes the last \(Q^{l+1}\) elements from \(\mathbf{Z}^{l}\) along its sequence-length axis. The underlying consideration is that the residual input \(\mathbf{Z}^{l}\) has to be truncated to the length of \(TCN(\mathbf{Z}^{l},\mathbf{\Phi}^{l})\) before adding them together, because the sequence length of the latent node states shrinks gradually as the underlying temporal information is aggregated after each temporal convolution layer. Specifically, we have \(Q^{l+1}=Q^{l}-r^{l}\times(k-1)\) and \(Q^{1}=R-k+1\), where \(k\), \(r\) and \(R\) are the kernel size, dilation factor, and receptive field (i.e., \(R=L(k-1)+1\) when \(r=1\), and \(R=1+(k-1)(r^{L}-1)/(r-1)\) when \(r>1\)). In terms of the design of the temporal convolution function \(TCN(\cdot,\mathbf{\Phi}^{l})\), we follow [23] and adopt a gating mechanism to guide the information flow during the aggregation:
\[\text{TCN}(\mathbf{Z}^{l},\mathbf{\Phi}^{l})=f_{C}(\mathbf{Z}^{l},\mathbf{\Phi }^{l}_{c})\odot f_{\mathcal{G}}(\mathbf{Z}^{l},\mathbf{\Phi}^{l}_{g}), \tag{6}\]
where \(f_{C}(\cdot)\) and \(f_{\mathcal{G}}(\cdot)\) are filtering and gating convolutions, and \(\odot\) denotes the element-wise product. Specifically, we define these two convolutions in below:
\[\begin{cases}f_{C}(\mathbf{Z}^{l},\mathbf{\Phi}^{l}_{c})=tanh\big{(}\mathbf{W }^{1\times n}_{\mathbf{\Phi}^{l}_{c}}\star_{\Delta}\mathbf{Z}^{l}\ +\ \mathbf{b}^{1\times n}_{\mathbf{\Phi}^{l}_{c}}\big{)},\\ f_{\mathcal{G}}(\mathbf{Z}^{l},\mathbf{\Phi}^{l}_{g})= sigmoid\big{(}\mathbf{W}^{1\times n}_{\mathbf{\Phi}^{l}_{g}}\star_{\Delta} \mathbf{Z}^{l}\ +\ \mathbf{b}^{1\times n}_{\mathbf{\Phi}^{l}_{g}}\big{)}.\end{cases} \tag{7}\]
In the above equation, \(\star_{\Delta}\) denotes the dilated convolution operation, where the dilation \(\Delta=r^{l}\). Specifically, to allow the model to explore multi-granular temporal clues, and inspired by [23], \(f_{C}(\mathbf{Z}^{l},\mathbf{\Phi}^{l}_{c})\) and \(f_{\mathcal{G}}(\mathbf{Z}^{l},\mathbf{\Phi}^{l}_{g})\) consist of multiple convolution filters (e.g., \(\mathbf{W}^{1\times n}_{\mathbf{\Phi}^{l}_{c}}\) and \(\mathbf{b}^{1\times n}_{\mathbf{\Phi}^{l}_{c}}\)) with width \(n\in\{2,3,6,7\}\). Since most multivariate time-series data has some intrinsic periods [23, 55], such as 7, 14, 24, 28, and 30, the combination of the aforementioned kernel widths allows these common periods to be fully covered.
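The receptive-field formulas above determine how many stacked layers are needed to cover a given period. A small helper function (hypothetical, using exact integer arithmetic since \(r-1\) divides \(r^{L}-1\)) makes the relationship concrete:

```python
def receptive_field(L, k, r):
    """Receptive field of L stacked dilated temporal convolution
    layers with kernel size k and dilation factor r, per the two
    closed-form cases given in the text."""
    if r == 1:
        return L * (k - 1) + 1
    return 1 + (k - 1) * (r**L - 1) // (r - 1)
```

For example, three layers with kernel size 2 and dilation factor 2 already cover a receptive field of 8 timestamps.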
To simultaneously model spatial and temporal dynamics of a sequence of historical observations in a multivariate time-series, we construct a spatial-temporal graph neural network by combining the proposed spatial and temporal convolution networks, where the temporal and spatial convolution layers are interlaced, as shown in Figure 2. More precisely, a layer of the proposed spatial-temporal graph neural network is defined as follows by combining Equation 4 and 5:
\[\mathbf{Z}^{l+1}=\mathcal{T}(\mathbf{Z}^{l},Q^{l+1})+GCN\big{(}TCN(\mathbf{Z}^{ l},\mathbf{\Phi}^{l}),\mathbf{\Theta}\big{)}, \tag{8}\]
where \(GCN(\cdot,\mathbf{\Theta})\) and \(TCN(\cdot,\mathbf{\Phi}^{l})\) are defined in Equations 4 and 6. Finally, we take the output states \(\mathbf{Z}_{out}\) to make a single-step-ahead forecast via a multi-layer perceptron, i.e., \(\mathbf{\hat{x}}^{T}=MLP(\mathbf{Z}_{out},\mathbf{W}_{mlp})\), which forms critical evidence for detecting anomalies in a multivariate time-series.
### _Anomaly Detection and Diagnosis_
#### IV-C1 Real-time Anomaly Indicator
With an effective joint learning of spatial and temporal dependencies from the non-anomalous data, it is expected that the anomalous observations in the test set deviate significantly from the learned patterns. Accordingly, to detect anomalous multivariate observations, we first compute the normalized forecasting deviation for every univariate variable and take the sum of the reconstructed univariate deviations to be the anomalous score for each multivariate observation.
Univariate variables within a multivariate time-series often possess vastly different attributes and scales. Consequently, we independently normalize each univariate deviation to preclude any single variable from dominating the aggregate multivariate deviation value. For every univariate variable, \(\mathbf{x}^{T}_{i}\), we compute the absolute forecasting error, given by \(\mathbf{e}^{T}_{i}=\left|\mathbf{x}^{T}_{i}-\mathbf{\hat{x}}^{T}_{i}\right|\), at the current timestamp \(T\). This error is then normalized:
\[\mathbf{\hat{e}}^{T}_{i}=\frac{\mathbf{e}^{T}_{i}-\boldsymbol{\mu}^{T}_{i}}{ \boldsymbol{\sigma}^{T}_{i}}\]
where \(\boldsymbol{\mu}^{T}_{i}\) and \(\boldsymbol{\sigma}^{T}_{i}\) are the median and inter-quartile range (IQR) values across the errors \(\{\mathbf{e}^{T-W_{a}}_{i},\mathbf{e}^{T-W_{a}+1}_{i},...,\mathbf{e}^{T}_{i}\}\) in a sliding window, where \(W_{a}\) represents the window length. Our normalization approach extends [11] in that we acquire the median and IQR values through a sliding window rather than from the entire set of test observations. This modification allows us to detect anomalies in real time, as normalizing the error at time \(T\) relies only on past observations and does not require future information as in [11].
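The median/IQR normalization can be sketched for a single variable as follows; this is a simplified illustration, and the small epsilon guard against a zero IQR is our addition, not part of the paper's formulation.

```python
import numpy as np

def normalize_error(errors):
    """Normalize the latest forecast error of one variable by the
    median and inter-quartile range of errors in its sliding window.
    errors: 1-D array covering the window, last entry = current time."""
    med = np.median(errors)
    q75, q25 = np.percentile(errors, [75, 25])
    iqr = max(q75 - q25, 1e-8)   # guard against zero spread
    return (errors[-1] - med) / iqr
```

Using the median and IQR instead of the mean and standard deviation makes the normalization robust to the occasional large error spikes inside the window.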
After the normalization of each univariate variable, we obtain a multivariate normalized error vector, \(\mathbf{\hat{E}}^{T}\in\mathbb{R}^{1\times N}\), for the current timestamp. Although prior research suggests directly taking the summation [56] or maximum [11] to summarize the error vector into a single anomaly score at the current
timestamp, we propose to leverage Principal Component Analysis (PCA) as an intermediate step before aggregating the normalized errors into a final anomaly score.
In particular, after training the spatial-temporal graph neural network module, we compute the normalized errors on the validation set, \(\widetilde{\mathbf{E}}_{\nu}\). We fit a PCA on the validation normalized errors by computing the validation mean vector \(\bar{\mathbf{E}}_{\nu}=mean(\widetilde{\mathbf{E}}_{\nu})\), their covariance matrix, \(\mathbf{C}_{\nu}=cov(\widetilde{\mathbf{E}}_{\nu})\), and the orthogonal eigenvectors, \(\mathbf{U}\). \(\mathbf{U}\) consists of the \(N\) orthogonal eigenvectors associated with the \(N\) largest eigenvalues in the diagonal matrix \(\mathbf{\Lambda}\), obtained from the decomposition \(\mathbf{C}_{\nu}=\mathbf{U}\mathbf{\Lambda}\mathbf{U}^{-1}\). With the fitted PCA, we reconstruct the normalized errors at the current timestamp:
\[\begin{cases}\mathbf{P}=(\widetilde{\mathbf{E}}^{T}-\bar{\mathbf{E}}_{\nu})\mathbf{U}^{\mathrm{T}}\\ \widetilde{\mathbf{P}},\widetilde{\mathbf{U}}=\mathbf{P}[:,:L],\mathbf{U}[:,:L]\\ \widetilde{\mathbf{E}}_{\text{PCA}}^{T}=\widetilde{\mathbf{P}}\,\widetilde{\mathbf{U}}^{\mathrm{T}}+\bar{\mathbf{E}}_{\nu}\end{cases} \tag{9}\]
In the above equation, we first zero-center the normalized error at the current timestamp, \(\widetilde{\mathbf{E}}^{T}\), by subtracting the mean validation error, \(\bar{\mathbf{E}}_{\nu}\), and project the result using the validation eigenvectors, \(\mathbf{U}\). Secondly, we keep only the first \(L\) principal components. Finally, we reconstruct the normalized errors, \(\widetilde{\mathbf{E}}_{\text{PCA}}^{T}\), from the reduced \(L\) dimensions and revert the zero-centering by adding back the validation mean error. We set \(L\) to the number of components necessary to achieve a symmetric mean absolute percentage error (sMAPE) of less than 10% on the validation set.
With the reconstructed normalized error, we compute the final anomaly score at current timestamp by taking the L1 distance between the denoised and original normalized errors as the final anomaly score:
\[A(T)=\|\widetilde{\mathbf{E}}_{\text{PCA}}^{T}-\widetilde{\mathbf{E}}^{T}\|_{1} \tag{10}\]
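A compact NumPy sketch of the PCA fit-and-reconstruct scoring in Equations 9 and 10. This is illustrative only: the real pipeline fits on streaming validation errors and chooses `L` via the sMAPE criterion described above, whereas here both are passed in directly.

```python
import numpy as np

def pca_anomaly_score(E_t, E_val, L):
    """Project the current normalized-error vector E_t (shape (N,))
    onto the first L principal components of the validation errors
    E_val (shape (M, N)), reconstruct it, and score the observation
    by the L1 reconstruction gap (Eqs. 9-10)."""
    mean = E_val.mean(axis=0)
    cov = np.cov(E_val - mean, rowvar=False)
    eigvals, U = np.linalg.eigh(cov)
    U = U[:, np.argsort(eigvals)[::-1]]     # sort eigenvectors by eigenvalue, desc.
    P = (E_t - mean) @ U[:, :L]             # keep the first L components
    E_rec = P @ U[:, :L].T + mean           # reconstruct in the original space
    return np.abs(E_rec - E_t).sum()
```

Error vectors that lie close to the validation error subspace reconstruct well and receive a low score, while vectors with large components outside that subspace, the signature of an anomaly, receive a high score.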
The incorporation of PCA addresses the fundamental problem posed by anomalies: the anomalous node variables have the potential to introduce bias into the learned embeddings within a neural network module, inadvertently affecting the forecast across all dimensions. This effect, corroborated by previous research [57], often leads to an unwarranted increase in forecast errors in variable nodes that are otherwise unaffected. Even in the absence of anomaly events, certain variable nodes may sporadically experience an upsurge in errors due to random fluctuations [11]. Such fluctuations can set off a cascade of effects across all nodes, echoing the impact of an actual anomalous event and potentially resulting in false positives. This unintended effect contributes to the degradation of accuracy in anomaly detection and diagnosis.
Previous methodologies have attempted to resolve this issue with the utilization of Markov Chain Monte Carlo (MCMC) imputation [58, 1]. This approach, however, is inefficient. In contrast, we propose the application of PCA to resolve these issues. PCA can efficiently project the normalized errors at current timestamp, \(\widetilde{\mathbf{E}}^{T}\), onto the principal components of the validation errors, and subsequently reconstruct them as, \(\widetilde{\mathbf{E}}_{\text{PCA}}^{T}\). This process effectively dampens common noise variations and pinpoints variables that contribute significantly to anomaly events. This identification is made possible because the variables that cannot be accurately reconstructed are more likely the true contributors to the anomaly events, thereby offering a more accurate depiction of the anomaly event itself.
Though PCA is capable of mitigating the inherent noise for a more accurate representation, it is still the ability of the STGNN to capture spatial-temporal patterns that holds the key to a comprehensive solution. The joint implementation of STGNN and PCA is instrumental in detecting and diagnosing anomalies, as we demonstrate in Section V: Experimental Study.
Last but not least, an anomaly indicator that well signifies the abnormality of a timestamp observation helps inform system operators and lets human experts determine an appropriate threshold to classify and detect anomalies. Nevertheless, for industrial operations that involve thousands of multivariate time-series with distinct attributes, such as warehousing robots [59], this approach does not scale well. To automate the threshold selection process, we classify the current observation in the test set as anomalous if its \(A(T)\) exceeds the maximum \(A(t)\) over all observations in the validation set. This non-parametric approach relies on CST-GL's ability to sufficiently capture the spatial and temporal dependencies of multivariate time-series data, so that any observation whose anomaly score exceeds the maximum observed during the normal period (i.e., validation data) is in fact an anomaly, while those that do not exceed it are not.
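The non-parametric thresholding rule above amounts to a one-line comparison; a hypothetical helper (names are ours):

```python
import numpy as np

def flag_anomalies(test_scores, val_scores):
    """Non-parametric thresholding: a test observation is anomalous
    iff its anomaly score strictly exceeds the maximum score seen on
    the (non-anomalous) validation set."""
    threshold = np.max(val_scores)
    return (np.asarray(test_scores) > threshold).astype(int)
```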
#### IV-C2 Root Cause Anomaly Diagnosis
Since the final anomaly score is calculated as the linear combination of reconstruction errors, we can identify the root cause of anomalous events by ranking the univariate variables that contribute most significantly to the anomaly score. In practical scenarios, we determine the percentage contribution of each univariate variable to the final anomaly score. This approach would provide a more detailed perspective, allowing operators to more effectively identify the root cause of anomalous events.
In some cases, the top ranked variables that most contribute to the anomaly score may not be the root causes but are merely the symptoms [11, 24]. When the top ranked contributors are identified not to be the root cause, we further search for the variables that are most related to the top ranked contributors by aggregating the anomaly contribution scores of one-hop distance neighbors:
\[R_{i}(T)=\sum_{j\in N(i)}A_{j}(T) \tag{11}\]

where \(A_{j}(T)\) represents the contribution of univariate node \(j\) to the anomaly score in Equation 10, i.e., its absolute reconstruction error, and \(N(i)\) represents the neighbors of univariate node \(i\), based on the relations between variable pairs learned by the MTCL module.
As demonstrated in our experiments in Section V, this two-pronged approach ensures the systematic identification and diagnosis of (a) variables that exhibit abnormal behavior, and (b) variables closely related to these abnormal variables, as potential root causes of an anomaly event. The choice between directly ranking the variables based on the error contribution or based on the one-hop distance neighbors will largely depend on the nature of the anomalies.
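The one-hop aggregation in Equation 11 can be sketched as follows; this is an illustrative reading in which `A[i, j] > 0` is taken to mean that `j` is a neighbor of `i` in the learned directed graph.

```python
import numpy as np

def root_cause_scores(A, scores):
    """Aggregate per-variable anomaly contributions over one-hop
    neighbors (Eq. 11): R_i is the sum of the contributions of i's
    neighbors in the learned graph A."""
    neighbor_mask = (A > 0).astype(float)
    return neighbor_mask @ np.asarray(scores, dtype=float)
```

Ranking the variables by `root_cause_scores` then surfaces nodes whose neighborhoods contribute heavily to the anomaly, even when the nodes themselves look normal.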
## V Experimental Study
In this section, we conduct experiments to explore CST-GL's capabilities by answering following questions:
* **Overall Detection Performance.** Does our framework outperform baseline methods in the unsupervised, real-time anomaly detection task? How do the individual modules within CST-GL each contribute specifically to its ability to achieve anomaly detection and diagnosis?
* **Early Detection Performance.** Can CST-GL be adapted and generalized to commercial systems where early detection of anomaly events is often paramount?
* **Interpretability & Case Study.** Would CST-GL benefit system operators in detecting and diagnosing multivariate time-series anomaly events in an interpretable manner?
### _Experimental Settings_
In this subsection, we introduce the settings of our experiments, including datasets, baseline methods, parameter settings, and computing infrastructure.
#### V-A1 Datasets
We evaluate CST-GL on three widely used benchmark datasets for multivariate time-series anomaly detection: SWaT, WADI and SMD. The statistics of these datasets are demonstrated in Table I, and the detailed descriptions are given as follows:
* **SWaT [4]** is a scaled-down version of a real-world industrial water treatment plant initiated by Singapore's Public Utility Board. The dataset comprises 7 days of normal operations (train data) and 4 days of attack scenarios (test data). The anomaly labels represent the attacks that are conducted at different intervals in the test set.
* **WADI [5]** is an extension of the SWaT dataset with a larger number of water pipelines, storage, and treatment systems, representing a more complete and realistic water treatment dataset [11]. The train set of WADI covers two weeks of normal operation, while the test set covers a 2-day attack scenario. Following the original authors' implementation [11], we removed the first 21,600 samples and down-sampled SWaT and WADI to one measurement every 10 seconds by taking the median values.
* **SMD [3]** is a real-world server machine dataset collected by a large Internet company. SMD contains time-series data of servers, each with 38 multivariate variables. It is divided into train and test sets of equal size. The original SMD dataset had no preprocessing applied to remove servers experiencing concept drift; this was subsequently addressed by the original authors in [1]. Following that subsequent work, the results reported in this section take the average scores computed over the 12 servers that do not suffer from concept drift.
#### V-A2 Baselines
We compare our CST-GL with five standard multi-dimensional anomaly detection methods that do not take temporal dependencies into consideration and six recently proposed frameworks designed specifically for multivariate time-series anomaly detection. Baseline descriptions and implementation details are provided in the Appendix.
The five standard multi-dimensional anomaly detection methods are **Raw Signal** [24], **PCA**, **AutoEncoder**, **Kmeans** and **DAGMM** [60]. **Raw Signal** is a simple baseline model that reconstructs any signal to zero, resulting in an error equivalent to the normalized signals themselves. Using the normalized signals, a Gaussian scoring function is employed to compute the negative log-likelihood of observing these signal values at each timestamp. This model provides insight into the nature and difficulty of each benchmark dataset.
The six state-of-the-art frameworks for multivariate time-series anomaly detection are **LSTM-VAE**[26], **OmniAnomaly**[3], **USAD**[61], **MTAD-GAT**[56], **GDN**[11] and **InterFusion**[1]. Notably, **InterFusion**, an extension of **OmniAnomaly**, is the state-of-the-art RNN framework, while **MTAD-GAT** and **GDN** are the state-of-the-art GNN baselines for the multivariate time-series anomaly detection task.
#### V-A3 Parameter Settings
We train our model for 20 epochs with a batch size of 64; the Adam optimizer is used to optimize CST-GL with a learning rate of \(3\times 10^{-4}\) and \((\beta_{1},\beta_{2})=(0.9,0.999)\). Following previous works [3, 1], the validation set ratios for SWaT, WADI and SMD are 0.1, 0.1 and 0.3, respectively. We set the sliding window length, \(w\), to 5, 5, and 100 for SWaT, WADI and SMD, as suggested by the original papers [3, 11]. We define the hyperparameter search space as shown in the Appendix, and select the hyperparameters that achieve the lowest average root-mean-square error on the validation set.
After the hyperparameter search, the MTCL module has its neighbour size, \(k\), set to 15, 30 and 10 for SWaT, WADI and SMD, respectively. Across all datasets, the correlation learning module has a node dimension of 256, a retain ratio of 0.1 and a saturation rate of 20. The graph convolution network and the temporal convolution network modules both have 16 output dimensions. The skip connection layers all have 32 output dimensions. We use 2 graph and temporal module layers. Lastly, the number of principal components is set automatically as the smallest number required to achieve less than 10% sMAPE on the validation set.
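The automatic choice of the number of principal components can be sketched as follows; the exact sMAPE variant and the linear search strategy are assumptions, with scikit-learn's PCA standing in for the paper's implementation.

```python
import numpy as np
from sklearn.decomposition import PCA

def smape(x, x_hat, eps=1e-8):
    # Symmetric mean absolute percentage error (one common variant).
    return np.mean(2.0 * np.abs(x - x_hat) / (np.abs(x) + np.abs(x_hat) + eps))

def pick_n_components(train, val, target=0.10):
    """Return the smallest number of principal components whose
    reconstruction of the validation data stays under `target` sMAPE."""
    for n in range(1, train.shape[1] + 1):
        pca = PCA(n_components=n).fit(train)
        recon = pca.inverse_transform(pca.transform(val))
        if smape(val, recon) < target:
            return n
    return train.shape[1]

rng = np.random.default_rng(0)
basis = rng.normal(size=(2, 5))            # data lies in a 2-D subspace of R^5
train = rng.normal(size=(500, 2)) @ basis
val = rng.normal(size=(100, 2)) @ basis
n_star = pick_n_components(train, val)
print(n_star)  # 2 components suffice for this synthetic data
```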
#### V-A4 Computing Infrastructures
Our proposed learning framework is implemented using PyTorch 1.7.0. The computation of the F1 score, ROC and PRC is done with Scikit-learn. All experiments are conducted on a personal computer with Ubuntu 20.04 OS, an NVIDIA Tesla T4 GPU, a 2.20GHz Intel Xeon CPU, and 12.7 GB RAM. For model comparisons with a single run and with five runs, we use seed 0 and seeds 0-4, respectively. The empirical computational complexity of all methods requiring non-trivial training costs is detailed in the Appendix.
### _Overall Detection Performance_
As many baseline methods do not incorporate a threshold selection mechanism [60, 26, 61, 1], we compare model performances using the Receiver Operating Characteristic (ROC) and Precision-Recall Curve (PRC) Area Under the Curve (AUC) scores, treating every timestamp as an independent observation to be classified as anomalous or not. Under this pointwise approach, a model is required to predict the occurrence of anomaly events across the entire time-series, including when they have started and ended. The closer the ROC and PRC scores are to 1, the better a model is at scoring and differentiating anomalous and non-anomalous time points.
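Pointwise ROC-AUC and PRC-AUC can be computed directly with scikit-learn, as sketched below; note that average precision is used here as a common estimator of the area under the PR curve, which may differ slightly from a trapezoidal PRC-AUC.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

def pointwise_auc(scores, labels):
    """Score every timestamp independently: a higher anomaly score should
    mean a higher chance the point is labelled anomalous."""
    roc = roc_auc_score(labels, scores)
    prc = average_precision_score(labels, scores)  # common PRC-AUC estimator
    return roc, prc

labels = np.array([0, 0, 0, 1, 1, 0, 0, 1])
scores = np.array([0.1, 0.2, 0.1, 0.9, 0.8, 0.3, 0.2, 0.7])
roc, prc = pointwise_auc(scores, labels)
print(roc, prc)  # a perfect ranking yields 1.0 for both
```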
#### V-B1 Baseline Comparison
The ROC and PRC results are summarized in Table II, and we observe the following:
* **Proposed Framework.** CST-GL showed superior performances against all the other baselines with an average outperformance of 7.16 and 8.30 percentage points against the next best baseline for the ROC and PRC scores respectively. It also achieved high performance with relatively low variability and, in the case of WADI's PRC values, the performance gain is greater than 45% when compared to the next best result. The experimental result in Table II demonstrates CST-GL's superior performance in providing a representative anomaly indicator to inform and alert system operators. It also aids experts in deciding on an appropriate threshold for human intervention as the anomaly scores for anomalous and non-anomalous timepoints are well separated.
* **Temporal Dependency.** On average, baseline methods that consider temporal information achieve higher ROC and PRC results, validating that temporal information is paramount for detecting anomalies in multivariate time-series. The importance of effective learning of temporal cues is also evident by the performance of GDN, which did not address the temporal dependencies between time-series observations directly. Despite explicitly learning spatial correlation between multivariate variable pairs, the GDN model is less effective when adapted to the unsupervised, _real-time_ anomaly detection task.
* **Spatial Pairwise Correlation.** As LSTM-VAE, OmniAnomaly and USAD do not directly capture the underlying pairwise inter-dependence among the multivariate time-series variables, they performed worse than InterFusion and CST-GL. Similar to our framework, InterFusion directly addresses spatial-temporal dependencies by learning dual-view latent embeddings. Nonetheless, as InterFusion's latent embedding only encapsulates spatial correlation within a global hidden state, it does not explicitly model the relationships between variable pairs. We conjecture that successfully capturing spatial correlation requires an _explicit_ graphical modeling of the relationships between the multivariate variables, as it evidently improves the effectiveness of a time-series anomaly detection model.
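For intuition, explicit pairwise graph learning of the kind argued for above can be sketched in an MTGNN-style form, where a sparse, uni-directional adjacency is derived from two learnable node-embedding matrices; the saturation factor alpha, the antisymmetrization, and the top-k rule below are illustrative assumptions, not the exact MTCL formulation.

```python
import numpy as np

def learned_adjacency(E1, E2, k, alpha=3.0):
    """MTGNN-style sketch: derive a sparse, uni-directional adjacency
    from two node-embedding matrices, keeping only the top-k strongest
    outgoing edges per node (alpha and the exact form are assumptions)."""
    M = np.tanh(alpha * E1) @ np.tanh(alpha * E2).T
    A = np.maximum(np.tanh(alpha * (M - M.T)), 0.0)  # antisymmetric -> uni-directional
    np.fill_diagonal(A, 0.0)
    n = A.shape[0]
    if k < n:
        for i in range(n):
            A[i, np.argsort(A[i])[: n - k]] = 0.0    # zero all but the k largest
    return A

rng = np.random.default_rng(1)
A = learned_adjacency(rng.normal(size=(6, 4)), rng.normal(size=(6, 4)), k=2)
print((A > 0).sum(axis=1))  # at most 2 outgoing edges per node
```

In a trainable version, the embedding matrices would be learned end-to-end together with the rest of the network; the sparsification keeps the graph focused on the strongest dependencies.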
#### V-B2 Ablation Study
We conduct an ablation study on SWaT and WADI to validate how various modules of CST-GL contribute to its multivariate time-series anomaly detection performance. We implement different variants of CST-GL with modifications to the following modules:
* **w/o MTCL:** CST-GL without Multivariate time-series Correlation Learning. We replace the learned adjacency matrix, **A**, with a complete digraph adjacency matrix and remove MTCL.
* **w/o GCN:** CST-GL without the Graph Convolution Network. We remove the GCN module (including MTCL) and replace it with a linear layer.
* **mod. TCN:** CST-GL with modified Temporal Convolution Network. We modify TCN to nullify its ability in capturing multi-granular temporal clues by replacing the multi-convolution filters with a single 1x1 filter.
* **w/o PCA:** CST-GL without the PCA-based anomaly scoring module. We replace the PCA module with the standard Gaussian scoring function [24]. The Gaussian scoring function corresponds to the Raw Signal in Table II, but its input here is the forecast error from the STGNN in CST-GL.
* **w/o STGNN:** CST-GL without the STGNN. This is equivalent to the PCA model in Table II.

Focusing on the MTCL module, we see a drop in performance when this module is removed (**w/o MTCL**) and a complete digraph adjacency matrix is used for modelling interactions between variables. Importantly, the degradation is notably more pronounced on the WADI dataset. We hypothesize that the noise from unimportant neighbouring nodes is amplified when the GCN propagates information among the variables in WADI, which has 127 multivariate variables, compared to SWaT's 51.
Next, we scrutinize the effects of modifications to the STGNN. We observe that the exclusion of the GCN module (**w/o GCN**) significantly degrades the anomaly detection results. This is consistent with previous studies [11, 1], as modelling the pairwise correlations among variables enables information flow among the interdependent univariate variable nodes, thereby improving the performance of detecting anomaly events. Similarly, modifying the TCN (**mod. TCN**) within CST-GL also leads to a decline in performance, which can be attributed to the temporal dependency of the multivariate time-series data being captured less effectively.
When the PCA-based anomaly scoring module is replaced by a Gaussian scoring function [24] (**w/o PCA**), we note a reduction in performance in the WADI dataset. This performance drop can be attributed to the Gaussian function's lack of robust denoising capabilities, an area where PCA excels. Despite this, the CST-GL still outperforms the established baselines.
Finally, the removal of the STGNN (**w/o STGNN**), leaving only the PCA model, significantly reduces performance. This underscores the crucial role of the STGNN. While PCA can lessen inherent noise for improved representation, it is the capacity of the STGNN to recognize spatial-temporal patterns that forms a comprehensive solution for anomaly detection.
#### V-B3 Automatic Thresholding Mechanism
Our framework incorporates an automatic thresholding mechanism in which the maximum anomaly score in the validation set is taken as the threshold, without the need for human experts to determine the optimal threshold. Table IV shows the best F1 score achieved through an enumerative search for the globally optimal threshold against the F1 score of our automatic thresholding mechanism.
Despite its simplicity, this non-parametric threshold selection works well because CST-GL effectively captures the spatial-temporal dependencies of the multivariate time-series during the normal period (i.e., the training set). This capability allows for a notable degree of separation between anomalous and non-anomalous timepoints in the test set, as evidenced by promising F1 scores. However, it is important to note that the effectiveness of the automatic threshold is most pronounced on the SWaT dataset, with some performance drops on WADI and SMD. Moving forward, we aim to refine the thresholding process to close the gap between the automatically determined threshold and the best-F1 threshold across a broader range of scenarios.
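The mechanism itself is simple to state: the threshold is the largest anomaly score observed on the (assumed all-normal) validation set, as sketched below with illustrative numbers.

```python
import numpy as np

def automatic_threshold(val_scores):
    """Non-parametric rule: the largest anomaly score observed on the
    (assumed all-normal) validation set becomes the alarm threshold."""
    return float(np.max(val_scores))

val_scores = np.array([0.7, 1.1, 0.9, 1.3, 0.8])   # normal period
test_scores = np.array([0.9, 1.2, 2.5, 2.8, 1.0])  # contains an attack
tau = automatic_threshold(val_scores)
flags = test_scores > tau                           # alarms at test time
print(tau, flags.astype(int))
```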
### _Early Detection Performance_
As time-series anomaly events usually form contiguous anomaly segments, previous works have argued that detecting anomalies within any subset of a ground truth anomaly segment is sufficient in real-world scenarios. Based on this notion, they evaluated multivariate time-series anomaly detection models using the point-adjusted (PA) approach [1, 61, 3]. Under this approach, if any timestamp in a contiguous anomaly segment with \(M_{a}\) timestamps is correctly detected as an anomaly, the PA approach considers the entire anomaly segment as correctly predicted, with \(M_{a}\) true positives [57]. However, since any detection within a contiguous anomaly segment is treated equally, _the PA approach does not reward early detections in an anomalous segment_ [62, 24, 63]. Nevertheless, early detection of anomaly events is often crucial in a wide range of practical applications, and a model which can detect anomaly events early has significant value in real-world settings [64].
To evaluate early detection ability of CST-GL and baseline methods, we adopt the metric suggested by [65], where detection of contiguous anomaly segment is only treated as true positives, if and only if an anomaly point is detected correctly and its timestamp is at most \(\delta\) steps after the first anomaly of the contiguous anomaly segment. For example, \(\delta\) = 0 would equate to identifying an anomaly segment as early as possible without any delays and \(\delta\) = 60 for a time-series with second-interval would equate to detecting anomaly segment within a minute after the first anomaly timestamp. As \(\delta\) becomes sufficiently large (i.e., the delay constraint is removed), the results of the early detection PA approach will be the same as the original PA approach.
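A minimal sketch of this delay-constrained point adjustment is given below, with the delay counted in timestamps rather than minutes; the choice to give no credit to detections that arrive after the delay budget is an assumption consistent with the metric's intent.

```python
import numpy as np

def delay_adjust(pred, label, delta):
    """Point-adjust with a delay budget: a ground-truth anomaly segment
    is credited (all its points set to 1) only if some alarm fires within
    `delta` timestamps of the segment's first anomaly; otherwise the
    segment is counted as fully missed."""
    pred = pred.copy()
    i, n = 0, len(label)
    while i < n:
        if label[i] == 1:
            j = i
            while j < n and label[j] == 1:   # find the segment [i, j)
                j += 1
            hit = pred[i:min(i + delta + 1, j)].any()
            pred[i:j] = 1 if hit else 0
            i = j
        else:
            i += 1
    return pred

label = np.array([0, 1, 1, 1, 0, 1, 1, 0])
pred  = np.array([0, 0, 0, 1, 0, 0, 0, 0])  # alarms 2 steps into segment 1
adj1 = delay_adjust(pred, label, delta=1)   # too late: no credit
adj2 = delay_adjust(pred, label, delta=2)   # within budget: full credit
print(adj1, adj2)
```

As the delay budget grows beyond the longest segment, this reduces to the standard PA protocol.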
In this work, we evaluate models' early detection ability with delays of 0, 1, 5, 10, 20, 30 and 60 minutes. Following previous work [1, 61, 3] in computing a model's anomaly scoring ability, we report the best F1 score for each delay. Based on Tables V, VI and VII, we observe the following:
* **Immediate Detection**. CST-GL showed a substantial advantage over the next best baseline when \(\delta\) = 0, where the performance improvement is 86.27%, 43.44% and 2.75% for SWaT, WADI and SMD, respectively. This indicates that our model significantly outperforms the simple and state-of-the-art baselines in early detection of multivariate time-series anomaly events.
* **Practicality**. On all three benchmark datasets, our proposed framework performed best across all delays, \(\delta\), with the exception of 5 and 10 minutes on the SMD dataset. While the performance gaps shrink as \(\delta\) increases, our framework remains state-of-the-art even when the delay is set to 60 minutes. These results suggest that our anomaly detection model is best at detecting not only anomalous events that require immediate attention but also those that are less urgent. CST-GL can thus be applied across a wide range of practical applications and is dependable under diverse real-world operational requirements.
* **Baseline Comparison.** Consistent with the overall detection results, InterFusion, which learns the temporal dependencies and inter-dependence between univariate variables, achieved the second best results in the early anomaly detection task. We further observe that other baseline methods that do not address both dependencies have greater variability across different delays and datasets, validating that effective learning of temporal and pairwise inter-dependence between univariate time-series helps in generalizing a detection model across different tasks and datasets.
### _Root Cause Anomaly Diagnosis_
In accordance with the approach suggested by Garg et al. [24], we gauge the anomaly diagnosis performance of all models using the Root-cause top 3 metric (RC-Top3). The RC-Top3 measures instances where at least one of the genuine
causes is identified among the top three causes as determined by the detection model. For all models, we provide the mean performance along with its standard deviation. Since InterFusion utilizes MCMC imputation on the original reconstruction to diagnose root causes, we present results both with and without MCMC imputation. As argued by the original authors [1], anomalous node variables have the potential to introduce bias into the learned embeddings within their network module, which could create undesirable noise. MCMC imputation can help to dampen this noise, similar to the role of the PCA-based scorer in CST-GL.
For CST-GL, we report the diagnosis performance using the ranking derived from the PCA-based method, as well as the ranking obtained after aggregating the anomaly scores from one-hop neighbours (CST-GL +MTCL-Graph). The latter approach leverages the relationships between variables learned by the MTCL module to diagnose root causes. This approach considers that anomalous behavior exhibited by some variables may merely be symptomatic, while the root cause could be attributed to closely related variables. To further assess the benefits of MTCL, we also present the diagnostic results of the raw signal using the MTCL-Graph (Raw Signal+MTCL-Graph). This serves to evaluate the effectiveness of MTCL in facilitating the diagnosis of anomalies, even when the raw signal alone is used.
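The neighbour-aggregation idea behind CST-GL +MTCL-Graph can be illustrated as follows; the specific aggregation rule (a node collecting the error contributions of its out-neighbours in the learned graph, plus its own) is an assumption for the sketch.

```python
import numpy as np

def one_hop_ranking(err, A):
    """Rank candidate root causes by adding to each node's own anomaly
    contribution the errors of its out-neighbours in the learned graph
    (A[i, j] > 0 means an edge i -> j). The exact rule is an assumption."""
    agg = err + A @ err
    return np.argsort(-agg)  # highest aggregated score first

# 4 sensors; sensor 3 is quiet but points at the two symptomatic ones.
err = np.array([0.1, 2.0, 1.8, 0.2])
A = np.zeros((4, 4))
A[3, 1] = A[3, 2] = 1.0
ranking = one_hop_ranking(err, A)
print(ranking)  # sensor 3 jumps to the top despite its low own error
```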
As evidenced in Table VIII, CST-GL excels in identifying root causes of anomalies on SWaT and WADI using MTCL-Graph. It comes a close second to OmniAnomaly on SMD without MTCL-Graph. As detailed in Section IV-C2, the decision to diagnose root causes directly from error contributions or from one-hop-distance neighbors using MTCL-Graph depends on the anomaly characteristics. SWaT and WADI often have symptomatic variables that exhibit abnormal behaviours but are not causative [11, 24]. These variables were influenced by true root causes that exhibit normal behaviours. Thus, unearthing the true root causes requires the explicit identification of variables closely associated with symptomatic variables. This task is effectively achieved with our MTCL module.
In contrast, in SMD, the variables that display abnormal behaviors are indeed the root causes themselves. As such, the MTCL-Graph does not provide any added benefit; instead, it is more advantageous to directly evaluate the anomaly score from the PCA-based scorer. These observations align with the diagnosis results of InterFusion. When MCMC imputation is employed, the effects of anomalous noise are mitigated, enabling InterFusion to accurately diagnose the root causes in SMD. However, MCMC imputation does not enhance results on SWaT and WADI, as it reduces the impact of abnormal behaviors transferred to related variables. Consequently, MCMC imputation softens the anomalous effects on the actual root cause variables, which do not themselves exhibit abnormal behaviors. To corroborate this, we also show that MTCL-Graph improves the ability to diagnose the root cause directly from raw signals alone on SWaT and WADI, but not on SMD.
On the whole, CST-GL, with the aid of MTCL-Graph, provides a comprehensive and actionable tool for operators in detecting and diagnosing anomaly events.
### _Case Study in Practice_
To showcase CST-GL's implementation under real-world scenarios, we conduct time-series anomaly detection case studies on WADI's Water Distribution System where the root cause of the anomaly event is known.
**Background.** The water distribution process in WADI is segmented into three sub-processes: P1, P2 and P3. P1 involves water intake and water quality management, P2 takes in water from P1 and supplies it to the consumers, and P3 returns excess water back to P1. To monitor and automate the system effectively, 127 sensors are installed. The sensors within each sub-process are intimately linked to monitor and automate the water distribution sub-process. Nonetheless, any attack on a single sub-process will have a cascading effect on the entire water distribution system. In this experimental setting, CST-GL is required to detect malicious attacks by ill-intentioned parties that had access to the system controls from October 9 to October 11, 2017. CST-GL is provided with 14 days of normal multi-sensor data, from September 25 to October 9, 2017, to train the model. No labels or information related to the attacks are given during training, and CST-GL is required to detect the anomalies in an unsupervised manner.
**Stealth Attack.** At 10:55 a.m. on October 10, 2017, the attacker launched a 29-minute stealthy attack on WADI to drain an elevated reservoir by changing the reading seen by water quality sensor 1_MIT_001 (i.e., the root cause of the attack). Further, the attacker cleverly manipulated the root cause sensor to make the event undetectable. Consequently, determining the root cause of this attack is non-trivial, given that the attacker had extensive knowledge of the WADI system and deliberately hid the root cause.
**Proposed Framework in Action.** The following describes CST-GL's real-time anomaly detection mechanisms.
* **Automated Early Detection.** Relying on the automated thresholding mechanism, CST-GL alerted the human operators at 10:58 a.m. (less than 4 minutes after the
attack) that a possible anomaly event has occurred and emergency intervention is required. During the attack period, the anomaly score remained high, continuously warning operators about the urgency of the attack event.
* **Root Cause Identification with Learned Relation.** Looking at CST-GL's system outputs, the human operators see that 5 sensors from sub-process P1 contribute the substantial majority of the anomaly score during this period: 1_MV_002, 1_P_002, 1_P_006, 1_LS_001 and 1_LS_002. After inspecting all 5 sensors, it is found that they are not the root cause but merely symptoms of the attack. Nevertheless, they are very likely to be related to the root cause of the stealthy attack event. Thus, the sensors most closely related to the five sensors are immediately ranked based on CST-GL's learned relations between variable pairs. The aggregated scores of one-hop-distance neighbors in sub-process P1 rank 1_ATT_001 as the sensor most associated with the 5 aforementioned sensors, as illustrated in Figure 3. The root cause of the attack is thus successfully identified after inspecting merely 6 out of 127 sensors.
* **Informativeness with Pointwise Detection.** The original WADI dataset assumes no human intervention, and the 29-minute stealth attack ended at 11:24 a.m. While CST-GL continues to inform the human operator that an anomaly event is ongoing shortly after this period, due to imperfect prediction and lag effects from the stealth attack, it is able to provide continuous affirmations after 11:26 a.m. that the attack has ended (i.e., the timestamps after this period are labeled as non-anomalous), with less than 2 minutes of lag time. This not only allows system operators to make decisions with informed knowledge but also directs efforts toward exploring the data within the most relevant time frame to thoroughly understand the anomaly events that have already occurred.
In another WADI anomaly event, a flow sensor, 1_FIT_001, is attacked via false readings. Detecting the root cause of this attack is again non-trivial because the false readings are within the normal range of this sensor [11]. Following the implementation above, CST-GL is able to alert system operators that an anomalous event has occurred after just _10 seconds_ of the attack. Similarly, the top sensors that contributed to the anomaly score are again found not to be the root cause. Through aggregated scores over the learned relations between sensors, CST-GL ranked the root cause sensor, 1_FIT_001, as the third most likely root cause, correctly identifying it after inspecting 8 out of 127 sensors. Lastly, CST-GL reports that the anomaly event has ended within a precision of \(\pm 1\) minute.
**Summary.** The case studies demonstrate CST-GL's ability in (1) detecting anomalous events early, (2) significantly reducing the search range for human operators to identify the root cause by localizing the relevant variables, and (3) informing operators about the duration of anomaly events with reasonable precision. Importantly, they also illustrate that joint learning of spatial-temporal and pairwise correlation dependencies can help a multivariate time-series anomaly detection model detect and diagnose anomaly events, significantly reducing the destructive impact of such events on industrial systems.

Fig. 3: Root cause analysis of the stealth attack on the Water Distribution System. The size of a node represents its computed ranking, as described in Section IV-C2. The red node represents the highest-ranked sensor, and orange nodes represent the top 5 sensors that contributed the majority of the anomaly score during the first 5 minutes of the attack. Apart from the highest-ranked red node, four blue nodes indicate other potential sources of the anomaly, with their sizes signifying relative importance. However, the primary node associated with the attack is the red one, not the four blue nodes. The directed arrows represent the uni-directional relationships that CST-GL has learned using the MTCL module and correspond to spatial dependencies between different sensors.
Moving forward, our aim is to enhance CST-GL for a broader range of applications by addressing the issues of concept drift and missing values. In terms of concept drift, we plan to implement mechanisms that can detect and quantify the magnitude of data drift, thus facilitating necessary adjustments to the model in line with evolving data distributions [66, 67]. For handling missing values, we intend to assess the robustness of CST-GL by employing standard interpolation and imputation algorithms [68]. Furthermore, we aspire to incorporate spatial-temporal graph controlled differential equations [12], inherently suited to scenarios involving missing values.
## VI Conclusion
In this work, we proposed a novel framework for multivariate time-series anomaly detection. Our model, CST-GL, explicitly learns pairwise correlations between variable pairs of multivariate time-series data, jointly captures spatial-temporal dependencies, and effectively detects anomaly events when the behaviour of the time-series data deviates from non-anomalous patterns. Experiments on three real-world datasets showed that CST-GL outperformed eleven baselines in general and early detection settings. CST-GL also enables interpretation and root cause diagnosis of anomaly events in multivariate time-series data, paving the way for STGNN-based methods to be implemented in real-world applications. In the future, we will study the generalizability of CST-GL in dynamic and missing-value scenarios, together with the trustworthiness of our GNN model [34] from the perspectives of robustness and explainability. We will also look into how large language models can enhance graph learning [69] for time-series data.
Our appendix primarily provides details of the experimental settings to ensure the reproducibility of our work. **A1. Implementation of Baseline**, details the hyperparameters of the baselines we reproduced in our work, while **A2. Empirical Computational Complexity** provides information about the time complexities of the baselines and our model, CST-GL. Lastly, **A3. CST-GL Hyperparameter Search Space** outlines the search space that we use to set the hyperparameters of CST-GL, based on the combination of parameters that achieve the lowest average Root-Mean-Square-Error (RMSE) in the validation set.
### _Implementation of Baseline_
* **Raw Signal**[24] is a trivial baseline model that reconstructs any signal to zero, resulting in an error that equates to the normalized signals themselves. On the normalized signals, a Gaussian scoring function is utilized to compute the negative log-likelihood of observing these signal values in each timestamp. This baseline is reproduced using the code provided in the Github repository: [https://github.com/astha-chem/mvts-ano-eval](https://github.com/astha-chem/mvts-ano-eval). We use the dynamic gaussian scoring function (**Gauss-D**) or the 'uni-var_gaussian' option in the fit_scores_distribution function provided in the repository.
* **PCA** assigns an anomaly score for each timestamp based on reconstruction error. In particular, we fit PCA on the training data, including the validation data, to obtain the mean and eigenvectors. During real-time anomaly detection testing, we project the multi-dimensional input onto a low-dimensional space, and reconstruct them back again to find the root-mean-square reconstruction error. For the number of principal components, we set it automatically based on the number required to achieve less than 10% sMAPE.
* **AutoEncoder** independently assigns an anomaly score to each observation by tracking the reconstruction error using an encoder-decoder framework. The encoder is a two-layer multilayer perceptron with dimensions [input_dimension, 50, 20], and the decoder is a two-layer multilayer perceptron with dimensions [20, 50, input_dimension]. Similar to PCA, we train the AutoEncoder on the training data, including the validation data. During real-time anomaly detection testing, we apply the AutoEncoder to compute the root-mean-square reconstruction error as the anomaly score at each timestamp.
* **Kmeans** treats each observation as an independent point and generates multiple clusters using the training data. To determine the number of clusters, K, we use the Silhouette score, searching K from 0 to 20. During real-time anomaly detection testing, we calculate the distance between each multivariate observation and the centroid of its closest cluster. The computed L2 distance is used as the anomaly score for detecting anomalies.
* **DAGMM**[60] joins an autoencoder and a Gaussian Mixture Model to obtain anomaly scores from reconstruction errors generated via a low-dimensional representation. To reproduce their results in our settings, we use the Github repository: [https://github.com/tnakae/DAGMM](https://github.com/tnakae/DAGMM). We set the dimensions as [20, 10, 5, 1] for the compression network and as [5, 2] for the estimation network, with a dropout ratio of 0.5. The rest of the parameters follow the default settings. Similar to PCA and AutoEncoder, we train on the training data, including the validation data. During real-time anomaly detection testing, DAGMM predicts the energy of each observation, with higher energy suggesting that it is more likely to be an anomaly.
* **LSTM-VAE**[26] replaces the feed-forward neural networks in the VAE with a long short-term memory (LSTM) network to capture the temporal dependency of time-series data. Nevertheless, the stochastic variables modeled by the VAE have no temporal dependence. To reproduce the results for LSTM-VAE, we use the code from the Github repository: [https://github.com/lin-shuyu/VAE-LSTM-for-anomaly-detection](https://github.com/lin-shuyu/VAE-LSTM-for-anomaly-detection). The hidden dimension of the network is set to 10 and the number of training epochs to 20. The window size is set as 5, 5 and 100 for SWaT, WADI and SMD, respectively. During real-time anomaly detection testing, the anomaly score is based on reconstruction errors at each timestamp.
* **OmniAnomaly**[3] adopts a stochastic variable connection technique; its recurrent neural network explicitly models the temporal dependencies between stochastic variables. The anomaly score is the posterior reconstruction probability of each input. Each timestamp is classified as either anomalous or non-anomalous using the Peaks-Over-Threshold method [19]. To reproduce the results from OmniAnomaly, we use the code from the Github repository: [https://github.com/NetManAIOps/OmniAnomaly](https://github.com/NetManAIOps/OmniAnomaly). Following the default hyperparameters, we set the z hidden dimension as 3, the RNN hidden dimension as 500, the number of normalizing flow layers as 20, and the number of training epochs as 20. The window size is set as 5, 5 and 100 for SWaT, WADI and SMD, respectively. During real-time anomaly detection testing, the anomaly score is based on the inverse of the reconstruction probability at each timestamp.
* **USAD**[61] is an autoencoder with encoder-decoder architecture that is trained in an adversarial manner to combine the advantages of autoencoders and adversarial training. To reproduce the results from USAD, we use the code from Github repository: [https://github.com/mnigalati/usad](https://github.com/mnigalati/usad). USAD utilizes one encoder network and two decoder networks. In accordance with the default setting, all networks are three-layer multilayer perceptrons, with the hidden dimension being one-half and one-quarter of the original input dimension respectively. We train USAD over 250 epochs. The window size is set as 5, 5 and 100 for SWaT, WADI and SMD, respectively. During real-time anomaly detection testing, the anomaly score is derived from the reconstruction error at each timestamp.
* **MTAD-GAT**[56] is an attention-based graph neural network that implicitly learns dependence relationships between the multivariate variables by assuming a complete graph between the variables. It computes both reconstruction and forecast errors to detect anomalies. To reproduce the results from MTAD-GAT, we use the code from Github repository: [https://github.com/ML4ITS/mtad-gat-pytorch](https://github.com/ML4ITS/mtad-gat-pytorch). The Graph Attention Networks used to model spatial and temporal cues consist of a single layer. The initial convolution layer possesses a kernel size of 7, while the number of GRU layers is also set to one, having a hidden dimension of 150. The forecast output module is designed with three hidden layers, each with hidden dimensions of 150. In contrast, the reconstruction output module contains only one hidden layer with a hidden dimension of 150. We train the MTAD-GAT over 50 epochs with a dropout rate of 0.3. The window size is set as 5, 5 and 100 for SWaT, WADI and SMD, respectively. During real-time anomaly detection testing, the anomaly score is computed based on the reconstruction and forecast error at each timestamp.
* **GDN**[11] is an attention-based graph neural network that explicitly learns dependence relationships between the multivariate variables and computes forecast errors by leveraging these relationships as anomaly scores. To reproduce the results from GDN, we use the code from Github repository: [https://github.com/d-ailin/GDN](https://github.com/d-ailin/GDN). Following the default hyperparameters for WADI (SWaT), we set the embedding vector for the graph learning module to 128 (64), the number of neighbors, k, to 30 (15), and the dimension of hidden layers to 128 (64) neurons. For SMD, we set the hyperparameters to match those of SWaT. We train GDN using 50 epochs with early stopping at 10 epochs. When calculating the deviations, the original GDN model inadvertently incorporates future information into the current timestamp by normalizing errors using the full test set's median values. To rectify this, we replace this median value with the median value from the validation set. The window size is set as 5, 5 and 100 for SWaT, WADI and SMD, respectively. During real-time anomaly detection testing, the anomaly score is determined based on the normalized forecast error at each timestamp.
* **InterFusion**[1] explicitly learns a low-dimensional representation that captures inter-metric (i.e., the relationships between the univariate variables) and temporal dependencies for a sequence of multivariate time-series. The anomaly score is the reconstruction probability. To reproduce the results from InterFusion, we use the code from Github repository: [https://github.com/zhhlee/InterFusion](https://github.com/zhhlee/InterFusion). As the repository contains the parameters for each of the settings used in this study, and each setting uses a different configuration, we refer readers to the repository for the hyperparameter details. During real-time anomaly detection testing, the anomaly score is the inverse of the reconstruction probability at each timestamp.
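The deviation-style scoring shared by the forecasting baselines above — normalize per-sensor forecast errors by statistics taken from the validation split (the leakage fix applied to GDN), then aggregate over sensors — can be sketched as follows. The median/IQR normalization and max aggregation are illustrative assumptions, not any model's reference implementation:

```python
import numpy as np

def anomaly_scores(pred, actual, val_pred, val_actual, eps=1e-2):
    """Per-timestamp anomaly score from normalized forecast errors.

    Errors on each sensor are normalized by median/IQR statistics computed
    on the *validation* split (not the test split), so no future information
    leaks into the score; the maximum over sensors is then taken as the
    score for the timestamp."""
    err = np.abs(actual - pred)                          # (T, n_sensors)
    val_err = np.abs(val_actual - val_pred)
    med = np.median(val_err, axis=0)                     # per-sensor median
    iqr = (np.percentile(val_err, 75, axis=0)
           - np.percentile(val_err, 25, axis=0))
    norm_err = (err - med) / (iqr + eps)                 # robust normalization
    return norm_err.max(axis=1)                          # max over sensors
```

A timestamp whose score exceeds a threshold calibrated on the validation split would then be flagged as anomalous.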
### _Empirical Computational Complexity_
The table below details the time complexities of all the models. Simple baselines, namely RawSignal, PCA and Kmeans, have negligible implementation time and are thus excluded from the table:
### _CST-GL_ Hyperparameter Search Space
We define the hyperparameter search space as shown in the table below, and select the hyperparameters that achieve the lowest average root-mean-square error on the validation set.
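The selection rule itself can be sketched generically: exhaustively enumerate the grid and keep the configuration with the lowest average validation RMSE. Here `evaluate` is a stand-in for training one CST-GL configuration and scoring it on the validation set:

```python
import itertools

def select_hyperparameters(search_space, evaluate):
    """Grid search: return the configuration (and its score) minimizing the
    average validation RMSE reported by `evaluate(config)`."""
    names = list(search_space)
    best_cfg, best_rmse = None, float("inf")
    for values in itertools.product(*(search_space[n] for n in names)):
        cfg = dict(zip(names, values))
        rmse = evaluate(cfg)
        if rmse < best_rmse:
            best_cfg, best_rmse = cfg, rmse
    return best_cfg, best_rmse
```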
|
2303.15150 | High-dimensional frequency conversion in hot atomic system | One of the major difficulties in realizing a high-dimensional frequency
converter for conventional optical vortex (COV) stems from the difference in
ring diameter of COV modes with different topological charge numbers l. Here,
we implement a high-dimensional frequency convertor for perfect optical vortex
(POV) modes with invariant size through the four-wave mixing (FWM) process by
utilizing Bessel-Gaussian beams instead of Laguerre-Gaussian beams. The
measured conversion efficiency from 1530 nm to 795 nm is independent of l at
least in subspace of {-6,...,6}, and the achieved conversion fidelities for
two-dimensional (2D) superposed POV states exceed 97%. We further realize the
frequency conversion of 3D, 5D and 7D superposition states with fidelities as
high as 96.70%, 89.16% and 88.68%, respectively. The reported scheme is
implemented in hot atomic vapor, it's also compatible with the cold atomic
system and may find applications in high-capacity and long-distance quantum
communication. | Wei-Hang Zhang, Ying-Hao Ye, Lei Zeng, En-Ze Li, Jing-Yuan Peng, Dong-Sheng Ding, Bao-Sen Shi | 2023-03-27T12:35:06Z | http://arxiv.org/abs/2303.15150v1 | # High-dimensional frequency conversion in hot atomic system
###### Abstract
One of the major difficulties in realizing a high-dimensional frequency converter for conventional optical vortex (COV) stems from the difference in ring diameter of COV modes with different topological charge numbers \(l\). Here, we implement a high-dimensional frequency converter for perfect optical vortex (POV) modes with invariant size through the four-wave mixing (FWM) process by utilizing Bessel-Gaussian beams instead of Laguerre-Gaussian beams. The measured conversion efficiency from 1530 nm to 795 nm is independent of \(l\) at least in subspace \(l\in\{-6,...,6\}\), and the achieved conversion fidelities for two-dimensional (2D) superposed POV states exceed 97%. We further realize the frequency conversion of 3D, 5D and 7D superposition states with fidelities as high as 96.70%, 89.16% and 88.68%, respectively. The reported scheme is implemented in hot atomic vapor, it's also compatible with the cold atomic system and may find applications in high-capacity and long-distance quantum communication.
One of the most common methods for preparing conventional optical vortex (COV) modes is imprinting the helical phase pattern onto the fundamental Gaussian mode through a spatial light modulator (SLM) [1] or a spiral phase plate (SPP) [2]. The COV beams have found important applications in a variety of fields such as improved image edge detection [3] and optical tweezers for manipulating particles [4] due to its unique phase structure. The most extensively researched topic regarding COV modes is high-dimensional communication [5; 6; 7] due to its potential for encoding in an infinite-dimensional Hilbert space. As for this topic, the implementation of high-dimensional entangled states [8], frequency conversion [9] of COV beams [10; 11] and quantum memory for superposed COV modes [12; 13] have been realized recently.
However, the intrinsic dependence of the ring diameter of COV modes on the topological charge number \(l\) limits its applications in scenarios where multiple modes with different \(l\) are coupled into an optical system simultaneously. To overcome this obstacle, various concepts of structured light fields such as perfect Laguerre-Gaussian mode [14; 15] and flat-top beam have been proposed. For example, the frequency conversion of a 5-dimensional superposition state has been reported by using flat-top beams [16]. The most widely used kind of size-invariant light field is the perfect optical vortex (POV) beam proposed by Ostrovsky et al. [17]. A POV beam can be generated by Fourier transforming the corresponding Bessel-Guassian (BG) beam [18; 19]. It has been proved that POV beams offer advantages in establishing higher-dimensional quantum states over COV beams [20], and POV states with different \(l\) can also be distinguished and quantitatively identified in a projective measurement [21]. Although many pioneering works regarding the generation [19] or property analysis [22] of POV beams have been reported, and also its applications in optical manipulation [23; 24], the high-dimensional frequency conversion of POV beams still remains to be a meaningful topic that needs to be studied. Here, we report a high-dimensional frequency conversion through the four-wave mixing (FWM) process in a hot atomic system. Our solutions can also be applied in a cold atomic system and thus it is useful for high-capacity and long-distance quantum communication.
As shown in Fig.1(b), the POV beams in our experiment are generated by Fraunhofer diffracting a BG beam embedded in the corresponding helical phase. The latter is prepared by passing a fundamental Gaussian beam through a SLM with hologram \(Arg\left[J_{l}(k_{r}r)e^{il\theta}\right]\) on it, here \(Arg[\cdots]\) represents finding the argument, \(J_{l}\) is \(l\)th order Bessel function of the first kind, \(r\) and \(\theta\) are radial and azimuthal coordinate respectively, \(k_{r}=2.405/r_{0}\) is the radial wave vector with \(r_{0}\) being the central core spot waist of the BG beam with \(l=0\) [25]. The generated BG beam can be expressed as [26]:
\[E_{BG}(r,\theta)=J_{l}(k_{r}r)exp(\frac{-r^{2}}{\omega^{2}})exp(il\theta), \tag{1}\]
where \(\omega\) is the waist of the original fundamental Gaussian beam. The lens L1 acts as a Fourier transform system to obtain the POV beam, which can be written as [18]
\[E_{POV}(r,\theta)=i^{l-1}\frac{\omega}{\omega_{0}}exp(il\theta)exp(-\frac{r^{ 2}+r_{r}^{2}}{\omega_{0}^{2}})I_{l}(\frac{2r_{r}r}{\omega_{0}^{2}}), \tag{2}\]
where \(\omega_{0}=2f/k\omega\) is the Gaussian beam waist at the rear focal plane of L1. \(r_{r}=k_{r}f/k\) is the ring radius of the POV beam and \(k=2\pi/\lambda\) is the wave vector. \(I_{l}\) is an \(l\)th order modified Bessel function of the first kind.
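As a numerical illustration of Eqs. (1) and (2) (the wavelength, focal length and input waist \(w\) below are assumed values in the spirit of the setup described later): the SLM phase is \(Arg[J_{l}(k_{r}r)e^{il\theta}]\), and the Fourier-plane intensity peaks on a ring of radius \(r_{r}=k_{r}f/k\) regardless of \(l\). The exponentially scaled Bessel function `ive` keeps the product numerically stable, since \(e^{-(r^{2}+r_{r}^{2})/\omega_{0}^{2}}I_{l}(2r_{r}r/\omega_{0}^{2})=e^{-(r-r_{r})^{2}/\omega_{0}^{2}}\,\mathrm{ive}(l,2r_{r}r/\omega_{0}^{2})\):

```python
import numpy as np
from scipy.special import jv, ive

def slm_phase(x, y, l, r0):
    """Phase hologram Arg[J_l(k_r r) e^{i l theta}] with k_r = 2.405/r0 (Eq. (1))."""
    r, theta = np.hypot(x, y), np.arctan2(y, x)
    return np.angle(jv(l, 2.405 / r0 * r) * np.exp(1j * l * theta))

def pov_ring_radius(l, kr, wavelength, f, w, n=4000):
    """Peak radius of the POV intensity of Eq. (2) vs. the expected r_r = k_r f/k."""
    k = 2 * np.pi / wavelength
    w0 = 2 * f / (k * w)                 # Gaussian waist at the rear focal plane
    rr = kr * f / k                      # expected ring radius
    r = np.linspace(0.0, 3 * rr, n)
    # exp(-(r^2+rr^2)/w0^2) I_l(2 rr r/w0^2), rewritten with the scaled ive
    amp = np.exp(-((r - rr) ** 2) / w0 ** 2) * ive(l, 2 * rr * r / w0 ** 2)
    return rr, r[np.argmax(amp ** 2)]

# Signal arm: 1530 nm, k_r = 24.51 mm^-1, f = 75 mm (lens L1); w = 1 mm is an
# assumed input waist (rr itself does not depend on w).
rr, peak = pov_ring_radius(l=3, kr=24.51e3, wavelength=1530e-9, f=75e-3, w=1e-3)
```

For these parameters the peak sits at \(r\approx r_{r}\approx 0.45\) mm (ring diameter \(\approx 0.9\) mm, the same order as the \(\sim\)930 \(\mu\)m measured below), and repeating with a different \(l\) leaves the peak position essentially unchanged.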
The diamond-type energy configuration of \({}^{85}\)Rb atom used in our experiment is shown in Fig.1(a), which consists of one ground state \(\left|1\right>\) (\(\left|5S_{1/2},F=3\right>\)), one excited state \(\left|4\right>\) (\(\left|4D_{3/2},F^{\prime\prime}=3\right>\)) and two intermediate states \(\left|2\right>\) (\(\left|5P_{3/2},F^{\prime}=3\right>\)) and \(\left|3\right>\) (\(\left|5P_{1/2},F^{\prime}=3\right>\)). The pump1 (780 nm), pump2 (1475 nm) and signal (1530 nm)
lights couple the atomic transitions of \(|1\rangle\rightarrow|2\rangle\), \(|3\rangle\rightarrow|4\rangle\) and \(|2\rangle\rightarrow|4\rangle\) under resonance, respectively. According to the phase-matching condition of wave-vector conservation and energy conservation, a FWM light at 795 nm can be generated in the transition \(|1\rangle\rightarrow|3\rangle\) through the FWM process.
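As a quick sanity check of the energy-conservation part of the phase-matching condition, \(\omega_{pump1}+\omega_{signal}=\omega_{pump2}+\omega_{FWM}\), the nominal wavelengths quoted above balance to within their nm-level rounding:

```python
# Energy conservation of the FWM process: w_pump1 + w_signal = w_pump2 + w_FWM.
# Frequencies are proportional to 1/lambda, so c cancels on both sides.
lam_pump1, lam_signal = 780e-9, 1530e-9
lam_pump2, lam_fwm = 1475e-9, 795e-9
lhs = 1 / lam_pump1 + 1 / lam_signal
rhs = 1 / lam_pump2 + 1 / lam_fwm
rel_mismatch = abs(lhs - rhs) / lhs   # ~1e-4, within the rounding of the quoted values
```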
The experimental setup is illustrated in Fig.1(b). We prepare the POV beam (signal) by SLM1 and L1, and image it into the center of a 5-cm-long \({}^{85}\)Rb vapor cell through a 4f system consisting of L2 and L3. The vertically polarized pump2 and vertically polarized pump1 lights are combined with the horizontally polarized signal light through a polarizing beam splitter (PBS) and a long-pass dichroic mirror (DM), respectively. The pump1 light and pump2 light propagate collinearly with the signal beam in the cell and have a waist of 1.67 mm (pump1) and 1.50 mm (pump2). The atomic cell is heated to 80 \({}^{\circ}\)C to ensure a sufficiently high optical depth. In order to improve the signal-to-noise ratio, we use a short-pass filter and a band-pass filter to filter out the generated FWM light from the strong pump2 (100 mW) and pump1 (50 mW) lights respectively. Another 4f imaging system consisting of L4 and L5 images the frequency up-converted POV light (at point A) to point B for further detection with a charge coupled device (CCD). Finally, we perform the projective measurements with a lens L6, a SLM (SLM2) and a single mode fiber (SMF2) that is placed next to the second 4f imaging system. The FWM light collected via SMF2 is measured by a photomultiplier tube (PMT). The signal light (80 \(\mu\)W when it is continuous wave) is modulated to a square-shaped pulse with a temporal width of 1 \(\mu\)s.
We acquire the intensity profiles of the input (1530 nm) and converted (795 nm) POV beams through a CCD at the focal points A and B marked in Fig.1(b), respectively. The results are shown in Fig.2(a): the ring diameters of both beams are calculated to be around 930 \(\mu\)m, which indicates a spatial-mode-conserving frequency conversion. Moreover, this \(l\)-independence of the ring diameter is consistent with the theory. Figs.2(b) and (c) show the achieved conversion efficiency \(\eta\) for different \(l\) and \(k_{r}\). Owing to the last term on the right-hand side of Eq.2, which originates from the Gaussian-shaped intensity distribution, the ring diameter increases slowly with \(l\), and this growth tendency becomes slower as \(k_{r}\) increases.
Figure 1: (a) Energy diagram of diamond configuration. (b) Schematic diagram of the experimental setup. SLM1, SLM2: spatial light modulator; PBS: polarizing beam splitter; The focal lengths of lenses L1, L2, L3, L4, L5 and L6 are 75, 150, 150, 150, 150, 75 mm, respectively; DM: long-pass dichroic mirror; SF: short-pass filter; BF: band-pass filter; BT: beam traps. SMF1, SMF2: single mode fiber.
Figure 2: (a) The intensity profiles of input POV beam (red) and converted POV beam (blue). (b), (c) The distribution of conversion efficiency \(\eta\) with different \(l\) in the case of \(k_{r}=6.13\) mm\({}^{-1}\) and \(k_{r}=24.51\) mm\({}^{-1}\).
The overlapped area between two pump beams and the POV beam changes with the increasing ring diameter, which leads to a reduced effective power of pump light that participates in the FWM process and thus a decreased \(\eta\). We also find that the decreasing trend of \(\eta\) becomes slower as \(k_{r}\) increases by comparing Fig.2 (b) and (c). In this work, \(\eta\) barely changes when \(l\) is in the range of -6 to 6 and \(k_{r}\) is 24.51 mm\({}^{-1}\), as shown in Fig.2(c).
Fig.3(a) depicts the normalized cross-talk matrix between the input and converted POV beams, where the input and detected states \(\left|l\right\rangle\) are tailored from \(\left|-6\right\rangle\) to \(\left|6\right\rangle\) by loading the corresponding phase holograms on SLM1 and SLM2, respectively. We define a signal-to-noise ratio (SNR, \(C=\sum_{a}M_{a,a}/\sum_{a,b}M_{a,b}\)) of the cross-talk matrix to quantify the performance of our converter. A high SNR of \(C=90.97\pm 0.23\%\) reveals that different POV states are well distinguished from each other and suffer low cross-talk noise. Because the efficiency \(\eta\) is \(l\)-independent in the range \(l\in[-6,6]\), we are able to realize a high-dimensional frequency conversion for a state \(\left|\Psi\right\rangle\) consisting of arbitrary POV states \(\left|l\right\rangle\) within this subspace. Generally speaking, an N-dimensional (ND) superposition state can be written as
\[\left|\Psi\right\rangle=\frac{1}{\sqrt{N}}\sum_{i=1}^{N}\left|l_{i}\right\rangle \tag{3}\]
with \(l_{i}\in[-6,6]\).
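The cross-talk SNR defined above is straightforward to compute from a measured matrix; a minimal sketch (the small example matrix in the usage is synthetic):

```python
import numpy as np

def crosstalk_snr(M):
    """SNR C = sum_a M[a,a] / sum_{a,b} M[a,b] of a (non-negative)
    cross-talk matrix, where M[a,b] is the power detected in mode b
    when mode a is sent."""
    M = np.asarray(M, dtype=float)
    return np.trace(M) / M.sum()
```

A perfectly diagonal matrix gives \(C=1\); uniform cross-talk lowers \(C\) toward \(1/N\).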
We test the frequency converter with four 2D states that have different values of \(\Delta l(=\left|l_{2}\right|-\left|l_{1}\right|=6,4,3,2)\). These four states are: \(\left|\Psi_{1}\right\rangle=(\left|0\right\rangle+\left|6\right\rangle)/\sqrt{2}\), \(\left|\Psi_{2}\right\rangle=(\left|1\right\rangle+\left|-5\right\rangle)/\sqrt{2}\), \(\left|\Psi_{3}\right\rangle=(\left|2\right\rangle+\left|4\right\rangle)/\sqrt{2}\) and \(\left|\Psi_{4}\right\rangle=(\left|-3\right\rangle+\left|-6\right\rangle)/\sqrt{2}\). Fig.3(b) shows, for each state, the theoretical profile alongside the registered profiles of the input and converted beams. The generated beam profile is in good agreement with the theoretical simulation, and the high similarity between the intensity profiles of the input and converted beams implies a faithful conversion process. We calculate the conversion fidelity (\(F=Tr[\sqrt{\sqrt{\rho}\rho_{0}\sqrt{\rho}}]^{2}\)) between the input and converted fields by performing a projective measurement; here \(\rho_{0}\) and \(\rho\) are the theoretical and experimental density matrices, respectively. The chosen bases for the measurement are \(\left|l_{1}\right\rangle\), \(\left|l_{2}\right\rangle\), \((\left|l_{1}\right\rangle-i\left|l_{2}\right\rangle)/\sqrt{2}\) and \((\left|l_{1}\right\rangle+\left|l_{2}\right\rangle)/\sqrt{2}\). The measured fidelities of \(\left|\Psi_{1}\right\rangle\), \(\left|\Psi_{2}\right\rangle\), \(\left|\Psi_{3}\right\rangle\) and \(\left|\Psi_{4}\right\rangle\) are \(97.90\pm 2.11\%\), \(97.70\pm 1.83\%\), \(99.37\pm 0.82\%\) and \(99.06\pm 0.55\%\) respectively, and the reconstructed density matrices are shown in Fig.3(c). These results demonstrate the capability of our system to faithfully convert 2D states in the subspace \(\left\{\left|-6\right\rangle,...,\left|6\right\rangle\right\}\).
We then verify the validity of our system for 3D, 5D and 7D states by implementing the frequency conversion of the following states: \(\left|3D\right\rangle=(\left|-1\right\rangle+\left|3\right\rangle+\left|-6\right\rangle)/\sqrt{3}\), \(\left|5D\right\rangle=(\left|0\right\rangle+\left|1\right\rangle+\left|-3\right\rangle+\left|-5\right\rangle+\left|6\right\rangle)/\sqrt{5}\), and \(\left|7D\right\rangle=(\left|0\right\rangle+\left|-1\right\rangle+\left|2\right\rangle+\left|3\right\rangle+\left|-4\right\rangle+\left|5\right\rangle+\left|-6\right\rangle)/\sqrt{7}\). In Fig.4(a), the complex intensity profile of the theoretically simulated beam contains weak-intensity regions that cannot be detected by the CCD in our experiment; we therefore observe a similarity between the theoretical and experimental beam profiles that tends to decrease as the dimensionality increases. However, the similarity between the input and converted beam profiles remains high. We choose the projection bases in the space of \(\left\{\left|l_{n}\right\rangle\right\}\), \(\left\{\left|l_{n}\right\rangle+\left|l_{n+1}\right\rangle,...,\left|l_{n}\right\rangle+\left|l_{N}\right\rangle\right\}\), and \(\left\{\left|l_{n}\right\rangle+i\left|l_{n+1}\right\rangle,...,\left|l_{n}\right\rangle+i\left|l_{N}\right\rangle\right\}\) with \(n=1,...,N\), and make projective measurements to calculate the fidelity. The reconstructed density matrices are shown in Fig.4(b), and the fidelities are \(96.70\pm 0.83\%\), \(89.16\pm 0.32\%\) and \(88.68\pm 1.23\%\) for the 3D, 5D and 7D states, respectively. Due to the limited fixed pixel pitch (8 \(\mu\)m), the distortion of the hologram displayed by the SLM becomes more pronounced as the phase pattern of the superposed POV state grows more complex with increasing dimension. This results in differences in the detection efficiency of the different bases during the projective measurements and causes a decrease in the measured fidelity.
This may be overcome by using an SLM with higher pixel density or by calibrating SLM2 before the measurement. By employing POV beams, we obtain a conversion fidelity over 88% even when the number of dimensions reaches 7; in comparison, the achieved fidelity decreases to 50.97% for a 3D COV state \(((\left|-1\right\rangle+\left|3\right\rangle+\left|-6\right\rangle)/\sqrt{3}\); note that the fidelity of the POV counterpart is 96.70%). Our system thus provides advantages in extending the number of dimensions of frequency conversion. Last but not the
Figure 3: (a) Cross-talk matrix between input and converted beams formed by POV states in subspace \(\left\{\left|-6\right\rangle,...,\left|6\right\rangle\right\}\). (b) The theoretical, input and converted beam intensity profiles of four 2D states. (c) The real and imaginary parts of the reconstructed density matrix for the four 2D states.
least, we have obtained a nearly unchanged conversion efficiency for superposed POV with different dimensions, as shown in Fig.4(c).
In conclusion, we report high-dimensional frequency conversion in a hot atomic system through the FWM process. An \(l\)-independent frequency conversion process is achieved by using POV beams. We find that the range of \(l\) with constant conversion efficiency increases with the \(k_{r}\) used in our experiment, and we verify the capability of our system to faithfully convert 2D superposition states in this subspace by performing frequency conversion on four 2D states with different values of \(\Delta l\); all of the measured fidelities exceed 97% after conversion. We finally perform frequency conversion on 3D, 5D and 7D states, and find that the conversion fidelity reaches 88.68\(\pm\)1.23% for the 7D state. Our scheme, in which the conversion efficiency is \(l\)-independent, is also compatible with a cold atomic system and may find applications in high-dimensional and long-distance quantum communication.
## Data Availability Statement
The raw data supporting the conclusion of this article will be made available by the authors, without undue reservation.
## Author Contributions
D-SD and B-SS coordinated the research project. W-HZ performed the experiments, carried out the measurements, and analyzed the data. All authors discussed the manuscript.
## Conflict of Interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
## Funding
This work was supported by National Key R&D Program of China (Grants No. 2017YFA0304800), Anhui Initiative in Quantum Information Technologies (Grant No. AHY020200), the National Natural Science Foundation of China (Grants No. U20A20218, No. 61722510, No. 11934013, No. 11604322, No. 12204461), and the Innovation Fund from CAS, the Youth Innovation Promotion Association of CAS under Grant No. 2018490, the Anhui Provincial Key Research and Development Project under Grant No. 2022b13020002, and the Anhui Provincial Candidates for academic and technical leaders Foundation under Grant No. 2019H208.
|
2309.00260 | Dynamics of ultrarelativistic charged particles with strong radiation
reaction. II. Entry into Aristotelian equilibrium | As first proposed by Gruzinov, a charged particle moving in strong
electromagnetic fields can enter an equilibrium state where the power input
from the electric field is balanced by radiative losses. When this occurs, the
particle moves at nearly light speed along special directions called the
principal null directions (PNDs) of the electromagnetic field. This equilibrium
is "Aristotelian" in that the particle velocity, rather than acceleration, is
determined by the local electromagnetic field. In paper I of this series, we
analytically derived the complete formula for the particle velocity at leading
order in its deviation from the PND, starting from the fundamental
Landau-Lifshitz (LL) equation governing charged particle motion, and
demonstrated agreement with numerical solutions of the LL equation. We also
identified five necessary conditions on the field configuration for the
equilibrium to occur. In this paper we study the entry into equilibrium using a
similar combination of analytical and numerical techniques. We simplify the
necessary conditions and provide strong numerical evidence that they are also
sufficient for equilibrium to occur. Based on exact and approximate solutions
to the LL equation, we identify key timescales and properties of entry into
equilibrium and show quantitative agreement with numerical simulations. Part of
this analysis shows analytically that the equilibrium is linearly stable and
identifies the presence of oscillations during entry, which may have
distinctive radiative signatures. Our results provide a solid foundation for
using the Aristotelian approximation when modeling relativistic plasmas with
strong electromagnetic fields. | Yangyang Cai, Samuel E. Gralla, Vasileios Paschalidis | 2023-09-01T05:32:09Z | http://arxiv.org/abs/2309.00260v1 | # Dynamics of ultrarelativistic charged particles with strong radiation reaction.
###### Abstract
As first proposed by Gruzinov, a charged particle moving in strong electromagnetic fields can enter an equilibrium state where the power input from the electric field is balanced by radiative losses. When this occurs, the particle moves at nearly light speed along special directions called the principal null directions (PNDs) of the electromagnetic field. This equilibrium is "Aristotelian" in that the particle velocity, rather than acceleration, is determined by the local electromagnetic field. In paper I of this series, we analytically derived the complete formula for the particle velocity at leading order in its deviation from the PND, starting from the fundamental Landau-Lifshitz (LL) equation governing charged particle motion, and demonstrated agreement with numerical solutions of the LL equation. We also identified five necessary conditions on the field configuration for the equilibrium to occur. In this paper we study the entry into equilibrium using a similar combination of analytical and numerical techniques. We simplify the necessary conditions and provide strong numerical evidence that they are also sufficient for equilibrium to occur. Based on exact and approximate solutions to the LL equation, we identify key timescales and properties of entry into equilibrium and show quantitative agreement with numerical simulations. Part of this analysis shows analytically that the equilibrium is linearly stable and identifies the presence of oscillations during entry, which may have distinctive radiative signatures. Our results provide a solid foundation for using the Aristotelian approximation when modeling relativistic plasmas with strong electromagnetic fields.
## I Introduction
Every electromagnetic field defines, algebraically at each point, a pair of (possibly identical) light-speed velocities, its principal null directions (PNDs) [1]. Recently, it has become clear that this mathematical notion has an elegant physical manifestation: _ultrarelativistic charged particles follow the PNDs_. The phenomenon appears to be quite universal in that it emerges in different regimes and with different mechanisms regulating the ultimate particle speed, such as classical radiation reaction in magnetically dominated fields relevant to astrophysics [2; 3; 4; 5; 6; 7] or quantum radiation reaction in nearly null fields relevant to laser-plasma physics [8; 9; 10; 11; 12]. This regime is _Aristotelian_ in that the particle velocity, rather than acceleration, is determined by the local electromagnetic field.
In paper I of this series [13] we initiated a detailed study of Aristotelian motion for classical charged particles in strong external fields. We considered the fundamental Landau-Lifshitz (LL) equation [14], which includes both Lorentz force and self-force. We adopted the approximation that a particle is nearly, but not exactly, moving on a PND, and derived equations for its velocity at leading order in the deviation from the PND. We identified precise conditions on the field configuration that are necessary for the equilibrium to occur. Finally, we demonstrated numerical agreement of this approximation with full solutions of the LL equation in the appropriate regime, using a new numerical code.
In this paper we will use similar analytical and numerical techniques to study the entry into Aristotelian equilibrium. Our main results are: (1) the necessary conditions identified in paper I are in fact sufficient for equilibrium to occur; (2) the equilibrium is linearly stable; (3) in some parameter ranges there are oscillations during the approach to equilibrium, whose properties we study analytically. Together with the findings of paper I, these results provide a definite prescription for using the Aristotelian approximation in practice.
This paper is organized as follows. In Sec. II we review the Aristotelian equilibrium and relate the assumptions and results of paper I [13] to the simple versions given originally by Gruzinov [4; 15]. In Sec. III we review the LL equation and introduce notation. In Sec. IV we show an example of Aristotelian equilibrium in a helical field configuration. In Sec. V we analytically study the approach to equilibrium and demonstrate agreement with numerical simulations. In Sec. VI we perform a large numerical parameter survey that validates our conditions for entry into equilibrium. Finally, in Sec. VII we summarize our results, focusing on a simple prescription for using the Aristotelian approximation in astronomical modeling. Appendix A provides details of our numerical scheme. We use Gaussian units with the speed of light set equal to one.
## II Aristotelian equilibrium
In this section we review the properties of the PNDs and the Aristotelian equilibrium. The PNDs [1]\(\ell_{+}^{\mu}\) and \(\ell_{-}^{\mu}\) of an electromagnetic field \(F_{\mu\nu}\) are the solutions to the pointwise eigenvalue equation
\[F^{\mu}{}_{\nu}\ell_{\pm}^{\nu}=\pm E_{0}\ell_{\pm}^{\mu}. \tag{1}\]
The explicit solution in terms of electric and magnetic fields is
\[\ell_{\pm}^{\mu} =(1,\vec{v}_{\pm}), \tag{2}\] \[\vec{v}_{\pm} =\frac{\vec{E}\times\vec{B}\pm(B_{0}\vec{B}+E_{0}\vec{E})}{B^{2}+E_{0}^{2}}, \tag{3}\]
where \(E_{0}\) and \(B_{0}\) are given in terms of the invariants \(P=\vec{B}^{2}-\vec{E}^{2}\) and \(Q=\vec{E}\cdot\vec{B}\) as
\[E_{0} =\sqrt{\sqrt{(P/2)^{2}+Q^{2}}-P/2} \tag{4}\] \[B_{0} =\text{sign}(Q)\sqrt{\sqrt{(P/2)^{2}+Q^{2}}+P/2}. \tag{5}\]
When the PNDs are degenerate (\(\ell_{+}^{\mu}=\ell_{-}^{\mu}\)), the eigenvalue vanishes and the field is null (\(E_{0}=B_{0}=0\)), in which case the electric and magnetic fields are orthogonal and equal in magnitude in any Lorentz frame. When the PNDs are distinct, the eigenvalues are \(\pm E_{0}\), and \(E_{0}>0\) is the magnitude of the electric field in any frame where the electric and magnetic fields are parallel. Similarly, \(|B_{0}|\) is the magnitude of the magnetic field in such a frame, with \(B_{0}\) positive/negative when the fields are aligned/antialigned.
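Eqs. (2)-(5) can be verified numerically: writing \(\ell_{\pm}^{\mu}=(1,\vec{v}_{\pm})\), the eigenvalue equation (1) is equivalent to the 3-vector identities \(\vec{E}\cdot\vec{v}_{\pm}=\pm E_{0}\) and \(\vec{E}+\vec{v}_{\pm}\times\vec{B}=\pm E_{0}\vec{v}_{\pm}\) (the same combination that appears in the Lorentz force). A sketch (the sample field values in the usage are arbitrary):

```python
import numpy as np

def pnd(E, B):
    """E0, B0 and the spatial PND velocities v_plus, v_minus of Eqs. (2)-(5).
    Assumes a non-degenerate field with Q = E.B != 0 (sign(Q) is ill-defined
    for magnetic-type fields with Q = 0)."""
    E, B = np.asarray(E, float), np.asarray(B, float)
    P, Q = B @ B - E @ E, E @ B
    s = np.sqrt((P / 2) ** 2 + Q ** 2)
    E0 = np.sqrt(s - P / 2)                     # Eq. (4)
    B0 = np.sign(Q) * np.sqrt(s + P / 2)        # Eq. (5)
    common = B0 * B + E0 * E
    denom = B @ B + E0 ** 2
    v_plus = (np.cross(E, B) + common) / denom  # Eq. (3)
    v_minus = (np.cross(E, B) - common) / denom
    return E0, B0, v_plus, v_minus
```

The test below checks both eigenvalue identities and that \(\vec{v}_{\pm}^{2}=1\) (light speed) for one arbitrary field.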
The PNDs define integral curves \(\vec{x}_{\pm}(t)\) by the equation \(d\vec{x}_{\pm}/dt=v_{\pm}\). The parameter \(t\) is the arc length of the space curve, since \(v_{\pm}^{2}=1\). On each curve we may erect a Frenet-Serret frame \(\{\vec{\ell},\vec{n},\vec{k}\}\) with \(\vec{\ell}=v_{\pm}\) [13]. Since these curves fill space, for each choice of \(\pm\) we have a full orthonormal basis for vector fields. In particular, we may decompose the velocity vector of a charged particle as
\[\vec{v}=v_{\ell}\vec{\ell}+v_{n}\vec{n}+v_{k}\vec{k}, \tag{6}\]
where we choose \(\vec{\ell}=\vec{v}_{+}\) for positively charged particles and \(\vec{\ell}=\vec{v}_{-}\) for negatively charged particles. As this is an orthonormal frame, the Lorentz factor is reconstructed by
\[\gamma=\frac{1}{\sqrt{1-v_{\ell}^{2}-v_{n}^{2}-v_{k}^{2}}}. \tag{7}\]
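The decomposition (6) and the reconstruction (7) amount to projecting onto the orthonormal frame; a minimal sketch (the frame and velocity in the test are synthetic):

```python
import numpy as np

def frame_components(v, l, n, k):
    """Components (v_l, v_n, v_k) of Eq. (6) and the Lorentz factor of Eq. (7),
    assuming {l, n, k} is orthonormal and |v| < 1 (units with c = 1)."""
    vl, vn, vk = v @ l, v @ n, v @ k
    gamma = 1.0 / np.sqrt(1.0 - vl ** 2 - vn ** 2 - vk ** 2)
    return vl, vn, vk, gamma
```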
The Frenet-Serret vectors \(\{\vec{\ell},\vec{n},\vec{k}\}\) and their associated curvature \(\kappa\) and torsion \(\iota\) are local functions of the electric and magnetic fields for each choice of \(\pm\). This may be seen from the defining equations,
\[\vec{\ell} =\vec{v}_{\pm}, \tag{8}\] \[\kappa\vec{n} =(\vec{\ell}\cdot\vec{\nabla})\vec{\ell},\qquad(\vec{n}\cdot\vec{n}=1) \tag{9}\] \[\vec{k} =\vec{\ell}\times\vec{n}, \tag{10}\] \[\iota\vec{n} =-(\vec{\ell}\cdot\vec{\nabla})\vec{k}. \tag{11}\]
The first vector \(\vec{\ell}=\vec{v}_{\pm}\) is determined by the values of \(\vec{E}\) and \(\vec{B}\) via Eqs. (8) and (3). The second vector \(\vec{n}\) and the curvature \(\kappa\) involve first derivatives as well [Eq. (9)]. The third vector \(\vec{k}=\vec{\ell}\times\vec{n}\) also depends on first derivatives. Finally, the torsion \(\iota\) depends on first and second derivatives [Eq. (11)]. We will also define the radius of curvature \(R\),
\[R=\frac{1}{\kappa}. \tag{12}\]
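Eqs. (8)-(11) can be evaluated for any given field by finite-differencing the PND direction field along itself. The sketch below (the circular test field in the usage is synthetic) computes \(\{\vec{\ell},\vec{n},\vec{k}\}\), \(\kappa\) and \(\iota\) at a point, assuming \(\kappa\neq 0\) there:

```python
import numpy as np

def frenet_serret(vfield, x, h=1e-6):
    """Frenet-Serret frame {l, n, k}, curvature kappa and torsion iota of the
    integral curve of the unit vector field `vfield` through x, using central
    differences for the directional derivative (l . grad); Eqs. (8)-(11)."""
    def binormal(y):
        ly = vfield(y)
        dly = (vfield(y + h * ly) - vfield(y - h * ly)) / (2 * h)  # (l.grad) l
        return ly, dly, np.cross(ly, dly / np.linalg.norm(dly))
    l, dl, k = binormal(x)
    kappa = np.linalg.norm(dl)                                     # Eq. (9)
    n = dl / kappa
    dk = (binormal(x + h * l)[2] - binormal(x - h * l)[2]) / (2 * h)
    iota = -dk @ n                                                 # Eq. (11)
    return l, n, k, kappa, iota

# Test field: circular field lines around the z-axis; at radius r the
# curvature should be 1/r, the torsion 0, and k the z direction.
circ = lambda p: np.array([-p[1], p[0], 0.0]) / np.hypot(p[0], p[1])
l, n, k, kappa, iota = frenet_serret(circ, np.array([2.0, 0.0, 0.0]))
```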
Note that while \(E_{0}\), \(B_{0}\) and \(\ell^{\mu}\) are invariant notions, the projection of the null vector \(\ell^{\mu}\) to the spatial velocity \(v_{\pm}=\vec{\ell}\) depends on the choice of Lorentz frame. The corresponding Frenet-Serret basis, together with its associated curvature and torsion, are similarly non-invariant. We will be formulating assumptions and deriving results in terms of these non-invariant quantities; the interpretation is that our results hold in frames satisfying our assumptions. Note that we will use the phrase "PND" for both the invariant null direction in a spacetime sense and the non-invariant spatial direction \(\vec{v}_{\pm}\) in a given frame. Context will make clear which notion is meant.
A particle of charge \(q\) and mass \(m\) defines a length scale \(\mathcal{R}\) and a field scale \(\mathcal{E}\) by
\[\mathcal{R}\equiv\frac{q^{2}}{m},\qquad\mathcal{E}\equiv\frac{3}{2}\frac{m^{2 }}{|q|^{3}}, \tag{13}\]
with a conventional factor of \(3/2\). The "classical electron radius" \(\mathcal{R}\) is the distance where the electrostatic self-energy of a point charge equals its rest mass, and the "classical critical field" \(\mathcal{E}\) is (three-halves times) the strength of the electric field at this location. These represent typical scales at which the classical description of the particle will break down. These quantities also define time and magnetic field scales after multiplying by suitable factors of the speed of light (set here to unity). We therefore assume
\[E_{0},B_{0}\ll\mathcal{E}\qquad L,T\gg\mathcal{R}, \tag{14}\]
where \(L\) and \(T\) are typical length and time scales for the field configuration to change significantly.
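For a concrete sense of the scales in Eq. (13): restoring \(c\), \(\mathcal{R}=q^{2}/(mc^{2})\) and \(\mathcal{E}=(3/2)m^{2}c^{4}/|q|^{3}\) evaluate for the electron (rounded Gaussian-unit constants) to the familiar classical electron radius and a field of order \(10^{16}\) statvolt/cm:

```python
# Electron parameters in Gaussian units.
q = 4.80320e-10        # charge, esu
m = 9.10938e-28        # mass, g
c = 2.99792458e10      # speed of light, cm/s

R_cl = q ** 2 / (m * c ** 2)                   # classical electron radius, cm
E_cl = 1.5 * (m * c ** 2) ** 2 / abs(q) ** 3   # classical critical field, statvolt/cm
```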
### Original Derivation
We now summarize Gruzinov's original arguments for Aristotelian motion [4; 15] using the notation of our paper. Gruzinov considered the case where a particle moves primarily along a PND, i.e.,
\[\gamma\gg 1, \tag{15}\] \[\sqrt{v_{n}^{2}+v_{k}^{2}}\ll 1. \tag{16}\]
If we approximate the particle motion as a circle of radius equal to the radius of curvature \(R=1/\kappa\) of the PND, the power radiated is \((2/3)q^{2}\gamma^{4}/R^{2}\). Gruzinov assumed that this "curvature radiation" power is balanced by the Lorentz force power \(|q|E_{0}\),
\[\frac{2}{3}\frac{q^{2}\gamma^{4}}{R^{2}}=|q|E_{0}. \tag{17}\]
Solving for \(\gamma\) gives the Gruzinov formula for the Lorentz factor,
\[\gamma_{g}=\left(\frac{3E_{0}R^{2}}{2|q|}\right)^{1/4}. \tag{18}\]
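In units with \(c=1\), the power balance (17) behind Eq. (18) is easy to verify numerically, as is the equivalence of Eq. (18) with the form written in terms of the scales of Eq. (13) (the particle and field values below are arbitrary):

```python
def gamma_gruzinov(E0, R, q):
    """Terminal Lorentz factor of Eq. (18): gamma_g = (3 E0 R^2 / (2|q|))^(1/4),
    in Gaussian units with c = 1."""
    return (3.0 * E0 * R ** 2 / (2.0 * abs(q))) ** 0.25

# Arbitrary particle and field values (c = 1 units).
q, m, E0, R = 1.0, 1.0, 2.0, 3.0
Rcl, Ecl = q ** 2 / m, 1.5 * m ** 2 / abs(q) ** 3   # scales of Eq. (13)
g = gamma_gruzinov(E0, R, q)
power_rad = (2.0 / 3.0) * q ** 2 * g ** 4 / R ** 2  # curvature-radiation power
power_in = abs(q) * E0                              # Lorentz-force power
```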
Gruzinov obtained an additional condition for this equilibrium based on the idea that it can only occur when the particle has enough space to be accelerated to this terminal Lorentz factor before the PND curves significantly. The curvature radius \(R\) sets the scale over which the uniform field approximation breaks down, so the particle must be able to gain energy \(m\gamma_{g}\) in a region much smaller than \(R\). The typical energy gain over a region of size \(D\) is \(|q|E_{0}D\), so we obtain the condition
\[m\gamma_{g}\ll|q|E_{0}R, \tag{19}\]
which may equivalently be written
\[\gamma_{g}\ll\frac{E_{0}R}{\mathcal{E}\mathcal{R}} \tag{20}\]
or
\[\mathcal{E}^{3/2}\mathcal{R}\ll E_{0}^{3/2}R. \tag{21}\]
### LL Derivation
In paper I [13] we sought to better understand the Aristotelian equilibrium by studying the fundamental LL equation of charged particle dynamics. We again assumed motion along a PND [Eqs. (15) and (16)]. However, the energy balance condition (17) is inappropriate in this context since (1) it requires the particle motion to be treated as circular, an uncontrolled approximation whose compatibility with the LL equation is not obvious; and (2) the local LL dynamics contains more information than just energy conservation. Instead, to express the idea of energy balance, we assumed that the local change in energy is small compared to the typical value set by the Lorentz force,
\[m\left|\frac{d\gamma}{dt}\right|\ll|q|E_{0}. \tag{22}\]
We also assumed that the timescale \(T\) for changes in the field is long compared to the lengthscale \(L\) for spatial changes in the field,
\[T\gg L. \tag{23}\]
We found that Gruzinov's equilibrium emerged only after a final additional assumption,
\[|\iota|\ll\frac{|q|}{m\gamma}\text{Max}\{E_{0},|B_{0}|\}, \tag{24}\]
where \(\iota\) is the torsion of the PND. These three conditions (22), (23) and (24) replace the assumption (17) of global power balance in approximate circular motion. Eq. (22) guarantees approximate power balance locally at the level of the LL equation (as opposed to globally, at the level of total radiated energy), while Eqs. (23) and (24) reflect the approximate circular motion with radius \(R\).
Under these five assumptions (15), (16), (22), (23), (24), we found that the field determines the velocities pointwise as
\[\gamma =\left(\frac{9}{4}\frac{R^{2}}{\mathcal{R}^{2}}\frac{E_{0}}{ \mathcal{E}}\right)^{1/4}=\gamma_{g} \tag{25}\] \[v_{n} =-\frac{1+\delta}{\gamma}\sqrt{\frac{E_{0}}{\mathcal{E}}}\] (26) \[v_{k} =\frac{\delta}{\gamma}\frac{B_{0}}{E_{0}}\sqrt{\frac{E_{0}}{ \mathcal{E}}}, \tag{27}\]
where we used the definitions (13) and also introduced
\[\delta=\frac{E_{0}\mathcal{E}}{E_{0}^{2}+B_{0}^{2}}. \tag{28}\]
We thus reproduced Gruzinov's Lorentz factor (18) and derived new expressions (26) and (27) for the drift velocities.
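The pointwise relations (25)-(28) are simple to package for numerical use; the following sketch (ours) measures fields in units of \(\mathcal{E}\) and lengths in units of \(\mathcal{R}\):

```python
import math

def aristotelian_state(E0, B0, R):
    """Equilibrium state of Eqs. (25)-(28), with fields in units of the
    critical field calE and lengths in units of the classical radius calR."""
    delta = E0 / (E0**2 + B0**2)                   # Eq. (28)
    gamma = (2.25 * R**2 * E0) ** 0.25             # Eq. (25), the Gruzinov value
    vn = -(1.0 + delta) / gamma * math.sqrt(E0)    # Eq. (26)
    vk = delta / gamma * (B0 / E0) * math.sqrt(E0) # Eq. (27)
    return gamma, vn, vk, delta
```

For weak fields the output satisfies \(\sqrt{v_{n}^{2}+v_{k}^{2}}\approx\sqrt{\delta}/\gamma\), the approximation quoted below Eq. (34).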
### Conditions on the field configuration
The five assumptions (15), (16), and (22)-(24) can be recast as conditions purely on the field configuration by using the results (25)-(27). Presenting the results as five conditions \(C_{i}\ll 1\), the conditions are
\[C_{1} =\frac{\mathcal{R}}{R}\sqrt{\frac{\mathcal{E}}{E_{0}}}\ll 1 \tag{29}\] \[C_{2} =\delta\frac{\mathcal{R}}{R}\sqrt{\frac{\mathcal{E}}{E_{0}}}\ll 1\] (30) \[C_{3} =|\iota|\sqrt{R\mathcal{R}}\left(\frac{E_{0}}{\mathcal{E}}\right) ^{1/4}\frac{\mathcal{E}}{\text{Max}\{E_{0},|B_{0}|\}}\ll 1\] (31) \[C_{4} =\eta\sqrt{\frac{\mathcal{R}}{R}}\left(\frac{\mathcal{E}}{E_{0}} \right)^{3/4}\ll 1,\] (32) \[C_{5} =\frac{L}{T}\ll 1 \tag{33}\]
with
\[\eta=\left|\vec{v}\cdot\vec{\nabla}R+\frac{R}{2E_{0}}\vec{v}\cdot\vec{\nabla} E_{0}\right|. \tag{34}\]
In this last expression, it is understood that Eqs. (25)-(27) [as well as (7)] are to be used to express \(\vec{v}\) in terms of the local field configuration. In obtaining (30) we have used the fact that \(\sqrt{v_{n}^{2}+v_{k}^{2}}\approx\sqrt{\delta}/\gamma\) under the assumptions (14).
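These diagnostics can be evaluated directly for a candidate field configuration. The helper below is our own packaging (with \(C_{5}\) omitted, since it requires time dependence), again in units where \(\mathcal{E}=\mathcal{R}=1\):

```python
def conditions(E0, B0, R, torsion=0.0, eta=1.0):
    """Dimensionless equilibrium diagnostics C1-C4 of Eqs. (29)-(32).
    Fields in units of the critical field, lengths in units of the
    classical radius; C5 (Eq. (33)) needs time dependence and is omitted."""
    delta = E0 / (E0**2 + B0**2)                              # Eq. (28)
    C1 = (1.0 / R) * E0 ** -0.5                               # Eq. (29)
    C2 = delta * C1                                           # Eq. (30)
    C3 = abs(torsion) * R**0.5 * E0**0.25 / max(E0, abs(B0))  # Eq. (31)
    C4 = eta * R**-0.5 * E0**-0.75                            # Eq. (32)
    return C1, C2, C3, C4
```

With \(\eta=1\) the output satisfies the identity (35), \(C_{1}^{3/2}=C_{4}/R\), exactly.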
The names \(C_{i}\) are derived from paper I. In this paper, \(C_{1}\)-\(C_{5}\) correspond (respectively) to Eqs. (15), (16), (24),
(22), and (23). In particular, \(C_{1}\ll 1\) and \(C_{2}\ll 1\) express the approximate PND motion (15) and (16), while \(C_{4}\ll 1\) expresses the approximate local equilibrium (22).
Our expression for \(C_{4}\) differs from that presented in paper I, where we assumed that \(\eta\sim R/L\). Generically we will have \(\eta\sim 1\). In this paper we also include cases where \(\eta\) is very small, since they arise quite naturally in our numerical studies, and help to illustrate the differences between our approach and that originally given by Gruzinov (Sec. II.1).
Notice that \(C_{1}\) and \(C_{2}\) are related to \(C_{4}\) by the equations
\[\eta C_{1}^{3/2} =\frac{\mathcal{R}}{R}C_{4}\ll C_{4} \tag{35}\] \[\eta C_{2} \leq\sqrt{\frac{\mathcal{R}}{R}}C_{4}\ll C_{4}. \tag{36}\]
In the second line we used the bound \(\delta\leq\mathcal{E}/E_{0}\), which follows from the definition (28) of \(\delta\). In the generic case that \(\eta\sim 1\), we see that both \(C_{1}\) and \(C_{2}\) are small compared with \(C_{4}\). This means that \(C_{4}\ll 1\) in fact implies that \(C_{1}\ll 1\) and \(C_{2}\ll 1\), and generically one can ignore the \(C_{1}\) and \(C_{2}\) conditions. While \(\eta\) can be a small number in special cases (such as the circular fields studied in Sec. VI.1 below), it would have to be less than \(\sim\sqrt{\mathcal{R}/R}\) for the \(C_{1}\) and \(C_{2}\) conditions to become relevant. For macroscopic fields this factor is very small; for example, \(\sqrt{\mathcal{R}/R}\approx 5\times 10^{-8}\) if \(R\) is one meter. A field configuration with \(\eta\) this small would be extraordinarily fine-tuned. We therefore conclude that for the macroscopic fields of relevance in astrophysics, we may always ignore the \(C_{1}\) and \(C_{2}\) conditions, since they are implied by the \(C_{4}\) condition.
Let us therefore focus on the \(C_{4}\) condition (32). This condition can be rewritten as
\[\eta^{2}\mathcal{R}\mathcal{E}^{3/2}\ll RE_{0}^{3/2}, \tag{37}\]
showing that it is in fact equivalent to Gruzinov's condition (21) in the generic case \(\eta\sim 1\). This agreement is interesting since Gruzinov's condition (21) arose from reasoning about when a particle _can_ enter equilibrium, whereas our condition (37) arose from the assumption (22) that equilibrium had been _achieved_. We will see numerically in Sec. VI that Eq. (37) is necessary and sufficient for equilibrium, provided the other conditions are satisfied.
Finally we discuss \(C_{3}\), which satisfies
\[C_{3}=\frac{|\iota|}{\kappa}\eta^{-1}C_{4}\frac{E_{0}}{\text{Max}\{E_{0},|B_ {0}|\}}. \tag{38}\]
The last factor \(E_{0}/\text{Max}\{E_{0},|B_{0}|\}\) is always \(\lesssim 1\) and will be small for pulsars, where \(E_{0}\ll|B_{0}|\). Again assuming the generic case \(\eta\sim 1\), Eq. (38) shows that large values of the torsion-to-curvature ratio \(|\iota|/\kappa\) are required for the condition \(C_{3}\ll 1\) to be violated in a regime where the condition \(C_{4}\ll 1\) is satisfied. For fields with \(|\iota|/\kappa\lesssim 1\), the \(C_{3}\) condition can be ignored, since it is implied by the \(C_{4}\) condition. We discuss cases with large \(|\iota|/\kappa\) in Sec. IV below.
The conditions \(C_{i}\ll 1\) arose in paper I as a minimal set of assumptions under which the LL equation reduced to the simple results (25)-(27). Although we do not attempt any rigorous mathematical proof, it seems clear from the steps of the derivation there that these assumptions were all necessary for the simple results to emerge. We therefore regard the conditions (29)-(33) as necessary conditions for the equilibrium described by (25)-(27). One of the main goals of this paper is to argue that they are also _sufficient_ for equilibrium to occur.
## III Landau-Lifshitz equation
The LL equation is reviewed in paper I [13]; here we briefly recap the physics and introduce our notation for numerical simulations. In terms of \(\vec{E}\) and \(\vec{B}\) fields, the LL equation is1
Footnote 1: We have dropped terms involving the derivative of the field strength, which are always negligible in the regime of validity discussed below Eq. (39).
\[\frac{d(\gamma\vec{v})}{d\tau}=\frac{\gamma q}{m}\Bigg{\{} \vec{E}+\vec{v}\times\vec{B}\] \[\pm\frac{1}{\mathcal{E}}\left[(\vec{E}\cdot\vec{v})\vec{E}+(\vec {E}+\vec{v}\times\vec{B})\times\vec{B}\right]\] \[\mp\frac{\gamma^{2}}{\mathcal{E}}\left[(\vec{E}+\vec{v}\times \vec{B})^{2}-(\vec{E}\cdot\vec{v})^{2}\right]\vec{v}\Bigg{\}}, \tag{39}\]
where \(\pm\) is the sign of \(q\), \(\vec{v}\) is the velocity, \(\gamma=1/\sqrt{1-v^{2}}\) is the Lorentz factor, and \(\tau\) is the proper time. As discussed originally by LL [14], this equation is valid provided that the corrections to Lorentz force motion (the final two terms) are small compared to the Lorentz force _in the rest frame of the particle_. However, in the lab frame they may be of comparable or greater magnitude, and this is the regime of interest to us.
For numerical purposes, we normalize the equation using the natural scales of the equation, together with a dimensionless number \(\chi\) that can be chosen for convenience. Defining
\[\tilde{\tau} =\frac{3}{2}\chi^{2}\frac{\tau}{\mathcal{R}} \tag{40}\] \[\tilde{\vec{E}} =\frac{\vec{E}}{\chi\mathcal{E}}\] (41) \[\tilde{\vec{B}} =\frac{\vec{B}}{\chi\mathcal{E}}\] (42) \[\tilde{\vec{p}} =\frac{\vec{p}}{m}=\gamma\vec{v}, \tag{43}\]
Eq. (39) becomes
\[\frac{d\tilde{\vec{p}}}{d\tilde{\tau}}=\frac{1}{\chi}\vec{f_{L}}\pm \left[(\tilde{\vec{E}}\cdot\tilde{\vec{p}})\tilde{\vec{E}}+\vec{f_{L}}\times \tilde{\vec{B}}\right]\mp\left[f_{L}^{2}-(\tilde{\vec{E}}\cdot\tilde{\vec{p}})^ {2}\right]\tilde{\vec{p}}, \tag{44}\]
where \(\vec{f_{L}}=\gamma\tilde{\vec{E}}+\tilde{\vec{p}}\times\tilde{\vec{B}}\) is the Lorentz force term. The Lorentz factor is related to the rescaled momentum by
\[\gamma=\sqrt{1+\tilde{\vec{p}}^{2}}. \tag{45}\]
Once the equation is solved for \(\tilde{\vec{p}}(\tilde{\tau})\), the position can be recovered by a subsequent integration. For convenience we define a normalized position vector \(\tilde{\vec{x}}\),
\[\tilde{\vec{x}}=\frac{3}{2}\chi^{2}\frac{\vec{x}}{\mathcal{R}}= \frac{3}{2}\chi^{2}\frac{1}{\mathcal{R}}\int\vec{v}dt=\int\tilde{\vec{p}}\;d \tilde{\tau}. \tag{46}\]
If necessary, one can also determine the lab-frame time \(t\) from \(dt=\gamma d\tau\).
We will choose the value of \(\chi\) so that lengths are measured in meters when the electron is considered,
\[\chi=\sqrt{\frac{2}{3}\frac{\mathcal{R}}{1\text{m}}}\approx 4.33\times 10^{-8}. \tag{47}\]
In particular for electrons we have
\[\tilde{\vec{x}} = \left(\text{position in meters}\right) \tag{48}\] \[\tilde{\tau} = \left(\text{proper time in meters}\right)\] (49) \[\tilde{\vec{E}} = \left(\text{electric field in units of }10^{13}\text{ V/m}\right)\] (50) \[\tilde{\vec{B}} = \left(\text{magnetic field in units of }10^{8}\text{ G}\right). \tag{51}\]
These are somewhat convenient for pulsars, which have radii of \(\sim 10\)km and magnetic fields of \(10^{8}\)-\(10^{15}\)G.
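For reference, the right-hand side of the normalized LL equation (44) is only a few lines of code. This sketch (ours) is written for a positive charge (upper signs); an electron can be handled by flipping the signs of \(\vec{E}\) and \(\vec{B}\):

```python
import numpy as np

CHI = 4.33e-8  # Eq. (47): sets electron length units to meters

def ll_rhs(p, E, B, chi=CHI):
    """Right-hand side of the normalized LL equation (44), upper signs
    (positive charge). p, E, B are 3-vectors in the tilde units of
    Eqs. (40)-(43); for an electron, flip the signs of E and B."""
    gamma = np.sqrt(1.0 + p @ p)           # Eq. (45)
    fL = gamma * E + np.cross(p, B)        # Lorentz force term
    Ep = E @ p
    return (fL / chi
            + (Ep * E + np.cross(fL, B))   # O(q^3) correction term
            - (fL @ fL - Ep**2) * p)       # radiation damping term
```

As a sanity check, a particle at rest in a pure electric field feels only the Lorentz term, and a large perpendicular momentum in a pure magnetic field is damped by the final term.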
Our numerical method is described in Appendix A.
## IV An example with torsion
In paper I we presented a numerical example of Aristotelian equilibrium in which the torsion was precisely zero. Here we complement that case with an example where the torsion is significant. We consider a simple field configuration consisting of parallel electric and magnetic fields tangent to helical curves that fill space. In terms of some constant \(h>0\), the field configuration is
\[\vec{E} = \frac{E_{0}}{\sqrt{h^{2}+x^{2}+y^{2}}}\left\{-y,x,h\right\} \tag{52}\] \[\vec{B} = \frac{B_{0}}{\sqrt{h^{2}+x^{2}+y^{2}}}\left\{-y,x,h\right\}. \tag{53}\]
The PNDs are also helical, and their curvature and torsion are
\[\kappa = \frac{\sqrt{x^{2}+y^{2}}}{x^{2}+y^{2}+h^{2}} \tag{54}\] \[\iota = \frac{h}{x^{2}+y^{2}+h^{2}}, \tag{55}\]
with ratio
\[\frac{\iota}{\kappa}=\frac{h}{\sqrt{x^{2}+y^{2}}}. \tag{56}\]
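The configuration (52)-(55) can be coded directly; the sketch below (ours) works in the tilde units of Sec. III:

```python
import math

def helical_field(x, y, z, E0=1.0, B0=1.0, h=10.0):
    """Electric and magnetic fields of Eqs. (52)-(53) (tilde units)."""
    norm = math.sqrt(h * h + x * x + y * y)
    d = (-y / norm, x / norm, h / norm)   # unit tangent of the local PND
    return tuple(E0 * c for c in d), tuple(B0 * c for c in d)

def pnd_curvature_torsion(x, y, h=10.0):
    """Curvature and torsion of the helical PNDs, Eqs. (54)-(55)."""
    rho2 = x * x + y * y
    return math.sqrt(rho2) / (rho2 + h * h), h / (rho2 + h * h)
```

At the starting point \((1,0,0)\) with \(h=10\) this gives \(\iota/\kappa=10\), matching Eq. (56).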
We choose field strengths \(\tilde{E}_{0}=\tilde{B}_{0}=1\) and height \(\tilde{h}=10\). For electrons, the physical units of \(\tilde{E}_{0}\) and \(\tilde{B}_{0}\) are given in Eqs. (50) and (51), while the value of \(\tilde{h}\) represents meters. We start the particle at \((1,0,0)\) and it reaches equilibrium within several timescales \(m/(|q|E_{0})\). Fig. 1 shows a portion of the ensuing trajectory, which closely hugs a PND while slowly drifting outwards. Although the torsion of this PND is ten times larger than the curvature, we still have \(C_{3}\approx 0.02\ll 1\) and the formulas for the Aristotelian equilibrium should still be valid. Fig. 2 indeed shows that the numerical results match the predicted analytical formulas.
By suitable choice of the parameters \(E_{0}\), \(B_{0}\), and \(h\), one can arrange for \(C_{3}\) to be arbitrarily large over an arbitrarily large region of space. However, since \(\kappa\) (54) falls off more slowly than \(\iota\) (55), the value of \(C_{3}\) always becomes small at large distances from the \(z\) axis [see Eq. (38)]. In numerical experiments we find that equilibrium does not occur in the region of large \(C_{3}\).2 However, the particle still moves primarily along the principal null direction, now with Lorentz factor growing in time, indicating yet another regime of PND motion worthy of future exploration. In this particular numerical experiment, the motion also has a slower outward drift that eventually takes the particle to a region of small \(C_{3}\) and \(C_{4}\), where its Lorentz factor finally settles down to the Gruzinov value (18). These properties are illustrated in Fig. 3.
Footnote 2: Although \(C_{3}\ll 1\) is required for the formulas (25)–(27) to apply, in principle the particle could reach an equilibrium described by different formulas. In paper I we in fact derived a more general formula for the Lorentz factor as a quartic equation for \(\gamma\) (see Eq. (61) of paper I) that does not rely on \(C_{3}\ll 1\). However, this formula does require the equilibrium assumption (22), and we found in this case that the particle does not enter equilibrium in this sense while \(C_{3}\) is still large. We have not identified a field configuration where the alternative Lorentz factor formula both applies and is significantly different from the Gruzinov value.
These simple examples demonstrate that the equilibrium works as expected when torsion is non-zero, as long as \(C_{3}\ll 1\). In the remainder of the paper we will set the torsion precisely to zero in order to simplify the discussion.
## V Approach to equilibrium
We now study the process by which a particle enters equilibrium, using a combination of analytical and numerical approaches.
### Uniform field solution
At least for some short time, the motion of a particle can be determined in the approximation that the field is uniform. In this approximation, we can always work in a frame where \(\vec{E}\) and \(\vec{B}\) are parallel. The full analytic solution to the LL equation was found in this case by [16]. We denote the initial velocity by \((v_{1},v_{2},v_{3})\) with initial Lorentz factor \(\gamma_{0}\), such that the initial four-velocity is given by
\[u^{\alpha}(0)=\gamma_{0}(1,v_{1},v_{2},v_{3}). \tag{57}\]
Taking the field to be in the \(z\) direction, the analytic solution to the LL equation is
\[\gamma(\tau) =\frac{1}{2}\frac{(1+v_{3})+(1-v_{3})e^{-2\tau/\tau_{E}}}{\sqrt{1- v_{3}^{2}-(v_{1}^{2}+v_{2}^{2})e^{-2\tau/(\delta\tau_{E})}}}e^{\tau/\tau_{E}} \tag{58}\] \[v_{x}(\tau) =A(\tau)e^{-\tau/\tau_{\perp}}\sin(\tau/\tau_{B}+\phi)\] (59) \[v_{y}(\tau) =-\text{sign}(qB_{0})A(\tau)e^{-\tau/\tau_{\perp}}\cos(\tau/\tau_ {B}+\phi)\] (60) \[v_{z}(\tau) =\text{sign}(q)\frac{(1+v_{3})-(1-v_{3})e^{-2\tau/\tau_{E}}}{(1+v _{3})+(1-v_{3})e^{-2\tau/\tau_{E}}}. \tag{61}\]
We are using Cartesian coordinates with \(v_{x}(\tau=0)=v_{1}\), \(v_{y}(\tau=0)=v_{2}\), and \(v_{z}(\tau=0)=v_{3}\), and \(\phi\) is the initial direction of motion in the \(xy\) plane (\(\tan\phi=v_{2}/v_{1}\)). Here \(A(\tau)\) is defined by
\[A(\tau)=\frac{2\sqrt{v_{1}^{2}+v_{2}^{2}}}{(1+v_{3})+(1-v_{3})e^{-2\tau/\tau_{ E}}}, \tag{62}\]
and we introduced three time scales
\[\tau_{B} =\frac{m}{|qB_{0}|}, \tag{63}\] \[\tau_{E} =\frac{m}{|q|E_{0}}\] (64) \[\tau_{\perp} =\frac{m}{|q|E_{0}}\frac{\delta}{\delta+1}. \tag{65}\]
The definition of \(\delta\) was given above in Eq. (28).
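The exact Lorentz factor (58) is convenient as a test case for numerical integrators; a minimal sketch (ours):

```python
import math

def gamma_uniform(tau, v1, v2, v3, tau_E=1.0, delta=1.0):
    """Lorentz factor along the exact uniform-field LL solution, Eq. (58).
    tau is proper time, in the same units as tau_E."""
    num = (1.0 + v3) + (1.0 - v3) * math.exp(-2.0 * tau / tau_E)
    den = math.sqrt(1.0 - v3**2
                    - (v1**2 + v2**2) * math.exp(-2.0 * tau / (delta * tau_E)))
    return 0.5 * num / den * math.exp(tau / tau_E)
```

The limits are easy to verify: at \(\tau=0\) this returns \(\gamma_{0}\), and at late times \(\gamma/e^{\tau/\tau_{E}}\to\frac{1}{2}\sqrt{(1+v_{3})/(1-v_{3})}\).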
We see that the particle asymptotically approaches light speed in the \(z\) direction, i.e., it eventually moves along the PND. We interpret this to mean that radiation damps only the perpendicular momentum. The particle executes a damped circular motion in the \(xy\) plane (around the field direction), in accordance with the usual intuition of synchrotron motion and associated radiation damping. The period of oscillation is just the classical synchrotron period \(\tau_{B}\) (expressed here in proper time), while the decay time \(\tau_{\perp}\) generalizes the classical synchrotron damping result.

Figure 1: Trajectory of a particle under the helical field configuration (52) and (53) with equal electric and magnetic field strengths. The particle’s trajectory is almost tangent to the PND shown in the plot while drifting outwards slowly. Notice that the \(\tilde{z}\) axis has a compressed scale; the torsion-to-curvature ratio is \(\iota/\kappa\approx 10\).

Figure 2: Agreement of the analytical predictions (25)–(27) with numerical simulation in a case with significant torsion. The trajectory is shown in Fig. 1. The fractional differences \(\epsilon_{i}\) are defined as the difference between the numerical and analytical value, divided by the analytical value, for \(i=\{\gamma,v_{n},v_{k}\}\). The bottom two plots show the torsion and curvature of the PND at the particle position, normalized as \(\tilde{\kappa}=(2/3)\chi^{-2}\mathcal{R}\kappa\) and \(\tilde{\iota}=(2/3)\chi^{-2}\mathcal{R}\iota\) according to the conventions of Sec. III. The \(x\)-axis uses the timescale \(\tau_{E}=m/(|q|E_{0})\).

In particular, we have
\[\tau_{\perp}\approx\begin{cases}\tau_{B}\frac{\mathcal{E}}{|B_{0}|}&|B_{0}|\gg E _{0}\\ \tau_{E}&E_{0}\gg|B_{0}|\end{cases}, \tag{66}\]
taking into account \(E_{0}\ll\mathcal{E}\). The former case agrees in order of magnitude with Eq. (101) of Ref. [17].
To understand the dynamics it is convenient to expand the Lorentz factor at early and late times,
\[\gamma(\tau)\approx\begin{cases}\gamma_{0}\left(1-\tau/\tau_{\rm drop}-\frac{v _{3}}{\gamma_{0}^{2}}\tau/\tau_{E}\right),&\tau\to 0\\ \frac{1}{2}\sqrt{\frac{1+v_{3}}{1-v_{3}}}e^{\tau/\tau_{E}},&\tau\to\infty. \end{cases} \tag{67}\]
where we introduce yet another timescale
\[\tau_{\rm drop}\equiv\frac{\delta}{\gamma_{0}^{2}(v_{1}^{2}+v_{2}^{2})}\tau_{E}. \tag{68}\]
For sufficiently large initial velocity perpendicular to the field, \(\gamma_{0}^{2}(v_{1}^{2}+v_{2}^{2})\gg\delta\), we have \(\tau_{\rm drop}\ll\tau_{E}\). In this case there is a sudden drop in Lorentz factor on timescale \(\tau_{\rm drop}\) as the particle loses its perpendicular momentum, followed by a subsequent rise on timescale \(\tau_{E}\) as the particle gains parallel momentum. The minimal Lorentz factor is determined by the details of (58); in the special case \(\delta\gg 1\), the minimum occurs at approximately \(\sqrt{\delta}\).
In a realistic field configuration, the exponential rise in Lorentz factor is ultimately cut off by the effects of non-uniform fields. If the conditions are right for Aristotelian equilibrium, there will be a transition around the equilibrium Lorentz factor. In order to understand this transition, we now study the LL equation near the Aristotelian equilibrium solution.
### Near equilibrium solution
To study the dynamics near equilibrium, we adopt the same assumptions as in paper I, assuming time derivatives are perturbatively small (instead of dropping them entirely). Following the steps of Sec. IVB of that reference, except retaining time derivatives,3 we arrive at
Figure 3: An example of evolution through a region where the torsion condition \(C_{3}\ll 1\) is violated. Since \(C_{3}\propto\iota/\sqrt{\kappa}\) diverges on the symmetry axis for the nested helical configuration we consider, the torsion condition is violated near the axis. Here we show an evolution beginning in this region; the particle starts at \(\widetilde{\vec{x}}=\{0.01,0,0\}\), where \(C_{3}\approx 5.6\). The initial velocity is along the PND with initial Lorentz factor equal to half the Gruzinov value. The particle still approximately follows the PND, but with time-variable Lorentz factor, such that it does not reach equilibrium where \(C_{3}\) is large. However, it eventually drifts to a region of small \(C_{3}\) and settles down to the Aristotelian equilibrium.
three coupled equations for \(\gamma\), \(v_{n}\) and \(v_{k}\)
\[\frac{d\gamma}{d\tau} =\frac{|q|\gamma E_{0}}{m}-\frac{|q|\gamma^{3}}{m\mathcal{E}}(E_{0} ^{2}+B_{0}^{2})(v_{n}^{2}+v_{k}^{2}) \tag{69}\] \[\frac{dv_{n}}{d\tau} =\frac{|q|B_{0}}{m}v_{k}-\frac{|q|E_{0}}{m}\frac{1+\delta}{\delta }v_{n}-\gamma\kappa\] (70) \[\frac{dv_{k}}{d\tau} =-\frac{|q|B_{0}}{m}v_{n}-\frac{|q|E_{0}}{m}\frac{1+\delta}{\delta }v_{k}. \tag{71}\]
We now express these quantities as small perturbations of their equilibrium values,
\[\gamma =\Gamma+\bar{\gamma} \tag{72}\] \[v_{n} =V_{n}+\bar{v}_{n}\] (73) \[v_{k} =V_{k}+\bar{v}_{k}, \tag{74}\]
where \(\Gamma,V_{n},V_{k}\) are taken to be Eqs. (25)-(27), respectively. Regarding \(E_{0},B_{0},\kappa\) as constants and linearizing in the barred quantities, we find
\[\tau_{E}\kappa\frac{d\bar{\gamma}}{d\tau} =-2\kappa\bar{\gamma}+2\bar{v}_{n}/\tau_{\perp}-\epsilon 2\bar{v}_{k}/\tau_{B} \tag{75}\] \[\frac{d\bar{v}_{n}}{d\tau} =-\bar{v}_{n}/\tau_{\perp}+\epsilon\bar{v}_{k}/\tau_{B}-\bar{ \gamma}\kappa\] (76) \[\frac{d\bar{v}_{k}}{d\tau} =-\epsilon\bar{v}_{n}/\tau_{B}-\bar{v}_{k}/\tau_{\perp}, \tag{77}\]
where the timescales \(\tau_{i}\) were introduced in Eqs. (63)-(65) and we have also written \(\epsilon=\text{sign}(B_{0})\). Defining
\[T =\frac{\tau}{\tau_{E}} \tag{78}\] \[b =\epsilon\frac{\tau_{E}}{\tau_{B}}=\frac{B_{0}}{E_{0}}\] (79) \[c =\frac{\tau_{E}}{\tau_{\perp}}=\frac{1+\delta}{\delta}>1, \tag{80}\]
this set of equations may also be written
\[\frac{d\bar{X}}{dT}=M\bar{X}, \tag{81}\]
with
\[\bar{X}=\begin{pmatrix}\bar{\gamma}\kappa\tau_{E}\\ \bar{v}_{n}\\ \bar{v}_{k}\end{pmatrix},\quad M=\begin{pmatrix}-2&2c&-2b\\ -1&-c&b\\ 0&-b&-c\end{pmatrix}. \tag{82}\]
Making the ansatz
\[\bar{X}=\bar{X}_{0}e^{\lambda T}, \tag{83}\]
presents the eigenvalue problem \(M\bar{X}_{0}=\lambda\bar{X}_{0}\) (here \(\bar{X}_{0}\) is a constant independent of \(T\)). If \(\lambda^{(i)}\) are the eigenvalues and \(\bar{X}_{0}^{(i)}\) the associated eigenvectors (for \(i=1,2,3\)), then the general solution is
\[\bar{X}(T)=\sum_{i}C_{i}\text{Re}[\bar{X}_{0}^{(i)}e^{\lambda^{(i)}T}], \tag{84}\]
for three real constants \(C_{i}\). The eigenvalues and eigenvectors depend only on \(E_{0}\) and \(B_{0}\) and can be found numerically for any given values of \(E_{0}\) and \(B_{0}\).
We now show that the equilibrium is linearly stable, i.e., \(\text{Re}[\lambda^{(i)}]<0\). The eigenvalues \(\lambda^{(i)}\) are the roots of the characteristic polynomial (defined with a minus sign for convenience),
\[f(\lambda)=-\text{det}(M-\lambda I)\] \[=\lambda^{3}+2(c+1)\lambda^{2}+(b^{2}+6c+c^{2})\lambda+4(b^{2}+c ^{2}). \tag{85}\]
Notice that all the coefficients are real and positive. Thus \(f\) is strictly positive for \(\lambda\geq 0\), implying that any real root is strictly negative. We have therefore proven stability in the case of real roots only. For complex roots, note that such roots must appear in a complex conjugate pair, since we consider a real polynomial. We may therefore write
\[f(\lambda)=(\lambda-\lambda_{r})(\lambda-\alpha-i\beta)(\lambda-\alpha+i\beta), \tag{86}\]
with \(\lambda_{r},\alpha,\beta\) all real. Comparing the coefficient of \(\lambda^{2}\) with the polynomial (85), we find that
\[2\alpha=-\lambda_{r}-2(c+1). \tag{87}\]
However, we find that \(f\) is negative at \(\lambda=-2(c+1)\):
\[f(-2(c+1))=-2\left[(c+3)(c+2)c+b^{2}(c-1)\right]<0, \tag{88}\]
noting from Eq. (80) that \(c>1\). Since \(f\) is positive for \(\lambda\geq 0\), there must be a real root between \(\lambda=-2(c+1)\) and \(\lambda=0\). In the case (86) we consider, \(\lambda_{r}\) is the single real root, so we have
\[-2(c+1)<\lambda_{r}<0. \tag{89}\]
It then follows from (87) that \(\alpha\) is strictly negative, completing the proof that \(\text{Re}[\lambda^{(i)}]<0\).
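The stability argument can be spot-checked numerically by forming \(M\) of Eq. (82) for sample values of \(b\) and \(c\); a quick check of ours:

```python
import numpy as np

def stability_matrix(b, c):
    """Matrix M of Eq. (82); b = B0/E0 (with sign), c = (1+delta)/delta > 1."""
    return np.array([[-2.0, 2.0 * c, -2.0 * b],
                     [-1.0,     -c,        b],
                     [ 0.0,     -b,       -c]])

def slowest_decay_rate(b, c):
    """Real part of the eigenvalue closest to the imaginary axis."""
    return max(np.linalg.eigvals(stability_matrix(b, c)).real)
```

For any sampled \(b\in\mathbb{R}\), \(c>1\) the slowest decay rate is strictly negative, and \(\det M=-4(b^{2}+c^{2})\) reproduces the constant term of the characteristic polynomial (85).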
Perturbations away from equilibrium are thus exponentially damped. The qualitative behavior of approach to equilibrium is determined by the mode (or pair of complex conjugate modes) with smallest \(|\text{Re}[\lambda^{(i)}]|\), which decays the slowest. If this mode has a non-zero imaginary part (i.e., it is part of a complex-conjugate pair), then oscillations will accompany the decay, with frequency equal to \(|\text{Im}[\lambda^{(i)}]|\). Note also that the properties of the decay are insensitive to the sign of \(b\), since the characteristic polynomial (85) depends only on \(b^{2}\).
It is interesting to ask for what parameter ranges such oscillations will be important. The discriminant of the cubic (85) is equal to
\[\Delta=-\frac{4}{27}\big{[}(3b^{2}-c^{2}+10c-4)^{3}+(c^{3}-15c^{2}+30c+9b^{2}c-45b^{2}-8)^{2}\big{]}. \tag{90}\]
When \(\Delta<0\) there are complex-conjugate roots; otherwise all roots are real. Both signs occur over the parameter ranges \(b\in\mathbb{R},c>1\), so both behaviors are possible.
If \(\Delta\geq 0\), there will be no oscillations. If \(\Delta<0\) there will be oscillations, and these will be the dominant behavior if \(\alpha<\lambda_{r}\) in the notation of (86).
One tractable case is when \(\delta\gg 1\) so that \(c\approx 1\). It is easy to see that the discriminant is negative for \(c=1\), so that there will be complex-conjugate roots. One can then check that these oscillatory modes decay more slowly than the pure exponential mode provided \(|b|\geq 2/\sqrt{3}\), and that they decay with a similar order of magnitude even for \(|b|\leq 2/\sqrt{3}\). The oscillations will thus always be important in the case \(\delta\gg 1\).
Some further details are helpful for interpreting this case. As a function of \(b\), the dominant decay rate (the real part of the eigenvalue closest to the imaginary axis) ranges between \(-1\) at \(b=0\) and \(-4\) as \(|b|\to\infty\), and the frequency of oscillations (imaginary part of the complex eigenvalue) ranges from approximately \(1.32\) at \(b=0\) to infinity at large \(|b|\), growing linearly like \(|b|\) as \(b\to\pm\infty\). Back in the physical time coordinate \(\tau=\tau_{E}T\), we see that decay will typically take several to tens of \(\tau_{E}\), and the oscillation angular frequency will be at least \(1.32\tau_{E}^{-1}\), approaching \(|b|\tau_{E}^{-1}=\tau_{B}^{-1}\) at large \(|b|\). It is notable that these oscillations exist even in the pure-E case (\(\delta\gg 1\), \(b=0\)) and are thus physically distinct from synchrotron motion in that limit.
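The quoted \(c\to 1\) numbers follow directly from the matrix (82): at \(b=0\), \(c=1\) the \((\bar{\gamma},\bar{v}_{n})\) block decouples from \(\bar{v}_{k}\), and its complex pair is \(-3/2\pm i\sqrt{7}/2\), so the oscillation frequency is \(\sqrt{7}/2\approx 1.32\) in units of \(1/\tau_{E}\). A quick numerical confirmation (ours):

```python
import numpy as np

# M of Eq. (82) at b = 0, c = 1; the (gamma, v_n) block decouples from v_k.
M = np.array([[-2.0, 2.0, 0.0],
              [-1.0, -1.0, 0.0],
              [0.0, 0.0, -1.0]])
eigs = np.linalg.eigvals(M)
osc_freq = max(abs(eigs.imag))  # sqrt(7)/2, the oscillation frequency
slowest = max(eigs.real)        # -1, the dominant decay rate at b = 0
```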
### Numerical examples
We now show two numerical examples illustrating the features explored in the previous subsections. We consider the case of parallel circular electric and magnetic fields, i.e., Eqs. (52) and (53) with \(h=0\). Since the electric and magnetic fields are parallel in the lab frame, no boost is required to relate the parallel-frame uniform field solution (Sec. V.1) to the lab-frame near-equilibrium solution (Sec. V.2). We always place the particle in a region of the circular field configuration where \(C_{4}\ll 1\) and hence equilibrium is expected to occur.
First we consider the case where the particle begins with a large momentum perpendicular to the PND. Based on the analysis of the previous subsections, we expect the particle to sharply lose its perpendicular momentum on a timescale \(\tau_{\rm drop}\) (68), then gain parallel momentum exponentially on a timescale \(\tau_{E}\) (64) until it nears the equilibrium Lorentz factor, and finally approach equilibrium exponentially, with timescale and possible oscillations determined from \(\tau_{E},\tau_{B},\tau_{\perp}\) via the eigenvalue problem discussed in Sec. V.2. These expectations are confirmed by our numerical experiments; an example is shown in Fig. 4.
We next consider the case where the particle begins with a large momentum \(\gamma\gg\gamma_{g}\) parallel to the PND. Here the uniform field approximation is not useful since the particle quickly reaches the region where the field line bends away from its motion. However, at this stage we can regard the particle as traveling through a new uniform field configuration with some perpendicular momentum, which will be removed by radiation reaction according to the intuition discussed below Eq. (65). In this way the particle will lose energy until it either enters equilibrium directly ("from above") or finds itself in a uniform field configuration that will accelerate it back up to near-equilibrium conditions, after which it enters equilibrium "from below". An example somewhat intermediate between these two cases is shown in Fig. 5.
## VI Numerical survey
Up until now, we have explored the properties of the Aristotelian equilibrium and the manner in which particles enter the equilibrium. We now turn to the general question of when particles will indeed enter the equilibrium. As reviewed in Sec. II.3, paper I identified five necessary conditions \(C_{i}\ll 1\) for equilibrium to occur. We now provide numerical evidence that these conditions are indeed sufficient for it to occur, and we use the results to gain a more quantitative understanding of how well they must be satisfied.
Exploring all five conditions would be computationally intractable. However, given that \(C_{4}\ll 1\) implies \(C_{1}\ll 1\) and \(C_{2}\ll 1\) outside of extremely finely tuned field configurations (Eqs. (35) and (36)), we can safely ignore \(C_{1}\) and \(C_{2}\). The case where \(C_{3}\ll 1\) is violated is potentially interesting for helical fields (as occur, e.g., in relativistic jets), and we found that equilibrium does not occur in this case (Sec. IV). Violations of the quasi-static assumption \(C_{5}\ll 1\) are certainly interesting (e.g., for laser fields), but outside the scope of this paper. Instead, we will pick field configurations where \(C_{3}=0\) and \(C_{5}=0\) (exactly torsion-free and static, respectively) and explore the role of \(C_{4}\) in determining whether equilibrium occurs.
### Circular fields and the role of \(\eta\)
Eq. (34) defined a quantity \(\eta\) that appears in the conditions for equilibrium. For generic fields, \(\eta\approx 1\) and factors of \(\eta\) can be dropped. To illustrate a situation where \(\eta\)_cannot_ be dropped, consider a purely azimuthal field configuration with constant parallel electric and magnetic fields. The field lines and PNDs are just circles of radius \(\rho=\sqrt{x^{2}+y^{2}}\). Since \(\vec{\nabla}\rho=\vec{n}\), where \(\vec{n}\) is the Frenet-Serret normal direction to the curve, \(\eta\) can be calculated as
\[\eta=|\vec{v}\cdot\nabla\rho|=|v_{n}|. \tag{92}\]
This quantity is small in equilibrium since \(|v_{n}|\ll 1\) by the assumption of motion along a PND. The small size of \(\eta\) arises because the field strength and radius of curvature are both constant along the PND direction \(\vec{\ell}\), a finely
tuned arrangement. From Eqs. (26) and (25), we have
\[|v_{n}|^{2}=\frac{2}{3}\frac{\mathcal{R}}{R}\sqrt{\frac{E_{0}}{\mathcal{E}}}(1+ \delta)^{2}. \tag{93}\]
The condition (37) then becomes
\[R^{2}E_{0}\gg(1+\delta)^{2}\mathcal{R}^{2}\mathcal{E}, \tag{94}\]
where we drop a factor of \(2/3\).
Eq. (94) is a necessary condition for equilibrium to occur. To check whether it is also sufficient, we simulated a large number of trajectories with different field configuration parameters and particle initial conditions. Specifically, we fixed \(\tilde{B}_{0}=0.1\) and chose the particle to begin on the \(x\) axis, choosing the other parameters randomly from uniform distributions in the ranges \(\log_{10}\tilde{E}_{0}\in(-4,0)\) for the \(\log\) of the field strength, \(\gamma\in(1,\gamma_{g})\) for the initial Lorentz factor, \(\theta\in(0,\pi)\) and \(\phi\in(0,2\pi)\) for the initial direction of motion (where \(\theta,\phi\) are spherical coordinates on the space of velocities), and \(\log_{10}x\in(-3,0)\) for the \(\log\) of the initial position. The initial radius of curvature \(\tilde{R}\) is just \(x\), so we sample an initial range \(\log_{10}\tilde{R}\in(-3,0)\).
Figure 4: An example of entry into equilibrium. The field configuration is circular with \(\tilde{E}_{0}=1\) and \(\tilde{B}_{0}=10\). The particle begins at \(\tilde{x}=(1,0,0)\) with initial momenta \(\tilde{p}=(-2.32,8.28,3.97)\times 10^{4}\). The particle quickly loses its perpendicular momentum on timescale \(\tau_{\text{drop}}\approx 10^{-4}\tau_{E}\) (68) before accelerating along the PND to near the equilibrium Lorentz factor over several \(\tau_{E}\) and finally approaching the equilibrium in a weakly damped exponential fashion. Along with the numerical trajectory, we show the corresponding analytical solutions in the uniform-field and near-equilibrium approximations. The uniform-field solution (58)–(61) has 5 parameters \(E_{0},B_{0},v_{i}\), all of which are fixed by the initial data. The near-equilibrium solution (84) has six parameters \(E_{0},B_{0},\kappa,C^{(i)}\). The first three are chosen according to the local field at the initial position and the last three are chosen by fitting with data from \(\tau=15\tau_{E}\) to the end of the evolution at \(\tau=30\tau_{E}\). The oscillation period is approximately \(0.61\tau_{E}\), and the damping timescale is approximately \(10\tau_{E}\) (the exponential envelope is \(e^{-0.1\tau/\tau_{E}}\)). Only the Lorentz factor was used for these fits; however, using the fit parameters, the perpendicular velocities \(v_{n}\) and \(v_{k}\) display a similar level of agreement.
Figure 5: Entry into equilibrium after starting with large momentum parallel to the PND. The field setup is the same as Fig. 4, except that the particle starts with momentum entirely in the \(y\) direction with Lorentz factor equal to 100 times the equilibrium value, \(\gamma_{0}=100\gamma_{g}\). The particle quickly loses energy until near the equilibrium value and then approaches equilibrium with a sinusoidally modulated exponential.
For each run of the code we evolve for a maximum time of \(\tau_{\max}=(6\log\gamma_{g})\tau_{E}\), which is sufficient for the particle to enter equilibrium if it is destined to do so.4 At any given time in the trajectory, the particle is considered to have entered equilibrium if \(\langle|\gamma-\gamma_{g}|/\gamma_{g}\rangle<3\%\), where the average \(\langle...\rangle\) is calculated over the last \(20\%\) of the computed trajectory, using the local value of \(\gamma_{g}\). Once a particle enters equilibrium, we record the local electric field \(E_{0}\) and curvature radius \(R\) and terminate the run. (If the particle never enters equilibrium, we make no corresponding record.) Fig. 6 shows the values of \(E\) and \(R\) at which equilibrium occurred in our random sample, showing clearly that the condition (94) indeed controls whether equilibrium is obtained. We find that the left-hand side must be approximately \(15\) times larger than the right-hand side (red line in the figure) for equilibrium to occur.
Footnote 4: According to Eq. (67), the time for a particle to be accelerated to \(\gamma_{g}\) is of order \(\tau_{E}\log\gamma_{g}\). For the circular field \(E_{0}\) and hence \(\tau_{E}\) is constant; however, for other field configurations we update the maximum allowed run time after each time step to use the local value of \(E_{0}\). We also examined a selection of the trajectories that did not enter equilibrium in this time and evolved them for longer, finding that indeed they never enter equilibrium.
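In code, the equilibrium criterion just described reads as follows (a minimal sketch; the array-based bookkeeping is ours, and the trajectory is assumed to be sampled uniformly in time so that "the last 20%" of samples corresponds to the last 20% of the run):

```python
def entered_equilibrium(gammas, gamma_g, tail_fraction=0.2, tol=0.03):
    """Equilibrium test used in the survey: the fractional deviation of
    the Lorentz factor from gamma_g, averaged over the last 20% of the
    trajectory, must be below 3%.

    gammas:  Lorentz factors sampled along the trajectory.
    gamma_g: local equilibrium Lorentz factors (same sampling), since
             gamma_g uses the local field values."""
    n = len(gammas)
    start = int(n * (1.0 - tail_fraction))
    tail = [abs(g - gg) / gg for g, gg in zip(gammas[start:], gamma_g[start:])]
    return sum(tail) / len(tail) < tol
```

Note that only the tail of the trajectory matters: a particle that starts far from equilibrium but settles onto \(\gamma_{g}\) still passes the test.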
When \(E_{0}\gg B_{0}\), we have \(\delta\approx\mathcal{E}/E_{0}\gg 1\), and this condition reduces to the original Gruzinov condition (21). (This agreement is accidental, due to the specific form of \(\eta\) for this field configuration.) By contrast, when \(E_{0}\lesssim B_{0}\), the condition (94) remains distinct from the Gruzinov condition (21). This behavior is seen clearly in Fig. 6. The Gruzinov condition is necessary, but not sufficient, in this special case. The correct condition for entry into equilibrium is the modified condition (37).
### More general field configurations
The circular field is a special case that cleanly illustrates the role of \(\eta\) in the condition \(C_{4}\ll 1\) (32) for entry into equilibrium. In order to test the condition more generally, we consider two more kinds of field configuration. The "ellipse" field configuration simply changes the shape of the field lines to ellipses instead of circles, while keeping everything else the same. The "separate center" field configuration consists of circular electric and magnetic field lines with equal field strengths \(E_{0}=B_{0}\) but with the center of the circles shifted by a distance \(d\). We randomly varied the field strength and center-distance while also randomly choosing initial conditions as before.
The results of these surveys are shown along with the circular case in Fig. 7, demonstrating clearly that the condition \(C_{4}\ll 1\) is necessary and sufficient for equilibrium to occur. The precise value of \(C_{4}\) required depends on the field configuration, ranging from \(\sim 0.1\) to \(\sim 0.01\) for the cases considered here.
## VII Summary of results
We have provided a detailed understanding of the entry into Aristotelian equilibrium, identifying the relevant timescales, giving analytical descriptions where possible, and showing that numerical simulations match the predicted behavior. We showed analytically that the equilibrium is linearly stable and identified the presence of
Figure 6: Values of the local electric field and PND curvature radius at entry into equilibrium in a large parameter survey. In this circular field configuration, equilibrium is expected to occur when \(N=R^{2}E_{0}/[(1+\delta)^{2}\mathcal{R}^{2}\mathcal{E}]\gg 1\) [Eq. (94)]. The numerics validate this condition: the red line is the curve \(N=15\).
Figure 7: Validity of the main condition for entry into equilibrium [\(C_{4}\ll 1\), (32) or (37)] in three numerical parameter surveys. The colored dots correspond to the local field properties where a particle in the corresponding survey entered equilibrium. Lines of constant \(C_{4}\) have slope \(-1\) on this plot and the \(y\)-intercept is the log of \(1/C_{4}^{2}\). For each field configuration, the edge of the region of equilibria clearly has the predicted slope of \(-1\). We have drawn approximate reference lines for this edge to help guide the eye. Entry to equilibrium occurs when \(C_{4}\lesssim 5\%\).
oscillations at entry for a large region of parameter space. We performed numerical parameter surveys exploring the conditions for equilibrium to occur. Combined with the results of paper I, this study provides strong evidence that the conditions (29)-(33) are necessary and sufficient for Eqs. (25)-(27) to describe the motion of a charged particle. This provides a solid foundation for using the Aristotelian approximation in astrophysical modeling.
## Acknowledgements
This work was supported in part by NSF grant PHY-1752809, and NSF Grants PHY-1912619 and PHY-2145421, to the University of Arizona.
## Appendix A Numerical Method
Our numerical scheme is based on the implicit 4th order Runge-Kutta-Nystrom method. We first review this method before presenting an improved iteration method and an adaptive timestep version that we also used in some cases.
For a second-order ordinary differential equation
\[\ddot{x}^{\alpha}=f^{\alpha}(t,x^{\alpha},\dot{x}^{\alpha}), \tag{100}\]
the implicit 4th order Runge-Kutta-Nystrom method can be expressed as follows. First fix a time step \(h\). If \(x^{\alpha}_{n}\) and \(\dot{x}^{\alpha}_{n}\) represent the value and derivative at time \(t_{n}\), then the values at the next step \(t_{n+1}=t_{n}+h\) are obtained by
\[x^{\alpha}_{n+1} =x^{\alpha}_{n}+h\dot{x}^{\alpha}_{n}+h^{2}b_{i}k^{\alpha}_{i} \tag{101}\] \[\dot{x}^{\alpha}_{n+1} =\dot{x}^{\alpha}_{n}+ha_{i}k^{\alpha}_{i}, \tag{102}\]
where \(a_{i}\) and \(b_{i}\) are
\[a_{i} =\left(\frac{1}{2},\ \frac{1}{2}\right) \tag{103}\] \[b_{i} =\left(\frac{1}{4}+\frac{\sqrt{3}}{12},\ \frac{1}{4}-\frac{\sqrt{3}}{12} \right), \tag{104}\]
and \(k^{\alpha}_{i}\) must satisfy
\[k^{\alpha}_{i}=f^{\alpha}\big{(}t_{n}+c_{i}h,x^{\alpha}_{n}+c_{i} h\dot{x}^{\alpha}_{n}+h^{2}B_{ij}k^{\alpha}_{j},\dot{x}^{\alpha}_{n}+hA_{ij}k^{ \alpha}_{j}\big{)}. \tag{105}\]
In these expressions, repeated indices are summed. Here \(c_{i}\) is the vector
\[c_{i}=\begin{pmatrix}\frac{1}{2}-\frac{\sqrt{3}}{6}\\ \frac{1}{2}+\frac{\sqrt{3}}{6}\end{pmatrix} \tag{106}\]
and \(A_{ij}\) and \(B_{ij}\) are the matrices
\[A_{ij} =\begin{pmatrix}\frac{1}{4}&\frac{1}{4}-\frac{\sqrt{3}}{6}\\ \frac{1}{4}+\frac{\sqrt{3}}{6}&\frac{1}{4}\end{pmatrix} \tag{107}\] \[B_{ij} =\begin{pmatrix}\frac{1}{36}&\frac{5}{36}-\frac{\sqrt{3}}{12}\\ \frac{5}{36}+\frac{\sqrt{3}}{12}&\frac{1}{36}\end{pmatrix}. \tag{108}\]
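For reference, the tableau of Eqs. (103)–(108) in code form, together with standard Runge-Kutta-Nystrom consistency checks (the row sums of \(A_{ij}\) and \(B_{ij}\) should reproduce \(c_{i}\) and \(c_{i}^{2}/2\), and the quadrature weights should sum to \(1\) and \(1/2\)), all of which these coefficients satisfy:

```python
import numpy as np

S3 = np.sqrt(3.0)
a_w = np.array([0.5, 0.5])                            # Eq. (103), velocity weights
b_w = np.array([0.25 + S3 / 12, 0.25 - S3 / 12])      # Eq. (104), position weights
c = np.array([0.5 - S3 / 6, 0.5 + S3 / 6])            # Eq. (106), stage abscissae
A = np.array([[0.25,          0.25 - S3 / 6],
              [0.25 + S3 / 6, 0.25         ]])        # Eq. (107)
B = np.array([[1 / 36,           5 / 36 - S3 / 12],
              [5 / 36 + S3 / 12, 1 / 36          ]])  # Eq. (108)

# Stage-consistency checks: sum_j A_ij = c_i and sum_j B_ij = c_i**2 / 2.
assert np.allclose(A.sum(axis=1), c)
assert np.allclose(B.sum(axis=1), c ** 2 / 2)
# Quadrature consistency: velocity weights sum to 1, position weights to 1/2.
assert np.isclose(a_w.sum(), 1.0) and np.isclose(b_w.sum(), 0.5)
```

The abscissae \(c_{i}\) are the two Gauss-Legendre nodes on \([0,1]\), so the scheme is an implicit collocation-type method.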
### Iteration method
Eq. (105) can be solved by a fixed point iteration method as follows. The initial values of \(k^{\alpha}_{i}\) are taken to be
\[(k^{\alpha}_{i})^{0}=f^{\alpha}(t_{n},x^{\alpha}_{n},\dot{x}^{ \alpha}_{n})\ \ (i=1,2). \tag{109}\]
We then iteratively improve the guess by calculating
\[(k^{\alpha}_{i})^{N+1}=f^{\alpha}\big{(}t_{n}+c_{i}h,\;x^{\alpha}_{n}+c_{i}h\dot{x}^{\alpha}_{n}+h^{2}B_{ij}(k^{\alpha}_{j})^{N},\;\dot{x}^{\alpha}_{n}+hA_{ij}(k^{\alpha}_{j})^{N}\big{)}. \tag{110}\]
However, for large Lorentz factors we found that this iteration method sometimes converged very slowly or got stuck in a limit cycle. In these cases we instead used a damped (relaxed) secant method,
\[y_{n}=w\left[y_{n-1}-F(y_{n-1})\frac{y_{n-1}-y_{n-2}}{F(y_{n-1})-F(y_{n-2})}\right]+(1-w)y_{n-1}, \tag{111}\]
where \(y_{N}=(k^{\alpha}_{i})^{N}\), \(F^{\alpha}_{i}(k^{\alpha}_{i})=f^{\alpha}(k^{\alpha}_{i})-k^{\alpha}_{i}\), and \(w\in[0,1]\) is the relaxation weight. We chose \(w=0.05\). At each time step we try the simple iteration method first, switching to the secant method only if the simple iteration fails to reach a given tolerance (we chose a fractional error of \(10^{-10}\)) within a fixed number of iterations (we chose \(10^{3}\)).
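A toy version of this damped (weighted) secant update, applied to the scalar fixed-point problem \(y=\cos y\) rather than to the stage equations themselves (the function, weight \(w=0.5\), and iteration count are our choices for illustration; the paper uses \(w=0.05\)):

```python
import math

def damped_secant_fixed_point(g, y0, y1, w=0.5, iters=200):
    """Solve y = g(y) by applying the weighted secant update to
    F(y) = g(y) - y.  w = 1 is the plain secant method; smaller w
    damps the update."""
    F = lambda y: g(y) - y
    prev, cur = y0, y1
    for _ in range(iters):
        denom = F(cur) - F(prev)
        if denom == 0.0:          # converged to machine precision
            break
        secant = cur - F(cur) * (cur - prev) / denom
        prev, cur = cur, w * secant + (1.0 - w) * cur
    return cur

root = damped_secant_fixed_point(math.cos, 0.5, 1.0)
```

The damping trades speed for robustness: once the secant prediction is accurate, the residual shrinks by roughly a factor \((1-w)\) per iteration instead of jumping straight to the root, which is what suppresses the limit-cycle behavior.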
### Adaptive time step
For most portions of a given evolution the main timescales are \(\tau_{E}\) and \(\tau_{B}\), defined in Eqs. (64) and (63). However, for some initial conditions (e.g., Fig. 4) there can be abrupt changes in Lorentz factor on the much smaller timescale \(\tau_{\rm drop}\) defined in Eq. (68). In most cases we use an adaptive time step scheme, beginning with \(h=\min(\tau_{\rm B},\tau_{\rm E})\times 1\%\) and updating as follows.
We focus on the Lorentz factor as a simple scalar representative of the solution. Suppose we are at time \(t\) in the evolution and consider the value at time \(t+2h\), where \(h\) is the current step size. We can compute this either by taking a single step of size \(2h\) or by taking two steps of size \(h\). We will denote the resulting values for \(\gamma\) by \(\gamma^{(2h)}_{\rm num}\) and \(\gamma^{(h)}_{\rm num}\), respectively. Since our method has fourth-order accuracy, these are related to the true value \(\gamma_{\rm true}\) by
\[\gamma^{(h)}_{\rm num} \approx\gamma_{\rm true}(t+2h)+\phi h^{4} \tag{113}\] \[\gamma^{(2h)}_{\rm num} \approx\gamma_{\rm true}(t+2h)+16\phi h^{4}, \tag{114}\]
where \(\phi\) is some unknown constant. We can estimate \(\phi\) by solving,
\[\phi\approx\frac{\gamma^{(2h)}_{\rm num}-\gamma^{(h)}_{\rm num}}{15h^{4}}. \tag{115}\]
Suppose we instead consider a new time step \(h_{0}\) and evolve forward one step to \(\gamma_{\rm num}^{(h_{0})}\). Now we have
\[\gamma_{\rm num}^{(h_{0})}\approx\gamma_{\rm true}(t+2h)+\phi h_{0}^{4}. \tag{100}\]
If we want to achieve an accuracy of \(\epsilon=\left|(\gamma_{\rm num}^{(h_{0})}-\gamma_{\rm true})/\gamma_{\rm true}\right|\), the time step we should use satisfies
\[h_{0}^{4}=\frac{\epsilon\gamma_{\rm num}^{(h)}}{|\phi|}. \tag{101}\]
Using the formula (115) for \(\phi\), we conclude that an appropriate time step is
\[h_{0}=\zeta h\left(\frac{15\epsilon\gamma_{\rm num}^{(h)}}{|\gamma_{\rm num}^{ (2h)}-\gamma_{\rm num}^{(h)}|}\right)^{1/4}. \tag{102}\]
The factor \(\zeta<1\) ensures that \(h_{0}\) is slightly smaller than the step size that would be expected to exactly achieve the desired error tolerance \(\epsilon\) (which corresponds to \(\zeta=1\)). We choose \(\zeta=(14/15)^{1/4}\) in our code, and \(\epsilon=10^{-6}\).
At each step in the evolution we calculate the candidate new time step \(h_{0}\) according to (102). Before adopting this as the new time step we consider two potential adjustments. First, we ensure that \(h\) does not change by more than a factor of two. That is, if \(h_{0}\) is larger than \(2h\), then we use \(2h\) instead; similarly, if \(h_{0}\) is smaller than \(h/2\), we use \(h/2\) instead. Finally, we ensure that the candidate new time step satisfies the condition for the convergence of fixed-point iteration, namely that the spectral radius of the (six-dimensional) Jacobian matrix of \(f_{i}^{\alpha}(k_{j}^{\beta})\) is strictly less than \(1\), i.e., the maximum of the absolute values of the eigenvalues of this matrix is less than \(1\). (Here \(f_{i}^{\alpha}\) denotes the stage map defined by the right-hand side of (105).) If this test fails, then we divide the step size in half and try again, iterating until a step size satisfying the condition has been found.
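The candidate-step formula and the factor-of-two clamping described above can be collected into one small helper (our own function; `gam_h` and `gam_2h` denote the Lorentz factors obtained from two steps of size \(h\) and one step of size \(2h\), respectively):

```python
def next_time_step(h, gam_h, gam_2h, eps=1e-6, zeta=(14.0 / 15.0) ** 0.25):
    """Richardson-based candidate step size, clamped so that the step
    never changes by more than a factor of two per update."""
    diff = abs(gam_2h - gam_h)
    if diff == 0.0:              # vanishing error estimate: try to grow h
        h0 = 2.0 * h
    else:
        h0 = zeta * h * (15.0 * eps * abs(gam_h) / diff) ** 0.25
    return min(max(h0, 0.5 * h), 2.0 * h)
```

The final spectral-radius check on the stage-map Jacobian is omitted here, since it requires the field Jacobians; in the full code it would further halve the value returned above until fixed-point convergence is guaranteed.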
### Convergence test
We performed a convergence test for the code using the uniform-field setup, because it has exact analytical solutions. We chose the field strengths \(\tilde{E}_{0}=0.1,\tilde{B}_{0}=1\) and the initial momentum \(\tilde{p}_{0}=\{100,400,-300\}\). The Lorentz factor of Eq. (58) at \(\tau=2\tau_{E}\) was used as the reference value. The code was executed with different time steps \(h\), and the fractional difference
\[\epsilon=\left|\frac{\gamma_{\rm numeric}-\gamma_{\rm analytic}}{\gamma_{\rm analytic}}\right| \tag{103}\]
was then calculated. The result of the convergence test is shown in Fig. 8. It is clear that our code exhibits 4th-order convergence.
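A convergence test of this kind can be reproduced on any problem with a known solution. The sketch below applies the implicit RKN step of Eqs. (101)–(108), with the stage equations solved by the fixed-point iteration of Eqs. (109)–(110), to a simple harmonic oscillator (our substitute for the paper's uniform-field setup, chosen because \(\ddot{x}=-x\) has the exact solution \(\cos t\)) and checks that halving \(h\) reduces the error by roughly \(2^{4}\):

```python
import numpy as np

S3 = np.sqrt(3.0)
a_w = np.array([0.5, 0.5])
b_w = np.array([0.25 + S3 / 12, 0.25 - S3 / 12])
c = np.array([0.5 - S3 / 6, 0.5 + S3 / 6])
A = np.array([[0.25, 0.25 - S3 / 6], [0.25 + S3 / 6, 0.25]])
B = np.array([[1 / 36, 5 / 36 - S3 / 12], [5 / 36 + S3 / 12, 1 / 36]])

def rkn_step(f, t, x, v, h, sweeps=50):
    """One implicit RKN step; the stage values k_i are found by
    fixed-point iteration, initialized with f at the current point."""
    k = np.array([f(t, x, v), f(t, x, v)])
    for _ in range(sweeps):
        k = np.array([f(t + c[i] * h,
                        x + c[i] * h * v + h * h * (B[i] @ k),
                        v + h * (A[i] @ k)) for i in range(2)])
    return x + h * v + h * h * (b_w @ k), v + h * (a_w @ k)

def solve(n):
    """Integrate x'' = -x, x(0)=1, v(0)=0 up to t=1 with n steps."""
    f = lambda t, x, v: -x
    h, x, v = 1.0 / n, 1.0, 0.0
    for i in range(n):
        x, v = rkn_step(f, i * h, x, v, h)
    return x

err = [abs(solve(n) - np.cos(1.0)) for n in (8, 16)]
order = np.log2(err[0] / err[1])   # should be close to 4
```

For this linear problem the fixed-point map contracts with ratio \(\sim h^{2}\|B\|\ll 1\), so 50 sweeps converge to machine precision and the measured error is purely truncation error.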
---

2305.18922 | Topological Nanophononic Interface States Using High-Order Bandgaps in the One-Dimensional Su-Schrieffer-Heeger Model | Anne Rodriguez, Konstantinos Papatryfonos, Edson Rafael Cardozo de Oliveira, Norberto Daniel Lanzillotti-Kimura | 2023-05-30T10:25:15Z | http://arxiv.org/abs/2305.18922v1

Topological Nanophononic Interface States Using High-order Bandgaps in the One-Dimensional Su-Schrieffer-Heeger Model
###### Abstract
Topological interface states in periodic lattices have emerged as valuable assets in the fields of electronics, photonics, and phononics, owing to their inherent robustness against disorder. Unlike electronics and photonics, the linear dispersion relation of hypersound offers an ideal framework for investigating higher-order bandgaps. In this work, we propose a design strategy for the generation and manipulation of topological nanophononic interface states within high-order bandgaps of GaAs/AlAs multilayered structures. These states arise from the band inversion of two concatenated superlattices that exhibit inverted spatial mode symmetries around the bandgap. By adjusting the thickness ratio of the unit cells in these superlattices, we are able to engineer interface states in different bandgaps, enabling the development of versatile topological devices spanning a wide frequency range. Moreover, we demonstrate that such interface states can also be generated in hybrid structures that combine two superlattices with bandgaps of different orders centered around the same frequency. These structures open up new avenues for exploring topological confinement in high-order bandgaps, providing a unique platform for unveiling and better understanding complex topological systems.
## 1 Introduction
A periodic lattice containing two elements per unit cell can be described by the one-dimensional Su-Schrieffer-Heeger (SSH) model in the tight-binding approximation. This description has been a significant breakthrough for developing materials with topological properties [1, 2]. Topological states have since been demonstrated for a wide variety of excitations, including photons [3, 4, 5, 6, 7, 8], phonons [9, 10, 11, 12, 13], vibrations [14, 15, 16, 17, 18, 19, 20, 21], polaritons [22, 23, 24], plasmons [25], and magnons [26]. In the context of periodic lattices, multilayered structures, such as distributed Bragg reflectors (DBRs), have high-reflectivity regions associated with bandgaps. The states at the edge of these bandgaps present different spatial symmetries [27]. When concatenating two DBRs with inverted spatial mode symmetries around a gap, a topological interface state emerges [6, 28, 29].
Nanophononics [30, 31, 32], i.e., the engineering of acoustic nanowaves, appears as a versatile simulation platform [9, 33]. Unlike in optics or electronics [34, 35, 36], the linear dispersion relation of acoustic phonons [37] allows for the study of topological interface states in a broad frequency range. In particular, nanoacoustic topological states have been evidenced in superlattices working at acoustic frequencies in the tens to hundreds of GHz range, and demonstrated exceptional agreement between theory and experiments. [38, 39] In these reported cases, the unit cell was formed by two materials whose thickness ratio was optimized to reverse the mode symmetries around a specific bandgap while keeping the acoustic thickness constant. These studies demonstrated the robustness of the topologically protected states against thickness perturbations that do not affect the Zak phase (\(\theta^{Zak}\)) [38, 39], which is a key parameter to characterize topological phases in the SSH model [40]. Formally, the Zak phase is defined as the integral of the displacement across the Brillouin zone, and it can be associated with the sign of the reflection phase of a finite-size DBR [6, 16]. Each individual band of the acoustic dispersion relation has an associated \(\theta^{Zak}\), which can take only two values, 0 or \(\pi\). In this work, we theoretically investigate topological interface states by concatenating DBRs with inverted spatial mode symmetries at high-order bandgaps. Based on the different Zak phase configurations for the different bandgaps at specific unit cell thickness ratios, we engineer interface modes at higher bandgap orders. Furthermore, we benefit from the linear dispersion relation to
generate hybrid topological resonators. We establish the interface states of these resonators by concatenating two superlattices with different bandgap orders that overlap in frequency. We thus show that the presence of the topological states depends not only on the spatial mode symmetry but also on the relative bandgap order of the concatenated superlattices. This platform makes it possible to experimentally map textbook cases and to test innovative and counterintuitive physical situations. Our findings unlock a new degree of freedom for the design of topological nanoacoustic resonators.
The paper is organized as follows. Section 2 describes the principle of band inversion in the context of nanoacoustics. In Section 3, we present the method for generating interface states at high-order bandgaps and discuss their robustness compared to Fabry-Perot resonators. Sections 4 and 5 introduce our new designs of topological acoustic resonators.
## 2 Principle of Band Inversion
In topological superlattices, band inversion refers to the situation where two modes with opposite symmetries at the edges of a bandgap exchange their ordering in energy. This inversion can be achieved by adjusting the relative thickness of the two layers in the unit cell of the superlattice. By concatenating two superlattices with different layer thicknesses, i.e., superlattices presenting inverted bands, topologically protected interface states can be created.
In the case of multilayered GaAs/AlAs structures composed of two concatenated distributed Bragg reflectors (DBRs), we introduce a parameter \(\delta\in[-1,1]\) to represent the relative thickness of AlAs and GaAs. [28] In DBRs with a centro-symmetric unit cell centered around the AlAs layer, GaAs is distributed equally on both sides of AlAs as follows: \(\frac{\lambda_{\text{GaAs}}}{8}(1+\delta)\), \(\frac{\lambda_{\text{AlAs}}}{4}(1-\delta)\), \(\frac{\lambda_{\text{GaAs}}}{8}(1+\delta)\). Figure 1 illustrates the evolution of the first four bandgaps as a function of \(\delta\), while maintaining the unit cell acoustic thickness constant. The orange lines indicate modes symmetric with respect to the center of the unit cell, while the blue lines indicate anti-symmetric modes. It can be observed that the modes exhibit a sinusoidal dependence on the parameter \(\delta\). Starting from the second bandgap, there is a consecutive opening and closing of the bandgap as \(\delta\) increases, accompanied by an inversion of symmetry. The number of nodes in the modes is directly related to the order of the bandgap. The first bandgap (Fig. 1(a)) opens and closes only once, with a maximum amplitude at \(\delta=0\). The second bandgap (Fig. 1(b)) opens twice and closes at \(\delta=0\), with a symmetry inversion of the edge modes around this point. The third bandgap exhibits two symmetry inversions (Fig. 1(c)), while the fourth gap undergoes three symmetry changes across the full range of \(\delta\) (Fig. 1(d)). Generally, the \(n^{th}\) bandgap experiences \((n-1)\) symmetry inversions.
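Under this parametrization the unit-cell acoustic thickness (the total acoustic phase at the design frequency) is independent of \(\delta\), which can be checked numerically. The sketch below uses representative literature values for the longitudinal sound speeds of GaAs and AlAs (our assumption, not given in this excerpt) and takes the central AlAs sublayer as a quarter wave of AlAs at \(f_{0}\):

```python
import math

def unit_cell_phases(delta, f0=9.3e9, v_gaas=4730.0, v_alas=5660.0):
    """Acoustic phase 2*pi*f0*d/v accumulated in each sublayer of the
    centro-symmetric GaAs/AlAs/GaAs unit cell at the design frequency f0.
    Sound speeds are representative values (assumption)."""
    lam_g = v_gaas / f0                      # acoustic wavelength in GaAs
    lam_a = v_alas / f0                      # acoustic wavelength in AlAs
    layers = [(lam_g / 8 * (1 + delta), v_gaas),
              (lam_a / 4 * (1 - delta), v_alas),
              (lam_g / 8 * (1 + delta), v_gaas)]
    return [2 * math.pi * f0 * d / v for d, v in layers]
```

The GaAs sublayers each contribute a phase \(\pi(1+\delta)/4\) and the AlAs layer \(\pi(1-\delta)/2\), so the total is \(\pi\) for every \(\delta\): varying \(\delta\) redistributes thickness between the materials without shifting the bandgap centers.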
Figure 1: Left: Frequency of the band-edges bounding the bandgap as a function of the parameter \(\delta\). (a)-(d) Band inversion of the acoustic bandgap around 9.3 GHz (a), 18.6 GHz (b), 28 GHz (c), and 37.3 GHz (d). The mode symmetries are indicated by orange (symmetric) and blue (anti-symmetric) lines. The thickness ratio of GaAs/AlAs is indicated by the vertical dashed line in each case. Right: The unit cell is displayed with light and dark gray colors, representing AlAs and GaAs, respectively. The band-edge modes are shown in the unit cells for each bandgap.
The topological properties of a multilayered acoustic device can be characterized using the Zak phase, which is the 1D equivalent of the Berry phase [41]. For a periodic phononic 1D system with a unit cell of size \(a\), the Zak phase of the \(n^{th}\) band is calculated by integrating across the Brillouin zone:
\[\theta_{n}^{Zak}=\int_{-\pi/a}^{\pi/a}\left[i\int_{\text{unit cell}}\frac{1}{2\rho(z)v^{2}(z)}\,u^{*}_{n,k}(z)\,\partial_{k}u_{n,k}(z)\,dz\right]dk, \tag{1}\]
where \(u_{n,k}(z)\) is the acoustic displacement of the \(n^{th}\) band and wave-vector \(k\) at position \(z\), and \(\rho(z)\) and \(v(z)\) correspond to the mass density and speed of sound in the materials. [16, 28, 40]
In a periodic system with inversion symmetry, where the unit cell is centro-symmetric around AlAs, the Zak phase can only take on two discrete values: 0 or \(\pi\)[6, 16]. It is associated with the symmetries of the Bloch modes at the band edge [6]. When the modes at both ends of the same \(n^{th}\) band (i.e., at the edge and center of the Brillouin zone) have the same symmetries, the Zak phase \(\theta_{n}^{Zak}\) for that band is 0. Conversely, if the band has edge modes with opposite symmetries, \(\theta_{n}^{Zak}=\pi\)[6]. This parameter is crucial for predicting interface states and characterizing one-dimensional topological systems.
## 3 Interface states at high-order bandgaps
Fundamentally, an interface state is formed whenever two DBRs with opposite reflection phase signs are concatenated [6, 28]. The reflection phase can be either positive or negative depending on the structure of the DBR. For the \(n^{th}\) bandgap, one can determine the sign of the reflection phase by evaluating the relation [6]:
\[\mathrm{sgn}(\phi)=(-1)^{n}(-1)^{l}\exp\left(i\sum_{m=0}^{n-1}\theta_{m}^{Zak}\right), \tag{2}\]
where \(l\) is the number of closed bandgaps below the \(n^{th}\) bandgap. Therefore, an interface state forms in the \(n^{th}\) bandgap provided that \(\sum_{m=0}^{n-1}\theta_{m}^{Zak}=0+2p\pi,\ p\in\mathbb{N}\) for one DBR and \(\sum_{m=0}^{n-1}\theta_{m}^{Zak}=\pi+2p\pi,\ p\in\mathbb{N}\) for the other. The creation of an interface state at the \(n^{th}\) bandgap does not necessarily imply an interface state in the other bandgaps, as the corresponding sums of \(\theta^{Zak}\) may differ.
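Since every \(\theta_{m}^{Zak}\) is 0 or \(\pi\), the exponential in Eq. (2) evaluates to \(\pm 1\) and the sign rule reduces to simple parity arithmetic. A minimal sketch (the helper names and integer encoding of the sign are ours; Eq. (2) itself fixes the arithmetic):

```python
import math

def reflection_phase_sign(n, zak_phases, l=0):
    """Sign of the DBR reflection phase in the n-th bandgap, Eq. (2).
    zak_phases: theta_0 ... theta_{n-1} of the bands below the gap
    (each 0 or pi); l: number of closed bandgaps below the n-th one."""
    s = sum(zak_phases)                      # an integer multiple of pi
    return (-1) ** n * (-1) ** l * round(math.cos(s))

def hosts_interface_state(sign_left, sign_right):
    """An interface state forms when the two concatenated DBRs have
    opposite reflection-phase signs in the shared bandgap."""
    return sign_left * sign_right < 0
```

For example, in the first bandgap a DBR whose lowest band has \(\theta_{0}^{Zak}=0\) and one with \(\theta_{0}^{Zak}=\pi\) have opposite signs, so their junction hosts an interface state.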
Figures 2 (a)-(c) display reflectivity spectra for the third bandgap obtained by combining two DBRs with different values of \(\delta\), indicated by the dashed lines on the corresponding top panels. In each case, \(\delta\) is chosen such that the amplitude of the bandgap of the corresponding superlattice is maximized, which corresponds to the values \(\delta=0\) and \(\pm 0.66\). In panels (a) and (c), the two DBRs have inverted symmetry around the bandgap. In both cases, the acoustic reflectivity contains a dip centered in the high reflectivity region featuring the interface state. In contrast, in panel (b), the two DBRs have the same symmetry around the bandgap. Thus, this structure acts as a standard DBR, with a high reflectivity region. Generally, for the third bandgap, by concatenating one DBR with \(\delta\in[-0.33,0.33]\) and another with \(\delta<-0.33\) (Fig. 2(a)) or \(\delta>0.33\) (Fig. 2(c)), the band inversion is preserved, and an interface state is generated.
Regarding the fourth bandgap, Figs. 2 (d)-(i) show six possible combinations between the two DBRs. In all shown cases, the values of \(\delta\) are chosen such that the amplitude of the bandgap is maximized, resulting in high reflectivity regions centered at \(\sim\)37.3 GHz. Depending on the mode symmetries around the fourth gap, an interface state is either present or absent. In the cases shown in Figs. 2(e) and (h), the modes of the two concatenated DBRs have the same symmetry (both modes at the bottom band are symmetric in panel (e), while they are anti-symmetric in panel (h)). Thus, the acoustic reflectivity spectra present high reflectivity regions like a standard DBR. On the contrary, in Figs. 2 (d), (f), (g), and (i), the two DBRs that are concatenated have inverted mode symmetries. As a result, an interface state between the two concatenated DBRs is generated. Generally, the approach presented for the third and fourth bandgaps can be extended to higher bandgap orders. More specifically, an interface state is generated when concatenating any two DBRs corresponding to superlattices with an even and odd bandgap opening.
These interface states can be accessed experimentally through Brillouin scattering measurements. [10, 38] Nevertheless, the scattering cross-section (\(\sigma\)), which represents the magnitude of the scattered signal, relies on the relative thickness of GaAs and AlAs within the two unit cells constituting each superlattice. Consequently, not all theoretically predicted interface states can be accessed experimentally. To identify the superlattice combinations that yield experimentally accessible interface states, we employed the transfer matrix method and a photoelastic model to simulate the Brillouin cross-section of the interface states. [42, 43] The Brillouin cross-section is defined by the overlap integral between the incident laser electric field \(E(z)\), the strain, which is given by the derivative of
the displacement \(\frac{\partial u(\mathbf{\omega},z)}{\partial z}\), and the photoelastic constant \(p(z)\) over the whole structure in the form:
\[\sigma(\mathbf{\omega})=\int|E(z)|^{2}p(z)\frac{\partial u(\mathbf{\omega},z)}{\partial z }dz, \tag{3}\]
where \(p(z)\) is material dependent, being \(p=1\) (\(p=0\)) in Ga-rich (Al-rich) layers [38]. We considered here the whole acoustic structure as an optical \(\lambda\)-cavity embedded in vacuum, where \(\lambda_{\text{opt}}\sim 1600\) nm. The cross-section depends on the overlap between the electric and acoustic fields \(|E(z)|^{2}(\partial u(\mathbf{\omega},z)/\partial z)\) in the regions where the photoelastic constant distribution \(p(z)\) is non-zero [38].
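Numerically, Eq. (3) is just a windowed overlap integral over the discretized structure. A sketch (the trapezoidal quadrature and the synthetic test fields are ours):

```python
import numpy as np

def brillouin_cross_section(z, E, u, p):
    """Overlap integral of Eq. (3): sigma = int |E(z)|^2 p(z) du/dz dz,
    with the photoelastic profile p(z) = 1 in Ga-rich layers and 0 in
    Al-rich layers."""
    strain = np.gradient(u, z)               # du/dz
    integrand = np.abs(E) ** 2 * p * strain
    # trapezoidal rule on a possibly non-uniform grid
    return float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(z)))
```

The windowing by \(p(z)\) is what makes the sign structure of the integrand (Fig. 3) matter: contributions of opposite sign inside the Ga-rich layers can cancel, suppressing \(\sigma\) even when the acoustic mode itself is strongly confined.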
The integrand of Eq. 3 is displayed in Fig. 3 for structures supporting interface states in the 3rd and 4th bandgaps. We can analyze the integrand by splitting it into quadrants: left/right superlattice and positive/negative amplitude contributions. The integrand features signals composed of either double peaks (thick lines) or single peaks (thin lines). Figs. 3 (a) and (b), display the integrand of the modes in the third bandgap, associated with the topological structures presented in Figs. 2(a) and (c), respectively. The peaks with the maximum amplitude at the interface between the two DBRs are the main contributors to the overall cross-section, resulting in a high Brillouin cross-section in panel (a). Conversely, the positive and negative contributions of the integrand displayed in panel (b) cancel each other, leading to a low Brillouin cross-section. The calculated Brillouin cross-sections for the mode in the third bandgap on both cases are, respectively, \(\sigma=8291\) and \(\sigma=4\).
Figs. 3(c)-(f) display the integrand of the modes in the fourth bandgap, associated to the cases of Figs. 2(d), (f), (g), and (i), respectively. The integrand in Figs. 3(c) and (f) exhibit positive contribution across the entire structure, while negative amplitude contribution is asymmetric between left and right quadrants. This results in an overall positive signal with high Brillouin cross-sections of \(\sigma=278720\) and \(\sigma=259010\), respectively. In contrast, the Brillouin cross-section in Figs. 3(d) and (e) is smaller; \(\sigma=328\) and \(\sigma=6\), respectively. A zoomed-in version of the graphs depicting the integrands in more detail is shown in the Supplementary Information. We note that although the apparent overall contribution is positive, the positive right quadrant contribution in panel (d) is not substantial, resulting in a fairly small cross-section. Similarly, in panel (e), even though the positive left quadrant displays double peaks, the negative peaks are thicker than the positive contributions, resulting in a small Brillouin cross-section as well.
Figure 2: Top panels: Band inversion of the (a)-(c) third acoustic bandgap around 28 GHz, and (d)-(i) fourth bandgap around 37.3 GHz. Bottom panels: calculated acoustic reflectivity spectra for the concatenated DBRs with the corresponding GaAs/AlAs thickness ratio \(\delta\) marked as dashed lines on the top panels.
Topological interface states have been demonstrated to exhibit exceptional robustness against disorder, which is useful for transport and error-free data communication [44, 45, 46]. The band inversion principle exploited here to build topological resonators preserves the center of the bandgap when varying \(\delta\). We have numerically demonstrated that the robustness of the interface mode applies to all bandgap orders when introducing fluctuations in the layer thickness ratio. In practice, such fluctuations might emerge, for example, by material intermixing or composition fluctuations [47, 48] during the epitaxial growth of the layers that form the superlattice. The fluctuations considered here concern changes in the GaAs/AlAs ratio while maintaining a constant unit-cell acoustic thickness. In the model, we implement this by using a flat distribution of random numbers with an amplitude \(\Delta\delta/\delta\) ranging from zero (unperturbed system) to \(0.5\). We note that a noise amplitude set to \(0.999\) effectively closes the targeted bandgap for each designed superlattice.
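The perturbation protocol can be sketched as below (our reading of the text: each unit cell's \(\delta\) is multiplied by \(1+u\) with \(u\) drawn from a flat distribution of half-width \(\Delta\delta/\delta\), and the result is clipped to the physical range \([-1,1]\)):

```python
import random

def perturbed_deltas(delta, amp, n_cells, seed=1):
    """Draw one delta per unit cell: the nominal value is multiplied by
    (1 + u) with u uniform in [-amp, amp], where amp = Delta_delta/delta.
    This perturbs the GaAs/AlAs ratio while the parametrization itself
    keeps the unit-cell acoustic thickness fixed (see text)."""
    rng = random.Random(seed)
    return [max(-1.0, min(1.0, delta * (1.0 + rng.uniform(-amp, amp))))
            for _ in range(n_cells)]
```

Because the parametrization conserves the acoustic thickness of every cell, these fluctuations deform the bandgap edges without moving the gap center, which is the mechanism behind the frequency pinning shown in Fig. 4.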
In Figs. 4(a) and (b), we compare the topological interface state generated in the third bandgap (corresponding to the structure shown in Fig. 2(c)) to the confined mode in a Fabry-Perot (FP) cavity. The FP cavity considered here is formed by non-centrosymmetric unit cells of GaAs/AlAs with \(\delta=0.66\), chosen such that it preserves the maximum bandgap opening. The two DBRs surround a spacer of thickness \(\lambda\), and are embedded in a GaAs background. Figure 4(a) displays the acoustic frequencies of the topological (blue) and the Fabry-Perot (orange)
Figure 4: (a) Resonant frequency in the third bandgap under random perturbations, with a uniform distribution of width \(\Delta\delta/\delta\). The acoustic resonance frequency stays trapped at the bandgap center for the topological mode (blue) but undergoes variations in the Fabry–Perot (orange). (b) Acoustic quality factor under random perturbations. For both types of resonators, the acoustic quality factor drops by a factor of ten. Similarly, (c) and (d) display the same comparisons for the fourth bandgap.
Figure 3: Integrand of the Brillouin cross-section for interface states in the third bandgap (a),(b), associated to the structures shown in Figs. 2(a) and (c), respectively; and the fourth bandgap (c)-(f), associated to the structures of Figs. 2(d), (f), (g) and (i), respectively. The vertical red line indicates the interface between the two DBRs.
resonators as a function of \(\delta\)-fluctuations. In the FP resonator, the fluctuations are introduced in the thickness ratio of GaAs and AlAs constituting the DBR unit cell, while the spacer has no perturbation. Under these perturbations, the topological resonator maintains its resonance at the bandgap center. In contrast, the resonance frequency of the FP resonator undergoes large fluctuations, spanning up to 0.5 GHz away from the center frequency. Figure 4(b) shows the influence of thickness fluctuations on the acoustic quality factors of the two structures. Both quality factors decrease by a factor of 10 at high perturbation amplitudes. This effect can be explained by the effective reduction in the width of the bandgap, which increases the evanescent decay length of the confined mode and thus enhances the leakage through the DBRs into the background [38]. Despite the decrease in quality factors for both structures with increasing perturbation amplitudes, the topological structure shows more consistent Q-factors compared to the FP resonator. This is because the fluctuations of the topological mode frequency are less pronounced, so its associated Q-factor decreases at a slower pace.
Likewise, in Figs. 4(c) and (d), we compare the robustness of the topological interface state generated in the fourth bandgap to an FP cavity (\(\delta=0.75\)), with a spacer of thickness \(\lambda/2\). Figure 4(c) shows the resonant frequencies of both resonators as a function of the fluctuations. The observed behavior is similar to the case of the third bandgap: the frequency is clamped at the bandgap center for the topological structure and fluctuates for the Fabry-Perot resonator. These results show that the robustness characteristic of topological devices, protecting the acoustic resonance against disorder, is also preserved at high bandgap orders. However, as we see by comparing Figs. 4(b) and (d), the acoustic quality factor of the fourth bandgap (Fig. 4(d)) in both structures (FP and topological) is more sensitive to fluctuations in comparison to the third bandgap (Fig. 4(b)). This can be understood intuitively by comparing the opening and closing of the bandgaps as a function of unit cell composition, as shown, for example, in Fig. 1. When transitioning to higher-order bandgaps, the opening/closing of each bandgap necessitates gradually smaller adjustments in material thicknesses. Consequently, interface states at higher-order bandgaps become progressively more susceptible to inaccuracies in material thicknesses.
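The acoustic reflectivity spectra discussed throughout this work can be reproduced with a standard one-dimensional transfer-matrix calculation, in which the acoustic impedance \(Z=\rho v\) plays the role of the optical admittance in the thin-film formalism. The sketch below uses approximate literature values for the longitudinal sound velocities and densities of GaAs and AlAs; it illustrates the method, not the authors' code.

```python
import numpy as np

# Approximate literature values (longitudinal modes, room temperature):
RHO = {"GaAs": 5317.0, "AlAs": 3760.0}   # density, kg/m^3
VEL = {"GaAs": 4730.0, "AlAs": 5660.0}   # sound velocity, m/s
Z = {m: RHO[m] * VEL[m] for m in RHO}    # acoustic impedance, kg/(m^2 s)

def reflectivity(freq, layers, z_in, z_out):
    """|r|^2 of a 1D acoustic stack. `layers` lists (material, thickness)
    pairs from the incidence side; the stack is embedded between
    semi-infinite media of impedances z_in and z_out."""
    m = np.eye(2, dtype=complex)
    for mat, d in layers:
        phi = 2.0 * np.pi * freq * d / VEL[mat]
        m = m @ np.array([[np.cos(phi), 1j * np.sin(phi) / Z[mat]],
                          [1j * Z[mat] * np.sin(phi), np.cos(phi)]])
    b, c = m @ np.array([1.0, z_out])
    r = (z_in * b - c) / (z_in * b + c)
    return abs(r) ** 2

# A quarter-wave GaAs/AlAs DBR centred at 28 GHz, embedded in GaAs:
f0 = 28e9
pair = [("GaAs", VEL["GaAs"] / (4 * f0)), ("AlAs", VEL["AlAs"] / (4 * f0))]
stack = pair * 30
```

Inside the stopband the computed reflectivity approaches unity (above 0.99 for 30 periods with these parameters), while far from the gap it drops to passband ripple; concatenating two such stacks with inverted symmetries produces the interface-state dips shown in the reflectivity spectra.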
## 4 Multimode engineering
In the previous section, we showed that we could engineer the interface states at the \(n^{th}\) bandgap in topological acoustic resonators by carefully tuning the unit-cell material thickness ratios in both juxtaposed DBRs. In this section, we will show that we can also generate interface states at multiple bandgap orders simultaneously. By varying the unit-cell relative thickness ratio, \(\delta\), the bandgap amplitudes at all the orders are, in fact, simultaneously altered. However, the closing and reopening of the bandgaps are not coincident for every bandgap order. As a result, by changing \(\delta\), we can reach different combinations of bandgap symmetries, and so we can engineer the formation of interface states at different bands.
Figure 5 presents different conditions to generate interface states at different gaps. In the first case (Figs. 5(a)-(c)), we optimize the GaAs/AlAs thickness ratio to generate interface states at the second and fourth bandgaps, with \(\delta=\pm 0.33\) in each DBR. Fig. 5(a) shows the dependence of the bandgaps on \(\delta\), with the dashed vertical lines indicating the \(\delta\) values of each DBR (\(\delta=\pm 0.33\)), and the orange (blue) dots corresponding to the symmetric (anti-symmetric) modes. Fig. 5(b) shows the unit cell configurations at the interface for the chosen \(\delta\), in which the dark blue and green colors represent GaAs whereas light blue and green represent AlAs. The inset at the right side of Fig. 5(a) shows the mode symmetry of the two DBRs in each band. As shown, at the particular value of \(\delta\) chosen here, there is an inversion of symmetry at the second and the fourth bandgap, whereas the third bandgap is closed (see black dots). In the calculated acoustic reflectivity spectrum shown in Fig. 5(c), three bandgaps can be seen. In these bandgaps, the band-inversion interface states are present for the 2nd and 4th gaps, as indicated by the dips centered in the high reflectivity regions at \(\sim\)18 GHz and \(\sim\)37 GHz, respectively. As we already saw in Fig. 5(a), the third gap is closed for this combination of \(\delta\)s in the DBRs, and so that gap is absent in the reflectivity spectrum.
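The opening and closing of the gaps in Fig. 5(a) can be rationalized with a simple weak-modulation (perturbative) picture: the size of the \(n\)-th minigap scales with the \(n\)-th Fourier coefficient of the impedance modulation of the unit cell. Assuming, for illustration, that \(\delta\) sets the GaAs acoustic fraction of the cell to \((1+\delta)/2\), this coefficient is proportional to \(|\sin(n\pi(1+\delta)/2)|/n\). This is a toy model, not the full transfer-matrix result, but it reproduces the closings quoted in the text.

```python
import numpy as np

def gap_amplitude(n, delta):
    """Relative size of the n-th minigap in a weak-modulation picture:
    the n-th Fourier coefficient of a two-level impedance profile whose
    GaAs acoustic fraction is (1 + delta) / 2 (assumed convention)."""
    return np.abs(np.sin(n * np.pi * 0.5 * (1.0 + delta))) / n

# Closings predicted by this toy model:
#   n = 1: never closes for |delta| < 1 (cf. the first-bandgap discussion),
#   n = 2: closes at delta = 0,
#   n = 3: closes at delta = +/-1/3 (the +/-0.33 of Figs. 5(a)-(c)),
#   n = 4: closes at delta = 0 and +/-1/2.
# The maxima at delta = 0.66 (n = 3) and delta = 0.75 (n = 4) also match
# the thickness ratios quoted for the Fabry-Perot reference cavities.
```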
Figures 5(d)-(f) present the conditions that generate interface states in the third and fourth bandgap simultaneously. The structure is designed with \(\delta_{top/bottom}=-0.15/-0.85\) for the top and bottom DBRs. In comparison to the previous case, all four bandgaps are open for both superlattices (see panel (d)), resulting in four high reflectivity regions in the reflectivity spectrum, as shown in panel (f). For this configuration of unit cells, interface states are present in the third and fourth bandgaps. They are induced by the inversion of symmetry of the modes around the respective bandgaps, as shown in panel (d). In contrast, there is no interface mode in the first and second bandgaps, as they have the same band-edge symmetries for both superlattices.
We note that the mode is not centered in the third bandgap, as shown in Fig. 5(f). To generate an interface state that is
centered in the bandgap, there are two necessary conditions. First, the two DBRs should have bandgaps with the same central frequency and, second, they should have the same bandwidth. The first condition is required for topological robustness [38]. The second condition results in similar evanescent decay lengths into both DBRs. If the values of \(\delta\) for each superlattice are not equidistant from a band inversion point, the generated interface state is not centered in the bandgap, as we can see in Figs. 5(f) and (i) at the third and second bandgaps, respectively. Figures 5(g)-(i) present the conditions to generate interface states in the second and third bandgaps, associated with \(\delta_{top/bottom}=-0.8/+0.2\). In Fig. 5(g) the two DBRs have inverted symmetry at the targeted bandgaps, whereas the mode symmetries at the fourth bandgap on both DBRs are the same, even though they fall into different bandgap openings. This results in two interface states at bandgap orders 2 and 3, as seen in Fig. 5(i). Figures 5(j)-(l) present the conditions to generate interface states only in the fourth bandgap, with \(\delta_{top/bottom}=+0.4/+0.6\). Despite the slight disparity between the unit cells of the two superlattices in this arrangement, the interface state is generated (Fig. 5(k)). Only the modes at the fourth bandgap have inverted symmetries, whereas all the other gaps exhibit the same mode symmetry. As a result, the acoustic reflectivity spectrum, displayed in Fig. 5(l), presents three high reflectivity regions and one interface mode in the fourth bandgap.
The first bandgap does not undergo any symmetry inversion of the topological phase over the entire \(\delta\) range, which makes it impossible to create an interface state at this band. It is important to point out that this does not mean that interface states cannot be conceived in the first bandgap. For instance, one can tune the impedance of the materials to switch the symmetry of the modes. [6, 29]
## 5 Hybrid topological resonators
The formation of interface states is not limited to the same bandgap order of the constituting DBRs. In fact, the general rule for engineering topological states is associated with the sign of the reflection phase as well as the overlap of the high reflectivity regions from both reflectors. So far, we have fulfilled these conditions and created interface modes at higher-order bandgaps by concatenating two DBRs designed at the same acoustic frequency. In this section, we extend this concept to generate topological states between two DBRs designed at different
Figure 5: Engineering of topological interface states. (a) Band inversion of the acoustic bandgaps. The dots show the bandgaps opening and symmetries at the given \(\delta\). Inset: Symmetry of the two DBRs concatenated. There is inversion only for the fourth bandgap. (b) Schematic of the unit cell configurations at the interface between the two DBRs, with the dark blue and green colors representing GaAs and the corresponding light colors representing AlAs. (c) Simulated acoustic reflectivity spectra for two topological resonators formed by two DBRs concatenated embedded in GaAs with \(\delta_{top}=-0.33\) and \(\delta_{bottom}=+0.33\). Likewise in (d)-(f) \(\delta_{top}=-0.15\) and \(\delta_{bottom}=-0.85\); (g)-(i) \(\delta_{top}=-0.8\) and \(\delta_{bottom}=+0.2\); and (j)-(l) \(\delta_{top}=+0.4\) and \(\delta_{bottom}=+0.6\).
fundamental frequencies, resulting in bandgaps of different orders sharing the same frequency range.
Figures 6(a) and (b) display the band inversion diagrams of two superlattices, S1 and S2, designed to have their first bandgaps at different frequencies. The first superlattice (S1) is designed to have a fundamental bandgap centered at \(\sim\)9.3 GHz (Fig. 6(a)), while the first bandgap of S2 is centered at \(\sim\)14 GHz (Fig. 6(b)). These designs result in the third bandgap of S1 centered at the same frequency as the second bandgap of S2, which is represented by the alignment of both bands in panels (a) and (b). Fig. 6(c) shows the acoustic reflectivity spectra of the two DBRs associated with the band structures in panels (a) and (b). The reflectivities are calculated for values of \(\delta\) at which all the bandgaps are open. We see that there is a complete overlap of the high reflectivity regions around 28 GHz. By concatenating two such superlattices with overlapping bandgaps of different order, we can generate an interface state at the frequency at which they overlap.
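The frequency matching behind this hybrid design follows from simple arithmetic: for a near-linear acoustic dispersion, the \(n\)-th minigap of a superlattice is centred at roughly \(n\) times its fundamental bandgap frequency, so overlapping order \(n_1\) of S1 with order \(n_2\) of S2 at a target frequency fixes the ratio of the two fundamentals (an idealization that neglects small dispersion corrections):

```python
def matched_fundamentals(f_target, n1, n2):
    """Fundamental bandgap frequencies whose n1-th and n2-th minigaps
    coincide at f_target (gap centres assumed at integer multiples of
    the fundamental)."""
    return f_target / n1, f_target / n2

# Third gap of S1 and second gap of S2 overlapping at ~28 GHz (Fig. 6):
f1_S1, f1_S2 = matched_fundamentals(28.0, 3, 2)  # ~9.33 GHz and 14.0 GHz
```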
Figs. 6(d) and (e) depict the calculated acoustic reflectivity spectra for two different combinations of such DBRs. The chosen values of \(\delta\) are marked on the band inversion figures (Figs. 6(a) and (b)) with the vertical dashed lines. The matching labels between panels (a) and (b) indicate the corresponding acoustic reflectivity spectrum. The first superlattice in both structures is the same, where \(\delta=0.66\) corresponds to the thickness ratio for which the third bandgap opening is maximized. On the other hand, the thickness ratio of the second superlattice is chosen to maximize the second bandgap, where \(\delta=0.5\) (\(\delta=-0.5\)) corresponds to the same (inverted) band edge mode symmetries compared to the third order bandgap of S1. In both cases (panels (d) and (e)), there are five high-reflectivity regions. The regions below 25 GHz and above 35 GHz correspond to the individual bandgaps of S1 and S2 that have no overlap. At 28 GHz, different bandgaps of the two DBRs do overlap, but an interface state is either present (panel (d)) or absent (panel (e)) in the two shown examples. Contrary to what one might have expected based on the results of the previous sections, the combination of superlattices supporting an interface state (panel (d)) has the same band edge mode symmetries. Conversely, when the symmetry of the modes between S1 and S2 is inverted, no interface state is generated. Therefore, the rule of creating an interface state by band inversion cannot be blindly applied to bandgaps of different orders.
To understand this, we investigate the relation between the generation of an interface state and the acoustic displacement field in the unit cell of each superlattice. In Fig. 6(f) and (g), we can see a schematic of the unit cells of superlattices S1 and S2, associated with the acoustic reflectivity spectra shown in Figs. 6(d) and (e). As before, the dark- and light-colored regions correspond to GaAs and AlAs, respectively. On top of these superlattices, we show the corresponding relevant acoustic displacements. More specifically, in the top (bottom) panels of
Figure 6: Hybrid topological acoustic resonator. (a),(b) Band inversion of the acoustic bandgaps associated to the two concatenated superlattices, named (a) S1 (green frame), and (b) S2 (blue frame). The third bandgap of S1 and the second bandgap of S2 share the same central frequency, around 28 GHz. (c) Simulated acoustic reflectivity of the two DBRs. The green (blue) line corresponds to the band inversion diagram displayed in panel (a) (panel (b)). (d),(e) Simulated acoustic reflectivity spectra for different concatenated DBRs embedded in GaAs. The relative thickness ratio \(\delta\) of the corresponding DBRs is marked by dashed lines on panels (a) and (b). (f),(g) Acoustic displacement \(|u(z)|^{2}\) of the modes at the edge of the bandgaps plotted on top of the unit cell for the two superlattices. The green (blue) unit cell corresponds to the band inversion diagram displayed in panel (a) (panel (b)), and the dark (light) colors represent GaAs and AlAs, respectively.
this schematic, we show the acoustic displacement of the modes at the higher (lower) frequency band edges of the bandgap centered at 28 GHz in each superlattice. As we can see in Fig. 6(f), there is a discontinuity of the displacement field at the interface between these superlattices. This discontinuity leads to the generation of an interface state, even though the band edge modes have the same symmetry. On the other hand, in Fig. 6(g), the displacement field at the interface between S1 and S2 is continuous, and so prevents the formation of an interface state. In general, when the order of one bandgap is even and the other is odd, an interface state is generated if both have the same symmetries. Conversely, if two odd or two even bandgaps are concatenated, they must have inverted symmetries to generate an interface state. The phase difference of the reflection coefficients no longer depends only on the sum of the Zak phases, but also on the order of each bandgap. The case of two DBRs with different lattice parameters was briefly considered in Ref. [6].
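The selection rule stated above can be condensed into a single parity check. The function below is a distillation of the text's rule (band-edge symmetries written as 'S'/'A'), not an independent derivation:

```python
def has_interface_state(order_a, sym_a, order_b, sym_b):
    """Interface-state rule for two concatenated DBRs sharing a gap.

    Equal-parity gap orders (both odd or both even, including the
    equal-order band-inversion case) require inverted band-edge
    symmetries; opposite-parity orders require equal symmetries,
    because the displacement field is then discontinuous at the
    interface.
    """
    same_parity = (order_a % 2) == (order_b % 2)
    same_symmetry = sym_a == sym_b
    return same_symmetry != same_parity
```

For the structures of Fig. 6, the rule gives a state for the (order 3, order 2) pair with equal symmetries, as in panel (d), and none when the symmetries are inverted, as in panel (e), while reducing to the usual band-inversion criterion when both gaps have the same order.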
## 6 Conclusions
We theoretically presented a method to generate acoustic interface states in topological nanoacoustic resonators based on the band inversion principle. We simulated a series of topological optophononic resonators with different combinations of concatenated superlattices. By changing the thickness ratio of GaAs and AlAs in the unit cell, we were able to control the symmetries of the modes around each bandgap. In general, an interface state can be generated when two superlattices with inverted symmetries are concatenated. Here, we extended this principle to create interface states in high-order bandgaps. The modes that we presented can be accessed experimentally in a Brillouin or pump-probe experiment. We numerically discussed the Brillouin efficiency of different combinations of superlattices forming topological states at the third and fourth bandgap orders. The accessibility of these states in Brillouin scattering experiments is directly associated with the unit cell thickness ratio between GaAs and AlAs of both concatenated superlattices. In addition, we studied the robustness of our structures against disorder and compared them with Fabry-Perot resonators. The use of GaAs and AlAs enables the study of electronic and optical resonance effects [49, 50], and is compatible with the integration of quantum dots and quantum wells. Moreover, the concepts presented in this work can be easily extended to other material platforms.
Moreover, we demonstrated that multiple topological acoustic interface states can be generated simultaneously at different bandgap orders. Importantly, the generated interface states are robust against disorder in the unit cell thickness ratio across a broad frequency range that does not affect the associated Zak phases. A promising extension of this system might thus arise by introducing spatial periodicity to the system. Then, the platform developed here could potentially enable the development and study of synthetic dimensions, where high-order interface states would serve as an additional lattice dimension for the topological system [51, 52]. Furthermore, we demonstrated the presence of interface states in hybrid structures, i.e., structures combining two superlattices with bandgaps of different orders centered at the same frequency. The interface states in high-order bandgaps presented here can potentially be a useful tool to explore a full class of hybrid topological resonators. One could, for instance, exploit the versatility of hybrid topological resonators to generate multiple interface states at higher-order frequency-matched bandgaps, that are difficult to access by electronics or optics due to their respective dispersion relations. Overall, our results constitute an important step in the development of nanophononics for robust noise-insensitive communication, data processing, and quantum technologies.
## 7 Acknowledgments
The authors gratefully acknowledge M. Esmann for fruitful discussions and support at an early stage of the project. The authors acknowledge funding from European Research Council Consolidator Grant No.101045089 (T-Recs). This work was supported by the European Commission in the form of the H2020 FET Proactive project No. 824140 (TOCHA).
|
2306.01923 | The Surprising Effectiveness of Diffusion Models for Optical Flow and
Monocular Depth Estimation | Denoising diffusion probabilistic models have transformed image generation
with their impressive fidelity and diversity. We show that they also excel in
estimating optical flow and monocular depth, surprisingly, without
task-specific architectures and loss functions that are predominant for these
tasks. Compared to the point estimates of conventional regression-based
methods, diffusion models also enable Monte Carlo inference, e.g., capturing
uncertainty and ambiguity in flow and depth. With self-supervised pre-training,
the combined use of synthetic and real data for supervised training, and
technical innovations (infilling and step-unrolled denoising diffusion
training) to handle noisy-incomplete training data, and a simple form of
coarse-to-fine refinement, one can train state-of-the-art diffusion models for
depth and optical flow estimation. Extensive experiments focus on quantitative
performance against benchmarks, ablations, and the model's ability to capture
uncertainty and multimodality, and impute missing values. Our model, DDVM
(Denoising Diffusion Vision Model), obtains a state-of-the-art relative depth
error of 0.074 on the indoor NYU benchmark and an Fl-all outlier rate of 3.26\%
on the KITTI optical flow benchmark, about 25\% better than the best published
method. For an overview see https://diffusion-vision.github.io. | Saurabh Saxena, Charles Herrmann, Junhwa Hur, Abhishek Kar, Mohammad Norouzi, Deqing Sun, David J. Fleet | 2023-06-02T21:26:20Z | http://arxiv.org/abs/2306.01923v2 | # The Surprising Effectiveness of Diffusion Models for Optical Flow and Monocular Depth Estimation
###### Abstract
Denoising diffusion probabilistic models have transformed image generation with their impressive fidelity and diversity. We show that they also excel in estimating optical flow and monocular depth, surprisingly, without task-specific architectures and loss functions that are predominant for these tasks. Compared to the point estimates of conventional regression-based methods, diffusion models also enable Monte Carlo inference, e.g., capturing uncertainty and ambiguity in flow and depth. With self-supervised pre-training, the combined use of synthetic and real data for supervised training, and technical innovations (infilling and step-unrolled denoising diffusion training) to handle noisy-incomplete training data, and a simple form of coarse-to-fine refinement, one can train state-of-the-art diffusion models for depth and optical flow estimation. Extensive experiments focus on quantitative performance against benchmarks, ablations, and the model's ability to capture uncertainty and multimodality, and impute missing values. Our model, DDVM (Denoising Diffusion Vision Model), obtains a state-of-the-art relative depth error of 0.074 on the indoor NYU benchmark and an Fl-all outlier rate of 3.26% on the KITTI optical flow benchmark, about 25% better than the best published method. For an overview see diffusion-vision.github.io
## 1 Introduction
Diffusion models have emerged as powerful generative models for high fidelity image synthesis, capturing rich knowledge about the visual world [19; 46; 53; 60]. However, at first glance, it is unclear whether these models can be as effective on many classical computer vision tasks. For example, consider two dense vision estimation tasks, namely, optical flow, which estimates frame-to-frame correspondences, and monocular depth perception, which makes depth predictions based on a single image. Both tasks are usually treated as regression problems and addressed with specialized architectures and task-specific loss functions, _e.g._, cost volumes, feature warps, or suitable losses for depth. Without these specialized components or the regression framework, general generative techniques may be ill-equipped and vulnerable to both generalization and performance issues.
In this paper, we show that these concerns, while valid, can be addressed and that, surprisingly, a generic, conventional diffusion model for image to image translation works impressively well on both tasks, often outperforming the state of the art. In addition, diffusion models provide valuable benefits over networks trained with regression; in particular, diffusion allows for approximate inference with multi-modal distributions, capturing uncertainty and ambiguity (_e.g._ see Figure 1).
One key barrier to training useful diffusion models for monocular depth and optical flow inference concerns the amount and quality of available training data. Given the limited availability of labelled training data, we propose a training pipeline comprising multi-task self-supervised pre-training followed by supervised pre-training using a combination of real and synthetic data. Multi-task self-supervised pre-training leverages the strong performance of diffusion models on tasks like colorization and inpainting (e.g., [52]). We also find that supervised (pre-)training with a combination of real and large-scale synthetic data improves performance significantly.
A further issue concerns the fact that many existing real datasets for depth and optical flow have noisy and incomplete ground truth annotations. This presents a challenge for the conventional training framework and iterative sampling in diffusion models, leading to a problematic distribution shift between training and inference. To mitigate these issues, we propose the use of an \(L_{1}\) loss for robustness, the infilling of missing depth values during training, and _step-unrolled denoising diffusion_ training. These elements of the model are shown through ablations to be important for both depth and flow estimation.
Our contributions are as follows:
1. We formulate optical flow and monocular depth estimation as image to image translation with generative diffusion models, without specialized loss functions and model architectures.
2. We identify and propose solutions to several important issues w.r.t. data. For both tasks, to mitigate distribution shift between training and inference with noisy, incomplete data, we propose infilling, step-unrolling, and an \(L_{1}\) loss during training. For flow, to improve generalization, we introduce a new dataset mixture for pre-training, yielding a RAFT [72] baseline that outperforms all published methods in zero-shot performance on the Sintel and KITTI training benchmarks.
3. Our diffusion model is competitive with or surpasses SOTA for both tasks. For monocular depth estimation we achieve a SOTA relative error of 0.074 on the NYU dataset and perform competitively on KITTI. For flow, diffusion surpasses the stronger RAFT baseline by a large margin in pre-training and our fine-tuned model achieves an Fl-all outlier rate of 3.26% on the public KITTI test benchmark, \(\sim\)25% lower than the best published method [68].
4. Our diffusion model is also shown to capture flow and depth uncertainty, and the iterative denoising process enables zero-shot, coarse-to-fine refinement, and imputation.
## 2 Related work
Optical flow and depth estimation have been extensively studied. Here we briefly review only the most relevant work, and refer the interested readers to the references cited therein.
**Optical flow.** The predominant approach to optical flow is regression-based, with a focus on specialized network architectures to exploit domain knowledge,
Figure 1: **Examples of multi-modal prediction** on depth (NYU) and optical flow (Sintel and KITTI). Each row shows an input image (or two overlaid images for optical flow), a variance heat map from 8 samples, and 3 individual samples. Our model captures multi-modal samples on uncertain/ambiguous cases, such as reflective (_e.g._ mirror on NYU), transparent (_e.g._ vehicle window on KITTI), and translucent (_e.g._ fog on Sintel) regions. High variance also exists near object boundaries due to inaccurate estimates, which are often challenging cases for optical flow, and also partially originate from noisy ground truth measurements for depth. See Figures 8, 9, 10 and 11 for more examples.
_e.g._, cost volume construction [12; 20; 21; 36; 66; 79; 81; 83], coarse-to-fine estimation [66; 75; 80], occlusion handling [22; 25; 65], or iterative refinement [23; 24; 72], as evidenced by public benchmark datasets [4; 42]. Some recent work has also advocated for generic architectures: Perceiver IO [26] introduces a generic transformer-based model that works for any modality, including optical flow and language modeling. Regression-based methods, however, only give a single prediction of the optical flow and do not readily capture uncertainty or ambiguity in the flow. Our work introduces a surprisingly simple, generic architecture for optical flow using a denoising diffusion model.
We find that this generic generative model is surprisingly effective for optical flow, recovering fine details on motion boundaries, while capturing multi-modality of the motion distribution.
**Monocular depth.** Monocular depth estimation has been a long-standing problem in computer vision [56; 57] with recent progress focusing on specialized loss functions and architectures [1; 5; 13; 29] such as the use of multi-scale networks [10; 11], adaptive binning [3; 33] and weighted scale-shift invariant losses [11]. Large-scale in-domain pre-training has also been effective for depth estimation [47; 48; 50], which we find to be the case here as well. We build on this rich literature, but with a simple, generic architecture, leveraging recent advances in generative models.
**Diffusion models.** Diffusion models are latent-variable generative models trained to transform a sample of a Gaussian noise into a sample from a data distribution [19; 60]. They comprise a _forward process_ that gradually annihilates data by adding noise, as 'time' \(t\) increases from 0 to 1, and a learned _generative process_ that reverses the forward process, starting from a sample of random noise at \(t=1\) and incrementally adding structure (attenuating noise) as \(t\) decreases to 0. A conditional diffusion model conditions the steps of the reverse process (e.g., on labels, text, or an image).
Central to the model is a denoising network \(f_{\theta}\) that is trained to take a noisy sample \(y_{t}\) at some time-step \(t\), along with a conditioning signal \(x\), and predict a less noisy sample. Using Gaussian noise in the forward process, one can express the training objective over the sequence of transitions (as \(t\) slowly decreases) as a sum of non-linear regression objectives, with the L2 loss (here with the \(\epsilon\)-parameterization):
\[\mathbb{E}_{(\boldsymbol{x},\,\boldsymbol{y})}\,\mathbb{E}_{(t,\,\boldsymbol{\epsilon})}\,\bigg\|f_{\theta}(\boldsymbol{x},\,\underbrace{\sqrt{\gamma_{t}}\,\boldsymbol{y}+\sqrt{1\!-\!\gamma_{t}}\,\boldsymbol{\epsilon}}_{\boldsymbol{y}_{t}},\,t)-\boldsymbol{\epsilon}\,\bigg\|_{2}^{2} \tag{1}\]
where \(\boldsymbol{\epsilon}\sim\mathcal{N}(0,I)\), \(t\sim\mathcal{U}(0,1)\), and where \(\gamma_{t}>0\) is computed with a pre-determined noise schedule. For inference (i.e., sampling), one draws a random noise sample \(\boldsymbol{y}_{1}\), and then iteratively uses \(f_{\theta}\) to estimate the noise, from which one can compute the next latent sample \(\boldsymbol{y}_{s}\), for \(s<t\).
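A single training term of Eq. (1) is easy to write down explicitly. The sketch below uses a cosine noise schedule as a stand-in (this excerpt does not specify the schedule) and the \(L_1\) variant of the loss that the paper substitutes for the squared \(L_2\) norm when training on noisy ground truth:

```python
import numpy as np

def cosine_gamma(t):
    """Illustrative noise schedule with gamma_0 = 1 and gamma_1 = 0
    (an assumption; the paper's schedule is not given here)."""
    return np.cos(0.5 * np.pi * t) ** 2

def diffusion_loss(denoiser, x, y, t, eps):
    """One Monte Carlo term of the training objective in Eq. (1),
    with |.|_1 in place of |.|_2^2. In the full pipeline the target
    map y is first infilled and the loss can be masked to valid
    pixels."""
    g = cosine_gamma(t)
    y_t = np.sqrt(g) * y + np.sqrt(1.0 - g) * eps   # noised label map
    eps_hat = denoiser(x, y_t, t)                   # conditioned on image(s) x
    return np.mean(np.abs(eps_hat - eps))
```

A denoiser that recovers the injected noise exactly drives this loss to zero, which is the sanity check one would use when wiring up such a training step.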
**Self-supervised pre-training.** Prior work has shown that self-supervised tasks such as colorization [31; 84] and masked prediction [78] serve as effective pre-training for downstream vision tasks.
Figure 2: **Training architecture. Given ground truth flow/depth, we first infill missing values using interpolation. Then, we add noise to the label map and train a neural network to model the conditional distribution of the noise given the RGB image(s), noisy label, and time step. One can optionally unroll the denoising step(s) during training (with stop gradient) to bridge the distribution gap between training and inference for \(y_{t}\).**
Our work also confirms the benefit of self-supervised pre-training [52] for diffusion-based image-to-image translation, by establishing a new SOTA on optical flow estimation and monocular depth estimation while also representing multi-modality and supporting zero-shot coarse-to-fine refinement and imputation.
## 3 Model Framework
In contrast to conventional monocular depth and optical flow methods, which make rich use of specialized domain knowledge in their architecture designs, we introduce simple, generic architectures and loss functions. We replace the inductive biases of state-of-the-art architectures and losses with a powerful generative model, combined with self-supervised pre-training and supervised training on both real and synthetic data.
The denoising diffusion model (Figure 2) takes a noisy version of the target map (_i.e._, a depth or flow) as input, along with the conditioning signal (one RGB image for depth and two RGB images for flow). The denoiser effectively provides a noise-free estimate of the target map (_i.e._, ignoring the specific loss parameterization used). The training loss penalizes residual error in the denoised map, which is quite distinct from typical image reconstruction losses used in optical flow estimation.
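As an illustration of the step-unrolling idea in Figure 2, the sketch below runs the denoiser once without gradients, reconstructs the label map from its \(\epsilon\)-estimate, re-noises that self-generated map, and computes the \(\epsilon\)-target consistent with the true label. This is a minimal NumPy sketch of the concept under an assumed \(\epsilon\)-parameterization, not the paper's exact implementation.

```python
import numpy as np

def step_unrolled_inputs(denoiser, x, y_true, y_t, t, gamma, rng):
    """One unrolled denoising step for training (stop-gradient in practice).

    Returns the re-noised self-generated latent and the epsilon target
    that would recover the true label y_true from it.
    """
    eps_hat = denoiser(x, y_t, t)                        # no gradient here
    y0_hat = (y_t - np.sqrt(1 - gamma) * eps_hat) / np.sqrt(gamma)
    eps_new = rng.normal(size=y_true.shape)              # fresh noise
    y_t_unrolled = np.sqrt(gamma) * y0_hat + np.sqrt(1 - gamma) * eps_new
    eps_target = (y_t_unrolled - np.sqrt(gamma) * y_true) / np.sqrt(1 - gamma)
    return y_t_unrolled, eps_target
```

Training then penalizes the error between \(f_\theta(x, y_t^{\mathrm{unrolled}}, t)\) and this target, exposing the network to its own imperfect reconstructions and thereby narrowing the distribution gap between training and inference.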
### Synthetic pre-training data and generalization
Given that we train these models with a generic denoising objective, without task-specific inductive biases in the form of specialized architectures, the choice of training data becomes critical. Below we discuss the datasets used and their contributions in detail. Because training data with annotated ground truth is limited for many dense vision tasks, here we make extensive use of synthetic data in the hope that the geometric properties acquired from synthetic data during training will transfer to different domains, including natural images.
AutoFlow [67] has recently emerged as a powerful synthetic dataset for training flow models. We were surprised to find that training on AutoFlow alone is insufficient, as the diffusion model appears to devote a significant fraction of its representation capacity to represent the shapes of AutoFlow regions, rather than solving for correspondence. As a result, models trained on AutoFlow alone exhibit a strong bias to generate flow fields with polygonal shaped regions, much like those in AutoFlow, often ignoring the shapes of boundaries in the two-frame RGB inputs (_e.g._ see Figure 3).
To mitigate bias induced by AutoFlow in training, we further mix in three synthetic datasets during training, namely, FlyingThings3D [38], Kubric [17] and TartanAir [74]. Given a model pre-trained on AutoFlow, for compute efficiency, we use a greedy mixing strategy where we fix the relative ratio of the previous mixture and tune the proportion of the newly added dataset. We leave further exploration of an optimal mixing strategy to future work. Zero-shot testing of the model on Sintel and KITTI (see Table 1 and Fig. 3) shows substantial performance gains with each additional synthetic dataset.
We find that pre-training is similarly important for depth estimation (see Table 7). We learn separate indoor and outdoor models. For the indoor model we pre-train on a mix of ScanNet [7] and SceneNet RGB-D [39]. The outdoor model is pre-trained on the Waymo Open Dataset [69].
Figure 3: **Effects of adding synthetic datasets in pretraining**. Diffusion models trained only with AutoFlow (AF) tend to provide very coarse flow estimates and can hallucinate shapes. The addition of FlyingThings (FT), Kubric (KU), and TartanAir (TA) removes the AF-induced bias toward polygonal-shaped regions, and significantly improves flow quality on fine detail, e.g., trees, thin structures, and motion boundaries.
### Real data: Challenges with noisy, incomplete ground truth
Ground truth annotations for real-world depth or flow data are often sparse and noisy, due to highly reflective surfaces, light absorbing surfaces [63], dynamic objects [41], _etc_. While regression-based methods can simply compute the loss on pixels with valid ground truth, corruption of the training data is more challenging for diffusion models. Diffusion models perform inference through iterative refinement of the target map \(\mathbf{y}\) conditioned on RGB image data \(\mathbf{x}\). Inference starts with a sample of Gaussian noise \(\mathbf{y}_{1}\), and terminates with a sample from the predictive distribution \(p(\mathbf{y}_{0}\,|\,\mathbf{x})\). A refinement step from time \(t\) to \(s\), with \(s\!<\!t\), proceeds by sampling from the parameterized distribution \(p_{\theta}(\mathbf{y}_{s}\,|\,\mathbf{y}_{t},\mathbf{x})\); i.e., each step operates on the output from the previous step. During training, however, the denoising steps are decoupled (see Eqn. 1), and the denoising network operates on a noisy version of the ground truth map instead of the output of the previous iteration (reminiscent of teacher forcing in RNN training [77]). Thus there is a distribution shift between the marginals over the noisy target maps during training and inference, because the ground truth maps have missing annotations and heavy-tailed sensor noise while the noisy maps obtained from the previous time step at inference time do not. This distribution shift has a very negative impact on model performance. Nevertheless, we find that the following modifications during training mitigate these problems effectively.
**Infilling.** One way to reduce the distribution shift is to impute the missing ground truth.
We explored several ways to do this, including simple interpolation schemes, and inference using our model (trained with nearest neighbor interpolation). We find that nearest neighbor interpolation is sufficient to impute missing values in the ground truth maps in the depth and flow field training data.
Despite the imputation of missing ground truth depth and flow values, note that the training loss is only computed and backpropagated from pixels with known (not infilled) ground truth depth. We refer to this as the masked denoising loss (see Figure 2).
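The infilling and masked-loss combination can be sketched as follows (a brute-force nearest-neighbor search for clarity; function names are ours, and an L1 penalty is used as discussed later):

```python
import numpy as np

def nn_infill(y, valid):
    """Fill invalid pixels with the value of the nearest valid pixel."""
    ys, xs = np.nonzero(valid)
    out = y.copy()
    for i, j in zip(*np.nonzero(~valid)):
        k = np.argmin((ys - i) ** 2 + (xs - j) ** 2)
        out[i, j] = y[ys[k], xs[k]]
    return out

def masked_l1_loss(pred, target, valid):
    """Loss computed only on pixels with known (not infilled) ground truth."""
    return np.abs(pred - target)[valid].mean()
```

The infilled map is used only to build the noisy latent; gradients flow only through pixels where `valid` is true.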
**Step-unrolled denoising diffusion training.** A second way to mitigate distribution shift in the \(y_{t}\) marginals in training and inference, is to construct \(y_{t}\) from model outputs rather than ground truth maps. One can do this by slightly modifying the training procedure (see Algorithm 1) to run one forward pass of the model and build \(y_{t}\) by adding noise to the model's output rather than the training map. We do not propagate gradients for this forward pass. This process, called _step-unrolled denoising diffusion_, slows training only marginally (\(\sim\)15% on a TPU v4). Interestingly, this problem of training / inference distribution shift resembles that of _exposure bias_[49] in autoregressive models, for which the mismatch is caused by _teacher forcing_ during training [77]. Several solutions have been proposed for this problem in the literature [2, 30, 82]. Step-unrolled denoising diffusion also closely resembles the approach in [55] for training denoising autoencoders on text.
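One unrolling step might look like the following sketch (the DDPM-style conversion between predicted noise and a clean estimate is our assumption; Algorithm 1 in the paper gives the exact procedure, and in a real implementation the first forward pass is wrapped in a stop-gradient):

```python
import numpy as np

def unrolled_noisy_target(f_theta, x, y0, t, gamma_t, rng):
    """Build y_t from the model's own output instead of the ground truth."""
    # Forward pass on the usual noisy ground truth (no gradients propagated here).
    eps = rng.standard_normal(y0.shape)
    y_t = np.sqrt(gamma_t) * y0 + np.sqrt(1.0 - gamma_t) * eps
    eps_hat = f_theta(x, y_t, t)
    y0_hat = (y_t - np.sqrt(1.0 - gamma_t) * eps_hat) / np.sqrt(gamma_t)
    # Re-noise the model's own estimate; the denoiser is then trained on this
    # latent, which better matches the marginals seen at inference time.
    eps_new = rng.standard_normal(y0.shape)
    y_t_unrolled = np.sqrt(gamma_t) * y0_hat + np.sqrt(1.0 - gamma_t) * eps_new
    return y_t_unrolled, eps_new
```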
We only perform step-unrolled denoising diffusion during model fine-tuning. Early in training the denoising predictions are inaccurate, so the latent marginals over the noisy target maps will be closer to the desired _true_ marginals than those produced by adding noise to denoiser network outputs. One might consider the use of a curriculum for gradually introducing step-unrolled denoising diffusion in the later stages of supervised pre-training, but this introduces additional hyper-parameters, so we simply invoke step-unrolled denoising diffusion during fine-tuning, and leave an exploration of curricula to future work.
\(L_{1}\)**denoiser loss.** While the \(L_{2}\) loss in Eqn. 1 is ideal for Gaussian noise and noise-free ground truth maps, in practice, real ground truth depth and flow fields are noisy and heavy tailed; _e.g._, for distant objects, near object boundaries, and near pixels with missing annotations. We hypothesize that the robustness afforded by the \(L_{1}\) loss may therefore be useful in training the neural denoising network. (See Tables 10 and 11 in the supplementary material for an ablation of the loss function for monocular depth estimation.)
### Coarse-to-fine refinement
Training high resolution diffusion models is often slow and memory intensive but estimation accuracy has been shown to improve with resolution [16]. A simple solution is to perform inference in a coarse-to-fine manner, first estimating the flow over the entire field of view at low resolution, and then refining the estimates in a patch-wise manner. For refinement we first up-sample the low-resolution map to the target resolution using bicubic interpolation. Patches are cropped from the up-scaled map, denoted \(z\), along with the corresponding RGB inputs. Then we run diffusion model inference starting at time \(t^{\prime}\) with a noisy map \(y_{t^{\prime}}\sim\mathcal{N}(y_{t^{\prime}};\sqrt{\gamma_{t^{\prime}}}\,z,(1 -\gamma_{t^{\prime}})I)\). For simplicity, \(t^{\prime}\) is a fixed hyper-parameter, set based on a validation set. This process is carried out for multiple overlapping patches. Following Perceiver IO [26], the patch estimates are then merged using weighted masks with lower weight near the patch boundaries since predictions at boundaries are more prone to errors. (See Section H.5 for more details.)
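The patch initialization and weighted merging can be sketched as follows (the linear taper weight is our assumption; the exact mask follows Perceiver IO):

```python
import numpy as np

def init_patch_latent(z_patch, gamma_tp, rng):
    """Start refinement at y_{t'} ~ N(sqrt(g) z, (1 - g) I) around the
    upsampled coarse estimate z."""
    noise = rng.standard_normal(z_patch.shape)
    return np.sqrt(gamma_tp) * z_patch + np.sqrt(1.0 - gamma_tp) * noise

def taper_weight(h, w):
    """2-D weight mask that decays linearly toward patch boundaries."""
    wy = 1.0 - np.abs(np.linspace(-1, 1, h))
    wx = 1.0 - np.abs(np.linspace(-1, 1, w))
    return np.outer(wy, wx) + 1e-6   # epsilon keeps border weights nonzero

def merge_patches(patches, corners, out_shape):
    """Weighted average of overlapping refined patches."""
    acc = np.zeros(out_shape)
    wsum = np.zeros(out_shape)
    for p, (y0, x0) in zip(patches, corners):
        h, w = p.shape
        wgt = taper_weight(h, w)
        acc[y0:y0 + h, x0:x0 + w] += wgt * p
        wsum[y0:y0 + h, x0:x0 + w] += wgt
    return acc / wsum
```

Down-weighting patch borders means each output pixel is dominated by the patch in which it sits farthest from a boundary, where predictions are most reliable.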
## 4 Experiments
As our denoiser backbone, we adopt the Efficient UNet architecture [53], pretrained with Palette [52] style self-supervised pretraining, and slightly modified to have the appropriate input and output channels for each task. Since diffusion models expect inputs and generate outputs in the range \([-1,1]\), we normalize depths using a maximum depth of 10 meters for the indoor model and 80 meters for the outdoor model. We normalize the flow using the height and width of the ground truth. Refer to Section H for more details on the architecture, augmentations and other hyper-parameters.

Figure 4: **Visual results comparing RAFT with our method after pretraining. Note that our method does much better on fine details and ambiguous regions.**

Figure 5: **Visual results comparing RAFT with our method after finetuning. Ours does much better on fine details and ambiguous regions.**
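The depth and flow normalization described above can be sketched as follows (we assume the horizontal component is divided by width and the vertical by height, and that depth is clipped at the maximum; these details are our assumptions):

```python
import numpy as np

def normalize_depth(depth_m, max_depth):
    """Map metric depth in [0, max_depth] to the model's [-1, 1] range."""
    return np.clip(depth_m / max_depth, 0.0, 1.0) * 2.0 - 1.0

def denormalize_depth(y, max_depth):
    """Invert the mapping to recover metric depth."""
    return (y + 1.0) * 0.5 * max_depth

def normalize_flow(flow_uv, height, width):
    """Scale pixel displacements by image size (u by width, v by height)."""
    out = np.asarray(flow_uv, dtype=np.float64).copy()
    out[..., 0] /= width
    out[..., 1] /= height
    return out
```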
**Optical flow.** We pre-train on the mixture described in Section 3.1 at a resolution of 320\(\times\)448 and report zero-shot results on the widely used Sintel [4] and KITTI [42] datasets. We further fine-tune this model on the standard mixture consisting of AutoFlow [67], FlyingThings [38], VIPER [51], HD1K [28], Sintel and KITTI at a resolution of 320\(\times\)768 and report results on the test set from the public benchmark. We use a standard average end-point error (AEPE) metric that calculates L2 distance between ground truth and prediction. On KITTI, we additionally use the outlier rate, Fl-all, which reports the outlier ratio in \(\%\) among all pixels with valid ground truth, where an estimate is considered as an outlier if its error exceeds 3 pixels and \(5\%\) w.r.t. the ground truth.
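The two flow metrics can be written directly from their definitions above:

```python
import numpy as np

def aepe(pred, gt, valid):
    """Average end-point error: mean L2 distance over valid pixels."""
    err = np.linalg.norm(pred - gt, axis=-1)
    return err[valid].mean()

def fl_all(pred, gt, valid):
    """KITTI outlier rate (%): error > 3 px AND > 5% of the GT magnitude."""
    err = np.linalg.norm(pred - gt, axis=-1)
    mag = np.linalg.norm(gt, axis=-1)
    outlier = (err > 3.0) & (err > 0.05 * mag)
    return 100.0 * outlier[valid].mean()
```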
**Depth.** We separately pre-train indoor and outdoor models on the respective pre-training datasets described in Section 3.1. The indoor depth model is then finetuned and evaluated on the NYU depth v2 dataset [59] and the outdoor model on the KITTI depth dataset [15]. We follow the standard evaluation protocol used in prior work [33]. For both NYU depth v2 and KITTI, we report the absolute relative error (REL), root mean squared error (RMS) and accuracy metrics (\(\delta_{1}<1.25\)).
### Evaluation on benchmark datasets
**Depth.** Table 3 reports the results on NYU depth v2 and KITTI (see Section D for more detailed results and Section B for qualitative comparison with DPT on NYU). We achieve a state-of-the-art absolute relative error of 0.074 on NYU depth v2. On KITTI, our method performs competitively with prior work. We report results with averaging depth maps from one or more samples. Note that most prior works use post processing that averages two samples, one from the input image, and the other based on its reflection about the vertical axis.
**Flow.** Table 1 reports the zero-shot results of our model on the Sintel and KITTI Train datasets, where ground truth is provided. The model is trained on our newly proposed pre-training mixtures
\begin{table}
\begin{tabular}{l l c c c c} \hline \hline \multirow{2}{*}{Model} & \multirow{2}{*}{Dataset} & Sintel.clean & Sintel.final & \multicolumn{2}{c}{KITTI} \\ \cline{3-6} & & AEPE & AEPE & AEPE & Fl-all \\ \hline FlowFormer & Chairs\(\rightarrow\)Things & **1.01** & 2.40 & 4.09 & 14.72\% \\ RAFT & Chairs\(\rightarrow\)Things & 1.68 & 2.80 & 5.92 & - \\ \hline Perceiver IO & AutoFlow & 1.81 & 2.42 & 4.98 & - \\ RAFT & AutoFlow & 1.74 & 2.41 & 4.18 & 13.41\% \\ \hline RAFT (ours) & AF\(\rightarrow\)AF+FT+KU+TA & 1.27 & 2.28 & 2.71 & 9.16\% \\
**DDVM (ours)** & AF\(\rightarrow\)AF+FT+KU+TA & 1.24 & **2.00** & **2.19** & **7.58\%** \\ \hline \hline \end{tabular}
\end{table}
Table 1: **Zero-shot optical flow estimation results on Sintel and KITTI. We provide a new RAFT baseline using our proposed pre-training mixture and substantially improve the accuracy over the original. Our diffusion model outperforms even this much stronger baseline and achieves state-of-the-art zero-shot results on Sintel.final and KITTI.**
\begin{table}
\begin{tabular}{l l c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{Architecture} & \multicolumn{3}{c}{NYU-Depth-v2} & \multicolumn{3}{c}{KITTI} \\ \cline{3-8} & & \(\delta_{1}\uparrow\) & REL\(\downarrow\) & RMS\(\downarrow\) & \(\delta_{1}\uparrow\) & REL\(\downarrow\) & RMS\(\downarrow\) \\ \hline TransDepth [85] & Res-50+ViT-B\({}^{\dagger}\) & 0.900 & 0.106 & 0.365 & 0.956 & 0.064 & 2.755 \\ DPT [48] & Res-50+ViT-B\({}^{\dagger}\) & 0.904 & 0.110 & 0.357 & 0.959 & 0.062 & 2.573 \\ RTS [32] & DenseNet-161\({}^{\dagger}\) & 0.885 & 0.110 & 0.392 & 0.956 & 0.059 & 2.756 \\ AdaBins [3] & E-B5+Mini-ViT\({}^{\dagger}\) & 0.903 & 0.103 & 0.364 & 0.964 & 0.058 & 2.360 \\ BinsFormer [33] & Swin-Large\({}^{\dagger}\) & 0.925 & 0.094 & 0.330 & 0.974 & 0.052 & 2.098 \\ PixelFormer [1] & Swin-Large\({}^{\dagger}\) & 0.929 & 0.090 & 0.322 & 0.976 & 0.051 & 2.081 \\ MIM [78] & SwinV2-L\({}^{\dagger}\) & 0.949 & 0.083 & 0.287 & **0.977** & **0.050** & **1.966** \\ AIT-P [44] & SwinV2-L\({}^{\dagger}\) & **0.953** & 0.076 & **0.279** & - & - & - \\ \hline **DDVM** samples=1 & Efficient U-Net\({}^{\top\ddagger}\) & 0.944 & 0.075 & 0.324 & 0.964 & 0.056 & 2.700 \\ **DDVM** samples=2 & Efficient U-Net\({}^{\top\ddagger}\) & 0.944 & **0.074** & 0.319 & 0.965 & 0.055 & 2.660 \\ **DDVM** samples=4 & Efficient U-Net\({}^{\top\ddagger}\) & 0.946 & **0.074** & 0.315 & 0.965 & 0.055 & 2.613 \\ \hline \hline \end{tabular}
\end{table}
Table 3: **Performance comparison on the NYU-Depth-v2 and KITTI datasets. \(\top\) indicates unsupervised pretraining, \(\dagger\) indicates supervised pretraining and \(\ddagger\) indicates use of auxiliary supervised depth data. Best / second best results are bold / underlined respectively. \(\downarrow\): lower is better; \(\uparrow\): higher is better.**
Table 2: **Optical flow finetuning evaluation on public benchmark datasets (AEPE\(\downarrow\) for Sintel and Fl-all\(\downarrow\) for KITTI). Bold indicates the best and underlining the \(2^{\text{nd}}\)-best. \({}^{\ddagger}\) uses extra datasets (AutoFlow and VIPER) on top of defaults (FlyingThings, HD1K, KITTI, and Sintel). \({}^{*}\) uses warm start on Sintel.**
(AutoFlow (AF), FlyingThings (FT), Kubric (KU), and TartanAir (TA)). For a fair comparison, we re-train RAFT on this pre-training mixture; this new RAFT model significantly outperforms the original RAFT model. And our diffusion model outperforms the stronger RAFT baseline. It achieves the state-of-the-art zero-shot results on both the challenging Sintel Final and KITTI datasets.
Figure 4 provides a qualitative comparison of pre-trained models. Our method demonstrates finer details on both object and motion boundaries. Especially on KITTI, our model recovers fine details remarkably well, _e.g_., on trees and the layered motion between trees and background.
We further finetune our model on the mixture of the following datasets, AutoFlow, FlyingThings, HD1K, KITTI, Sintel, and VIPER. Table 2 reports the comparison to state-of-the-art optical flow methods on public benchmark datasets, Sintel and KITTI. On KITTI, our method outperforms all existing optical flow methods by a substantial margin (even most scene flow methods that use stereo inputs), and sets the new state of the art. On the challenging Sintel final, our method is competitive with other state of the art models. Except for methods using warm-start strategies, our method is only behind FlowFormer which adopts strong domain knowledge on optical flow (_e.g_. cost volume, iterative refinement, or attention layers for larger context) unlike our generic model. Interestingly, we find that our model outperforms FlowFormer on 11/12 Sintel test sequences and our overall worse performance can be attributed to a much higher AEPE on a single (possibly out-of-distribution) test sequence. We discuss this in more detail in Section I. On KITTI, our diffusion model outperforms FlowFormer by a large margin (30.34\(\%\)).
### Ablation study
**Infilling and step-unrolling.** We study the effect of infilling and step-unrolling in Table 4. For depth, we report results for fine-tuning our pre-trained model on the NYU and KITTI datasets with the same resolution and augmentations as our best results. For flow, we fine-tune on the KITTI train set alone (with nearest neighbor resizing to the target resolution being the only augmentation) at a resolution of 320\(\times\)448 and report metrics on the KITTI val set [37]. We report results with a single sample and no coarse-to-fine refinement. We find that training on raw sparse data without infilling and step unrolling leads to poor results, especially on KITTI where the ground truth is quite sparse. Step-unrolling helps to stabilize training without requiring any extra data pre-processing. However, we find that most gains come from interpolating missing values in the sparse labels. Infilling and step-unrolling compose well as our best results use both; infilling (being an approximation) does not completely bridge the training-inference distribution shift of the noisy latent.
**Coarse-to-fine refinement.** Figure 6 shows that coarse-to-fine refinement (Section 3.3) substantially improves fine-grained details in estimated optical flow fields. It also improves the metrics for zero-shot optical flow estimation on both KITTI and Sintel, as shown in Table 5.
**Datasets.** When using different mixtures of datasets for pretraining, we find that diffusion models sometimes capture region boundaries and shape at the expense of local textural variation (e.g., see Figure 3). The model trained solely on AutoFlow tends to provide very coarse flow, and mimics the object shapes found in AutoFlow. The addition of FlyingThings, Kubric, and TartanAir removes this
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Dataset & Sintel.clean & Sintel.final & KITTI AEPE & KITTI Fl-all \\ \hline AF pretraining & 2.04 & 2.55 & 4.47 & 16.59\% \\ AF\(\rightarrow\)AF+FT & 1.48 & 2.22 & 3.71 & 14.07\% \\ AF\(\rightarrow\)AF+FT+KU & 1.33 & 2.04 & 2.82 & 9.27\% \\ AF\(\rightarrow\)AF+FT+KU+TA & **1.24** & **2.00** & **2.19** & **7.58\%** \\ \hline \hline \end{tabular}
\end{table}
Table 6: **The addition of optical flow synthetic datasets substantially improves the zero-shot results on Sintel and KITTI.**
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{2}{c}{NYU val} & \multicolumn{2}{c}{KITTI val (depth)} & \multicolumn{2}{c}{KITTI val (flow)} \\ \cline{2-7} & REL & RMS & REL & RMS & AEPE & Fl-all \\ \hline Baseline & 0.079 & 0.331 & 0.222 & 3.770 & - & - \\ Step-unroll & 0.076 & **0.324** & 0.085 & 2.844 & 1.84 & 6.16\% \\ Infill & 0.077 & 0.338 & 0.057 & 2.744 & 1.53 & 5.24\% \\
**Step-unroll \& infill** & **0.075** & **0.324** & **0.056** & **2.700** & **1.47** & **4.74\%** \\ \hline \hline \end{tabular}
\end{table}
Table 4: **Ablation on infilling and step-unrolling. Without either one, performance deteriorates. Without both, optical flow models fail to train on KITTI.**
\begin{table}
\begin{tabular}{l c c c c} \hline \hline \multirow{2}{*}{Coarse-to-fine refinement} & Sintel.clean & Sintel.final & \multicolumn{2}{c}{KITTI} \\ \cline{2-5} & AEPE & AEPE & AEPE & Fl-all \\ \hline Without & 1.42 & 2.12 & 2.35 & 8.65\% \\
**With** & **1.24** & **2.00** & **2.19** & **7.58\%** \\ \hline \hline \end{tabular}
\end{table}
Table 5: **Coarse-to-fine refinement improves zero-shot optical flow results on both Sintel and KITTI.**
\begin{table}
\begin{tabular}{l r r} \hline \hline Dataset & REL & RMS \\ \hline SceneNet RGB-D & 0.089 & 0.362 \\ ScanNet & 0.081 & 0.346 \\ SceneNet RGB-D + ScanNet & **0.075** & **0.324** \\ \hline \hline \end{tabular}
\end{table}
Table 7: **The addition of synthetic depth data in pre-training substantially improves fine-tuning performance on NYU.**
hallucination and significantly improves the fine details in the flow estimates (e.g., shadows, trees, thin structures, and motion boundaries), together with a substantial boost in accuracy (Table 6). Similarly, we find that mixing in SceneNet RGB-D [39], a synthetic dataset, along with ScanNet [7] provides a performance boost for fine-tuning results on NYU depth v2, as shown in Table 7.
### Interesting properties of diffusion models
**Multimodality.** One strength of diffusion models is their ability to capture complex multimodal distributions. This can be effective in representing uncertainty, especially where there may exist natural ambiguities and thus multiple predictions, _e.g_. in cases of transparent, translucent, or reflective cases. Figure 1 presents multiple samples on the NYU, KITTI, and Sintel datasets, showing that our model captures multimodality and provides plausible samples when ambiguities exist. More details and examples are available in Section A.
**Imputation of missing labels.** A diffusion model trained to model the conditional distribution \(p(y|x)\) can be leveraged zero-shot to sample from \(p(y|x,y_{partial})\), where \(y_{partial}\) is the partially known label. One approach for doing this, known as the _replacement method_ for conditional inference [61], is to replace the known portion of the latent \(y_{t}\) at each inference step with the noisy latent built by applying the forward process to the known label. We qualitatively study the results of leveraging replacement guidance for depth completion and find it to be surprisingly effective. We illustrate this by building a pipeline for iteratively generating 3D scenes (conditioned on a text prompt), as shown in Figure 7, by leveraging existing models for text-to-image generation and text-conditional image inpainting. While a more thorough evaluation of depth completion and novel view synthesis against existing methods is warranted, we leave that exploration to future work. (See Section C for more details and examples.)
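The replacement step applied at each inference iteration can be sketched as (function name is ours):

```python
import numpy as np

def replace_known(y_t, y_partial, known_mask, gamma_t, rng):
    """Overwrite the known region of the latent with a forward-noised
    version of the partially known label (replacement method [61])."""
    noise = rng.standard_normal(y_partial.shape)
    y_known_t = np.sqrt(gamma_t) * y_partial + np.sqrt(1.0 - gamma_t) * noise
    return np.where(known_mask, y_known_t, y_t)
```

The denoiser then completes the unknown region while remaining consistent with the clamped known region.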
**Limitations.** We adopt standard practices from image-generation models, leading to larger models and slower running times than RAFT. However, we are excited by the recent progress on progressive distillation [40; 54] and consistency models [62] to improve inference speed in diffusion models.
Figure 6: **Visual results with and without coarse-to-fine refinement**. For our pretrained model, refinement helps correct wrong flow and adds details to correct flow.
Figure 7: **Application of zero-shot depth completion** with our model by incorporating it into an iterative 3D scene generation pipeline. Starting with an initial image (optionally generated from a text-to-image model), we sample an image-only conditioned depth map using our model. The image-depth pair is added to a point cloud. We then iteratively render images and depth maps (with holes) from this point cloud by moving the camera. We then fill image holes using an existing image inpainter (optionally text conditioned), and then use our model with replacement guidance to impute missing depths (conditioned on the filled RGB image and known depth).
## 5 Conclusion
We introduced a simple denoising diffusion model for monocular depth and optical flow estimation using an image-to-image translation framework. Our generative approach obtains state-of-the-art results without task-specific architectures or loss functions. In particular, our model achieves an Fl-all score of 3.26% on KITTI, about 25% better than the best published method [68]. Further, our model captures the multi-modality and uncertainty through multiple samples from the posterior. It also allows imputation of missing values, which enables iterative generation of 3D scenes conditioned on a text prompt. Our work suggests that diffusion models could be a simple and generic framework for dense vision tasks, and we hope to see more work in this direction.
#### Acknowledgements
We thank Ting Chen, Daniel Watson, Hugo Larochelle and the rest of the Brain team for feedback on this work. Thanks to Klaus Greff and Andrea Tagliasacchi for their help with the Kubric generator, and to Chitwan Saharia for help training the Palette model.
|
2308.08515 | Investigation of Magnesium Silicate as an Effective Gate Dielectric for
AlGaN/GaN Metal Oxide High Electron Mobility Transistors (MOSHEMT) | In this study, a 6 nm layer of Magnesium Silicate (Mg-Silicate) was deposited
on AlGaN/GaN heterostructure by sputtering of multiple stacks of MgO and
SiO$_{2}$, followed by rapid thermal annealing in a nitrogen (N$_{2}$)
environment. The X-ray photoelectron spectroscopy (XPS) analysis confirmed the
stoichiometric Mg-Silicate (MgSiO$_{3}$) after being annealed at a temperature
of 850 $^\circ$C for 70 seconds. Atomic force microscopy (AFM) was employed to
measure the root mean square (RMS) roughness (2.20 nm) of the Mg-Silicate. A
significant reduction in reverse leakage current, by a factor of three orders
of magnitude, was noted for the Mg-Silicate/AlGaN/GaN metal-oxide-semiconductor
(MOS) diode in comparison to the Schottky diode. The dielectric constant of
Mg-Silicate($\mathcal{E}_{Mg-Silicate}$) and the interface density of states
(D$_{it}$) with AlGaN were approximated at $\sim$ 6.6 and 2.0 $\times$
10$^{13}$ cm$^{-2}$eV$^{-1}$ respectively, utilizing capacitance-voltage (CV)
characteristics. | Seshasainadh Pudi, Navneet Bhardwaj, Ritam Sarkar, V S Santhosh N Varma Bellamkonda, Umang Singh, Anshul Jain, Swagata Bhunia, Soumyadip Chatterjee, Apurba Laha | 2023-08-16T17:10:41Z | http://arxiv.org/abs/2308.08515v1 | Investigation of Magnesium Silicate as an Effective Gate Dielectric for AlGaN/GaN Metal Oxide High Electron Mobility Transistors (MOSHEMT)
###### Abstract
In this study, a 6 nm layer of Magnesium Silicate (Mg-Silicate) was deposited on AlGaN/GaN heterostructure by sputtering of multiple stacks of MgO and SiO\({}_{2}\), followed by rapid thermal annealing in a nitrogen (N\({}_{2}\)) environment. The X-ray photoelectron spectroscopy (XPS) analysis confirmed the stoichiometric Mg-Silicate (MgSiO\({}_{3}\)) after being annealed at a temperature of 850\({}^{\circ}\)C for 70 seconds. Atomic force microscopy (AFM) was employed to measure the root mean square (RMS) roughness (2.20 nm) of the Mg-Silicate. A significant reduction in reverse leakage current, by a factor of three orders of magnitude, was noted for the Mg-Silicate/AlGaN/GaN metal-oxide-semiconductor (MOS) diode in comparison to the Schottky diode. The dielectric constant of Mg-Silicate (\(\xi_{\text{Mg-Silicate}}\)) and the interface density of states (D\({}_{\text{it}}\)) with AlGaN were approximated at \(\sim\)6.6 and 2.0 \(\times\) 10\({}^{13}\) cm\({}^{-2}\)eV\({}^{-1}\) respectively, utilizing capacitance-voltage (CV) characteristics.
GaN, AlGaN interface, metal-oxide-semiconductor (MOS)-Diode, Magnesium Silicate (Mg-Silicate).
## 1 Introduction
The field of high frequency and power electronics has significantly advanced over the past few decades, thanks to the development of GaN-based high electron mobility transistors (HEMTs). The unique features of GaN material, such as a high breakdown field, high saturation velocity,
high thermal conductivity, high electron mobility, and the formation of a two-dimensional electron gas (2DEG), make it a superior candidate for high frequency and high-power devices [1]. Consequently, AlGaN/GaN HEMTs have become the workhorse of high-power, high-frequency, and power-switching applications. Nevertheless, gate leakage issues in conventional AlGaN/GaN HEMTs tend to lower their performance considerably. To mitigate this, metal-insulator-semiconductor HEMTs (MIS-HEMTs) have been proposed and demonstrated by various research groups. Essential attributes of the insulator used for MIS-HEMTs include a high dielectric constant, a smooth interface with the AlGaN surface, and a high band gap. Numerous oxides and insulators have been explored as gate dielectrics for MIS-HEMTs, including SiO\({}_{2}\) (\(\xi_{\rm SiO2}\)=3.9) [2-3], Al\({}_{2}\)O\({}_{3}\) (\(\xi_{\rm Al2O3}\)=10) [6-10], HfO\({}_{2}\) (\(\xi_{\rm HfO2}\) = 20) [11-14], ZrO\({}_{2}\) (\(\xi_{\rm ZrO2}\)=23) [15-16], Ta\({}_{2}\)O\({}_{5}\) (\(\xi_{\rm Ta2O5}\) =11.8) [17-18], TiO\({}_{2}\) (\(\xi_{\rm TiO2}\) = 25) [19-22], Ga\({}_{2}\)O\({}_{3}\)[23], Gd\({}_{2}\)O\({}_{3}\)[24-26], Sc\({}_{2}\)O\({}_{3}\)[27], Nb\({}_{2}\)O\({}_{5}\)[28-29], and Si\({}_{3}\)N\({}_{4}\)[30]. Silicates are insulating materials that could also serve as suitable gate dielectrics. Magnesium silicate, with its high dielectric constant (\(\sim\)6.6) and wide band gap, has the potential to be used as a gate dielectric [31-32].
This study focuses on the formation of Mg-Silicate on an AlGaN/GaN heterostructure by annealing MgO/SiO\({}_{2}\) stacks at 850 \({}^{\circ}\)C, and presents an analysis of its physical, structural, and electrical properties. The formation of Mg-Silicate and its stoichiometry are confirmed by XPS surface analysis. The average surface roughness of the oxide films is determined by AFM analysis. The current-voltage (I-V) and capacitance-voltage (C-V) characteristics have been measured on the fabricated MIS diodes to assess the electrical properties of the Mg-Silicate. The \(\xi_{\rm ox}\) and interface trap density (D\({}_{\rm it}\)) have been estimated from the C-V analysis.
## 2 Experimental Details
### Formation of Magnesium Silicate (Mg-Silicate)
The AlGaN/GaN heterostructure, grown by the Plasma Assisted Molecular Beam Epitaxy (PAMBE) technique, consists of a 2 nm GaN cap layer, a 30 nm Al\({}_{0.28}\)Ga\({}_{0.72}\)N barrier layer, and a 0.5 nm AlN spacer layer. It also includes a 160 nm GaN buffer and a 1500 nm AlGaN transition layer grown over a 4H-SiC substrate. For the formation of magnesium silicate, a stack of MgO/SiO\({}_{2}\) (0.5 nm each) was deposited six times using the sputtering technique, followed by annealing at 850 \({}^{\circ}\)C in an N\({}_{2}\) ambient for 70 seconds. The physical
characterization of both annealed and un-annealed samples was carried out using XPS, and AFM.
### Device fabrication
The MOS diode fabrication process involved creating ohmic and Schottky contacts. Six layers of the MgO/SiO\({}_{2}\) stack (0.5 nm each) were blanket deposited on the heterostructure using a sputter tool and subsequently annealed at 850 \({}^{\circ}\)C in an N\({}_{2}\) environment for 70 seconds to form Mg-Silicate. Following this, lithographically defined patterns were used to etch the Mg-Silicate with BHF and open windows for the ohmic contacts. The ohmic contacts at the source and drain were formed by depositing a Ti/Au/Al/Ni/Au metal stack in an electron-beam evaporator under high vacuum, followed by annealing at 850 \({}^{\circ}\)C for 30 seconds in an N\({}_{2}\) environment [33].
The Ni/Au metal stack was deposited on the patterned Magnesium-silicate using an electron-beam evaporator. All the process steps are described in Fig. 2(a)-(f). The electrical characteristics were carried out using an Agilent B1500A semiconductor device analyser.
Figure 1: Cross-sectional schematic of (a) control sample (b) MOS diode on AlGaN/GaN heterostructure.
## 3 Results and discussion
### Physical characteristics of thermal oxide
The XPS analysis of Mg-Silicate is shown in Fig. 3. We have taken the C 1s peak at 284.8 eV as a reference for all the XPS data [see Figs. 3(a)-(f)].
Figure 3: (a)-(b) XPS data of Mg 2s peaks, (c)-(d) O 1s peaks, and (e)-(f) Si 2p peaks of the annealed and un-annealed samples (MgO/SiO\({}_{2}\)).
Both the samples (annealed and un-annealed MgO/SiO\({}_{2}\)) show Mg 2s peaks. The Mg 2s peaks for the annealed and un-annealed samples are located at 89.20 and 89.50 eV, respectively, as
Figure 2: (a)-(f) Process flow for the formation of Mg-Silicate in area selective regions.
shown in Figs. 3(a)-(b). The Mg-O O1s peak for the annealed (Mg-Silicate) sample is observed at 531.43 eV, as shown in Fig. 3(c), while the Mg-O O1s peak for the un-annealed (MgO/SiO\({}_{2}\)) sample is seen at 532.40 eV, depicted in Fig. 3(d). In the SiO\({}_{2}\) structure, the Si-O bond demonstrates an O1s peak range of 532.5 - 533.4 eV, and the Mg-O bond in MgO exhibits an O1s peak at 530.0 eV. The un-annealed sample (MgO/SiO\({}_{2}\)), due to its multiple stacks of MgO and SiO\({}_{2}\), displays a combined effect in the O1s peak at 532.40 eV. Meanwhile, in the annealed sample (Mg-silicate), the MgO and SiO\({}_{2}\) react at 850 \({}^{\circ}\)C to form Mg-Silicate, resulting in an O1s peak at 531.43 eV. The Si 2p peak range in silicates is between 102-103 eV. The Si 2p peak of the annealed sample appears at 102.60 eV, suggesting that the MgO/SiO\({}_{2}\) stack undergoes a reaction post-annealing to form Mg-silicate, as indicated in Figs. 3(e)-(f). The un-annealed sample, on the other hand, exhibits an Si 2p peak at 103.3 eV, attributable to the SiO\({}_{2}\) layer in the MgO/SiO\({}_{2}\) stack. The stoichiometry for the Mg-silicate is determined by integrating the corresponding peaks after normalization with atomic sensitivity factors (ASF).
The elemental atomic percentages of Mg-Silicate are calculated by normalizing the area under the Mg 2s, O 1s, and Si 2p peaks, accounting for the ASF. In Mg-silicate, the elemental atomic percentages of Mg, O, and Si are 16.80%, 62.60%, and 20.60%, respectively, as outlined in Table I. The ratio of Mg/Si/O in the Mg-Silicate sample stands at 1.0/1.20/3.70. The ideal stoichiometry of Mg silicate is MgSiO\({}_{3}\), but the obtained stoichiometry is MgSi\({}_{1.2}\)O\({}_{3.7}\). The root mean square (RMS) surface roughness of the annealed sample (Mg-silicate) and the control sample (GaN/AlGaN/GaN/SiC), as determined by AFM, is 2.2 nm and 2.7 nm, respectively, as depicted in Figs. 4(a)-(b). The RMS roughness of the Mg-silicate sample thus differs only slightly from that of the control sample, indicating that the annealing process does not significantly degrade the surface.
**Table I**
**Elemental Atomic Percentages in Mg-silicate**
\begin{tabular}{|c|c|c|c|} \hline _Elements_ & \(Mg\) & \(O\) & \(Si\) \\ \hline Percentage (\%) & 16.80 & 62.60 & 20.60 \\ \hline Ratio & 1.00 & 3.70 & 1.20 \\ \hline \end{tabular}
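The normalisation behind Table I can be written out explicitly: each peak area is divided by its atomic sensitivity factor before converting to percentages. The following minimal sketch uses illustrative peak areas and ASF values chosen so the result reproduces the reported composition; they are not the measured data.

```python
# Elemental atomic percentages from XPS peak areas normalised by atomic
# sensitivity factors (ASF):  at% of element i = (A_i/S_i) / sum_j (A_j/S_j).
# The peak areas and ASF values below are illustrative placeholders chosen
# to reproduce the composition reported in Table I; they are NOT measured data.
peaks = {
    # element: (integrated peak area [arb. units], atomic sensitivity factor)
    "Mg": (672.0, 0.40),    # Mg 2s
    "O": (4131.6, 0.66),    # O 1s
    "Si": (556.2, 0.27),    # Si 2p
}

# Normalise each peak area by its ASF, then convert to atomic percentages.
normalised = {el: area / asf for el, (area, asf) in peaks.items()}
total = sum(normalised.values())
atomic_pct = {el: 100.0 * n / total for el, n in normalised.items()}

# Ratios relative to Mg, as listed in Table I (Mg : O : Si = 1.0 : 3.7 : 1.2)
ratios = {el: normalised[el] / normalised["Mg"] for el in normalised}
```

The same normalisation applies regardless of which core levels are chosen, as long as each area is paired with the ASF of that specific transition.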
### Electrical characteristics
The capacitance-voltage (C-V) and current-voltage (I-V) measurements were performed on large-area Schottky diodes (140 \(\mu\)m) to examine the gate characteristics of the MOS-HEMTs and evaluate the quality of the silicate as a dielectric. In the MOS-diode sample, the AlGaN and Mg-silicate capacitances are in series. Fig. 5 illustrates the C-V characteristics of both control and Mg-silicate samples at a frequency of 1 MHz. The dielectric constant of Mg-silicate (\(\varepsilon_{\rm Mg-silicate}\)), approximately 6.6, was determined from the inversion region capacitance using the formulas \(\frac{1}{C_{T}}=\frac{1}{C_{control}}+\frac{1}{C_{Mg-silicate}}\) and \(C_{Mg-silicate}=\frac{\varepsilon_{Mg-silicate}\times\varepsilon_{0}}{d}\times A\), where C\({}_{\rm T}\), C\({}_{\rm control}\), C\({}_{\rm Mg-silicate}\), \(\varepsilon_{0}\), A, and d are the total capacitance (0 V), control sample capacitance (0 V), Mg-silicate capacitance, permittivity of free space, area of the Schottky contact of the MOS diode, and thickness of the Mg-silicate, respectively. The control sample capacitance (C\({}_{\rm control}\)) and total capacitance of the Mg-silicate/AlGaN heterostructure (C\({}_{\rm T}\)) at an applied bias of 0 V were observed to be 318 and 240 nF/cm\({}^{2}\), respectively, as shown in Fig. 5.
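The quoted dielectric constant can be reproduced from the series-capacitance relation and the two 0 V capacitances. The sketch below assumes the silicate thickness equals the nominal 6 nm of the as-deposited stack (six 0.5 nm/0.5 nm bilayers); the thickness of the reacted silicate is not stated explicitly, so treat this only as a consistency check.

```python
# Extracting the Mg-silicate dielectric constant from the 0 V capacitances.
# ASSUMPTION: silicate thickness = 6 nm (the nominal as-deposited stack of
# six (0.5 nm MgO + 0.5 nm SiO2) bilayers).
EPS0 = 8.854e-14            # permittivity of free space [F/cm]

c_total = 240e-9            # measured total areal capacitance at 0 V [F/cm^2]
c_control = 318e-9          # control-sample capacitance at 0 V [F/cm^2]
d_ox = 6e-7                 # assumed silicate thickness: 6 nm = 6e-7 cm

# Series combination: 1/C_T = 1/C_control + 1/C_ox  ->  solve for C_ox
c_ox = 1.0 / (1.0 / c_total - 1.0 / c_control)

# Parallel-plate relation per unit area: C_ox = eps * eps0 / d
eps_silicate = c_ox * d_ox / EPS0
```

With these inputs the extracted silicate capacitance is roughly 980 nF/cm\({}^{2}\) and the dielectric constant comes out near 6.6, matching the value reported from the C-V analysis.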
Figure 4: AFM images of (a) control sample (GaN/AlGaN/SiC), (b) Mg-silicate sample.
The C-V curves of the Mg-silicate sample exhibit two sharp changes in slope as the voltage is swept. The first rise relates to the formation of the 2DEG at the AlGaN/GaN interface, and the second rise relates to electron accumulation at the Mg-silicate/AlGaN interface, as shown in Fig. 6(a).
Figure 6: (a) Frequency dependent _C–V_ characteristics of Mg-silicate sample, (b) Dit calculation of Mg-silicate with AlGaN from _C-V_.
The interface trap density (\(\rm{D_{it}}\)) has been calculated from the C-V dispersion utilizing the equation \(D_{it}=\frac{C_{Mg-silicate}}{q\times\Delta E}\times\Delta V\) [34-35], where \(\Delta\)V represents the voltage difference between the frequencies (f\({}_{1}\) and f\({}_{2}\)) for a fixed capacitance, corresponding to an interface charge density (\(\rm{Q_{it}}\)) between energies \(\rm{E_{1}}\) and \(\rm{E_{2}}\). The frequency dispersion at the second rise of the C-V curve is illustrated in Fig. 6(b). The calculated \(\rm{D_{it}}\) for the Mg-silicate is approximately 2.0 \(\times 10^{13}\) cm\({}^{-2}\)eV\({}^{-1}\). A frequency dispersion is also observed in the first rise of the C-V plot, potentially attributable to charge in the Mg-silicate shifting the C-V curve to the right as the frequency increases. Fig. 7 presents the I-V characteristics of the MOS diode compared to the control sample. A decrease in reverse leakage current from 8.7 \(\times\) 10\({}^{-3}\) A/cm\({}^{2}\) in the control sample to 1.6 \(\times\) 10\({}^{-6}\) A/cm\({}^{2}\) in Mg-silicate-based MOS diodes at -7 V is observed, as displayed in Fig. 7.
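The \(\rm{D_{it}}\) expression can be evaluated as follows. The silicate capacitance is derived from the series relation with the quoted 0 V capacitances (318 and 240 nF/cm\({}^{2}\)); the \(\Delta\)V and \(\Delta\)E values are hypothetical placeholders chosen only to land near the reported order of magnitude, since the measured dispersion values are not quoted in the text.

```python
# D_it = C_silicate / (q * dE) * dV   [cm^-2 eV^-1]
# delta_v and delta_e below are HYPOTHETICAL placeholders, not measured values.
Q_E = 1.602e-19                                    # elementary charge [C]

# Silicate areal capacitance from the series relation and the quoted
# 0 V capacitances (240 and 318 nF/cm^2):
c_silicate = 1.0 / (1.0 / 240e-9 - 1.0 / 318e-9)   # ~9.8e-7 F/cm^2

delta_v = 0.33   # hypothetical C-V voltage shift between f1 and f2 [V]
delta_e = 0.10   # hypothetical trap energy window E2 - E1 [eV]

d_it = c_silicate / (Q_E * delta_e) * delta_v
```

With these placeholder dispersion values the formula yields a trap density on the order of \(10^{13}\) cm\({}^{-2}\)eV\({}^{-1}\), the same order as the reported result.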
## 4 Conclusion
Figure 7: I-V characteristics for the Mg-silicate and control samples.
This study successfully demonstrates the fabrication of high-quality Mg-silicate through the annealing of MgO/SiO\({}_{2}\) stacks, characterized by a surface roughness of 2.2 nm and a dielectric constant of approximately 6.6. A notable three-orders-of-magnitude reduction in the reverse leakage current was observed in the Mg-silicate-based MOS diode, as compared to the Schottky diode (control sample). An interface trap density of approximately \(2.0\times 10^{13}\,\mathrm{cm}^{-2}\mathrm{eV}^{-1}\) was estimated. The results suggest the promising potential of Mg-silicate as an alternative gate dielectric for high-performance AlGaN/GaN MOS-HEMTs.
## 5 Acknowledgements
The authors would like to thank the Ministry of Electronics and Information Technology (MeitY), Govt. of India (Project: NNetRA) for financial support and the IIT Bombay Nanofabrication facility (IITBNF) for providing fabrication and characterization facilities.
# Low Cross-Talk Optical Addressing of Trapped-Ion Qubits Using a Novel Integrated Photonic Chip

A. S. Sotirova, B. Sun, J. D. Leppard, A. Wang, M. Wang, A. Vazquez-Brennan, D. P. Nadlinger, S. Moser, A. Jesacher, C. He, F. Pokorny, M. J. Booth, C. J. Ballance

arXiv:2310.13419v1 (2023-10-20), http://arxiv.org/abs/2310.13419v1
###### Abstract
Individual optical addressing in chains of trapped atomic ions requires generation of many small, closely spaced beams with low cross-talk. Furthermore, implementing parallel operations necessitates phase, frequency, and amplitude control of each individual beam. Here we present a scalable method for achieving all of these capabilities using a novel integrated photonic chip coupled to a network of optical fibre components. The chip design results in very low cross-talk between neighbouring channels even at the micrometre-scale spacing by implementing a very high refractive index contrast between the channel core and cladding. Furthermore, the photonic chip manufacturing procedure is highly flexible, allowing for the creation of devices with an arbitrary number of channels as well as non-uniform channel spacing at the chip output. We present the system used to integrate the chip within our ion trap apparatus and characterise the performance of the full individual addressing setup using a single trapped ion as a light-field sensor. Our measurements showed intensity cross-talk below \(10^{-3}\) across the chip, with minimum observed cross-talk as low as \(\mathcal{O}\left(10^{-5}\right)\).
## Introduction
Since their original proposal as a platform for quantum information processing [1], trapped ions have emerged as one of the leading contenders for building a useful quantum computer. To date, small-scale trapped-ion systems have demonstrated the highest single- and two-qubit gate fidelities [2, 3, 4, 5], longest coherence times [6], and lowest state preparation and measurement errors [7] of any quantum computing platform. For realising a functional large-scale quantum computer, this level of control needs to be extended to a large number of qubits. This includes the implementation of all necessary quantum operations on the qubit register with very high fidelity as well as the execution of targeted (addressed) operations on specific subsets of qubits within the register with minimal effect on the unused (idle) qubits in the computation.
Individual addressing in most types of ion trap quantum computing architectures requires locally modifying the interaction between the ions and the radiation used to perform operations with very low cross-talk, where the natural inter-ion spacing is a few micrometres [8, 9]. This can be done by altering the properties of the magnetic and/or electric fields experienced by each of the ions to change their response to a globally applied field [10, 11, 12, 13, 14, 15], by focusing down the radiation used to perform quantum gates on each ion [16, 17, 18, 19, 20, 21, 22] as shown in fig. 1, or by using a combination of the two approaches. The former approach works with both laser and microwave radiation but requires intricate local control of electric and magnetic fields. Hence it is only compatible with microfabricated traps with numerous electrodes. The latter approach is only viable when optical wavelengths are used for operations, either when the qubit transition is driven with a two-photon Raman process or the qubit transition itself corresponds to an optical wavelength. It requires focusing down laser beams to micrometre scales, therefore working close to the diffraction limit. However, this method is applicable to all ion trap architectures.
Several methods have been developed for achieving individual optical addressing in ion trap systems, including micro-mirror beam steering [16, 17, 18], acousto-optic deflectors [19], multi-channel acousto-optic modulators [20] (AOMs), and micro-lens arrays [21]. These methods vary in cross-talk performance and scalability. Using micro-mirror arrays to steer the lasers onto the target ions offers individual addressing with very low cross-talk. However, the time required to reconfigure the beam positions or to tune the amplitude of the beams is comparable to the timescale of the gate operations, therefore accounting for a significant amount of the sequence run time. Acousto-optic deflectors offer similarly low cross-talk but also enable fast intensity control of the individual beams. However, they lack individual beam frequency control, therefore limiting the set of unitaries that can be implemented
in parallel. Furthermore, only a few ions can be addressed at any one time due to power losses in higher diffraction orders. Multi-channel AOMs solve the problem of fast parallel control. However, they exhibit an order of magnitude larger cross-talk compared to the previous two approaches due to electronic cross-talk between channels in existing devices. Additionally, they are only manufactured with a fixed and equal inter-channel spacing, and readily available with only up to 32 channels, limiting both scalability and ability to address non-evenly spaced chains. The micro-lens array approach solves the majority of the problems listed above by enabling parallel operations with very low cross-talk. However, the system presented in ref. [21] has a very high insertion loss limiting the obtainable light intensity at the ion, and a fixed, uniformly spaced output pattern that cannot be easily adapted to non-uniform ion crystals.
In this work, we demonstrate a novel approach to individual optical addressing in trapped-ion chains with minimal cross-talk using a network of fibre-coupled modulators connected to a high-performance photonic chip. We employ spherical phase-induced multiscan waveguides [25] (SPIM-WGs) that provide precise control over the optical mode and enable much higher refractive index (RI) contrast modifications in optical glass compared to conventional ultrafast laser-written waveguides. A similar approach using laser-written waveguide devices has been reported [22], however matching the output of these waveguides to the ion chain while maintaining a good spot quality and low cross-talk is yet to be demonstrated.
The photonic chip that we present here adopts individual adiabatic mode converters as light guiding channels that exhibit excellent optical mode confinement and low cross-talk even at a channel separation of a few micrometres. This ensures that the errors due to nearest-neighbour cross-talk do not limit the performance of the trapped-ion device. Furthermore, the manufacturing process of the SPIM-WGs offers high flexibility, facilitating easy modification of the number of channels, the channel positions, and the mode shapes in the chip design, therefore making the chip easily adaptable to the ion configurations found in most ion trap experiments. The use of a fibre network for light delivery to the photonic chip further simplifies the process of exchanging devices by minimising the necessary realignment, thus allowing for rapid iteration of the system design.
## Results
### Photonic chip design
Performing laser-driven targeted operations in long chains of trapped ions imposes several competing requirements on the individual addressing setup. First, to minimise errors on the idle qubits, it is crucial to minimise the cross-coupling between neighbouring ion sites. Second, the spacing between neighbouring channels must match the ion-ion spacing, typically on the order of a few micrometres [8, 9]. This requires generating a series of closely spaced beams, each focused to an \(\mathcal{O}\left(1\,\upmu\mathrm{m}\right)\) waist radius as shown in fig. 1(b). As a result, this approach necessitates focusing the beams near the diffraction limit. Working with tightly focused beams
Figure 1: Individual optical addressing requirements in typical ion trap experiments. (a) A linear chain of \({}^{137}\mathrm{Ba}^{+}\) ions confined in a 3D segmented trap [23, 24]. The ions in this image are uniformly spaced with a mean ion-ion separation of \(9.6(3)\,\upmu\mathrm{m}\). The image was taken using an sCMOS camera detecting light scattered from the ions. (b) Requirements for individual optical addressing in chains of trapped ions. The spacing between the individual laser beams must match the ion separation, typically \(4-10\,\upmu\mathrm{m}\) [8, 9]. The spot size needs to be small enough to minimise the intensity at the neighbouring ions, but larger than the diffraction limit for the laser wavelength in use.
can also increase errors in the quantum operations due to intensity, phase, and/or polarisation modulation of the light at the ion position. This modulation can be caused by mechanical drifts in the optical system or by the secular motion of the ions [26, 27]. Hence it is desirable to make the beam waist radius as large as possible. The requirements on the beam spacing and beam waist radius constrain the waist-to-spacing ratio of the device output. A larger ratio makes integration into the trapped-ion system easier and more robust to errors but also increases the cross-talk within the device. Furthermore, to ensure that the required input laser power scales favourably with the qubit register size, it is important to maintain a high throughput efficiency of the system. Finally, the ability to control the intensity, phase, and frequency of individual channels in parallel enables the simultaneous application of different unitary operations on different target qubits, therefore reducing the algorithm run time.
To address these challenges, we developed a novel photonic chip connected to a network of optical fibre components. In our system the source laser is coupled into a series of fibre splitters, such that the light is split up into the required number of channels. Each channel is then connected to a fibre AOM that allows for individual phase, frequency, and amplitude control, and for switching of each beam. The fibre AOMs are connected to a fibre V-groove array (VGA) whose output is an array of fibre cores with a mode field diameter (MFD) of 3.5 \(\upmu\)m and a uniform spacing of 127 \(\upmu\)m. The VGA is then coupled to the photonic chip as shown in fig. 2 and fixed in place with glue (details in "Materials and methods") to avoid misalignment during operation.
The photonic chip performance benefited from several advanced design and fabrication techniques, as detailed in the following sections. It was designed to convert the input optical modes and spacing of the VGA channels into closely spaced (8 \(\upmu\)m) modes suitable for trapped-ion addressing. Each channel incorporated a high-efficiency adiabatic mode converter to transform the optical
Figure 2: A diagram of the photonic chip design. The input light from the V-groove array (VGA), whose output channels are spaced by 127 \(\upmu\)m, is coupled into the photonic chip as shown in the top left. Shown in the middle is a 3D representation of a photonic chip featuring eight channels. Each channel serves as a high-efficiency adiabatic mode converter. The optical modes are precisely engineered to satisfy the trapped-ion addressing requirements at the output of the photonic chip, where the channel separation is brought down to 8 \(\upmu\)m. The grayscale microscopic images of the channel structures were acquired from the top of the photonic chip. The scale bar is 50 \(\upmu\)m. The dark regions at the chip facets were due to reduced microscopic illumination. The device has two straight regions, input (2.2 mm) and output (0.2 mm), which are connected by a curved region (7.2 mm). The adiabatic mode conversion is performed over the 2.2 mm at the channel input and the corresponding region is highlighted at top right of the diagram.
mode from the VGA, equivalent to a single-mode fibre (SMF), into the smaller optical mode required to address the ions. The adiabatic mode conversion was implemented over a distance of \(2.2\,\mathrm{mm}\), starting from the chip's input facet. Subsequently, curved routing of the channels was used to reduce the channel spacing, while maintaining their cross-sectional shape at the output facet of the chip. The microscopic images in fig. 2 obtained from the top of the photonic chip, capturing the input facet, bending region, and output facet, illustrate the progression of the channel spacing along the chip. Even though the channels were closely stacked with \(8\,\upmu\mathrm{m}\) spacing over a distance of \(0.2\,\mathrm{mm}\), we were able to maintain a low nearest-neighbour cross-talk at a level of \(\mathcal{O}\left(10^{-4}\right)\) as demonstrated in the following sections.
### High-contrast refractive index modification
As outlined in the previous section, individual optical addressing of trapped ions requires the generation of closely spaced beams with maximum waist-to-spacing ratio and minimum cross-talk. That way, the errors due to unwanted operations on the idle qubits are minimised, while maximising the robustness of the system. This translates to the need for a very high level of light confinement in the channels of the photonic chip. Existing methods of producing micro-waveguides in photonic chips require a very small waist-to-spacing ratio to satisfy the low cross-talk requirement. An ideal fabrication method would allow an increase in waveguide size while maintaining single-mode operation and low cross-talk. The key to this is a stronger confinement of the mode by increasing the RI contrast between the core and the cladding. This level of RI control has not yet been possible with conventional ultrafast laser writing, where high RI contrast is typically accompanied by poor control of the mode shape.
In order to address this challenge, in this work we fabricated SPIM-WGs using a multiscan scheme to precisely control the shape and the size of the channel cross-section, enabling single-mode operation and a large core diameter. To achieve high-precision RI modification, the SPIM-WGs were fabricated with combined spherical aberrations of first-order Zernike mode 11 and third-order Zernike mode 37 introduced into the focused laser [28], with root mean square (RMS) amplitudes of \(-1\,\mathrm{rad}\) and \(-0.3\,\mathrm{rad}\), respectively. The SPIM-WG design incorporated four scans with a core separation of \(0.4\,\upmu\mathrm{m}\) forming a single light guiding channel. The RI contrast measured using high-resolution quantitative phase microscopy was approximately \(0.015\) (with a core RI of \(1.525\) and a cladding RI of \(1.51\)), which is two to three times higher than for a waveguide created by conventional ultrafast laser writing [25]. The horizontal core size of one complete SPIM channel was measured to be about \(1.8\,\upmu\mathrm{m}\), which was approximately the maximum diameter that maintained single-mode operation with an RI contrast of \(0.015\) at \(532\,\mathrm{nm}\) [29]. Figure 3(a) presents a comparison between LED microscopic images and \(532\,\mathrm{nm}\) laser mode profiles for a conventional laser-written waveguide, a single-scan SPIM-WG, and a four-scan SPIM-WG. Figure 3(b) illustrates the scanning scheme and the approximate RI profiles. Additionally, fig. 3(c) shows COMSOL-simulated mode profiles for a conventional laser-written waveguide and a four-scan SPIM-WG. We observed a slight mode elongation along the vertical direction. However, this does not affect the mode quality or the cross-talk in the direction parallel to the ion chain. In fact, it is advantageous for optical addressing of trapped ions, as it improves the robustness of the system to intensity modulation in that direction.
We observed a bright lobe above the four-scan SPIM-WG as shown in fig. 3(a). Such lobes were significantly weaker in the single-scan SPIM-WG. Close examination showed that these lobes possessed a low RI contrast and exhibited high guiding losses. Notably, there was a negative relative RI modification between the main lobe and the upper lobe. In this application, the main lobe was used as the light guiding channel, and we observed no adverse effects arising from the presence of the upper lobe. The strong confinement to the main guiding channel could be attributed to the presence of the region with negative RI contrast between the two lobes, as well as the high transmission loss associated with the upper lobe.
To illustrate the light confinement capabilities, we performed high dynamic range measurements (HDRMs) of the guided laser modes at \(532\,\mathrm{nm}\). Figure 3(d) illustrates measured and simulated HDRMs for a single channel. We observed a close agreement between the simulations and the experiments. A clear difference in the degree of mode confinement was observed between the conventional laser-written waveguide and the SPIM-WG. At distances of \(2-8\,\upmu\mathrm{m}\) from the centre, the intensity from the SPIM-WG was around an order of magnitude lower compared to that of the conventional laser-written waveguides we tested.
Eight-channel chips were designed according to the specifications outlined in the previous section and illustrated in fig. 2. In fig. 4(a) we show LED microscopic images of the output chip facet. The outer channels appear dimmer than the central channels due to the lower LED intensity away from the centre. We conducted measurements to assess the overall loss (including coupling and propagation losses) across all waveguide channels and found the bending losses in all channels to be negligible, owing to the high RI contrast resulting in a strong mode confinement.
Channel cross-talk was assessed by coupling a \(532\,\mathrm{nm}\) laser through a single-mode fibre to one central channel at the chip input facet, then observing the full 8-channel intensity distribution at the chip output. High dynamic range measurements were compiled from a sequence of images taken with different calibrated neutral density filters. As shown in fig. 4(a), the SPIM-WGs showed far better confinement of the laser light to the vicinity of the channel compared to conventional laser-written waveguides. The light intensity at the position of the neighbouring channels, i.e. \(8\,\upmu\mathrm{m}\) away from the channel, was one to two orders of magnitude lower for the SPIM-WGs. Further details are provided in fig. 4(b). The red curve, corresponding to conventional laser-written waveguides, clearly shows coupling into the neighbouring channels, evident from the multiple high light intensity peaks at \(8\,\upmu\mathrm{m}\)
intervals corresponding to the channel separation. In contrast, the SPIM-WGs exhibited minimal coupling to adjacent channels. We measured nearest-neighbour cross-talk of \(\approx 3\times 10^{-2}\) for the conventional laser-written waveguides and \(\approx 5\times 10^{-4}\) for the SPIM-WG channels. These measurements were repeated for multiple different chips, all of which showed consistent results, confirming the high reliability of the chip fabrication.
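As a rough illustration of how a nearest-neighbour cross-talk figure is extracted from such profiles, the sketch below builds a synthetic 1-D intensity trace (a Gaussian main mode plus a weak residual peak at the 8 \(\upmu\)m neighbour position) and takes the ratio of the peak intensities. The waist and the \(5\times 10^{-4}\) residual level are assumptions chosen to mimic the reported order of magnitude, not the measured data.

```python
import math

SPACING_UM = 8.0     # channel separation at the chip output
W0_UM = 1.0          # illustrative 1/e^2 waist radius, not the measured mode size

def profile(x_um, residual=5e-4):
    """Synthetic HDRM trace: main Gaussian mode plus a weak neighbour peak."""
    main = math.exp(-2.0 * (x_um / W0_UM) ** 2)
    neighbour = residual * math.exp(-2.0 * ((x_um - SPACING_UM) / W0_UM) ** 2)
    return main + neighbour

xs = [-12.0 + 0.01 * i for i in range(3201)]   # scan from -12 um to +20 um
ys = [profile(x) for x in xs]

def peak_near(centre_um, window_um=2.0):
    """Peak intensity within a window around a nominal channel position."""
    return max(y for x, y in zip(xs, ys) if abs(x - centre_um) <= window_um)

# Cross-talk metric: neighbour-channel peak relative to the excited channel
crosstalk = peak_near(SPACING_UM) / peak_near(0.0)
```

The same peak-ratio metric applies directly to measured HDRM traces once the channel positions are known.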
### Advanced mode matching and adiabatic mode conversion
To ensure that the required laser input power to the addressing setup scales favourably with the number of qubits, it is crucial to maintain a high throughput efficiency within the chip-based system. This has been a problem in previous single-ion addressing implementations, where losses of more than 10 dB have been observed due to poor coupling between system components [21]. The photonic chip must therefore implement high-efficiency coupling between the set of SMFs delivering the light, held in a VGA, to the input of the ion-trap lens system. This can only be achieved through effective mode matching between the components and adiabatic conversion between the modes. Figure 5(a) shows calculated maximum diameters for the waveguide core and guiding mode to maintain single-mode operation for different RI contrasts. Commercial SMFs typically have an MFD of \(3.5-3.7\,\upmu\mathrm{m}\) at a
Figure 3: Optical modes at the photonic chip output. (a) Comparison of waveguide fabrication techniques: conventional ultrafast laser-written waveguide (left), single-scan SPIM waveguide (middle), and multiscan SPIM waveguide (right). Top: varying optical phases applied to the ultrafast laser system for waveguide inscription. Middle: microscope images of the waveguide facets at the chip output under broadband LED illumination. Bottom: 532 nm laser mode profiles of the waveguides at the chip output (intensity normalised individually). (b) Scanning scheme and approximate RI profile for a single SPIM-WG channel at the chip output. (c) COMSOL-simulated mode profiles for a conventional laser-written waveguide and for the designed SPIM-WG output (intensity normalised individually). (d) HDRMs and COMSOL-simulated optical modes for a conventional laser-written waveguide and for the designed SPIM-WG.
wavelength of 532 nm, while the SPIM-WG channel has a maximum single-mode MFD of 1.9 \(\upmu\)m along the horizontal direction, owing to the high RI contrast of 0.015. Significant coupling losses would arise if the larger SMF modes are coupled directly to the smaller, high RI SPIM-WG channel modes.
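The single-mode limits quoted above can be cross-checked with the standard step-index V-number criterion, \(V = \pi d\,\mathrm{NA}/\lambda < 2.405\). This analytic sketch is only an approximation to the COMSOL curve of fig. 5(a), using the refractive indices quoted in the text and the fig. 5 caption.

```python
import math

# Step-index V-number check: V = pi * d * NA / lambda must stay below 2.405
# for single-mode guiding. Analytic approximation to the COMSOL result.
V_CUTOFF = 2.405
WAVELENGTH_M = 532e-9

def max_single_mode_diameter_um(n_core, n_clad, wavelength_m=WAVELENGTH_M):
    """Largest core diameter (in um) that keeps V below the cutoff."""
    na = math.sqrt(n_core**2 - n_clad**2)         # numerical aperture
    return V_CUTOFF * wavelength_m / (math.pi * na) * 1e6

# SPIM-WG channel: core RI 1.525 on Eagle-glass cladding RI 1.51
d_spim = max_single_mode_diameter_um(1.525, 1.51)
# Typical commercial SMF indices quoted in the fig. 5 caption
d_smf = max_single_mode_diameter_um(1.4607, 1.455)
```

This reproduces the \(\approx 1.9\,\upmu\)m single-mode limit of the high-contrast channel; for the SMF the cutoff core diameter comes out near \(3.2\,\upmu\)m, with the MFD somewhat larger than the core, consistent with the \(3.5-3.7\,\upmu\)m MFDs of commercial fibres.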
An additional concern relates to position uncertainties in the VGA. The commercial VGAs we used were designed to have a core separation of 127 \(\upmu\)m, but exhibited variable position offsets between the fibre cores of up to 0.7 \(\upmu\)m along the \(x\)-direction (parallel to the linear fibre core array) and up to 0.3 \(\upmu\)m along the \(y\)-direction (orthogonal to the linear fibre core array). These offsets significantly reduced the mode overlaps between the VGA and the photonic chip and increased the coupling losses.
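The impact of both mode mismatch and VGA core offsets can be estimated with the standard overlap formula for two fundamental Gaussian modes. This is a 1-D illustrative estimate, not the full 2-D overlap that the chip design optimises; the \(3.5\,\upmu\)m fibre MFD, the \(1.9\,\upmu\)m channel MFD, and the \(0.7\,\upmu\)m worst-case offset are the values quoted in the text.

```python
import math

# Power coupling between two offset fundamental Gaussian modes:
#   eta = (2 w1 w2 / (w1^2 + w2^2))^2 * exp(-2 d^2 / (w1^2 + w2^2))
# A 1-D illustrative estimate, not the full 2-D overlap integral.
def gaussian_coupling(mfd1_um, mfd2_um, offset_um=0.0):
    w1, w2 = mfd1_um / 2.0, mfd2_um / 2.0         # 1/e^2 waist radii
    s = w1**2 + w2**2
    return (2.0 * w1 * w2 / s) ** 2 * math.exp(-2.0 * offset_um**2 / s)

# Direct butt-coupling of a 3.5 um MFD fibre mode to a 1.9 um channel mode
eta_mismatch = gaussian_coupling(3.5, 1.9)
# Additional penalty at the worst-case 0.7 um VGA core offset
eta_offset = gaussian_coupling(3.5, 1.9, offset_um=0.7)
```

With these inputs, perfectly aligned butt-coupling would already lose roughly 30% of the power, and the worst-case offset pushes the loss towards 45%, in line with the sub-60% efficiency observed before mode-matched input channels were introduced.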
We developed a novel design for advanced mode matching that effectively mitigates the impact of the VGA channel position variability. The design, presented in fig. 5(b), was built upon the central region of the four scans discussed earlier [fig. 3(b)]. We introduced two additional scans on either side of the 4-scan SPIM-WG in order to increase the lateral size of the waveguide mode. The spacing of these additional scans was 1.5 times larger than that of the four central scans. As shown in fig. 5(b), this design considerably extended the lateral size of the dominant mode while effectively suppressing higher-order modes (e.g., the TE01 mode). Figure 5(d) includes broadband-light-illuminated microscope images of the channel input and output cross-sections, further highlighting the difference between RI modifications.
Figure 4: Light confinement properties of the SPIM-WG channels. (a) LED-illuminated microscope images and laser mode profiles for two 8-channel chips fabricated using conventional laser-written (left) and the SPIM-WG (right) methods. One SMF was coupled to the fourth channel (left to right) of the chip’s input facet. To show effects over a high dynamic range, three laser intensities (\(1\times I_{0}\), \(100\times I_{0}\), and \(2000\times I_{0}\), adjusted using neutral density filters) were applied. (b) HDRMs for two 8-channel chips, each fabricated using conventional laser-written or the SPIM-WG methods. The red arrows mark the positions of the remaining channels.
Figure 5(c) presents the simulated guiding modes for the SMF, the output channel (as described in the previous section), and the input channel (dominant mode only). The input channel mode was nearly identical to that of the SMF, but considerably larger than that of the output channel. The experimentally measured mode profiles, summarised in fig. 5(d), agreed well with the simulations. The measured mode for the input channel in fig. 5(d) appeared slightly larger than the simulated dominant mode in fig. 5(c) because the measured mode profile contained a superposition of multiple modes, while the simulation included only the dominant mode. Loss measurements indicated that this specialised design enhanced the coupling efficiency from less than 60% to approximately 80% and significantly improved the mode uniformity across the channels.
To address the disparity in cross-section between the channel input mode and the required output mode, we incorporated adiabatic mode conversion by changing the waveguide properties along the chip. Starting from the input facet, the RI profile was gradually changed over a total length of 2.2 mm, transitioning from the design illustrated in fig. 5(b) to the design described in fig. 3(b). The RI profile from fig. 3(b) was then maintained throughout the bending region until the chip output. A 3D render of the adiabatic mode conversion is presented in the "Materials and methods" section. The mode conversion efficiency was investigated
Figure 5: Enhancing chip coupling efficiency through advanced mode matching and adiabatic mode conversion. (a) Calculated maximum waveguide core diameter to maintain single-mode guiding at 532 nm as a function of the RI contrast (core RI minus cladding RI). The maximum MFDs were determined from a COMSOL simulation. The cladding RI was 1.51 for borosilicate Eagle glass at 532 nm. Commercial SMFs typically have a cladding RI of 1.455 and a core RI of 1.4607. (b) Scanning scheme, approximate RI profile, and COMSOL-simulated guiding modes for a single SPIM-WG channel at the chip’s input facet. (c) Top: COMSOL-simulated laser guiding modes for an SMF, the output channel, and the input channel of the chip. Bottom: the dominant mode of the chip’s input channel compared to the SMF mode and the chip’s output mode. (d) Top: LED-illuminated microscope images of the chip’s output and input channels. Middle: experimentally measured 532 nm laser guiding modes for an SMF and for the SPIM channels. Bottom: intensity plots of the measured modes.
through measurements of the losses of straight SPIM-WG channels with and without adiabatic mode conversion. Negligible differences in loss were found between the two designs, proving the practical effectiveness of adiabatic mode conversion using SPIM-WGs.
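The text only specifies that the RI profile is changed "gradually" over the 2.2 mm conversion region. As one concrete, purely illustrative parameterisation (the linear ramp and the function names are our assumptions, not the fabricated recipe), the outer-scan spacing of the input design could be tapered as:

```python
def outer_scan_spacing_um(z_mm, d_um, taper_len_mm=2.2, start_factor=1.5):
    # Outer scans start at 1.5x the central-scan spacing d at the input
    # facet and ramp linearly down to d by the end of the taper, after
    # which the output-design profile is held constant.
    z = min(max(z_mm, 0.0), taper_len_mm)
    factor = start_factor + (1.0 - start_factor) * (z / taper_len_mm)
    return factor * d_um
```

Adiabaticity is then a matter of the 2.2 mm length being long compared with the beat length between the local modes, which the loss measurements above confirm in practice.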
### Integration with a trapped-ion quantum system
In our experiment we use trapped \({}^{137}\)Ba\({}^{+}\) ions as qubits. The ions are confined in a 3D monolithic microfabricated trap [23, 24] that allows for generation of deep confining potentials while maintaining low heating rates, which is crucial for storing long ion crystals. The segmented electrode structure provides sufficient degrees of freedom for both ion shuttling across the trap and generation of anharmonic potential shapes. The latter is particularly important for maintaining a uniform ion-ion spacing in large registers [30], therefore increasing the minimum distance between neighbouring ions compared to harmonically spaced chains [8]. As explained in the previous sections, a larger ion-ion spacing reduces the cross-talk between neighbouring channels in the addressing setup.
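The benefit of anharmonic axial potentials for spacing uniformity can be illustrated with a minimal dimensionless model (our own toy model, not the trap's actual potential): point charges relaxed by gradient descent in a purely harmonic versus a purely quartic well.

```python
import math

def equilibrium_positions(n_ions, quartic, lr=0.002, steps=20000):
    # Dimensionless 1D model: trap potential (x^2/2 or x^4/4 per ion)
    # plus pairwise Coulomb repulsion 1/|xi - xj|, relaxed by
    # gradient descent on the total potential energy.
    x = [i - (n_ions - 1) / 2 for i in range(n_ions)]
    for _ in range(steps):
        for i in range(n_ions):
            g = x[i] ** 3 if quartic else x[i]         # trap gradient
            for j in range(n_ions):
                if j != i:
                    d = x[i] - x[j]
                    g -= math.copysign(1.0, d) / d**2  # Coulomb repulsion
            x[i] -= lr * g
    return sorted(x)

def relative_spacing_spread(x):
    gaps = [b - a for a, b in zip(x, x[1:])]
    return (max(gaps) - min(gaps)) / (sum(gaps) / len(gaps))
```

For five ions the relative spread of the ion-ion gaps is noticeably smaller in the quartic well than in the harmonic one, which is why segmented electrodes with enough degrees of freedom to synthesise such terms help keep long chains uniform [30].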
The SPIM-WGs used for individual addressing in our setup were optimised for 532 nm. This wavelength enables driving Raman transitions within both the ground and the metastable level manifolds of the \({}^{137}\)Ba\({}^{+}\) ions, as shown in fig. 6. The output of the photonic chip is mapped onto the ion chain using a 2:1 lens relay, such that the distance between neighbouring beams at the ion position is reduced to 4 \(\upmu\)m, matching our target ion spacing.
An outline of the optical system used to reimage the SPIM-WG output on the ions is shown in fig. 7. The output of the chip is collimated using a high-quality commercial microscope objective to minimise aberrations. Another microscope objective is used to both refocus the 532 nm control light from the waveguide on the ions, and to collect the 493 nm photons scattered by the ions. The latter objective is chromatically corrected for visible wavelengths, and glass-thickness compensated for the glass window on the vacuum chamber. This allows us to both image and address the ions with minimal aberrations.
The VGA and SPIM-WG assembly is mounted on a stainless steel plate before integration with the rest of the optical system (see "Materials and methods"). This, combined with the fibre network used to interface between the laser source and the photonic chip as outlined in fig. 2, enables simple exchange of chips in the ion trap system with minimal realignment.
### Cross-talk measurement using a single trapped ion
To characterise the performance of the fully integrated individual addressing setup, we measured the beams' spatial profiles by using a single ion as a point-like sensor. When a single 532 nm beam is directed at the ion, it introduces an AC Stark shift on the quadrupole transition frequencies between states in the S\({}_{1/2}\) and D\({}_{5/2}\) levels. This shift is proportional to the intensity of the 532 nm light experienced by the ion. To characterise the spatial intensity distribution from the SPIM-WG output, we transported the ion from the trap centre to an axially displaced position \(x\) (see fig. 7) and measured the quadrupole frequency shift as a function of \(x\). We were thus able to measure the beam profiles, the beam spacing, and the cross-talk of the system.
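A minimal model of this measurement (assuming a single TEM00 beam and a shift strictly proportional to intensity; the function names are illustrative):

```python
import math

def stark_shift(x_um, x0_um, w_um, peak_shift):
    # AC Stark shift proportional to the local intensity of a Gaussian beam
    return peak_shift * math.exp(-2 * (x_um - x0_um) ** 2 / w_um**2)

def waist_from_scan(xs, shifts):
    # 1/e^2 waist from the half-width at half-maximum of the shift profile:
    # exp(-2 r^2 / w^2) = 1/2  =>  w = r_hwhm * sqrt(2 / ln 2)
    peak = max(shifts)
    above_half = [x for x, s in zip(xs, shifts) if s >= peak / 2]
    hwhm = (max(above_half) - min(above_half)) / 2
    return hwhm * math.sqrt(2 / math.log(2))
```

In practice one would fit the full profile (and a sum of such Gaussians to extract the channel spacing), but the conversion between the width of the measured shift profile and the beam waist is the same.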
Figure 6: Level structure of \({}^{137}\)Ba\({}^{+}\). Both the ground S\({}_{1/2}\) and the metastable D\({}_{5/2}\) level qubit transitions can be driven with a two-photon Raman process using 532 nm light. The lifetime of the metastable level is 30.14 s [31], much longer than the typical timescale of the operations, making it a suitable place to encode qubits in addition to the ground level. The ions are detected by repeatedly exciting the transition between the S\({}_{1/2}\) and the P\({}_{1/2}\) levels and collecting the scattered 493 nm light.
For the purpose of this measurement we used the \(|S_{1/2},F=2,m_{F}=0\rangle\leftrightarrow|D_{5/2},F=4,m_{F}=+1\rangle\) transition as it exhibits the lowest magnetic field sensitivity of all available transitions for the magnetic field direction and the beam geometry shown in fig. 7. The frequency shift introduced by the 532 nm beam can be seen as a \(Z_{\phi}\) rotation on the qubit state with the phase \(\phi\) proportional to the magnitude of the AC Stark shift, and hence proportional to the beam intensity. To estimate this phase in a way that is robust to additive errors, such as those accumulated during state preparation and measurement, we used a robust phase estimation protocol [32, 33] (RPE). To enhance the system's coherence time and hence increase the available probe duration (therefore also increasing the dynamic range of the measurement), we embedded the RPE sequence inside a Knill dynamical decoupling sequence [34] (KDD). Further details on the sequences used can be found in the "Materials and methods" section.
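The core of RPE is the branch selection between exponentially finer phase estimates. A noiseless sketch is given below; the `measure` callback stands in for the experiment (in fig. 12(a), combining the \(\phi=0,\pi/2,\pi,3\pi/2\) readouts yields cosine and sine signals of the accumulated phase), so this is an idealised illustration rather than the full protocol with finite counts.

```python
import math

def ideal_signals(theta, k):
    # Stand-in for the experiment: cos and sin of the phase
    # accumulated over the probe duration 2^k * tau0.
    return math.cos((2**k) * theta), math.sin((2**k) * theta)

def rpe_estimate(measure, n_stages):
    theta = 0.0
    for k in range(n_stages):
        c, s = measure(k)
        phi = math.atan2(s, c)            # (2^k * theta) mod 2*pi
        step = 2 * math.pi / 2**k         # spacing of the candidate branches
        cand = phi / 2**k
        m = round((theta - cand) / step)  # branch closest to previous estimate
        theta = cand + m * step
    return theta
```

Each doubling of the probe duration halves the phase-ambiguity window, so the final precision is set by the longest duration \(2^{K-1}\tau_{0}\) while the earlier stages resolve the branch, which is what makes the estimate robust to additive errors in the individual measurements.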
The results from this measurement are shown in fig. 8. We measured a mean beam waist radius of 0.67(6) \(\upmu\)m and a mean channel spacing of 3.95(3) \(\upmu\)m. For all channels, the cross-talk level was below \(10^{-3}\), with a lowest measured cross-talk of \(\mathcal{O}\left(10^{-5}\right)\). The variation in cross-talk across the chip was most likely due to light leakage through the fibre AOMs as well as aberrations and/or scatter from the optical components in the beam path that affect the channels non-uniformly.
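It is worth noting that the measured floor is far above the ideal Gaussian tail: with the measured waist and spacing, the diffraction-limited cross-talk of a perfect TEM00 beam would be vanishingly small, which supports attributing the residual \(10^{-5}\)-\(10^{-3}\) to AOM leakage, aberrations, and scatter rather than to the beam shape itself.

```python
import math

def gaussian_crosstalk(spacing_um, waist_um):
    # Relative intensity of an ideal Gaussian beam at the neighbouring ion
    return math.exp(-2 * (spacing_um / waist_um) ** 2)

# Measured waist and spacing: roughly 1e-30, utterly negligible
# compared with the measured 1e-5 to 1e-3 cross-talk floor.
ideal = gaussian_crosstalk(3.95, 0.67)
```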
In fig. 9 we plot the error introduced to the state of a target ion's nearest-neighbours for the data shown in fig. 8. We consider two cases: when a single beam is focused down on the target ion, and the case of a Raman transition where both beams are focused on the target ion. The former is relevant when the qubit transition is itself an optical transition, or when a two-photon Raman process is used and one of the beams is global for the entire qubit register. For each channel we sum the nearest-neighbour contributions (except for the edge channels where there is a single contribution). In both cases this error is much lower than, or comparable to, state-of-the-art two-qubit gate errors [18, 5, 3] and will therefore not limit the device performance. The cross-talk performance can be enhanced even further with well-known techniques such as coherent cancellation [35] or composite pulse sequences [36].
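The separation between the two cases in fig. 9 follows from how the Rabi rate scales with intensity. In a simplified \(\pi\)-pulse model (our own simplification, not the paper's exact error analysis), an intensity cross-talk \(\epsilon\) enters once per addressed beam:

```python
import math

def neighbour_error(eps, both_beams):
    # Rabi-rate ratio between neighbour and target ion:
    #   one addressed beam  -> Omega ~ sqrt(I),       ratio = sqrt(eps)
    #   two addressed beams -> Omega ~ sqrt(I1 * I2), ratio = eps
    ratio = eps if both_beams else math.sqrt(eps)
    # residual population transfer on the neighbour during a pi-pulse
    return math.sin(math.pi * ratio / 2) ** 2
```

With \(\epsilon = 10^{-3}\) this gives neighbour errors of order \(10^{-3}\) and \(10^{-6}\) for the one-beam and two-beam cases respectively, matching the orders of magnitude quoted for the worst case.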
## Discussion
We have presented a scalable and configurable approach to individual optical addressing in chains of trapped atomic ions. We have designed and fabricated an SPIM-WG chip that offers a very high RI contrast between the core and the cladding, thereby achieving very low cross-talk at a small channel separation. The photonic chip is coupled with low loss to a fibre network that enables individual phase, frequency, and amplitude control of each channel, allowing parallel operations.
Measurements of the performance of our individual addressing system, optimised for 532 nm light, using a single trapped \({}^{137}\)Ba\({}^{+}\) ion showed cross-talk well below \(10^{-3}\) across the chip. The corresponding estimated worst-case nearest-neighbour error is \(\mathcal{O}\left(10^{-6}\right)\) if both beams are focused on a single ion or \(\mathcal{O}\left(10^{-3}\right)\) if one beam is focused on the ion and the second beam is global
Figure 7: An overview of the trapped-ion setup. The ions are confined in a 3D monolithic microfabricated trap [23, 24]. The 493 nm light scattered by the ions is collimated using a commercial NA0.5 microscope objective with an effective focal length (EFL) of 4 mm and refocused on an sCMOS camera for spatially resolved readout. The 532 nm light from the photonic chip is first collimated using a commercial microscope objective with an EFL of 20 mm. The focal lengths and positions of the subsequent \(f=250\) mm and \(f=100\) mm lenses are chosen to achieve the required 2:1 demagnification while also satisfying the geometrical constraints of the optical system. A dichroic mirror (DM) is used to overlap the 532 nm beam path with the 493 nm fluorescence beam path. This allows us to focus the 532 nm light on the ions with the same NA0.5 objective we use for imaging them.
for the register. In the latter case, well-known techniques such as composite pulses or coherent cancellation can be used to further reduce this error.
The procedure used to manufacture the SPIM-WGs is highly flexible, and so it can be used to create devices with a large number
of channels, as well as devices with non-uniform channel spacing. The latter capability is important for applications in ion traps that do not have sufficient degrees of freedom to generate anharmonic potentials to keep the ion spacing uniform across the ion chain, or where harmonic potentials are desired. Preliminary results from the characterisation of devices with 32 channels show no degradation of performance in terms of cross-talk and minimal change in propagation losses. In addition, the same technique can be used to manufacture devices optimised for use at other optical wavelengths and therefore be integrated into setups using different ion species and/or different gate mechanisms.
In conclusion, we have presented a novel method to individually optically address chains of trapped atomic ions. Our method achieves significantly lower cross-talk compared to existing methods integrated with trapped-ion setups, while maintaining high scalability and flexibility.
## Materials and methods
### Mode progression in the SPIM-WG chip
### SPIM-WG chip integration into the ion trap setup
To couple light from the VGA to the photonic chip, we kept the chip fixed and placed the VGA on a 6-axis positioning stage as shown in fig. 11. We imaged the chip output on a camera to evaluate the amount of light coupled from the VGA. We optimised the position of the VGA to maximise the average coupling efficiency, while simultaneously keeping the coupling efficiency across all channels as uniform as possible. Due to tolerances in the VGA spot size and spacing, the average coupling efficiency we achieved when optimising for all channels was lower than the maximum coupling efficiency that we could achieve for a single channel. In the devices used for the demonstration here, we achieved an average coupling efficiency of \(45(3)\%\).
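One simple way to express "maximise the average coupling while keeping the channels uniform" as a single figure of merit for the positioning stage (the functional form and weighting below are purely hypothetical, not the procedure actually used):

```python
def coupling_score(efficiencies, uniformity_weight=0.5):
    # Reward high mean coupling, penalise channel-to-channel spread;
    # the relative weight of the two terms is a free design choice.
    mean = sum(efficiencies) / len(efficiencies)
    spread = max(efficiencies) - min(efficiencies)
    return mean - uniformity_weight * spread
```

Under such a metric, a uniform 45% coupling can outrank a configuration whose best channels exceed 60% at the cost of much weaker ones, reflecting the trade-off described above.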
As outlined in the main text, the coupling between the VGA and the chip is extremely sensitive to relative position changes between the VGA and the chip. In addition to changes in the power in each channel, changes in the coupling efficiency can also modify the phase and/or polarisation of the light that lead to gate errors. To ensure the coupling between the VGA and the chip stays constant during system operation, after optimising the coupling between the two components, the VGA was glued to the chip using the NOA061 UV curing glue as shown in fig. 11(b). To avoid changes in the coupling while the glue was curing, we cured at a gradually increasing UV power over \(2-3\) hours.
The VGA and photonic chip assembly was then glued to a stainless steel plate and was transported to the ion trap system. This stainless steel plate was then bolted on a positioning stage that is part of the setup as shown in fig. 11(c). Upon exchanging chips, the stainless steel plate with the glued VGA-photonic chip assembly is the only component that needs to be exchanged, while the remaining optical setup is left unchanged. Therefore only minimal realignment is necessary following chip exchange.
### Robust phase estimation for an AC Stark shift measurement
We used the AC Stark shift induced by a single \(532\,\mathrm{nm}\) beam on the \(|S_{1/2},F=2,m_{F}=0\rangle\leftrightarrow|D_{5/2},F=4,m_{F}=+1\rangle\) transition to measure the intensity of the beams and hence characterise the chip output at the ion position. This AC Stark shift introduces a relative phase between the two states that is proportional to the beam intensity. We used robust phase estimation [32, 33] (RPE) as shown in fig. 12(a) to estimate that phase. This significantly reduced our sensitivity to other systematic errors such as state preparation and measurement errors. To reduce the sensitivity of the system to decoherence and enable longer probing durations
Figure 10: A 3D diagram of a single adiabatic mode converter channel in the photonic chip. The adiabatic mode conversion was implemented at the straight input region, while the channel cross-section at the bending region was kept constant until the channel output.
for the RPE sequence (hence also increasing the measurement dynamic range) we embedded the RPE sequence as part of a Knill dynamical decoupling sequence [34] (KDD) as shown in fig. 12(b). For probe durations shorter than the system coherence time, the RPE sequence was embedded in a spin-echo sequence instead. This avoided 532 nm pulses with durations comparable to the AOM settling time constants. For probe durations longer than the system coherence time, the KDD sequence in fig. 12(b) was used and the number of KDD pulses \(N_{\text{KDD}}\) was tuned to ensure that the maximum duration between consecutive \(\pi\)-pulses did not exceed the system coherence time. In this experiment this duration was set to 500 \(\upmu\)s.
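The tuning of \(N_{\text{KDD}}\) can be sketched as follows. The 500 \(\upmu\)s maximum gap is from the text, while the grouping into 5-pulse composite blocks and the exact rounding rule are our assumptions.

```python
import math

def n_kdd_pi_pulses(probe_us, max_gap_us=500.0, block=5):
    # N pi-pulses divide the probe window into N + 1 free-evolution gaps;
    # pick the smallest N that keeps every gap at or below max_gap_us,
    # then round up to whole 5-pulse KDD composite blocks.
    n = max(math.ceil(probe_us / max_gap_us) - 1, 0)
    return block * math.ceil(n / block) if n else 0
```

A 400 \(\upmu\)s probe then needs no decoupling pulses (the spin-echo regime), while longer probes acquire whole composite blocks as the window grows.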
Figure 11: Coupling between the VGA and the photonic chip. (a) Setup used to couple light from the VGA into the photonic chip. The VGA was placed on a 6-axis positioning stage (right) while the photonic chip was held stationary (centre). The objective on the left was used to image the photonic chip output on a camera to evaluate the quality of the coupling. (b) A photo of the VGA glued to the photonic chip after the coupling was optimised. (c) A photo of the VGA and photonic chip assembly glued on a stainless steel plate prior to integration with the rest of the optical system in the ion trap setup.
Figure 12: Pulse sequence for an AC Stark shift measurement. Unless otherwise specified, all \(\pi\) and \(\pi/2\) pulses are rotations about the \(x\)-axis of the Bloch sphere. (a) Robust phase estimation (RPE) sequence for estimating the phase introduced on the \(\ket{\rightarrow}=\left(\ket{0}+i\ket{1}\right)/\sqrt{2}\) state as a result of the AC Stark shift from a single 532 nm beam. The ion is prepared in the state \(\ket{0}\) and the subsequent 1762 nm \(\pi/2\) pulse prepares the state \(\ket{\rightarrow}\). Then a single 532 nm beam is turned on for a duration \(\tau_{532}=2^{k}\tau_{0}\) where \(\tau_{0}\) is the _base_ duration defining the maximum observable AC Stark shift of \(\delta_{\text{ac, max}}=1/\tau_{0}\). The final 1762 nm \(\left(\pi/2\right)_{\phi}\) pulse defines the measurement basis. The measurement is performed for \(\phi=0,\pi/2,\pi,3\pi/2\) to minimise the effect of polarising noise in one direction. (b) The sequence used to measure the AC Stark shift introduced by a single 532 nm beam on the \(\ket{0}\leftrightarrow\ket{1}\) transition, consisting of the RPE protocol embedded into a KDD sequence to extend the system coherence time.
## Acknowledgements
We thank A. Sinclair, G. Wilpers, and team at NPL for providing the ion trap and vacuum package used in this work. We thank Vivene Dela Cruz for contributing to the SPIM-WG characterisation. This work was supported by a UKRI FL Fellowship (MR/S03238X/1); the US Army Research Office (W911NF-20-1-0038); the UK EPSRC Hub in Quantum Computing and Simulation (EP/T001062/1); EPSRC Fellowship (EP/T00326X/1); Marie Curie Fellowship UKRI guarantee (EP/X024296/1); Austrian Science Fund (I3984-N36). C.H. acknowledges St. John's College, Oxford for support through a Junior Research Fellowship. D.P.N. thanks Merton College, Oxford for the same. A.S.S. acknowledges funding from the JT Hamilton scholarship from Balliol College, Oxford.
## Author contributions
B.S. designed and fabricated the SPIM-WG photonic chips. B.S. conducted various simulations and performed the classical tests of the chips. A.S.S. designed and implemented the systems and procedures used to integrate the chips with the ion trap apparatus. A.S.S. and D.P.N. took the single-ion measurements. A.S.S. performed the data analysis. M.W. and A.W. assisted with the photonic chip fabrication, characterisation and waveguide mode analysis. C.H. analysed the polarisation and phase effects of the waveguide coupling errors, and assisted with microscopy. A.J. and S.M. performed the measurements and the analysis of the waveguide refractive index properties. A.S.S. and F.P. set up the ion trap experiment. J.D.L., A.V.B., and F.P. contributed to the experiment apparatus and supported the operation of the experiment. M.J.B. and C.J.B. obtained funding and supervised the project. A.S.S. and B.S. wrote the manuscript, with assistance from M.J.B. and C.J.B. A.W. assisted with figure preparation. All authors reviewed the manuscript.
## Conflict of interest
The authors declare no competing interests.
|
2303.13521 | Scamming the Scammers: Using ChatGPT to Reply Mails for Wasting Time and
Resources | The use of Artificial Intelligence (AI) to support cybersecurity operations
is now a consolidated practice, e.g., to detect malicious code or configure
traffic filtering policies. The recent surge of AI, generative techniques and
frameworks with efficient natural language processing capabilities dramatically
magnifies the number of possible applications aimed at increasing the security
of the Internet. Specifically, the ability of ChatGPT to produce textual
contents while mimicking realistic human interactions can be used to mitigate
the plague of emails containing scams. Therefore, this paper investigates the
use of AI to engage scammers in automatized and pointless communications, with
the goal of wasting both their time and resources. Preliminary results showcase
that ChatGPT is able to decoy scammers, thus confirming that AI is an effective
tool to counteract threats delivered via mail. In addition, we highlight the
multitude of implications and open research questions to be addressed in the
perspective of the ubiquitous adoption of AI. | Enrico Cambiaso, Luca Caviglione | 2023-02-10T08:54:05Z | http://arxiv.org/abs/2303.13521v1 | # Scamming the Scammers: Using ChatGPT
###### Abstract
The use of Artificial Intelligence (AI) to support cybersecurity operations is now a consolidated practice, e.g., to detect malicious code or configure traffic filtering policies. The recent surge of AI, generative techniques and frameworks with efficient natural language processing capabilities dramatically magnifies the number of possible applications aimed at increasing the security of the Internet. Specifically, the ability of ChatGPT to produce textual contents while mimicking realistic human interactions can be used to mitigate the plague of emails containing scams. Therefore, this paper investigates the use of AI to engage scammers in automatized and pointless communications, with the goal of wasting both their time and resources. Preliminary results showcase that ChatGPT is able to decoy scammers, thus confirming that AI is an effective tool to counteract threats delivered via mail. In addition, we highlight the multitude of implications and open research questions to be addressed in the perspective of the ubiquitous adoption of AI.
This paper has been submitted for publication in
ITASEC23 - The Italian Conference on Cybersecurity, May 3rd - 5th, 2023, Bari, Italy.
## 1 Introduction
The use of mails to perform scams, drop attack payloads, deliver malicious URLs, and distribute unwanted spam messages has been a prime vector used by attackers since the early days of the Internet. In general, fraudulent contents are sent with the aim of deceiving the victim for personal gain (e.g., to receive money) or to force some behaviour (e.g., to install an executable). With the increasing diffusion of the Internet, the impact of threats delivered via mail is now very relevant, both considering the economic losses for the victims and the effort dedicated to detecting harmful messages or attachments [15]. As of today, the overall fraction of mails supporting frauds and criminal activities is up to 90% of the total exchanged volume, and this trend is expected to grow in the near future [3, 8, 19]. Therefore, mitigating the impact of malicious and unwanted mails is a crucial activity, not only limited to human aspects but also to prevent waste of resources (e.g., bandwidth and storage of mail servers). Among the various techniques proposed to counteract the plague of frauds or attacks delivered through mails, a vast corpus of works dealing with the use of Artificial Intelligence (AI) has emerged [17]. For instance, AI can be used to detect malicious mails, create filters, or even generate automatic replies. In this vein, our work aims at evaluating whether some form of AI can be used to interact with scammers and draw them into unproductive conversations.
Specifically, engaging scammers requires to generate suitable replies. To this aim, generative AI can be considered a basic building block for designing a framework able to automatically counteract threat actors operating via mail. In fact, generative techniques are capable of exploiting a knowledge set to generate novel contents [10]. For instance, models like Stable Diffusion or Dall-E 2 can produce images starting from text [2], whereas other tools can be used to create multimedia objects, such as music or videos [23]. With the goal of generating convincing replies to scam messages, ChatGPT ([https://chat.openai.com](https://chat.openai.com)) seems one of the most promising
and interesting methods. In essence, it implements a Natural Language Processing (NLP) generative algorithm developed by OpenAI to mimic realistic interactions during general-purpose conversations [10]. Launched in November 2022, ChatGPT quickly gained popularity, reaching 1 million total users in just 5 days1. In the wake of its popularity, ChatGPT has been investigated both by the industry and academia to create a wide range of contents. For instance, it has been used to write convincing scientific papers [32], to support medical patients by providing easy to understand reports [13], to act as a network honeypot [22], as well as for specific tasks such as the generation of code snippets or the early detection of security vulnerabilities [1].
Footnote 1: To roughly quantify the disruptive potential of ChatGPT, its diffusion can be compared with other Internet-wide services. Specifically, services like Instagram achieved the same performance in terms of overall users in 2.5 months, whereas Netflix took 3.5 years. For a detailed report, see: [https://www.statista.com/chart/29174/time-to-one-million-users/](https://www.statista.com/chart/29174/time-to-one-million-users/) (Last accessed on February 10, 2023)
Owing to its flexibility, this paper aims at evaluating the use of ChatGPT as a synecdoche of generative techniques to counteract the plague of mail scams. Specifically, scammers are engaged by means of realistic messages created through the AI with the goal of wasting their resources. Even if the limit of our investigation lies in the small number of considered attacks, the main goals of the paper are understanding the feasibility of the approach and outlining the perspective issues and research gaps to be addressed in the near future. To avoid burdening the text, in the following we will use the terms scams, attacks and malicious mails in an interchangeable manner. However, when doubts may arise, we will specify the type of threat, e.g., spam or phishing.
Summing up, the contributions of this work are: _i_) understanding the feasibility of using ChatGPT as a "security" tool to counteract malicious mail messages, _ii_) providing a preliminary quantitative assessment of the effectiveness of the AI-based approach, and _iii_) shaping the main research questions and engineering challenges to be addressed in the perspective of using generative methods to counteract mail-based scams.
The rest of the paper is structured as follows. Section 2 presents the previous works dealing with the adoption of AI to counteract various types of unsolicited mails. Section 3 discusses the framework and methodology used to prove the effectiveness of ChatGPT to generate coherent answers, while Section 4 showcases numerical results obtained via preliminary tests. Section 5 deals with some research questions that should be addressed and, finally, Section 6 concludes the paper and outlines possible future works.
## 2 Related Work
Mail messages are regularly abused to deliver a wide range of threats and they are one of the preferred vectors to deploy ransomware attacks [24]. Besides, the majority of messages are devoted to supporting phishing campaigns or spam communications [8]. Indeed, mails are the main mechanism for implementing different and sophisticated fraud schemes to extort money [12]. In more detail, scam attempts tend to cluster into several recurrent categories, such as messages threatening the victim or asking for charity [5]. However, the most popular and effective scam messages refer to large winning notifications [6]. As a consequence of the massive diffusion of mail communications, the design and deployment of efficient protection mechanisms have been prime research topics for several decades and still pose many open research challenges, especially due to adversaries continuously evolving and adapting their offensive strategies.
As regards the mitigation of unwanted mails and scam messages, the literature proposes several approaches. For instance, [26, 27] showcase a challenge-response scheme that the sender has to complete before contacting the recipient, i.e., to be whitelisted and avoid further checks.
Other possible methods to mitigate the volume of spam communications can be directly applied to the domain of the sender. In more detail, the Sender Policy Framework and the DomainKeys Identified Mail can be used to prevent spammers from sending messages through a well-defined domain also by means of spoofed identities [17]. Unsolicited and malicious contents can also be counteracted at a protocol-level. In this case, [25] proposes an extension to the SMTP to automatically check whether the domain of the sender corresponds to a valid DNS entry. The impact of fraud mails can also be assessed by considering the content of the message. As an example, [18] identifies scam communications through text analysis, i.e., inappropriate statements are identified.
More recently, techniques to reduce the impact of malicious mails are increasingly exploiting AI or machine-learning-capable approaches. In this regard, [31] considers deep learning algorithms to classify spam messages through word embedding techniques, while [28] proposes a real-time detection system for the identification of phishing attacks. To the best of our knowledge, previous techniques for mitigating the impact of scam attempts via mail do not consider the use of generative AI-based schemes to engage scammers. The only notable exception is [19], although it adopts a long short-term memory approach to generate basic questions and consume the time of the attacker. Concerning AI techniques to implement spam/scam countermeasures, they have been primarily used to automatically inspect various parts of a message in order to detect spam or phishing mails, i.e., for classification purposes. Specifically, the AI can be used to check the headers, the SMTP envelope, or different portions of SMTP data [8, 17].
Employing AI to generate mail contents or to face some security issues has already been partially investigated, even with scopes different from those addressed in this paper. In more detail, [16] exploits natural language models (i.e., GPT-2 and GPT-3) to generate email phishing messages to conduct tests. Besides, [22] showcases how ChatGPT can be adopted to simulate Linux/Windows terminals within a honeypot.
## 3 Methodology
To evaluate the feasibility of taking advantage of AI to interact with scammers, we prepared a simple testbed. First, we selected a mail account with a realistic domain (i.e., @cnr.it), which has been publicly available on the Internet for years. In more detail, the considered account has been used to handle routine messages and mailing lists, and it has also been published on several web pages that could have been crawled by malicious attackers. To operate the mail account we used the Microsoft Office365 platform, which includes an anti-spam filter.
To have an initial corpus of mails, we collected messages received in a period of 30 days, i.e., from 12th of November 2022 to 12th of December 2022. The overall experimentation lasted 60 days, i.e., from 12th of November 2022 to 11th of January 2023. Hence, we decided to drop all the scam messages received outside of our observation period. Instead, new scammers arriving before the 11th of January 2023 have been considered valid for our trials. To identify scammers, we used the following approach. Mails flagged as malicious by the Office365 platform have been manually inspected to evaluate their inclusion in our test set. For the sake of our investigation, we did not consider mails containing phishing attempts or those mimicking popular services or HTML pages requiring to directly follow a link [17]. Instead, we only considered plain-text messages asking for a direct interaction, i.e., a reply.
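The selection criterion described above (plain-text messages that solicit a direct reply, excluding link- or attachment-based phishing) can be sketched as a simple heuristic filter. This is a minimal illustration under assumed field names (`body`, `attachments`) and keyword cues; it is not the filtering logic actually used in the experiments, which relied on manual inspection.

```python
def is_candidate_scam(message):
    """Heuristic mirroring the selection criterion above: keep plain-text
    scams that solicit a direct reply; drop mails with attachments or links.
    Field names and keyword cues are illustrative assumptions."""
    body = message.get("body", "")
    if message.get("attachments"):                 # likely one-shot payload drop
        return False
    if "http://" in body or "https://" in body:    # likely web phishing
        return False
    cues = ("reply", "contact me", "get back to me", "write me")
    return any(cue in body.lower() for cue in cues)

print(is_candidate_scam({"body": "Dear friend, kindly reply to me.",
                         "attachments": []}))  # True
```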
To generate replies we used ChatGPT. In more detail, for each message sent by the scammer, the full text content has been provided to the AI in order to produce a suitable answer. Unfortunately, at the time of our experiments, ChatGPT did not allow performing tweaks or altering its normal behavior, i.e., it must be considered a black-box solution. As a consequence, directly feeding the AI with scam messages led to a warning without providing a suitable answer. As a workaround, the original scam message has been processed by solely adding a preamble explicitly requiring the AI to provide an answer. Instead, replies generated via ChatGPT have not been altered in any manner, with the only exception of adding the signature of the sender. To make the mail exchange with scammers longer, if ChatGPT generated messages containing details required by the scammers (e.g., bank account information, postal addresses, or telephone numbers), we again tweaked the preamble to instruct the AI not to provide any personal detail.
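The preamble workaround can be sketched as a simple prompt builder. The preamble wording below is a hypothetical reconstruction, not the authors' exact text, and in the experiments the prompt was handled through the ChatGPT interface rather than an API.

```python
# Hypothetical reconstruction of the preamble; the paper does not report
# the exact wording used in the experiments.
PREAMBLE = ("The following is an email I received. Write a polite reply "
            "that shows interest, but do not include any personal detail "
            "such as bank accounts, addresses, or phone numbers.\n\n")

def build_prompt(scam_text):
    """Wrap the raw scam message with the instruction preamble before
    handing it to the AI."""
    return PREAMBLE + scam_text.strip()
```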
To reduce the chances that the scammer could spot the "unmanned" nature of the replies, we mimicked the presence of a human endpoint by delaying the various answers. Then, in our trials we provided replies after randomly waiting for a period ranging from minutes to weeks.
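The randomized reply delay can be sketched as follows. The paper only states that the waiting period ranged from minutes to weeks; the log-uniform distribution below is a hypothetical choice made for illustration.

```python
import math
import random

def reply_delay_seconds(rng=None):
    """Sample a human-like reply delay between 5 minutes and 3 weeks.
    The log-uniform distribution is a hypothetical choice; the paper only
    states that delays ranged from minutes to weeks."""
    rng = rng or random.Random()
    lo, hi = 5 * 60, 21 * 24 * 3600  # bounds in seconds
    return int(math.exp(rng.uniform(math.log(lo), math.log(hi))))

delay = reply_delay_seconds(random.Random(42))
```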
Finally, once the answer is generated by ChatGPT, we used our test account for replying to the scammer, also by quoting the conversation so far. For the sake of simplicity, in the rest of the paper, we will use the terms ChatGPT and sender in an interchangeable manner. However, we point out that ChatGPT has been only involved in the generation of the answer and not actively used to send mails.
## 4 Preliminary Results
Table 1 contains volume statistics of the various messages exchanged between a specific scammer and the ChatGPT instance. Concerning Scammer 1, 2, 3, and 9, the mail thread stopped after a single reply from ChatGPT. As shown, for Scammer 1, 2, and 3, the sent messages have not been received, since we obtained an error, i.e., SMTP status 5.2.1. However, this behavior is something that could be expected and should not be considered a flaw in the "credibility" of the message generated through the AI. Moreover, the original messages from Scammer 1, 2, 3, and 9 contained a .pdf attachment, thus they were probably intended as a one-shot communication to drop a payload on the host of the victim, e.g., a keylogger, or to support a web phishing campaign [17]. A similar explanation holds for Scammer 11. In this case, no errors were generated by the mail provider, but the original communication contained a link to a website (i.e., http://KAV[SANITIZED].NET) and a set of credentials. For the case of Scammer 5 and 7, we did not receive any answer either, but the replies have been successfully delivered.
**Table 1:** Overall volume statistics of the threads between scammers and ChatGPT. Scammers have been sorted according to the date of the first received mail message.

| Scammer ID | SMTP Status Code | Thread (No. Mails) | Scammer Msg. Avg. Chars | Scammer Msg. Avg. Sent. | ChatGPT Msg. Avg. Chars | ChatGPT Msg. Avg. Sent. |
|---|---|---|---|---|---|---|
| 1 | Failed (5.2.1) | 2 | 331 | 2 | 333 | 3 |
| 2 | Failed (5.2.1) | 2 | 261 | 1 | 323 | 4 |
| 3 | Failed (5.2.1) | 2 | 291 | 3 | 362 | 4 |
| 4 | - | 12 | 1,487 | 13 | 536 | 6 |
| 5 | - | 2 | 4,572 | 48 | 319 | 5 |
| 6 | - | 10 | 987 | 6 | 526 | 6 |
| 7 | - | 2 | 11,094 | 108 | 487 | 5 |
| 8 | - | 14 | 1,382 | 12 | 473 | 5 |
| 9 | - | 2 | 120 | 1 | 277 | 5 |
| 10 | - | 18 | 474 | 7 | 432 | 6 |
| 11 | - | 2 | 207 | 7 | 292 | 7 |
Most probably, the malicious actor stopped surveying the address in the meantime, for instance to avoid detection or because he/she had been neutralized.

With Scammer 4, ChatGPT exchanged 6 different mails, leading to an overall thread of 12 messages, while with Scammer 6 the AI has been used to lock the malicious actor in a thread of 10 mails. Concerning the effectiveness of ChatGPT in generating realistic messages, it is worth considering the conversation with Scammer 8, which happened across the Christmas holidays. After exchanging 12 mails, the attacker wrote a message to send holiday wishes. Then, he/she asked for a telephone number to switch the conversation from mail to voice2. See Table 2 for details on the textual form/content of the various mail messages. It is worth underlining that organizing a phone meeting has been proposed by ChatGPT in its reply. This is surely a limit of using a tool trained over a general corpus of text, rather than on a dataset properly suited for counteracting malicious mails. Lastly, Scammer 10 is the one exhibiting the highest number of replies, i.e., we exchanged 18 mails in total. It is worth mentioning that after some mails generated via ChatGPT, the malicious actor decided to ask for money by sending pointers to a Western Union account. As of today, the scammer is still left waiting for a reply or an acknowledgement of the wire transfer. Similarly, Scammer 4 is waiting for feedback about a money transfer through SilverBank.
Footnote 2: Actually, the scammer further replied, but we did not consider the message since it was outside our window of observation.
As shown in Table 1, each thread can also be characterized by using simple metrics. To this aim, we computed the average length (in terms of characters and sentences) of the mails sent by scammers and by the ChatGPT AI, respectively. In more detail, the average length of messages sent by Scammer 5 and 7 is 4,572 and 11,094 characters, respectively. However, the threads stopped after only two messages, probably since the malicious actors expected a different type of answer despite the length of their mails (the highest in our trials). Therefore, answer messages generated by ChatGPT should also be tweaked to further engage the scammer without raising suspicion. By analysing the content of both received emails, we found that the malicious senders suggest investing money in specific stocks, thus asking for precise evidence that the victim has been successfully decoyed. For the other threads, we did not find any relevant correlation between the length of the messages and the behavior of the scammer (specifically, in terms of the number of exchanged mails).

**Table 2:** Mail exchange between ChatGPT and Scammer 8 across the holiday season. (Notice: minor formatting has been done to better fit the text in the table.)

_ChatGPT-generated message:_

> Dear Mr. [OMITTED],
> Thank you for reaching out to me regarding the cash deposit and properties left behind by my late relative. I am interested in proceeding with making a claim on these assets and would appreciate any assistance you can provide.
> I understand that you need my personal information in order to facilitate the process. Please note that I am unable to disclose my full name, address, or any other personal information via email. However, I am happy to speak with you by phone or schedule a meeting in person to discuss this further.
> Please let me know how you would like to proceed.
> Best regards,
> [OMITTED]

_Scammer reply:_

> Dear [OMITTED],
> Compliments of the Season.
> I am happy hearing from you as regards to this process. Send to me your number to call you for us to talk in details regarding to this transaction.
> thanks,
> Kar[OMITTED].
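The per-thread averages reported in Table 1 can be computed with a few lines of code. This is a sketch: the paper does not specify how sentences were counted, so the naive punctuation-based splitter below is an assumption.

```python
def thread_stats(messages):
    """Average mail length in characters and (roughly) in sentences for one
    thread: the per-thread metrics reported in Table 1."""
    n = len(messages)
    avg_chars = sum(len(m) for m in messages) / n
    # Naive sentence count: count terminal punctuation marks (an assumption).
    avg_sents = sum(max(1, m.count(".") + m.count("!") + m.count("?"))
                    for m in messages) / n
    return avg_chars, avg_sents

print(thread_stats(["ab. cd!", "efg."]))  # (5.5, 1.5)
```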
In order to waste resources of attackers, a relevant aspect concerns the time frame for which the scammer is engaged in the mail exchange. For the sake of computing this interval, we considered the period starting from the first reply. Figure 1 depicts the evolution of the various mail threads handled via ChatGPT during our observation window. In general, messages prepared with ChatGPT engaged scammers for \(\sim\)18 days, on average. Specifically, the exchange with Scammer 10 was the shortest and lasted \(\sim\)6 days. In this case, the mail flow was quite tight (i.e., 2 mails per day) to pursue the malicious goal of providing coordinates for the wire transfer as soon as possible. A similar behavior characterized the exchange done via ChatGPT with Scammer 4. To evaluate the impact of the initial response time, in this case we delayed the reply to the first message by 17 days. Instead, for Scammer 8 and 6 the conversations were longer, i.e., \(\sim\)27 days. With Scammer 8 we exchanged mails in a "bursty" manner, interleaved with two stop periods of 8 and 16 days, respectively. For Scammer 6 the exchange was more regular, with two bursts of messages sent after stop periods of 9 and 14 days.
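The engagement window used for Figure 1, i.e., the period from the first reply to the last message of a thread, can be computed as follows. The timestamps and their ISO format are illustrative assumptions.

```python
from datetime import date

def engagement_days(timestamps):
    """Engagement window in days: distance between the first reply and the
    last message of the thread (ISO dates are an illustrative assumption)."""
    days = sorted(date.fromisoformat(t) for t in timestamps)
    return (days[-1] - days[0]).days

print(engagement_days(["2022-12-01", "2022-12-10", "2022-12-27"]))  # 26
```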
**Figure 1:** Overview of the conversations with scammers who “play the game” during the observation period.

## 5 Open Research Questions

As hinted, the use of some form of AI to mitigate the various types of threats targeting mails (e.g., fraud, phishing, spam, or the drop of malicious payloads) has become a vivid research topic [17]. Roughly speaking, the most recent efforts seem to cluster around two major topics. The first aims at advancing the filters used to classify and detect mails, especially with the aim of preventing attacks or feeding a cybersecurity framework, for instance to automatically quarantine attachments or bounce messages (see [8] for the case of phishing). The second aims at understanding the potential of AI when deployed to support or substitute humans. As an example, phishing messages can be automatically generated by using AI to train users against social engineering attacks [11]. Regardless of the goal, a relevant shared portion of research requires to fully assess the multifaceted set of implications of mixing machine learning with security frameworks or countermeasures for inter-personal communications [8; 11; 17].
In this perspective, the use of ChatGPT opens different research questions often requiring to deal with the multifaceted flavor of AI and its rapidly-evolving pace. Thus, successfully incorporating AI in production-quality security frameworks requires to consider human and ethical aspects, computational optimizations and explainability constraints [7]. Specifically, the main open research questions that we identified when conducting our experimentation are:
* **Specialized and as-a-Service Implementations**: in general, it is hard to forecast a one-size-fits-all mechanism able to face the various hazards delivered via mail, e.g., whale phishing or the drop of malicious payloads. Specifically, each class of problems needs distinct modeling and abstractions, e.g., the allotted vocabulary (see [14] for specific traits of mails contributing to automatically generated FAQs). Unfortunately, the deployment of a framework for answering an overwhelming amount of scam messages in an automatic manner may not be feasible for many small/medium-sized entities. In fact, it requires a vast corpus of messages for training the AI, specific text processing and feature extraction knowledge, and a substantial amount of storage and computing resources. Thus, industry and academia should work towards implementations offered as-a-Service to take advantage of scale factors, especially to have enough data to train and tweak models.
* **Modeling the Human Behavior**: even if the text provided by ChatGPT (or other forms of AI) could appear sound and valid, the scammer could detect the lack of a human counterpart due to patterns in text, absence (or presence) of grammatical errors, or too fast replies. In this vein, attackers could inspect the received mail messages to perform reconnaissance and fingerprint AI endpoints [21]. Thus, to make the approach feasible, an important aspect concerns the creation of realistic replies, which requires a deep understanding of behavioral and linguistic aspects.
* **Privacy and Forensics**: automatic and AI-driven mail answering requires gathering a relevant amount of real messages in order to generate suitable replies. This could clash with privacy requirements and regulations such as the General Data Protection Regulation, which increasingly push to reduce the needed information to a minimum [9]. Moreover, increased volumes of AI-generated replies could lead to difficulties in performing forensic investigations or tracing scammers across multiple services producing messages [4]. Thus, suitable tradeoffs between privacy, users' rights, and performance should be sought.
* **Avoid Unwanted Traffic**: the massive deployment of AI-based countermeasures is expected to exacerbate the automation of many security processes, while minimizing the presence of a human in the loop. At the same time, it is unlikely that threat actors will not take advantage of AI or machine-learning-capable tools to generate messages or handle responses. Hence, a non-negligible amount of future mails could be the result of AI-to-AI
exchanges. As a consequence, part of the ongoing research should also consider suitable techniques to mitigate the plague of unwanted traffic, accounting for resource waste and economic losses [20].
* **Ethical Implications**: interacting with humans and handling people-centric data and communications raise several ethical concerns. First, the idea of using AI to "scam the scammers" is itself somewhat of a fraud, since it goes beyond the classification of messages or the detection of malicious contents. Second, a plausible corpus of mails could contribute to spreading untrue statements or exacerbate issues in discriminating contents created by humans from those generated by machines (see, e.g., the case of using ChatGPT for online exams or general education duties [29, 33]).
Nevertheless, other important research aspects deal with understanding the technological requirements and the real exploitability of ChatGPT-like tools. In fact, despite being preliminary, our current experimentation did not take advantage of ad-hoc tools. Rather, it used the AI as a black box, thus without using specialized datasets or models. At the same time, both the research and industrial communities should hold in high regard the open points outlining the shape of AI-based generative mechanisms. For instance, Internet services should be able to block messages generated by the AI so as to not spread unrealistic messages (e.g., as happens for StackOverflow posts [30]) or to avoid that an attacker can steal the model and use it for weaponizing his/her attack campaigns.
## 6 Conclusions and Future Works
In this paper we presented the use of ChatGPT to generate email messages to engage scammers and waste their resources. Results indicated that AI can be a valuable and effective tool, as we were able to exchange up to 18 mails with a single scammer, or to trick attackers for up to 27 days. Even if our experimentation was limited, it allowed us to highlight the multitude of research and ethical questions that the use of a framework like ChatGPT raises. At the same time, deploying AI-based scam mitigation in production-quality settings requires thorough design and engineering phases. This paper should then be considered a sort of "manifesto" of the multifaceted and complex alchemy arising from the mix of personal mail messages and generative AI.
Future works aim at extending the scope of the experimentation both in terms of the volume of mails considered (e.g., threads and senders) and the impact of the semantics/contents of the used text. Another relevant part of our research is devoted to improving our testbed to reduce to a minimum the need for human support, e.g., by integrating the mail management framework with ChatGPT/AI.
## Acknowledgment
This work has been partially supported by SERICS - Security and Rights in CyberSpace ([https://serics.eu](https://serics.eu)), within the Piano Nazionale di Ripresa e Resilenza, funded by the NextGenerationEU framework (No. 341 - 15/03/2022). |
2306.16287 | A Review on Optimality Investigation Strategies for the Balanced
Assignment Problem | Mathematical Selection is a method in which we select a particular choice
from a set of such. It has always been an interesting field of study for
mathematicians. Accordingly, Combinatorial Optimization is a subfield of this
domain of Mathematical Selection, where we generally deal with problems
subject to Operations Research, Artificial Intelligence and many more
promising domains. In a broader sense, an optimization problem entails
maximising or minimising a real function by systematically selecting input
values from within an allowed set and computing the function's value. A broad
region of applied mathematics is the generalisation of metaheuristic theory and
methods to other formulations. More broadly, optimization entails determining
the finest virtues of some fitness function, offered a fixed space, which may
include a variety of distinct types of decision variables and contexts. In this
work, we will be working on the famous Balanced Assignment Problem, and will
propose a comparative analysis on the Complexity Metrics of Computational Time
for different Notions of solving the Balanced Assignment Problem. | Anurag Dutta, K. Lakshmanan, A. Ramamoorthy, Liton Chandra Voumik, John Harshith, John Pravin Motha | 2023-06-28T15:08:16Z | http://arxiv.org/abs/2306.16287v1 | # A Review on Optimality Investigation Strategies for the Balanced Assignment Problem
###### Abstract
Mathematical Selection is a method in which we select a particular choice from a set of such. It has always been an interesting field of study for mathematicians. Accordingly, Combinatorial Optimization is a subfield of this domain of Mathematical Selection, where we generally deal with problems subject to Operations Research, Artificial Intelligence and many more promising domains. In a broader sense, an optimization problem entails maximising or minimising a real function by systematically selecting input values from within an allowed set and computing the function's value. A broad region of applied mathematics is the generalisation of metaheuristic theory and methods to other formulations. More broadly, optimization entails determining the finest virtues of some fitness function, offered a fixed space, which may include a variety of distinct types of decision variables and contexts. In this work, we will be working on the famous Balanced Assignment Problem, and will propose a comparative analysis on the Complexity Metrics of Computational Time for different Notions of solving the Balanced Assignment Problem.
Combinatorial Optimization, Branch and Bound, Brute Force, Hungarian Algorithm, Artificial Intelligence
## I Introduction
A foundational combinatorial problem formulation is the assignment problem. The challenge, in its simplest and most general form, is as described in the following: "There are several agents and tasks in the problem under consideration. Any operative can indeed be delegated to undertake any task, at a cost that varies based on the operative assessment. It is necessary to complete as many tasks as feasible by allocating no more than one operative to each job at hand and no more than one job to each operative, to ensure that the total cost of the assignment is minimised." Mathematically, the Problem Statement is as follows:
**General Assignment Problem:** _Optimal assignment for \(\mathcal{K}\) workers, \(\mathcal{W}_{i}\;\forall i=1,2,3,...,\mathcal{K}\), having an equal number of Jobs, \(\mathcal{J}_{i}\;\forall i=1,2,3,...,\mathcal{K}\), with associated Job Cost, \(\mathcal{C}_{i}\;\forall i=1,2,3,...,\mathcal{K}\)_
A good example for the demonstration of the Problem Statement would be as follows. Assume a cab company has 3 cabs (operatives) available as well as 3 clients (jobs) who want to be picked up as quickly as possible. The company takes pride in quick pickups, so the cost of picking up a specific client for each cab will be determined by the time it takes the cab to arrive at the pickup location. This is a balanced assignment problem. Its solution is whichever taxi-customer pairing results in the lowest overall cost.
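The cab example above can be solved by Brute Force, enumerating all \(\mathcal{K}!\) possible assignments and keeping the cheapest, which is one of the notions compared in this work. The pickup-time matrix below is made up for illustration.

```python
from itertools import permutations

def min_cost_assignment(cost):
    """Brute-force balanced assignment: try all K! cab-to-client pairings
    and return the cheapest one together with its total cost."""
    k = len(cost)
    best = min(permutations(range(k)),
               key=lambda p: sum(cost[i][p[i]] for i in range(k)))
    return best, sum(cost[i][best[i]] for i in range(k))

# cost[i][j]: time for cab i to reach client j (made-up numbers)
times = [[9, 2, 7],
         [6, 4, 3],
         [5, 8, 1]]
print(min_cost_assignment(times))  # ((1, 0, 2), 9)
```

For larger instances, the exponential enumeration quickly becomes impractical; polynomial-time solvers for the same problem, such as the Hungarian Algorithm (implemented, e.g., by `scipy.optimize.linear_sum_assignment`), are exactly the kind of alternative whose computational time a comparative analysis can contrast with Brute Force.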
Numerous business organizations in the world of trade strive to make the best use of their constrained resources across numerous activities. They can only do so by employing the assignment problem procedure. It is a subset of transportation problems in the trade industry, with the main objective of assigning an equal number of origins to an equal number of destinations. This method entails assigning people to diverse programs, jobs to machines, and educators to classrooms, among other things. Assessment
2302.13608 | DeepSeq: Deep Sequential Circuit Learning | Circuit representation learning is a promising research direction in the
electronic design automation (EDA) field. With sufficient data for
pre-training, the learned general yet effective representation can help to
solve multiple downstream EDA tasks by fine-tuning it on a small set of
task-related data. However, existing solutions only target combinational
circuits, significantly limiting their applications. In this work, we propose
DeepSeq, a novel representation learning framework for sequential netlists.
Specifically, we introduce a dedicated graph neural network (GNN) with a
customized propagation scheme to exploit the temporal correlations between
gates in sequential circuits. To ensure effective learning, we propose to use a
multi-task training objective with two sets of strongly related supervision:
logic probability and transition probability at each node. A novel dual
attention aggregation mechanism is introduced to facilitate learning both tasks
efficiently. Experimental results on various benchmark circuits show that
DeepSeq outperforms other GNN models for sequential circuit learning. We
evaluate the generalization capability of DeepSeq on a downstream power
estimation task. After fine-tuning, DeepSeq can accurately estimate power
across various circuits under different workloads. | Sadaf Khan, Zhengyuan Shi, Min Li, Qiang Xu | 2023-02-27T09:17:35Z | http://arxiv.org/abs/2302.13608v2 | # DeepSeq: Deep Sequential Circuit Learning
###### Abstract
Circuit representation learning is a promising research direction in the electronic design automation (EDA) field. With sufficient data for pre-training, the learned general yet effective representation can help to solve multiple downstream EDA tasks by fine-tuning it on a small set of task-related data. However, existing solutions only target combinational circuits, significantly limiting their applications. In this work, we propose _DeepSeq_, a novel representation learning framework for sequential netlists. Specifically, we introduce a dedicated graph neural network (GNN) with a customized propagation scheme to exploit the temporal correlations between gates in sequential circuits. To ensure effective learning, we propose to use a multi-task training objective with two sets of strongly related supervision: logic probability and transition probability at each node. A novel _dual attention_ aggregation mechanism is introduced to facilitate learning both tasks efficiently. Experimental results on various benchmark circuits show that _DeepSeq_ outperforms other GNN models for sequential circuit learning. We evaluate the generalization capability of _DeepSeq_ on a downstream _power estimation_ task. After fine-tuning, DeepSeq can accurately estimate power across various circuits under different workloads.
Representation Learning, Sequential Circuits, GNNs
## I Introduction
With the breakthroughs in deep learning (DL) in the past decade, its application in the electronic design automation (EDA) field has become a hot research topic [1]. Many DL-based techniques are proposed to improve circuit design and test solutions, and they fall into two categories.
The first class of solutions targets different EDA problems independently and solves them from scratch [2, 3, 4]. Specifically, these solutions either learn a policy to replace the empirical decision-making choices in traditional heuristics [3] or model the circuit directly for performance prediction and/or optimization [2, 4]. Despite showing promising results, these solutions require careful model design and tuning for every problem from scratch.
The second class of solutions is motivated by the transfer learning paradigm in DL where a pre-trained model is employed to solve multiple downstream tasks [5]. First, a generic representation of circuit netlists is learned. Next, the model is fine-tuned with a small amount of task-specific data for various downstream EDA tasks [6, 7]. For example, [6] learns a generic gate-level representation of combinational circuits, and it is later used in [8] to solve the test point insertion (TPI) task. This class of solutions is more appealing than the previous one since the learned representations are capable of dealing with a vast set of EDA problems with limited effort in fine-tuning, instead of solving each problem from scratch.
However, existing works in circuit representation learning are only applicable for combinational circuits. Due to the presence of memory elements (e.g., flip-flops - FFs) in sequential netlists, the circuit behavior is reflected as state transitions at each clock cycle. Consequently, it is essential to capture such behavior by learning effective embeddings on the memory elements, which is an important yet challenging problem to resolve.
To this end, we propose _DeepSeq_, a novel representation learning framework based on graph neural networks (GNNs) for sequential netlists. For the combinational components of the sequential netlists, we follow the DeepGate framework [6] and convert them into an optimized and-inverter graph (AIG) format. Consequently, the transformed sequential circuits contain only 2-input AND gates, inverters, and FFs, which are represented as directed acyclic graphs (DAGs) with four type of nodes (primary inputs are treated as a special type of nodes). DeepSeq employs a novel DAG-GNN architecture equipped with a customized propagation scheme and a dedicated aggregation mechanism named _Dual Attention_ to effectively learn the temporal correlations between gates and FFs in the sequential netlists.
Moreover, to effectively learn sequential circuit behavior, we introduce a multi-task [9] training objective. Specifically, we use two sets of strongly related supervision: state transition probabilities and logic probabilities (the probability of logic being 1) on each node. Such a joint supervision scheme helps to direct DeepSeq towards learning informative representations that reflect the true logical behavior of the underlying sequential netlist. In this way, we effectively encode the computational and structural information of sequential circuits into the embeddings of logic gates and FFs.
DeepSeq has the potential to facilitate many downstream EDA tasks in sequential circuit design and analysis. In this work, we evaluate it on the _power estimation_ task. Experimental results show that we can accurately estimate the dynamic power across different designs under diverse set of workloads.
We summarize the contributions of our work as follows:
* To the best of our knowledge, this is the first work to learn a generic representation of sequential circuits. We propose a novel DAG-GNN architecture to effectively model the working mechanism of sequential circuits.
* We design a multi-task learning objective with two sets of related supervision: i) transition probabilities and ii) logic probabilities, which facilitate to capture the behavior of sequential netlists.
* We propose a dedicated aggregator function, i.e., _dual attention_ that mimics the logic computation and transition probabilities calculation in sequential circuits.
* We demonstrate the efficacy and generalization capability of pre-trained DeepSeq on a downstream power estimation task, and it is almost faithful to the results from simulation based power estimation.
We organize the remainder of this paper as follows. We review related works in Section II. Section III introduces the DeepSeq framework, while in Section IV, we present the experimental results of transition and logic probabilities prediction. In Section V, we show the results on the downstream power estimation task. Finally, Section VI concludes this paper.
## II Related Work
### _Graph Neural Networks_
In recent years, graph neural networks (GNNs) [10, 11] have emerged as the de-facto standard for processing irregular structured data. They propagate the node features by exchanging the information with neighbor nodes and learn the representations/hidden states of
nodes. Given a graph \(\mathcal{G}\), and a \(L\) layer GNN model, message passing at every layer \(l\) is given by:
\[\mathbf{h}_{v}^{\ell}=Combine^{\ell}(\mathbf{h}_{v}^{\ell-1},Aggregate^{\ell}( \{\mathbf{h}_{u}^{\ell-1}|u\in\mathcal{N}(v)\})),\ell=1,\ldots,L \tag{1}\]
\[\mathbf{h}_{\mathcal{G}}=Readout(\{\mathbf{h}_{v}^{L},v\in\mathcal{V}\}) \tag{2}\]
where \(\mathcal{N}(v)\) is the set of neighboring nodes of node \(v\). The function \(Aggregate^{\ell}\) aggregates messages from the neighboring nodes \(\mathcal{N}(v)\) during message passing. Different solutions have been proposed to instantiate the \(Aggregate^{\ell}\) function, such as convolutional sum [10] and attention [12], thus generating different flavors of GNNs. \(Combine^{\ell}\) computes an updated hidden state of node \(v\) after aggregation. Finally, the function \(Readout\) collects the states of all nodes \(\mathcal{V}\) and computes the neural representation of the whole graph.
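A plain instantiation of Eqs. (1)-(2) can be sketched in a few lines. This is illustration only: the sum aggregator, averaging combine, and mean readout are arbitrary choices of ours, not any particular GNN from the literature:

```python
import numpy as np

def message_passing(adj, h, n_layers):
    """One plain instantiation of Eqs. (1)-(2): Aggregate is a sum over
    neighbors, Combine averages the old state with the aggregated
    message, and Readout is a mean over all nodes (illustrative choices).
    adj: (n, n) adjacency matrix; h: (n, d) initial node states."""
    for _ in range(n_layers):
        agg = adj @ h            # sum messages from neighbors N(v)
        h = 0.5 * (h + agg)      # combine with the previous state
    return h, h.mean(axis=0)     # final node states and graph readout
```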
Directed acyclic graphs (DAGs) are an important type of graph, commonly used in many domains such as communication systems and decentralized finance. Recently, [13, 14] proposed DAG-GNN designs, which follow the topological order of nodes for feature propagation and perform the \(Aggregate^{\ell}\) operation on the set of predecessor nodes only. Furthermore, an \(L\)-layer DAG-GNN model can be applied \(T\) times in a recursive manner to stabilize the final representations [15]. In this work, we refer to the recursive variant of the DAG-GNN model as DAG-RecGNN, and to the non-recursive DAG-GNN model as DAG-ConvGNN.
### _Representation Learning in EDA_
Existing representation learning solutions for gate-level netlists in EDA target combinational circuits only [6, 7]. For example, DeepGate [6] models the combinational netlists as directed graphs and exploits unique inductive biases in circuits with a dedicated attention-based aggregation function, and _skip connections_ for reconvergence fan-out structures. DeepTPI [8] shows that using DeepGate as the pre-trained model helps to solve a node-level classification task, i.e., _Test Point Insertion_ efficiently.
[7] proposes a contrastive learning based _functionality graph neural network_ (FGNN) that encodes the functionality of a combinational netlist as graph level vector and demonstrates its potential on netlist classification and sub-netlist identification tasks.
### _DL based Power Estimation_
Existing DL-based power estimation solutions use an end-to-end learning flow. Grannite [16] proposes a DAG-GNN based power estimation solution for gate-level netlists based on register transfer level (RTL) simulations. Register states and unit inputs from RTL simulations are used as node features, and the average toggle rates of combinational gates are predicted.
PowerGear [17] conducts the power estimations using GNN for FPGA high-level synthesis (HLS). They propose a graph construction flow to convert HLS design into graph-structured data and use an edge-centric GNN model which replicates the formulation of dynamic power to predict the power estimates. Primal [4] is a convolutional neural network (CNN) based power estimation solution for ASIC designs using RTL simulation traces. It provides cycle-by-cycle power inference for different workloads.
These solutions focus exclusively on the power estimation task and are not transferable to related problems. Besides, they require heavy feature engineering (e.g., multiple node and edge features are used in [16]). In contrast, our work focuses on learning a generic representation for sequential netlists that is useful for multiple tasks, such as power estimation and latency analysis, based on the gate type as the only node feature.
## III Methodology
Fig. 1 shows the overview of the DeepSeq model. To prepare the sequential circuit dataset, we extract sub-circuits with 150 to 300 nodes from open-source benchmarks [18, 19, 20]. The use of small sub-circuits helps to accelerate the DeepSeq training process. After training, DeepSeq can generalize to much larger circuits thanks to the generalization capability of GNNs from small-scale to large-scale graphs [21], as demonstrated in Sec. V. Similar to [6], we pre-process the combinational part of the circuits into an and-inverter graph (AIG) format to obtain a uniform design distribution. In this way, all circuits from the different benchmarks end up with only two gate types: 2-input AND gates and inverters. Furthermore, to ensure effective sequential circuit learning, we propose a multi-task training objective. Specifically, we simulate a random workload for every netlist and generate two sets of strongly related node-level supervision: 1) transition probabilities (_TR_) and 2) logic probabilities (_LG_) (more details are given in Sec. III-A).
In the next step, we design DeepSeq, a novel DAG-GNN model for sequential circuits. Specifically, we model the circuits as directed graphs and propose a customized information propagation scheme to encode the temporal correlations caused by FFs in sequential netlists. Moreover, to achieve our multi-task training objective, we design a dedicated aggregation function, _dual attention_, that learns transition and logic probabilities in a more accurate manner (Sec. III-B provides more details about the DeepSeq design).

Fig. 1: The overview of the DeepSeq framework for sequential circuit representation learning.
### _Training Objective: Multi-Task Learning_
We adopt a multi-task learning (MTL) [9] paradigm for DeepSeq, as shown in Fig. 1. Specifically, given an input circuit in AIG format, the objective of DeepSeq is to simultaneously predict the transition probabilities (\(\mathcal{T}^{TR}\)) and the logic probability, i.e., the probability of a node being logic 1 (\(\mathcal{T}^{LG}\)). Due to the presence of FFs in a sequential circuit, its behavior is reflected in the temporal correlation between gates and FFs. Adding or removing an FF can heavily affect the properties of the sequential circuit, such as latency. Arguably, transition probabilities are the most effective way to capture this information, so using them as a learning objective helps to learn an effective and accurate representation of sequential circuits. Therefore, we supervise each node with a 2-d vector representing the probabilities of 0\(\rightarrow\)1 and 1\(\rightarrow\)0 transitions. We ignore the 0\(\rightarrow\)0 and 1\(\rightarrow\)1 transition probabilities because they carry no information about a change in a node's state.
Besides transition probabilities, we also supervise each node with a 1-d vector representing the logic probability. The reasons for using the logic probability as supervision are two-fold: (i) it encodes the true structural information and computational behavior of the combinational part of the sequential circuit; (ii) the transition probabilities of a gate or FF in a sequential circuit depend on the logic probability of that gate or FF in two consecutive clock cycles. Consequently, using the logic probability as a second supervision helps to learn a more informative and accurate sequential circuit representation.
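As a sketch, both supervision signals could be extracted from a node's simulated per-cycle trace as follows. The paper does not give this code; the counting convention over consecutive cycle pairs is our assumption:

```python
def node_supervision(trace):
    """Extract the two supervision labels for one node from its
    simulated per-cycle values (a list of 0/1): the logic-1 probability
    and the (P(0->1), P(1->0)) transition probabilities, counted over
    consecutive cycle pairs. 0->0 and 1->1 pairs are ignored."""
    logic_p = sum(trace) / len(trace)
    pairs = list(zip(trace, trace[1:]))
    rise = sum(1 for a, b in pairs if (a, b) == (0, 1)) / len(pairs)
    fall = sum(1 for a, b in pairs if (a, b) == (1, 0)) / len(pairs)
    return logic_p, (rise, fall)
```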
We learn both tasks together by minimizing the sum of L1 losses of individual tasks as shown in Eq. (3).
\[\mathcal{L}=L^{TR}+L^{LG} \tag{3}\]
### _The Proposed Model_
We now elaborate the GNN design of DeepSeq. Due to the presence of Flip-Flops (FFs), the behavior of sequential circuits depends upon the current input pattern and current state (previous output) of the circuit. In other words, it depends upon the sequence of input patterns applied at circuit inputs over a period of time. The existing DAG-GNN models such as DAG-ConvGNNs [13, 14] and DAG-RecGNNs [15] are infeasible for learning periodic information updates in circuit graphs. Since DeepGate [6] and FGNN [7] are designed for representation learning of combinational circuits only, they also cannot deal with temporal correlation in sequential circuits. Moreover, the existence of FFs can cause cycles in the circuit. So, it is not straightforward to apply a DAG-GNN model to a directed cyclic graph.
To address the above concerns, we propose a novel DAG-GNN architecture for sequential circuit learning. More precisely, given an input circuit in AIG format, we first map it into a directed graph \(\mathcal{G}\). We use the one-hot encoding of the gate type as the node feature \(x_{v}\) for each node \(v\). To be specific, a sequential AIG circuit contains only AND gates, NOT gates, primary inputs (PIs), and flip-flops (FFs); hence, a 4-d vector is used as the node feature of each node according to its type. We also assign an embedding vector \(h_{v}\) to every node \(v\) that is learned during training to encode the representation of the sequential circuit. We simulate every circuit with a random workload to generate the required supervision. Since the workload is defined in terms of the behavior of the circuit's PIs in the testbench, for every circuit we first randomly generate logic-1 probabilities for all PIs. Based on these probabilities, we then generate a sequential pattern with 10,000 cycles. In this way, the generated sequential patterns represent random workloads for the corresponding circuits. We include the workload information in each circuit during the learning process by initializing the \(h_{v}\) of its PIs with the logic-1 probability of the sequential pattern applied on them. For example, if the logic-1 probability of a particular PI in a circuit is \(0.1\) according to the applied sequential pattern and \(h_{v}\) has \(64\) dimensions, then all dimensions of \(h_{v}\) contain the value \(0.1\). The \(h_{v}\) of the remaining nodes are initialized randomly. Note that we keep the \(h_{v}\) of primary inputs (PIs) fixed and update the \(h_{v}\) of the remaining nodes during the GNN propagation. In this way, DeepSeq learns to infer both tasks, i.e., \(\mathcal{T}^{TR}\) and \(\mathcal{T}^{LG}\), based on the given workload information and embeds the true computational information of the circuit on each node \(v\).
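The workload-generation step described above might look as follows. This is a hedged sketch: the function name and the use of Python's `random` module are our own choices:

```python
import random

def random_workload(pis, n_cycles=10_000, dim=64, seed=0):
    """Draw a random logic-1 probability for every primary input,
    sample an n_cycles-long 0/1 pattern from it, and initialize that
    PI's embedding h_v by broadcasting the probability across all
    `dim` dimensions."""
    rng = random.Random(seed)
    p1 = {pi: rng.random() for pi in pis}
    patterns = {pi: [1 if rng.random() < p else 0 for _ in range(n_cycles)]
                for pi, p in p1.items()}
    h_init = {pi: [p1[pi]] * dim for pi in pis}
    return p1, patterns, h_init
```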
Next, considering the presence of cycles (see example circuit in Fig. 2) and the periodic information processing in sequential circuits, we designed a customized sequential propagation scheme in DeepSeq as follows:
* Move FFs to logic level 1 (LL-1) by removing their incoming edges. This removes the cycles and makes the FFs pseudo primary inputs (PPIs).
* Propagate the information using the forward layer from PIs to POs in a levelized, sequential manner through the combinational part of the circuit only. Note that the current states of FFs are not updated but used as predecessors' information for their corresponding successor nodes during this propagation as in the sequential circuit, the present states of FFs are used as pseudo inputs at each clock cycle.
* In this step, we propagate information using the reverse layer. The reverse layer is similar to the forward layer except that it processes the graph in reverse topological order. The reason for including the reverse layer is that the information from the successor nodes is useful to learn the implications implicitly in the circuit graphs [6].
* Since the circuits in our dataset contains D-FFs only, in which the output follows the state of the input, in the last step, we copy the updated representations of FFs' predecessors to FFs. This step mimics the behavior that the FFs are only updated at each clock cycle.
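The four-step scheme above can be sketched as follows. The dict-based circuit representation and the `update` callable (which stands in for the Aggregate/Combine step of Eq. (4)) are our own simplifications, and the reverse-layer pass is omitted for brevity:

```python
def sequential_propagation(levels, preds, ff_pred, h, update):
    """levels: combinational nodes grouped by logic level in
    topological order; preds: predecessors of each node (FFs included,
    acting as pseudo primary inputs whose states are read but not
    updated here); ff_pred: maps each D-FF to its D-input node;
    update: stands in for the Aggregate/Combine step of Eq. (4)."""
    for level in levels:                    # forward, levelized pass
        for v in level:
            h[v] = update(h[v], [h[u] for u in preds[v]])
    for ff, d in ff_pred.items():           # FFs copy their D input
        h[ff] = h[d]
    return h
```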
Fig. 2 illustrates the above process. Similar to standard GNNs, we use \(Aggregate\) and \(Combine\) functions to compute the hidden states of each node except PIs during propagation. Hidden state \(h_{v}\) of a node \(v\) is computed as:
\[h_{v}^{t}=Combine^{t}(h_{v}^{t-1},Aggregate^{t}(\{h_{u}^{t}|u\in\mathcal{P}(v)\})),t=1,\ldots,T \tag{4}\]
where \(\mathcal{P}(v)\) is the set of predecessors of node \(v\). This model is recursively applied for \(T\) iterations to generate the final hidden state for each node. The reason for using the recurrent architecture is that it is impractical for GNNs to capture the circuit's computational and structural information with a single propagation, as proved in [6].

Fig. 2: The overview of the customized propagation scheme.
\(Aggregate-DualAttention:\) The training objective defined in Sec. III-A requires the model to differentiate and learn logic and transition probabilities simultaneously. The two probabilities differ in their computational behavior, which makes existing aggregation functions infeasible, as they capture only a single behavior at a time [10, 12]. To solve this problem, we define the _Dual Attention_ aggregation function in the additive form [14] to instantiate the \(Aggregate\) function in Eq. (4). It mimics and learns the computational behavior of logic and transition probabilities at the same time.
Specifically, for \(\mathcal{T}^{LG}\) we calculate the aggregated message similar to [6], i.e., for a node \(v\) at an iteration \(t\), we first compute the aggregated message \(\mathbf{m}_{v}^{LG}\) from \(v\)'s predecessors as follows:
\[\mathbf{m}_{v}^{LG^{t}}=\sum_{u\in\mathcal{P}(v)}\alpha_{uv}^{t}\mathbf{h}_{u}^{t}\quad\text{where}\quad\alpha_{uv}^{t}=softmax(w_{1}^{\top}\mathbf{h}_{v}^{t-1}+w_{2}^{\top}\mathbf{h}_{u}^{t}) \tag{5}\]
\(\alpha_{uv}^{t}\) is a weighting coefficient that learns the impact of each predecessor's information on node \(v\), since different inputs have a different impact on determining the output of a logic gate. In this manner, Eq. (5) mimics the logic computation, and we learn the information required for \(\mathcal{T}^{LG}\).
After that, we perform another attention between \(\mathbf{m}_{v}^{LG^{t}}\) and \(\mathbf{h}_{v}^{t-1}\) as shown in Eq. (6).
\[\mathbf{m}_{v}^{TR^{t}}=\alpha_{v}^{t}\mathbf{m}_{v}^{LG^{t}}\quad\text{where}\quad\alpha_{v}^{t}=softmax(w_{1}^{\top}\mathbf{h}_{v}^{t-1}+w_{2}^{\top}\mathbf{m}_{v}^{LG^{t}}) \tag{6}\]
\(\mathbf{m}_{v}^{LG^{t}}\) represents the logic computation result of node \(v\) at the \(t^{th}\) iteration, and \(\mathbf{h}_{v}^{t-1}\) represents the computational state of node \(v\) at the \((t-1)^{th}\) iteration. The intuition is that transition probabilities depend upon the current state and the previous state of the node. Correspondingly, Eq. (6) mimics the transition probability computation for \(\mathcal{T}^{TR}\). Finally, we concatenate the results from Eq. (5) and Eq. (6) as our final aggregated message, as shown in Eq. (7).
\[\mathbf{m}_{v}^{t}=\mathbf{m}_{v}^{TR^{t}}||\mathbf{m}_{v}^{LG^{t}} \tag{7}\]
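A minimal numpy sketch of Eqs. (5)-(7) for a single node follows. One stated assumption: since Eq. (6)'s softmax is taken over a single score (which would always yield 1), we substitute a sigmoid gate here:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def dual_attention(h_v_prev, h_preds, w1, w2):
    """h_v_prev: (d,) state of node v from the previous iteration;
    h_preds: (k, d) current states of its predecessors; w1, w2: (d,)
    learned weight vectors (fixed here for illustration)."""
    scores = h_preds @ w2 + h_v_prev @ w1       # Eq. (5) attention logits
    alpha = softmax(scores)
    m_lg = alpha @ h_preds                      # logic message, Eq. (5)
    # Eq. (6): a softmax over one score always gives 1, so we gate with
    # a sigmoid instead (an implementation assumption of this sketch).
    gate = 1.0 / (1.0 + np.exp(-(w1 @ h_v_prev + w2 @ m_lg)))
    m_tr = gate * m_lg                          # transition message
    return np.concatenate([m_tr, m_lg])         # Eq. (7)
```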
\(Update-GRU:\) We use gated recurrent unit (GRU) to instantiate the \(Combine\) function in Eq. 4 as follows:
\[\mathbf{h}_{v}^{t}=GRU([\mathbf{m}_{v}^{t},\mathbf{x}_{v}],\mathbf{h}_{v}^{t- 1}) \tag{8}\]
where \(\mathbf{m}_{v}^{t}\), \(\mathbf{x}_{v}\) are concatenated together and treated as input, and \(\mathbf{h}_{v}^{t-1}\) is considered as the past state of GRU.
\(Regressor:\) After processing the input circuit graph through the forward and reverse layers recursively \(T\) times, we pass the final hidden states of the nodes \(\mathbf{h}_{v}^{T}\) into two independent sets of multi-layer perceptrons (MLPs) for the prediction of \(\hat{y}^{TR}\) and \(\hat{y}^{LG}\), as shown in Fig. 1. These MLPs do not share weights and regress every node to predict its task-specific probabilities.
## IV Experiments
### _Experimental Settings_
#### IV-A1 Dataset
To prepare the dataset, we extract \(10,534\) sequential sub-circuits from various open-source benchmarks: ISCAS'89 [18], ITC'99 [19], and Opencore [20]. The statistics of the training dataset are shown in Tab. I. We randomly generate one workload for each circuit as described in Sec. III-B. After that, we collect the transition and logic probability of each gate and FF in the circuit as the ground truth by simulating the workload. It should be noted that the circuits in our dataset only contain D flip-flops (DFFs). Since other kinds of FFs can be converted into a combination of a DFF and additional combinational logic, DeepSeq is also applicable to circuits containing other kinds of FFs.
#### IV-A2 Baseline Models
We compare the performance of DeepSeq with two baseline models defined for directed graphs, i.e., DAG-ConvGNN [13, 14] and DAG-RecGNN [15]. Each model consists of one forward and one reverse layer. For both models, we try two different aggregation functions, i.e., convolutional sum [10] and attention [12, 14]. The combine functions in both baseline models are instantiated using a GRU [13].
#### IV-A3 Implementation Details
For DAG-RecGNN and DeepSeq, we set the number of iterations \(T=10\) to obtain the final embeddings. The \(\mathbf{h}_{v}\) has 64 dimensions, and the regressor consists of two independent sets of 3-layer MLPs for the prediction of both tasks. The rectified linear unit (ReLU) is used as the activation function between MLP layers. All models are trained using the ADAM optimizer for \(50\) epochs with a learning rate of \(1\times 10^{-4}\). To speed up the training, we additionally use the topological batching method from [14].
#### IV-A4 Evaluation Metric
We use the _average prediction error_ for both tasks to assess the effectiveness of different GNN models. It is defined as the average of the absolute differences between the ground-truth probabilities and the probabilities predicted by the GNN models, as indicated in Eq. (9). A smaller value indicates better model performance.
\[Avg.\ Prediction\ Error=\frac{1}{|\mathcal{V}|}\sum_{v\in\mathcal{V}}|y_{v}-\hat{y} _{v}| \tag{9}\]
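In code, Eq. (9) is simply the mean absolute error over all supervised nodes:

```python
def avg_prediction_error(y, y_hat):
    """Eq. (9): mean absolute difference between ground-truth and
    predicted probabilities over all supervised nodes."""
    return sum(abs(a - b) for a, b in zip(y, y_hat)) / len(y)
```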
### _Performance on Probabilities Prediction Task_
#### IV-B1 Transition Probabilities and Logic Probability Prediction
The results in Tab. II demonstrate the better performance of DeepSeq compared to the baseline models in terms of average prediction error for both tasks. From the results, we observe that DAG-ConvGNN is always prone to poor performance. The reason is that a single propagation through the circuit graph cannot capture the complex structural and computational information of the underlying circuit. Therefore, by using a recursive architecture, i.e., DAG-RecGNN, we can significantly improve the performance on both tasks.
With our dedicated dual attention aggregation function and customized propagation scheme, DeepSeq outperforms the best-performing baseline model, i.e., DAG-RecGNN with attention as the aggregation function. For both tasks \(\mathcal{T}^{TR}\) and \(\mathcal{T}^{LG}\), it achieves \(20.00\%\) and \(15.79\%\) relative improvement on avg. prediction error,
respectively. This proves that our proposed customized propagation scheme is more effective for sequential circuit learning than the simple propagation scheme used in baseline models. Also, our aggregation function _dual attention_ is more suitable for our multi-task objective. We discuss the performance gain due to individual components of DeepSeq in the following section.
#### IV-B2 Effectiveness of different components of DeepSeq
Tab. III shows the effectiveness of different components of DeepSeq. From Table II, we know that the best-performing baseline model is DAG-RecGNN with the attention as the aggregation function. In this section, we compare DeepSeq with it. From our experiments, we observe that DeepSeq coupled with customized propagation brings \(11.43\%\) and \(2.11\%\) relative improvement on avg. prediction error for \(\mathcal{T}^{TR}\) and \(\mathcal{T}^{LG}\), respectively.
After using the _dual-attention_ aggregation function, we further gain \(9.68\%\) and \(13.98\%\) relative improvement on avg. prediction error for \(\mathcal{T}^{TR}\) and \(\mathcal{T}^{LG}\) respectively over the DeepSeq model with simple attention as the aggregation function. The major gain in error improvement for both tasks proves the effectiveness of our proposed aggregation scheme.
## V Case Study: Power Estimation Task
After designing a general model for sequential circuit learning, we apply it to a downstream power estimation task. Specifically, we describe the performance of DeepSeq on netlist-level power estimation. Formally, the circuit dynamic power is characterized as \(P=\frac{1}{2}CV_{dd}^{2}y_{avg}^{TR}\), wherein \(C\) is the capacitance, \(V_{dd}\) is the supply voltage, and \(y_{avg}^{TR}\) is the average transition probability of all gates.
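The power model above can be sketched directly. How the per-gate (rise, fall) transition probabilities are collapsed into the single \(y_{avg}^{TR}\) factor is our assumption:

```python
def avg_transition_probability(tr_pairs):
    """Collapse per-gate (rise, fall) transition probabilities into
    the single y_avg_TR factor; summing the pair per gate before
    averaging is our assumption."""
    return sum(r + f for r, f in tr_pairs) / len(tr_pairs)

def dynamic_power(c, vdd, y_avg_tr):
    """P = (1/2) * C * Vdd^2 * y_avg_TR, exactly as stated in the text;
    units follow whatever C and Vdd are expressed in."""
    return 0.5 * c * vdd ** 2 * y_avg_tr
```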
The existing power estimation methods mainly fall into two categories: simulative and non-simulative. The simulative methods [22] rely on the simulation of a huge number of sequential patterns to calculate the switching activities. Although such simulation based power estimation is accurate, it is not practical for modern large-scale circuit designs due to the unacceptable runtime. The non-simulative methods [23, 24] are pattern-free and use heuristics to estimate the power in a polynomial time. However, they produce inaccurate results on structures such as reconvergence fanouts and cyclic FFs [23].
### _Fine-tuning of DeepSeq for Power Estimation_
We apply the pre-trained DeepSeq model for power estimation on real sequential circuits that are substantially larger than, and different from, our pre-training dataset. The accuracy of power estimation is subject to two factors, i.e., the circuit structure and the impact of a workload on the circuit's gates and FFs. The pre-trained DeepSeq has already encoded the structural information and computational behavior of sequential circuits. We observe that the impact of random workloads (the distribution of transition probabilities) on these large circuits is quite different from that on the small circuits used for pre-training. In particular, a simulated workload typically activates only a few modules in real applications. According to our empirical results, a large fraction (around \(70\%\)) of the gates in these large circuits show no transition activity under a random workload due to low-power design [25]. Therefore, to learn the impact of workloads on larger circuits (the new distribution of transition probabilities) and accurately infer the power, we fine-tune DeepSeq on these large practical circuit designs. The fine-tuning dataset for a large circuit is generated with the same pipeline as in Sec. III-B. Our empirical study shows that, after fine-tuning with \(1,000\) different workloads on a circuit, DeepSeq can generalize to arbitrary workloads for that circuit.
### _Experimental Settings_
In this experiment, we select six sequential circuits from Opencore [20] that are about \(1\)-\(2\) orders of magnitude larger than those used during pre-training. The descriptions of these circuits are listed in Tab. IV. We select a non-simulative power estimation method [24] as the baseline for comparison. By parsing the testbench file, we collect the transition probability and logic probability of each PI, which constitute the simplified workload information adopted by the DeepSeq model and the baseline method. Our model only supports circuits in AIG format as input; therefore, we decompose each gate into a combination of AND gates and NOT gates without any optimization. The fanout gate in the resulting combination has the same switching activity as the original node. We only record the probabilities of the fanout gates in all converted combinations.
The power estimation pipeline is shown in Fig. 3. We set the transition probability obtained by logic simulation as the ground truth. The baseline method [24] provides an approximated transition probability based on the non-simulative method. In comparison, the fine-tuned DeepSeq predicts the gate-level transition probabilities. The resulting transition probabilities from all three methods are translated into three Switching Activity Interchange Format (SAIF) files. After that, we input these files into a commercial power analysis tool that computes the average power with a TSMC 90nm standard cell library. Finally, we get the ground-truth power consumption (GT Power), the baseline estimated power (Baseline Power), and the DeepSeq estimated power (DeepSeq Power).
### _Results_
#### V-C1 Power Estimation on the large real circuits
Table IV shows the accuracy achieved by DeepSeq over the baseline method, where the estimated powers based on transition probabilities computed from the ground truth, the baseline, and DeepSeq are abbreviated as GT, Baseline, and DeepSeq, respectively. The results show that the power estimation based on the DeepSeq model has a smaller error than the baseline method. Taking the circuit _ptc_ as an example, our model estimates the power consumption with only \(3.20\%\) error, while the baseline method has \(25.55\%\) error. Overall, the baseline power estimation incurs \(16.35\%\) error on average, but DeepSeq shows only \(3.16\%\) error on average, which is much closer to the ground-truth power estimation. DeepSeq brings a significant improvement of \(80.67\%\) in error reduction.
Fig. 3: The pipeline of power estimation
#### V-C2 Power Estimation with different workloads
We take _ac97_ctrl_ as an example to show that DeepSeq generalizes to a new circuit under many different workloads. We assign \(5\) different workloads to this circuit and estimate the power consumption with the same pipeline as in Fig. 3. As illustrated in Tab. V, DeepSeq achieves only \(2.57\%\) error relative to the GT power estimation, while the baseline method has \(15.51\%\) error.
### _Limitations_
DeepSeq embeds the true sequential circuit behavior and outperforms the baseline GNN models for both the \(\mathcal{T}^{TR}\) and \(\mathcal{T}^{LG}\) tasks. After fine-tuning, it provides more accurate power estimates than traditional non-simulative power estimation methods. However, DeepSeq is currently \(3\times\) to \(4\times\) slower than the commercial simulation tool, which employs many parallelization techniques. The main reason is that DeepSeq performs message passing in a levelled, sequential manner. Similar problems have been observed in other asynchronous message-passing networks, such as D-VAE [13] and DAGNN [14]. One potential solution is to apply the parallelizable computation structure encoder (PACE) [26] to map the graph structure to sequences of node embeddings and then capture the relations between nodes in a parallel manner. Moreover, it is possible to extend DeepSeq to embed netlists at the subcircuit level, thereby dramatically reducing the size of the learned model. We leave these directions as future work.
## VI Conclusion
In this paper, we present _DeepSeq_, a novel representation learning framework for sequential netlists. On the one hand, we introduce a multi-task learning objective in DeepSeq to effectively encode the computational and structural information of sequential circuits into the embeddings of logic gates and FFs. On the other hand, DeepSeq employs a novel DAG-GNN architecture equipped with a customized propagation scheme and a dedicated aggregation mechanism named _Dual Attention_ for effective learning. With the above techniques, DeepSeq outperforms state-of-the-art DAG-GNN models in terms of prediction accuracy for transition and logic probabilities. To evaluate the effectiveness and generalizability of the pre-trained DeepSeq, we further apply it to the power estimation task. After fine-tuning, it achieves an \(80.67\%\) average error reduction compared to the baseline power estimation method.
|
2306.06094 | Leveraging Large Language Models for Scalable Vector Graphics-Driven
Image Understanding | Large language models (LLMs) have made significant advancements in natural
language understanding. However, through that enormous semantic representation
that the LLM has learnt, is it somehow possible for it to understand images as
well? This work investigates this question. To enable the LLM to process
images, we convert them into a representation given by Scalable Vector Graphics
(SVG). To study what the LLM can do with this XML-based textual description of
images, we test the LLM on three broad computer vision tasks: (i) visual
reasoning and question answering, (ii) image classification under distribution
shift, few-shot learning, and (iii) generating new images using visual
prompting. Even though we do not naturally associate LLMs with any visual
understanding capabilities, our results indicate that the LLM can often do a
decent job in many of these tasks, potentially opening new avenues for research
into LLMs' ability to understand image data. Our code, data, and models can be
found here https://github.com/mu-cai/svg-llm. | Mu Cai, Zeyi Huang, Yuheng Li, Utkarsh Ojha, Haohan Wang, Yong Jae Lee | 2023-06-09T17:57:01Z | http://arxiv.org/abs/2306.06094v2 | # Leveraging Large Language Models for Scalable Vector Graphics-Driven Image Understanding
###### Abstract
Recently, large language models (LLMs) have made significant advancements in natural language understanding and generation. However, their potential in computer vision remains largely unexplored. In this paper, we introduce a new, exploratory approach that enables LLMs to process images using the Scalable Vector Graphics (SVG) format. By leveraging the XML-based textual descriptions of SVG representations instead of raster images, we aim to bridge the gap between the visual and textual modalities, allowing LLMs to directly understand and manipulate images without the need for parameterized visual components. Our method facilitates simple image classification, generation, and in-context learning using only LLM capabilities. We demonstrate the promise of our approach across discriminative and generative tasks, highlighting its (i) robustness against distribution shift, (ii) substantial improvements achieved by tapping into the in-context learning abilities of LLMs, and (iii) image understanding and generation capabilities with human guidance. Our code, data, and models can be found here [https://github.com/mu-cai/svg-llm](https://github.com/mu-cai/svg-llm).
Figure 1: (a) The left example showcases an SVG representation, illustrating a golf course. Each geometric shape in the SVG code represents a distinct object or line within the two-dimensional graphics. For instance, the red polygon represents a flag in the graphical image. (b) On the right, we provide an example that shows that LLMs are able to understand and generate shape, color, and relationships between different elements in an interactive manner.
## 1 Introduction
Large language models (LLMs) like ChatGPT [26] and GPT-4 [27] have gained prominence for their remarkable reasoning, in-context learning, and open-ended task abilities [8]. While large vision models (LVMs) [13; 24; 9] have also achieved impressive results in various tasks, they appear to exhibit fewer of these abilities.
As we delve deeper into the distinct reasoning, in-context learning, and open-ended task abilities of LLMs and LVMs, it is essential to recognize that both the task complexity and the data structure they handle play a crucial role in shaping their capabilities. LLMs like GPT-4 utilize a decoder-only architecture to handle diverse tasks, facilitated by the shared textual modality of both input and output. Conversely, vision tasks, with their pixel-based inputs and highly variable outputs--from labels and bounding boxes to segmentation masks, generated images, or textual captions--impose greater complexity and constraints on the capabilities of LVMs. Moreover, LLMs leverage the internet's diverse textual data and the inherent sequential structure of language to learn complex relationships and generate contextually relevant responses. In contrast, the continuous nature of visual data may also make it more challenging for vision models to discern complex patterns and relationships compared to the discrete structure of language data [5; 12; 22].
Recently, various vision language models [27; 33; 23] have been developed to capitalize on the impressive abilities of LLMs by integrating them with vision techniques. These efforts aim to bridge the gap between the two modalities by incorporating vision-specific components, such as the Vision Transformer [13] (ViT), which maps input images into embeddings or a set of continuous tokens. This approach has advanced vision-language understanding, but it relies on an additional visual encoder to convert images into a latent representation that the LLM can comprehend and align with text-based embeddings. Additionally, generative tasks may necessitate a visual decoder to revert these latent representations back into images.
An intriguing question that follows is whether we can harness the impressive abilities of large language models, such as reasoning, in-context learning, and open-ended task abilities, to tackle vision tasks _without any visual components_. That is, can we represent images using text-based descriptions that detail shapes, edges, and colors, to enable LLMs to directly and effectively understand and manipulate images?
Scalable Vector Graphics (SVG) [16] is a format for describing two-dimensional graphics in a manner that integrates with the web. Unlike raster images (like JPEG, PNG, or BMP) that are pixel-based, SVGs are defined in XML and use mathematical equations to depict shapes, curves, lines, and colors, as shown in Figure 1. One of the key advantages of using SVG over rasterized images is the potential to leverage large language models like GPT-4 for understanding and manipulating image content based on text prompts, opening up unique opportunities and applications beyond the traditional realm of vision models.
In this paper, we explore the potential of leveraging LLMs to perform a variety of vision tasks. Specifically, we convert raster images into SVG format and input these, along with tailored prompts, into LLMs. Our method is evaluated across discriminative and generative visual tasks, with promising initial results. In discriminative tasks, the SVG representations demonstrate shape-biased robustness against distribution shifts, notably outperforming raster-based methods on distribution shift benchmarks. Additionally, we enhance image classification by harnessing the in-context learning capabilities of LLMs, exceeding the performance of zero-shot learning. In generative tasks, our approach enables image generation and editing based on interactive, chat-based feedback, refining generated results to meet human expectations. Our method excels in visual prompting tasks on synthetic benchmarks, outperforming state-of-the-art methods by utilizing the strong reasoning capabilities of LLMs. Through the use of human-engineered prompts, we showcase the ability of LLMs to identify and execute transformations related to color, shape, style, and content within SVGs, generating credible outcomes. Lastly, we present preliminary results suggesting the potential of LLMs to perform complex visual tasks, like segmenting real images.
In summary, our work makes the following contributions:
* We propose a new approach to processing images with LLMs, converting raster images into SVG format for direct processing, thereby creating new possibilities for image understanding and manipulation.
* We demonstrate that SVG representation is more of a shape-biased format, which shows robustness to distribution shifts and notably surpasses raster-based methods on a distribution shift benchmark. Furthermore, we can enhance image classification performance by leveraging the in-context learning abilities of LLMs, surpassing the results of zero-shot learning.
* We attempt to achieve modification and generation of images with interactive chat-based feedback by modifying and generating SVG with LLMs. Moreover, we show LLMs' ability to identify and apply transformations between SVG pairs, recognize diverse transformations, and generate valid outcomes, suggesting their potential in understanding complex visual tasks, such as real image segmentation.
While our research demonstrates the potential of using SVG with LLMs, there are a couple of key limitations. Specifically, the standard SVG representation is not as effective in handling photographic content due to the loss of fine-grained details. Naively countering this by incorporating more details in the SVG representation can lead to a sequence length that is prohibitively long for current Transformer-based LLMs. Despite these limitations, we believe that our work offers promising initial results for the integration of LLMs and SVG in visual tasks. Addressing the aforementioned limitations could lead to more powerful image representation algorithms and pave the way for more versatile and comprehensive artificial intelligence systems.
## 2 Related Work
### Scalable Vector Graphics
Vector graphics describe images as collections of parameterized shape primitives such as polygons, circles, and rectangles, rather than a regular raster grid of pixel values [28]. This representation is extensively supported by web browsers and can be rendered without any special software or plugins [3]. Primitives are usually characterized by a set of coordinates delineating their contour and the associated color. This leads to a compact and infinitely scalable representation where the appearance can be easily modified by adjusting stroke or color parameters. Consequently, vector graphics are the preferred choice among graphic artists and designers, as images maintain their sharpness regardless of the zoom level. Encapsulated PostScript (EPS) and Scalable Vector Graphics (SVG) are two notable vector-based formats [16].
SVG format stores images as XML-based text files that define geometrical objects and their properties [16], shown in Figure 1. This enables easy editing, manipulation, and embedding, which makes SVG particularly versatile for web applications and graphic design tasks [3]. EPS is another vector format for high-quality graphics that can be resized without losing quality [18]. In this paper, we employ large language models (LLMs) to understand images in the SVG format, achieving robust shape-color debiasing along with enhanced visual understanding and generation.
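Because SVG is plain XML, the "easy editing and manipulation" mentioned above is literally a text operation: the same attribute strings an LLM reads in the prompt can be inspected and rewritten with any XML library. A minimal sketch using only Python's standard library (the two-shape SVG below is a hand-written toy example, not from the dataset):

```python
import xml.etree.ElementTree as ET

# A minimal hand-written SVG: a red flag-like polygon and a green circle.
svg = """<svg xmlns="http://www.w3.org/2000/svg" width="100" height="100">
  <polygon points="10,10 40,25 10,40" fill="red"/>
  <circle cx="70" cy="70" r="15" fill="green"/>
</svg>"""

root = ET.fromstring(svg)
ns = "{http://www.w3.org/2000/svg}"

# Inspect shapes and colors -- the same attributes an LLM sees as text.
shapes = [(el.tag.replace(ns, ""), el.get("fill")) for el in root]

# Editing is an attribute operation: recolor the circle to blue.
root.find(f"{ns}circle").set("fill", "blue")
edited = ET.tostring(root, encoding="unicode")
```

This shape/color separation at the attribute level is exactly what later enables the shape-color disentanglement experiments.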
### In-Context Learning
In-context learning seeks to enhance model performance by providing additional context during inference, typically in the form of several "question-answer" pairs. Although initially proven effective in natural language processing (NLP) [29], in-context learning has only recently been introduced to the vision domain. Flamingo [2] was the first to apply in-context learning to image and video understanding tasks like Question-Answering (QA). Subsequently, [5] investigated in-context generation using an MAE-VQGAN architecture.
Specifically, [5] pretrained a neural network to fill in missing patches of grid-like images to enable in-context learning for unseen tasks such as image segmentation. However, the authors sourced the pretraining dataset from a large set of figures in computer vision papers, most of which were hand-crafted and lacked clear connections with natural images. Our paper directly applies in-context learning by feeding SVG data into the LLM, demonstrating the potential for robust image understanding and generation capabilities without the need for a trained visual encoder.
### Large Language Models
Large Language Models (LLMs) have attracted much attention in recent years due to their remarkable performance across numerous natural language processing tasks. GPT-3 [6], developed by OpenAI, is
a prime example of this category, boasting an immense scale of 175 billion parameters and human-like text generation capabilities. In a similar vein, BERT [11] (Bidirectional Encoder Representations from Transformers), introduced by Google, takes advantage of the transformer architecture and has substantially enhanced the state-of-the-art across various tasks by learning deep bidirectional representations. ChatGPT [26], another noteworthy model, is a GPT variant specifically designed for human-like conversational abilities. The most recent iteration, GPT-4 [27], succeeds GPT-3 [7] and carries on the LLM advancements in terms of scale and performance. These models lay the groundwork for our research, enabling us to investigate their potential in more complex tasks such as image processing and understanding. Our work effectively illustrates the applicability of LLMs to SVG-based image understanding and generation, paving the way for novel applications and research directions in the visual domain.
## 3 Tasks and Experiments
We first introduce the architecture, dataset, and implementation details in Section 3.1, 3.2, and 3.3. We then demonstrate LLMs have the capability to understand SVG representation for image recognition in Section 3.4, including zero-shot recognition, in-context learning, fine-tuning with LLMs, as well as robustness to distribution shift. Finally, we evaluate image generation and editing with interactive chat-based feedback by modifying and generating SVG with LLMs in Section 3.5.
### Architecture
Figure 2 illustrates the major difference between our method and the vision methods in solving vision tasks. In particular, we convert raster images into the SVG format, and then input these SVG images, coupled with thoughtfully crafted prompts, into the LLMs to accomplish a diverse range of vision tasks.
### Dataset
#### 3.2.1 Human Designed SVG Dataset
We collect a dataset from the public collection of SVG images.2 Specifically, we collect the digits and icons to demonstrate image recognition and generation capabilities. Examples are shown in Figure 3 (a) and (b).
Footnote 2: [https://www.svgrepo.com/](https://www.svgrepo.com/), [https://www.kaggle.com/datasets/victorcondino/svgicons](https://www.kaggle.com/datasets/victorcondino/svgicons)
#### 3.2.2 Convert Raster Images to SVG
**Directly convert using curve tracing.** Given the rich set of natural images in raster format, we utilize the curve tracing algorithm to convert RGB images into the SVG format.3 Specifically, we convert MNIST [10] to SVG format using this approach, shown in Figure 3 (c).
Footnote 3: [https://github.com/visioncortex/vtracer](https://github.com/visioncortex/vtracer)
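To make the raster-to-SVG idea concrete without depending on the vtracer tool, here is a deliberately crude stand-in: it emits one `<rect>` per foreground pixel of a binary bitmap, whereas a real tracer would fit polygons and curves to pixel clusters. The 3×3 "digit" is a toy input:

```python
def bitmap_to_svg(bitmap, scale=10, fill="black"):
    """Emit one <rect> per foreground pixel of a binary bitmap.
    A real tracer (e.g. vtracer) fits polygons/curves to pixel clusters;
    this crude version only illustrates the raster -> SVG direction."""
    h, w = len(bitmap), len(bitmap[0])
    rects = [
        f'<rect x="{x * scale}" y="{y * scale}" width="{scale}" '
        f'height="{scale}" fill="{fill}"/>'
        for y in range(h) for x in range(w) if bitmap[y][x]
    ]
    body = "\n".join(rects)
    return (f'<svg xmlns="http://www.w3.org/2000/svg" '
            f'width="{w * scale}" height="{h * scale}">\n{body}\n</svg>')

# A tiny 3x3 vertical stroke, standing in for a digit "1".
img = [[0, 1, 0],
       [0, 1, 0],
       [0, 1, 0]]
svg = bitmap_to_svg(img)
```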
**Use segmentation prior knowledge for conversion.** An image typically contains both high-level features, such as object shape, and low-level features, such as texture. However, directly converting natural images into SVG format often leads to excessive attention to fine low-level details, resulting in very long SVG files. Biasing the representation toward object shape therefore helps build a more efficient SVG format. We use a universal segmentation model, the Segment Anything Model (SAM) [21], to convert the image into a set of masks, which yields a high-level abstraction of the scene.
Figure 2: Architecture of our model. We illustrate the major difference between standard vision methods (left) and our method (right) in solving vision tasks.
Specifically, we apply SAM to PASCAL-VOC dataset [15], then extract 20 masks with the largest area, where the color of each mask is the average pixel value within the mask, shown in Figure 3 (d).
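The mask-selection step described above (keep the largest masks, flat-color each with its mean pixel value) can be sketched independently of SAM itself. In this illustrative sketch the masks are assumed to be given as 2-D 0/1 grids; the polygonization of the colored layers would then be handled by the curve tracer:

```python
def masks_to_layers(image, masks, k=20):
    """Keep the k largest masks (by pixel count) and flat-color each with the
    mean RGB of the pixels it covers. SAM is not run here -- the masks are
    assumed to be given as 2-D 0/1 grids over the image."""
    masks = sorted(masks, key=lambda m: sum(map(sum, m)), reverse=True)[:k]
    layers = []
    for m in masks:
        pix = [image[y][x]
               for y, row in enumerate(m)
               for x, on in enumerate(row) if on]
        mean = tuple(sum(c[i] for c in pix) // len(pix) for i in range(3))
        layers.append((m, mean))
    return layers

# 2x2 toy image with two masks of different sizes.
img = [[(255, 0, 0), (0, 255, 0)],
       [(0, 0, 255), (10, 10, 10)]]
m_big = [[1, 1], [0, 0]]    # covers the top row
m_small = [[0, 0], [0, 1]]  # covers one pixel
layers = masks_to_layers(img, [m_small, m_big], k=2)
```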
### Implementation Details
The strong language understanding and reasoning capability of the LLM is key to the success of SVG understanding. For zero-shot image recognition and generation, we adopt GPT-4 [27]. In addition, we fine-tune Vicuna-7B [30] on the MNIST training set and then evaluate classification performance on the test set.
### Image Recognition
We first conduct image recognition under various settings, which demonstrates the effectiveness of using an LLM for SVG understanding. Due to API resource constraints (request frequency and rate limits), we use a Mini-MNIST set with 10 samples per class (100 samples overall) to test zero-shot and in-context learning performance using GPT-4 [27]. For the fine-tuned Vicuna, we use the whole test set (\(\sim\)10k images).
**Zero-shot image classification.** Owing to the natural textual representation of SVG, we can directly embed SVG within the prompt. For example, we can conduct zero-shot image classification by prompting What semantic category does this SVG image belong to? <SVG>, where <SVG> denotes the actual SVG code embedded within the prompt. We feed the prompt along with the query SVG to GPT-4 to obtain the zero-shot accuracy on MNIST. As shown in Table 1, we get 20% accuracy, indicating that GPT-4 possesses only a weak prior over images in the SVG format.
**In-context learning for image classification.** Though LLMs can solve text understanding tasks in a zero-shot manner, recent studies reveal that in-context learning, where a list of input-output examples is provided, can significantly boost accuracy [6]. However, in-context learning remains difficult for visual representations, especially for image generation [5]. In contrast, in-context learning can be conducted naturally with SVG due to its textual representation.
| Method | ResNet18 | Zero-Shot | ICL (#1) | ICL (#3) | ICL (#10) | Fine-tuning (Vicuna) |
| :-- | :--: | :--: | :--: | :--: | :--: | :--: |
| Image Format | PNG | SVG | SVG | SVG | SVG | SVG |
| MNIST | 97.85% | 20% | 24% | 26% | 28% | 97.24% |
| CMNIST-(A) | 72.48% | 16% | 19% | 23% | 26% | 95.69% |
| CMNIST-(B) | 29.59% | 13% | 20% | 20% | 23% | 92.88% |

Table 1: Image classification results using the PNG and SVG formats. CMNIST denotes the Colored-MNIST dataset; ICL denotes in-context learning. We use a Mini-MNIST dataset (100 images) to evaluate both zero-shot and in-context learning using the GPT-4 API. ResNet and Vicuna are trained on the 60k black-and-white training set and evaluated on the 10k test set.
Figure 3: Visualization of our datasets. (a) and (b) are human-designed SVG vectors and icons. (c) and (d) are converted from raster images. Specifically, (c) is generated using curve tracing from MNIST [10], while (d) is generated using SAM [21] and curve tracing sequentially.
We follow the typical settings for in-context learning, where the prompt is composed of the task instruction, a list of question-answer (i.e., input-output) pairs, and then the final question [32, 25]. For example, for MNIST [10] classification with 2 in-context pairs, an exemplar prompt is ''Instruction: Please predict the digit label for each of the following SVG images. <Q1>:<A1>; <Q2>:<A2>; <Q3>:''. Here <Q#> denotes the SVG code and <A#> denotes the digit label. As a result, the LLM can give a more accurate answer based on the given input-output pairs.
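The prompt format described above can be assembled with a few lines of string handling. In this sketch the SVG strings are trivial placeholders, not real MNIST tracings:

```python
def build_icl_prompt(examples, query_svg):
    """Assemble the in-context prompt: an instruction, question-answer
    pairs in <Q>:<A> form, and the final unanswered query."""
    parts = ["Instruction: Please predict the digit label for each of the "
             "following SVG images."]
    parts += [f"<{svg}>:<{label}>" for svg, label in examples]
    parts.append(f"<{query_svg}>:")
    return " ".join(parts)

prompt = build_icl_prompt(
    [("svg-of-a-7", 7), ("svg-of-a-3", 3)],  # two in-context pairs
    "svg-to-classify",                        # the final query
)
```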
We consider three representative cases: (i) randomly choose 1 class different from that of the query sample, then randomly draw 1 sample from it; (ii) similarly, choose 3 classes, all different from the query class, with 1 sample each; and (iii) randomly choose a sample from each of the 10 classes. As shown in Table 1, even with 1 in-context sample the model shows a 4% accuracy improvement over zero-shot classification, demonstrating the capability of the LLM to learn visual concepts from context. With 1 sample per class, the test accuracy climbs to 28%, which is 8% higher than the zero-shot performance.
**Fine-tuning Vicuna.** The zero-shot and few-shot experiments demonstrate that LLMs possess some visual understanding capability. However, this capability could be limited by the pretraining data, since SVG data may not have been extensive during the pretraining of the LLM. To explore its limits, we fine-tune Vicuna on the training set of MNIST (60k images) as the instruction tuning dataset, where the SVG data is the query from the "human" and the label is the response of the "Assistant". After training for 3 epochs, we evaluate the MNIST test accuracy. As shown in Table 1, fine-tuned Vicuna achieves 97.24%, comparable to the 97.85% achieved by ResNet18 trained on the PNG version for 10 epochs. These image recognition results validate the potential of understanding images via LLMs using SVG, which could be further boosted by incorporating more supervised/unsupervised data into the pretraining stage.
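The instruction-tuning data pairing described above (SVG query from the "human", digit label as the "Assistant" response) can be sketched as follows. The exact field names (`conversations`, `from`, `value`) follow a common Vicuna-style conversation schema but are an assumption for illustration, as the paper does not specify them:

```python
import json

def mnist_to_record(svg_code, label):
    """One hypothetical training record: the SVG code is the human turn,
    the digit label is the assistant turn. Field names are illustrative."""
    return {
        "conversations": [
            {"from": "human",
             "value": f"What digit does this SVG image show? {svg_code}"},
            {"from": "gpt", "value": str(label)},
        ]
    }

record = mnist_to_record("<svg>...</svg>", 4)
line = json.dumps(record)  # one JSON record per training example
```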
**Generalization: Colored-MNIST classification.** One desirable property of SVG is its shape-color disentanglement: shape and color attributes are encoded in different blocks of the SVG code. SVG can therefore naturally debias color from shape. In contrast, convolutional neural networks (CNNs) are known to be biased toward color in image recognition tasks [17]. We create two Colored-MNIST datasets to evaluate this shape-color debiasing ability of SVG. (A) We randomly color the foreground either red or green with 50% probability. (B) A stronger variant: for each background and foreground, the color is chosen randomly from black, white, red, blue, and green, with the constraint that background and foreground colors differ, i.e., there are 20 combinations. The Colored-MNIST datasets are visualized in Figure 4. After training on the normal black-and-white MNIST training set, we evaluate ResNet18 and Vicuna on Colored-MNIST (A) and (B). As Table 1 shows, ResNet18 drops to much lower accuracies of 72.48% and 29.59%, respectively. In contrast, the performance of our model remains much stronger, showing strong robustness. We believe that with advanced image discretization techniques, such as using SAM [21] prior knowledge, the advantage of a discrete image representation could be magnified even further.
Figure 4: Visualization of our MNIST datasets. (a) Vanilla MNIST dataset. (b) Colored-MNIST-(A) dataset. (c) Colored-MNIST-(B) dataset.
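The Colored-MNIST-(A) construction (foreground red or green with equal probability, black background) is simple to sketch. The 2×2 binary "digit" here is a toy stand-in for a thresholded MNIST image:

```python
import random

def colorize(digit, rng=random):
    """Colored-MNIST-(A): paint the foreground of a binary digit image red
    or green with equal probability; the background stays black."""
    fg = (255, 0, 0) if rng.random() < 0.5 else (0, 255, 0)
    return [[fg if px else (0, 0, 0) for px in row] for row in digit]

random.seed(0)
digit = [[0, 1],
         [1, 0]]
colored = colorize(digit)
```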
### Image Generation
In this section, we aim to demonstrate that LLMs can understand and generate SVG code, allowing them to create new graphics or modify existing ones based on text prompts, without pixel-level manipulations.
#### 3.5.1 Synthetic Data Study
To evaluate the capability of LLMs to understand SVG, we follow [5] and create a set of 3 simple synthetic tasks plus 3 of their combinations, evaluating each model on 100 examples per task.
**Tasks and Evaluation.** Every pair in our example set includes an SVG showcasing a colored shape along with a corresponding SVG with a specific transformation applied. The transformations involve color, shape, size, or a combination of these aspects. We give a more detailed description of each task in the appendix. For evaluation, we adopt the method from [5] and report the color-aware mean Intersection over Union (mIOU).
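One plausible formulation of the color-aware mIOU (the exact definition lives in [5]; this sketch is an interpretation, not the reference implementation): a pixel only counts as correct if the prediction carries the *same* color there, and per-color IoUs are averaged:

```python
def color_aware_miou(pred, target):
    """For every color present in the target, compute the IoU of the pixel
    sets carrying that color in target vs. prediction, then average.
    Images are 2-D grids of color labels; None marks background."""
    colors = {c for row in target for c in row if c is not None}
    ious = []
    for c in colors:
        tgt = {(y, x) for y, row in enumerate(target)
               for x, v in enumerate(row) if v == c}
        prd = {(y, x) for y, row in enumerate(pred)
               for x, v in enumerate(row) if v == c}
        union = tgt | prd
        ious.append(len(tgt & prd) / len(union) if union else 1.0)
    return sum(ious) / len(ious)

target = [["red", "red"], [None, "blue"]]
perfect = [["red", "red"], [None, "blue"]]
partial = [["red", None], [None, "blue"]]  # misses one red pixel
```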
**Prompt.** Given two example pairs and a query SVG, we structure the text prompt in the same fashion for all tasks. The prompt instructs the model first to figure out the transformation the two examples have in common and then to transform the query into the corresponding key SVG code. We include the prompt details in the appendix.
**Qualitative and quantitative results.** The results are presented in Table 2, and our generated results are shown in Figure 5. Our method outperforms the state of the art across all six tasks by a clear margin. We believe GPT-4 can clearly understand simple shape, color, and size transformations by analyzing the SVG code without any pixel-level information.
#### 3.5.2 Style and Content Extrapolation
In this section, we assess whether LLMs can extrapolate SVG code under more challenging transformations, such as content and style.
**Style generation:** We present LLMs with sample SVG letters. The first task is to figure out the style in the given examples. Then, given a new test query, the second task is to transform this given query
| Method | Color | Shape | Size | Color Shape | Color Size | Shape Size |
| :-- | :--: | :--: | :--: | :--: | :--: | :--: |
| VQGAN [14] | 7.0 | 19.1 | 16.2 | 7.4 | 2.2 | 18.4 |
| BEIT [4] | 40.9 | 31.4 | 7.1 | 33.1 | 21.2 | 13.0 |
| MAE [19] | 70.2 | 44.0 | 34.7 | 19.3 | 19.0 | 46.0 |
| MAE-VQGAN [5] | 40.4 | 46.5 | 42.0 | 20.4 | 18.3 | 40.3 |
| Ours (SVG with GPT-4) | **100.0** | **92.6** | **100.0** | **92.6** | **100.0** | **86.5** |

Table 2: Synthetic data study results. We report the color-aware mIOU on the six tasks [5]. Our method surpasses the best SOTA competitors significantly, demonstrating that GPT-4 is able to understand and reason about shape, color, and size transformations using the SVG representation.
Figure 5: Synthetic data study results. The generation results of our method are annotated with a red square.
so that it adheres to the same stylistic conventions as the example letters. We show some qualitative results in Figure 6. More results can be found in the appendix.
**Content generation:** LLMs are shown two examples of SVG code pairs. Each pair consists of a query and a key (both numbers), where the query describes the SVG code of a number and the key describes the SVG code of another number obtained by an introduced mathematical operation (add, subtract, multiply, or divide). The same operation holds in both example pairs. The first task is to figure out the mathematical operation from the two examples. Then, given a new test query SVG number, the second task is to identify the number and apply the discovered operation to generate the corresponding test key number. We include qualitative results in Figure 7. The prompt details can be found in the appendix.
#### 3.5.3 Referring Segmentation
The objective of this task is to label the pixels in an image or video that correspond to an object instance referred to by a linguistic expression. The SVG representation has two advantages here. First, language instructions are naturally embedded within the prompt, so no separately designed image segmentation model is needed. Second, a large corpus of text and programming languages, including XML, is seen during pretraining, which benefits vision-language understanding.
An SVG is typically composed of several colored polygons, each of which can correspond to a part of an object. Therefore, we can use referring segmentation instructions to guide the LLM in finding the corresponding SVG code. As shown in Figure 6 (b) and (d), the LLM localizes objects reasonably well. In (b), the majority of the airplane is selected as foreground; in (d), not only is the lettuce recognized, but the two pieces of cheese are also localized and subsequently removed.
Figure 6: In-context learning and image generation capabilities of SVG with LLMs. (a) With human feedback, LLM gradually performs better on digit classification. (b) LLM powers SVG with the capability of image recognition and referring segmentation. (c) With human feedback, the content generation performance becomes better. (d) LLM can recognize and manipulate specific parts of the hamburger, such as removing or replacing them.
## 4 Discussion
While our research demonstrates the potential of using Scalable Vector Graphics (SVG) with large language models (LLMs) to tackle visual tasks without a parameterized visual encoder, the major limitation of the SVG representation is the loss of fine details. Though our method of converting raster images into SVG format and leveraging XML-based textual descriptions allows for efficient processing of crisp graphics and designs, it is not as effective for photographic content: fine-grained details, such as image textures, may be lost during conversion. Conversely, when the SVG code incorporates an excessive level of detail, its sequence length can become prohibitively long, posing challenges for the training and inference of current Transformer-based LLMs. Developing hybrid representations that retain the advantages of both discrete and continuous data while preserving finer details is an area for future exploration. For example, in LLMs the processing unit is the token, which can correspond to one or several words; for SVG, we would instead prefer a dedicated embedding module for each geometric primitive, such as circles and polygons.
In summary, while our approach presents limitations, it offers promising initial results for the integration of LLMs and SVG in visual tasks. Addressing these limitations could lead to more powerful image representation algorithms and pave the way for more versatile and comprehensive artificial intelligence systems.
## 5 Conclusion
This paper explored the possibility of enabling large language models (LLMs) to "see" and process images through the Scalable Vector Graphics (SVG) format. By converting raster images into SVG representations and leveraging XML-based textual descriptions, we showed that LLMs can directly understand and manipulate images without a parameterized visual encoder. Our method leverages the intrinsic abilities of LLMs, thus eliminating the need for specialized vision encoders in the processing of images.
We demonstrated the efficacy of our proposed approach across discriminative and generative tasks, revealing the underlying shape-color disentanglement nature of SVG. Through these experiments, we showed that our method can improve image classification performance by directly exploiting the in-context learning capabilities of LLMs.
This research can open the door to new opportunities in the realm of computer vision by integrating the powerful capabilities of LLMs with SVG format. We believe that our work provides an initial exploratory step for future research in the integration of LLMs and SVG for the development of advanced image representation formats and more complex vision tasks. As we continue to explore the potential of large language models on visual input, this approach could inspire further progress in the understanding of visual data with multi-modal fusion approaches.
Figure 7: Understanding SVG content through the lens of GPT-4: GPT-4 demonstrates its ability to generate accurate content by analyzing the correlation between provided example number pairs, and subsequently applying this relationship to ascertain the corresponding test key number. Remarkably, in scenarios where the relationship exhibits ambiguity, GPT-4 showcases its proficiency in identifying multiple possible interpretations.
## Appendix A Experiment Details
### Raster Images to SVG Conversion
One of the most fundamental pieces of information for visual perception is object shape. Our method can be conceptualized as selectively diminishing details from an image, prioritizing the extraction of less significant shapes. This guided process of reduction offers a quantitative way to manage the amount of visual data present within an image. Within this framework, we perceive segmentation as an example of extreme simplification, whereas vectorization serves as a more moderate form of the same. Here we introduce how we use such two approaches to convert the raster images to SVG.
**Image Vectorization.** The vector tracing algorithm operates in a sequential three-step process. Initially, pixels are transformed into a defined path. Subsequently, this path is condensed into a simplified polygonal representation. Lastly, the polygon is refined and approximated using a curve-fitting (tracing) technique, which enhances its smoothness.
There are several tools to convert raster images (JPG and PNG) into vector graphics (SVG), such as Adobe Illustrator [1], Inkscape [20], and VTracer [31]. We experimented with all of them and found that VTracer strikes the best balance between SVG complexity (code length) and rich semantic representation.
In MNIST [10], we use the default hyperparameters during conversion. Specifically, we (i) first binarize the MNIST pixel value from the continuous range [0, 255] to the binary set \(\{0,255\}\) using the threshold 127.5, (ii) set the foreground to black, and the background to white, and (iii) apply the vector tracing algorithm VTracer.
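Steps (i) and (ii) reduce to a single thresholding pass; a minimal sketch in plain Python (the function name is ours, and the paper's actual preprocessing code is not shown):

```python
def preprocess_mnist(img, threshold=127.5):
    """Binarize an MNIST image held as a 2-D list of grey values in
    [0, 255]: pixels above the threshold (the digit stroke) become
    black (0) foreground, the rest white (255) background."""
    return [[0 if px > threshold else 255 for px in row] for row in img]
```

The binarized array is then saved as an image and handed to VTracer with its default settings (step iii).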
**Segmentation Prior.** As mentioned earlier, segmentation can provide a strong prior for object shape. We want a generalist model that can segment any image, i.e., one that is not trained on, and thus biased towards, a particular dataset. The Segment Anything (SA) [21] project introduces such a model, the Segment Anything Model (SAM), together with a large-scale dataset, SA-1B, with the aim of achieving powerful generalization and zero-shot transfer across diverse segmentation tasks, demonstrating competitive results that often surpass prior fully supervised methods. We use the default hyperparameters of SAM to segment the whole image into a set of masks without class labels, where the color of each mask is represented by the average value of the pixels within the mask. Specifically, we sample 32 query points per side (1,024 points overall) to generate the image masks. We then select the 20 masks with the largest area as the final representation of an image.
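The mask post-processing described above can be sketched as follows, assuming SAM's output has already been reduced to 2-D boolean masks over a greyscale image (both simplifications, and the helper name, are ours):

```python
def select_top_masks(masks, image, k=20):
    """Keep the k masks with the largest area and represent each by the
    mean value of the image pixels it covers."""
    def area(mask):
        return sum(v for row in mask for v in row)

    top = sorted(masks, key=area, reverse=True)[:k]
    result = []
    for mask in top:
        pixels = [image[i][j]
                  for i, row in enumerate(mask)
                  for j, covered in enumerate(row) if covered]
        result.append((mask, sum(pixels) / len(pixels)))
    return result
```

For RGB input the same mean would simply be taken per channel.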
We then use VTracer to transform the masks into SVG format. Note that, to reduce the size of the final SVG, we adjust several settings:

- the number of significant bits to use in an RGB channel is set to 0;
- the minimum angle displacement degree to splice a spline is set to 90;
- the color difference between gradient layers is set to 35;
- a corner is considered to have a minimum momentary angle of 0 degrees;
- patches smaller than 16 pixels in size are discarded;
- iterative subdivide smoothing is performed until all segments are shorter than 10 pixels.
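Collected in one place, the adjusted conversion settings read as follows; the key names are descriptive labels of our own choosing, not VTracer's actual option names:

```python
# Hypothetical key names; values are the adjusted settings described above.
vtracer_settings = {
    "rgb_significant_bits": 0,        # significant bits per RGB channel
    "splice_threshold_deg": 90,       # min. angle displacement to splice a spline
    "gradient_layer_difference": 35,  # colour difference between gradient layers
    "corner_threshold_deg": 0,        # min. momentary angle to count as a corner
    "min_patch_size_px": 16,          # discard patches smaller than this
    "max_segment_length_px": 10,      # subdivide-smooth until segments are shorter
}
```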
### Fine-tuning Dataset for Vicuna
We use the same JSON format in Vicuna [30] to construct the fine-tuning dataset. We use all the training samples in MNIST, translating to 60,000 SVG images. For each sample, we construct one round of conversation: (i) From human: "Which digit does the following SVG reflect? <SVG code here>", and (ii) From GPT: "<label>". Here <label> denotes the digit label, which ranges from 0 to 9. Then we use this dataset to fine-tune Vicuna using the default hyper-parameters 4 for 3 epochs.
Footnote 4: [https://github.com/lm-sys/FastChat](https://github.com/lm-sys/FastChat)
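The construction of one training sample can be sketched as below; the exact JSON field names follow the FastChat convention as we understand it and should be treated as an assumption:

```python
def make_sample(svg_code, label, idx):
    """One single-round conversation for the Vicuna fine-tuning set:
    the human turn asks for the digit, the gpt turn answers with it."""
    return {
        "id": f"mnist-{idx}",
        "conversations": [
            {"from": "human",
             "value": f"Which digit does the following SVG reflect? {svg_code}"},
            {"from": "gpt", "value": str(label)},
        ],
    }
```

Mapping this over all 60,000 MNIST SVGs and serialising the resulting list with `json.dump` yields the fine-tuning file.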
### Prompt Engineering
In this section, we provide the details of prompt engineering for each task. The prompt is designed to figure out the common transformation in the SVG example pairs first (each example pair consists of
a query and a key) and then transform the query into the corresponding key SVG by following the discovered common transformation.
**In-context Image Classification.** In this task, in-context examples are aimed to provide more context information using several image-label pairs, thus facilitating the final classification. The specific prompt utilized for this purpose using 3 in-context examples is detailed below: "Instruction: please predict the digit number for each of the following SVG images. Please think step by step, and closely look at the key identifying image characteristics. Please just tell me the image class, no other information is needed. Q: What digit does this SVG image represent? <SVG code here> A: This SVG image represents digit <label> Q: What digit does this SVG image represent? <SVG code here> A: This SVG image represents digit <label> Q: What digit does this SVG image represent? <SVG code here> A: This SVG image represents digit <label> Q: What digit does this SVG image represent? <SVG code here> A: This SVG image represents digit."
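Programmatically, this prompt is just a concatenation of (SVG, label) example pairs followed by the open-ended test question; a sketch (the helper name is ours):

```python
def build_classification_prompt(examples, query_svg):
    """Assemble the in-context digit-classification prompt from
    (svg_code, label) pairs and a query SVG."""
    parts = [
        "Instruction: please predict the digit number for each of the "
        "following SVG images. Please think step by step, and closely look "
        "at the key identifying image characteristics. Please just tell me "
        "the image class, no other information is needed."
    ]
    for svg, label in examples:
        parts.append(f"Q: What digit does this SVG image represent? {svg} "
                     f"A: This SVG image represents digit {label}")
    parts.append(f"Q: What digit does this SVG image represent? {query_svg} "
                 "A: This SVG image represents digit")
    return "\n".join(parts)
```

Ending the prompt mid-sentence ("…represents digit") invites the model to complete it with the class label.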
**Synthetic Data Study:** In this task, the objective is to conduct an analytical evaluation of the provided two example pairs, examining changes that occur in aspects such as color, shape, and size. The insight gathered from this analysis will then be used to adapt the given query into its corresponding key. The specific prompt utilized for this purpose is detailed below: "Please perform the following task carefully. In this task, you will be shown two examples of Scalable Vector Graphics (SVG) code pairs. Each pair consists of a query and key pair, where the query describes the SVG code of the original image, and the key describes the SVG code of the transformed image. Each will be named "Example Query #" and "Example Key #" respectively. Your first task is to figure out the common transformation in the two examples. The transformation can consist of color, shape, size, or any combination thereof. Then, given a new test query SVG code (named "Test Query"), your second task is to transform that query into the corresponding key SVG code (named "Test Key"), following the common transformation that you discovered in the two example pairs. Here are the two example query and key pairs: Example Query 1: <SVG code here>; Example Key 1: <SVG code here>; Example Query 2: <SVG code here>; Example Key 2: <SVG code here>; Here are the test query and key pair: Test Query: <SVG code here>; Test Key:"
**Content Extrapolation:** In this task, LLMs are presented with SVG code pairs, each containing a query-key set that depicts numbers. The key introduces a consistent mathematical operation (addition, subtraction, multiplication, or division) to the query number. The tasks are to identify this operation in the examples and apply it to new test queries to generate corresponding test keys. To facilitate a more comprehensive understanding of SVG number codes for the LLM, we initially present the SVG codes for numbers 0 through 9 to the LLM prior to posing specific queries. The specific prompt utilized for this purpose is detailed below: "Please perform the following task carefully. In this task, you will be shown two examples of Scalable Vector Graphics (SVG) code pairs. Each pair consists of a query and key pair, where the query describes an SVG code of an integer number, and the key describes the SVG code of another integer number with an introduced mathematical operation. Each will be named "Example Query #" and "Example Key #" respectively. In addition to the example pairs, you will be shown a new test query SVG code (named "Test Query"). Your first task is to identify which number each example query, example key, and test query represents. Your second task is to figure out all the possible mathematical operations that are held for all given example pairs. The operation could be add, subtract, multiply, and divide (the subtract or multiply factor could be a fraction). Then, according to the numbers of example pairs and test query you identified, your third task is to predict the corresponding test key number (named "Test Key"), following all the mathematical operations that you discovered in the given example pairs. Finally, you need to generate the corresponding SVG code of the test key number. Here are the two example query and key pairs:
Example Query 1: <SVG code here>; Example Key 1:<SVG code here>; Example Query 2:<SVG code here>; Example Key 2:<SVG code here>; Here are the test query and key pair: Test Query: <SVG code here>; Test Key: (Note: think about four operations one by one, and the operation should be consistent for all given example pairs)"
## Appendix B More Chat Results
**Image Recognition and Manipulation.** In this section, we provide more examples of chat-based image recognition and manipulation using GPT-4 [27]. The qualitative results are shown in Figure 8: (a) the SVG representation enables robust in-context digit recognition given different background and foreground colors, (b) GPT-4 can recognize and depict the details of a dog with the prompt: "a stylized bear or a similar mammal with a round face and ears." Furthermore, GPT-4 can identify the location of the dog's left eye and remove it. (c) GPT-4 is capable of recognizing a natural image from the CIFAR-10 dataset.
**Style Extrapolation:** LLMs are provided with five example pairs and are tasked with deciphering the stylistic attributes inherent in these examples. Following this, a new test query is presented to the LLMs. Their objective is to modify this query into the corresponding key, ensuring that it aligns with the same stylistic principles showcased in the example pairs. The specific prompt utilized for this purpose is detailed below:"Please perform the following task carefully. In this task, you will be shown five examples of Scalable Vector Graphics (SVG) code pairs. Each pair consists of a query and key pair (both are English letter), where the query describes the SVG code of the original image, and the key describes the SVG code of the transformed image. Each will be named "Example Query #" and "Example Key #" respectively. Your first task is to figure out the common transformation in the five examples. The transformation can consist of color, shape, size, style, font, and background changes, or any combination thereof. Even though you cannot see the images, and only their SVG codes, you need to discover the transformations that are happening at the image level and not just at the code level. Be detailed, and try to discover every change, and
Figure 8: More qualitative results of chat-based image recognition and manipulation. (a) In-context digit recognition in Colored-MNIST-(B). (b) GPT-4 can explain and manipulate the dog SVG image. (c) GPT-4 can also recognize the bird from a CIFAR-10 example.
the most important change is that the paths in the SVG code between each query and key differ due to the common transformation, but the shapes of the letters that the query and key represent remain the same. Then, given a new test query SVG code (named "Test Query"), your second task is to transform that query into the corresponding key SVG code (named "Test Key"), following the common transformation that you discovered in the five example pairs. To help you better understand the transformation, I will also inform you of what letter each query and key represent. You need to find the shape of each query and key by analyzing their path. Here are the five example query and key pairs: Example Query 1 (letter B): <SVG code here>; Example Key 1 (letter B): <SVG code here>; Example Query 2 (letter R): <SVG code here>; Example Key 2 (letter R): <SVG code here>; Example Query 3 (letter Z): <SVG code here>; Example Key 3 (letter Z): <SVG code here>; Example Query 4 (letter E): <SVG code here>; Example Key 4 (letter E): <SVG code here>; Example Query 5 (letter N): <SVG code here>; Example Key 5 (letter N): <SVG code here>; Here is the test query and key pair: Test Query (letter #): <SVG code here>; Test Key: " The qualitative results are shown in Figure 9.
Figure 9: More qualitative results of style extrapolation. The generation results of our method are annotated with a red square. |
2307.00914 | Revisiting equilibrium condensation and rocky planet compositions: Introducing the ECCOplanets code | Anina Timmermann, Yutong Shan, Ansgar Reiners, Andreas Pack | 2023-07-03T10:19:32Z | http://arxiv.org/abs/2307.00914v2

# Revisiting equilibrium condensation and rocky planet compositions
###### Abstract
Context: The bulk composition of exoplanets cannot yet be directly observed. Equilibrium condensation simulations help us better understand the composition of the planets' building blocks and their relation to the composition of their host star.
Aims: We introduce ECCOplanets, an open-source Python code that simulates condensation in the protoplanetary disk. Our aim is to analyse how well a simplistic model can reproduce the main characteristics of rocky planet formation. For this purpose, we revisited condensation temperatures (\(T_{c}\)) as a means to study disk chemistry, and explored their sensitivity to variations in pressure (\(p\)) and elemental abundance pattern. We also examined the bulk compositions of rocky planets around chemically diverse stars.
Methods: Our \(T\)-\(p\)-dependent chemical equilibrium model is based on a Gibbs free energy minimisation. We derived condensation temperatures for Solar System parameters with a simulation limited to the most common chemical species. We assessed their change (\(\Delta T_{c}\)) as a result of \(p\)-variation between \(10^{-6}\) and \(0.1\,\mathrm{bar}\). To analyse the influence of the abundance pattern, key element ratios were varied, and the results were validated using solar neighbourhood stars. To derive the bulk compositions of planets, we explored three different planetary feeding-zone (FZ) models and compared their output to an external \(n\)-body simulation.
Results: Our model reproduces the external results well in all tests. For common planet-building elements, we derive a \(T_{c}\) that is within \(\pm 5\,\mathrm{K}\) of literature values, taking a wider spectrum of components into account. The \(T_{c}\) is sensitive to variations in \(p\) and the abundance pattern. For most elements, it rises with \(p\) and metallicity. The tested pressure range (\(10^{-6}-0.1\,\mathrm{bar}\)) corresponds to \(\Delta T_{c}\approx+350\,\mathrm{K}\), and for \(-0.3\leq[\mathrm{M}/\mathrm{H}]\leq 0.4\) we find \(\Delta T_{c}\approx+100\,\mathrm{K}\). An increase in C/O from \(0.1\) to \(0.7\) results in a decrease of \(\Delta T_{c}\approx-100\,\mathrm{K}\). Other element ratios are less influential. Dynamic planetary accretion can be emulated well with any FZ model. Their width can be adapted to reproduce gradual changes in planetary composition.
## 1 Introduction
The chemical composition of rocky planets is, among other factors such as the surface temperature or the presence of a magnetic field, an important parameter with respect to habitability and the potential existence of extraterrestrial life. The bulk composition of a planet can be roughly estimated using density measurements from observational data of radial velocity (planet mass) and planetary transits (planet radius; Fulton & Petigura 2018; Zeng et al. 2019; Fridlund et al. 2020; Otegi et al. 2020a,b; Schulze et al. 2021). Transit spectroscopy is beginning to provide answers regarding the existence of specific molecules in planetary atmospheres (Seager & Sasselov 2000; Brown 2001; Madhusudhan 2019; Brogi & Line 2019; Madhusudhan et al. 2021; Rustamkulov et al. 2022). However, the observational answer to the pivotal question of the composition of rocky planets, in particular the makeup of their interior structure, remains largely elusive. Currently, the only technique that allows glimpses into the composition of exoplanetary solids is the spectroscopic analysis of polluted white dwarfs (Jura & Young 2014; Farihi et al. 2016; Harrison et al. 2018; Wilson et al. 2019; Bonsor et al. 2020; Veras 2021; Xu & Bonsor 2021).
This lack of observational data on planetary compositions is currently bridged by simulations of planet formation. To estimate the elemental composition of a planet, it is common to look to its stellar host. The star's chemical composition is assumed to approximately represent that of its protoplanetary disk, because stars and their disks form from the same molecular cloud (Wang et al. 2019; Adibekyan et al. 2021). Out of such a disk, planetesimals and planet embryos form from locally condensed solid materials, whose atomic and mineralogic makeups depend not only on the bulk proportions of elements available, but also on the local temperature and pressure, as governed by the principles of chemical equilibration. In other words, to first order, the types of building blocks that comprise a planet are determined by the condensation sequence for a given stellar composition and formation location. Owing to the assumptions of both (1) a chemical equivalence of the molecular cloud and the protoplanetary disk and (2) a chemical equilibrium within the disk, it is immaterial whether planetary building blocks truly develop in situ or are inherited from molecular clouds.
The validity of this paradigm has been well tested within our Solar System (Bond et al., 2010; Wang et al., 2019). Information about the composition of the protoplanetary disk of the Solar System has traditionally been inferred from the analysis of meteorites. The carbonaceous chondrites, in particular those of the Ivuna type (CI chondrites), have been found to reflect the elemental abundance of the Sun's photosphere to a very high degree (Lodders, 2003; Asplund et al., 2009) - if one allows for some depletion of volatile elements, especially a deficiency in H, C, N, O, and noble gases (Righter et al., 2006).
Regarding the rocky planets of our Solar System, it has been found that the composition of the Earth conforms very well to expectations (Wang et al., 2019; Schulze et al., 2021): relative to the composition of the Sun (measured by spectroscopy and approximated by the CI chondrites), the bulk Earth is depleted in moderately volatile elements, that is, elements that condense into mineral dust grains at lower temperatures (Kargel and Lewis, 1993; Carlson et al., 2014; Wood et al., 2019; Wang et al., 2019), but reflects the Sun's elemental abundance pattern for refractory elements. Much less is known about the composition of the other rocky planets of the Solar System. There are several processes that can modify the bulk composition of a planet. For instance, the planetary material may be taken out of the chemical equilibrium of the disk at some threshold temperature ('incomplete condensation'), as was likely the case for Earth (Wood et al., 2019; Sossi et al., 2022). Another example is Mercury, with its high density. Mercury has lost about 80 percent of its rocky (low density) mantle via collisional erosion (Benz et al., 1988). In contrast to Mercury, the bulk density of Mars is lower than expected when assuming a CI chondritic bulk composition (Schulze et al., 2021). These findings show the limits of predicting planetary bulk compositions based on the element abundance of their host star alone. Complex processes, such as dynamical interactions, migration, radial mixing, discontinuous distribution of solids, or even giant impact events can offset a planet's abundance pattern (see e.g. Benz et al., 1988; Clement et al., 2021; Izidoro et al., 2022). The identification of such anomalies, however, requires knowledge of the expected bulk density, which can only be obtained from the bulk chemistry and size of a planet.
It is now becoming possible to investigate how well exoplanet systems conform to this picture, at least indirectly. Schulze et al. (2021) explored the statistical likelihood of rocky planets having the same composition as their host stars, by comparing their expected core mass fraction to the core mass fraction that could be expected given the elemental abundance of the star. Of their sample of eleven planets, only two were found to be incompatible with the null-hypothesis assumption of the planet reflecting the composition of its host star at the 1\(\sigma\) level. Similarly, Plotnykov and Valencia (2020) analysed an ensemble of planets and stars and find that the predicted composition of the population of planets spans an overlapping, yet wider, range with respect to the corresponding host stars, both in terms of the Fe/Si distribution and the core mass fraction (see also Adibekyan et al., 2021).
Given these findings, we can take advantage of the possibility to deduce a star's composition from its spectrum and assume the same elemental ratios for the protoplanetary disk. The advances in modern spectrographs, coupled with an improved understanding of spectral line characteristics, allows the derivation of precise abundances of major rock-building elements, at least for F, G, and K stars (Adibekyan et al., 2012; Brewer et al., 2016). Despite all the advances in this field, it should be kept in mind that deducing concrete stellar elemental abundances from the depth and width of absorption features in a spectrogram is far from straightforward. As compiled and analysed by Hinkel et al. (2016), there are a multitude of methods that obtain quite different elemental abundances with different error margins, even from the same spectra (see also Bedell et al., 2014).
Using simulations to find the composition of exoplanets depending on the composition of their host star has become a fairly common practice. Apart from trying to recreate the planets of the Solar System in order to test our understanding of planet formation (Raymond et al., 2004; O'Brien et al., 2006; Bond et al., 2010), there have been great efforts to explore the compositional diversity of exoplanets (Bond et al., 2010; Johnson et al., 2012; Thiabaud et al., 2015; Dorn et al., 2019), and especially the influence of stellar elemental abundance patterns that deviate significantly from that of our Sun (Bitsch and Battistini, 2020; Carter-Bond et al., 2012; Jorge et al., 2022).
Due to its provision of an extensive thermochemical database and its widely applicable general chemical equilibrium computations, the powerful commercial software suite HSC Chemistry1 has been the backbone of many recent studies of exoplanet compositions (for instance, Bond et al., 2010; Johnson et al., 2012; Thiabaud et al., 2014; Moriarty et al., 2014; Thiabaud et al., 2015; Dorn et al., 2019). There are also several general equilibrium condensation codes that were written specifically for applications in planetary science but which are generally not publicly available, such as the Conco code, developed by and described in Lodders and Fegley (1993), and the PHEQ code, developed by and described in Wood and Hashimoto (1993). A freely available Fortran code is GGchem (Woitke et al., 2018). This code simulates the equilibrium chemistry in the protoplanetary disk down to 100 K and includes an extensive thermochemical database. Another open-source program is the TEA code by Blecic et al. (2016), which, however, is limited to gas-phase simulations. Originally intended to study geochemical processes, but also usable for planetary simulation, is the SUPCRTBL software package by Zimmer et al. (2016), with its extensive thermochemical database SUPCRT92.
Footnote 1: [http://www.hsc-chemistry.net](http://www.hsc-chemistry.net)
Building on the foundation of results from general equilibrium calculations, additional effects can be included to capture the complexity of the disk evolution process. Examples include: combining a thermochemical equilibrium simulation with a dynamical simulation of disk development (Bond et al., 2010, 2010; Moriarty et al., 2014; Thiabaud et al., 2015; Khorshid et al., 2022); including dust enrichment in the composition of the protoplanetary disk to account for deviations of planet compositions from stellar elemental ratios (Ebel and Grossman, 2000); adding the notion of the isolation of a fraction of the condensed solids from the chemical equilibrium (Petaev and Wood, 1998); and looking into non-ideal solid solutions (e.g. Pignatale et al., 2011). Once the most likely composition of a rocky planet has been found, further simulations can follow to estimate the internal structure of the planet (see e.g. Rodriguez-Mozos and Moya, 2022), its geological evolution (Putirka et al., 2021), the development of an atmosphere (Herbort et al., 2020; Spaargaren et al., 2020; Ballmer and Noack, 2021; Putirka et al., 2021), and even the formation and composition of clouds (Herbort et al., 2022).
With our ECCOplanets2 code, we provide a general equilibrium condensation code as a simplified, open-source Python alternative to use for simulations. The main focus of our code is its ease of use, which allows it to be tailored to specific research
questions and extended. As a first application of our code, we show its predictions for the composition of exoplanets in different stellar systems and the condensation temperatures of common planet-building molecules and elements. We also study the sensitivity of condensation temperatures to variations in disk pressure and elemental abundance patterns within the protoplanetary disk. With this analysis, we want to highlight the limitations of applying element volatility, as determined from Solar System parameters, in general theories of planet formation.
In Sect. 2 we describe the underlying thermochemical principles of our simulation. Section 3 deals with the mathematical properties of the thermochemical equations. Our data sources and processing are presented in Sect. 4. The basics of our code are shown in Sect. 5. Finally, we present our simulation results regarding condensation temperatures of certain species and their variability (Sect. 6) and the composition of exoplanets (Sect. 7). Apart from showing our own results, these simulations are used as a benchmark test for our code.
## 2 Thermochemical basis
A protoplanetary disk, at least during the stage of solid condensation, can be approximated as a closed system, that is, closed with respect to matter but open with respect to heat exchange. The timescale on which the disk cools is generally large compared to the timescale of condensation (Toppani et al. 2006; Pudritz et al. 2018). This is, however, only true for the formation of condensates from the gas phase, not for the rearrangement of the condensates into their thermochemically favoured phases (for estimates of the relevant timescales, see Herbort et al. 2020). Thus, assuming that the disk evolves through a sequence of equilibrium conditions is, to some degree, a simplification, especially for large bodies at low temperatures.
### Chemical equilibrium and Gibbs free energy minimisation
A system is in thermochemical equilibrium when its Gibbs free energy is minimised (White et al. 1958; Eriksson et al. 1971). Accordingly, we can compute the disk's equilibrium composition at each temperature by minimising its Gibbs free energy.
The Gibbs free energy is an extensive property, that is, the total Gibbs energy of a multi-component system is given by the sum of the Gibbs energies of its constituents (Eriksson et al. 1971):
\[G_{\rm tot}(T)=\sum_{i}G_{i}(T). \tag{1}\]
The Gibbs energy of a substance \(i\) is given by its molar amount \(x_{i}\) and its chemical potential \(\mu_{i}(T)\), also referred to as its molar Gibbs free energy (Eriksson et al. 1971):
\[G_{i}(T)=x_{i}\,\mu_{i}(T). \tag{2}\]
The chemical potential at a temperature \(T\) depends on the chemical potential at standard state, \(\mu_{i}^{\circ}(T)\), and the natural logarithm of the chemical activity, \(a_{i}\), of the component (Eriksson et al. 1971)
\[\mu_{i}(T)=\mu_{i}^{\circ}(T)+R\,T\ln a_{i}, \tag{3}\]
where \(R\) is the ideal gas constant.
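Putting Eqs. (1)-(3) together, the quantity to be minimised is a single sum over species. A minimal sketch (not the ECCOplanets implementation itself), with per-species molar amounts \(x_i\), standard chemical potentials \(\mu_i^{\circ}(T)\), and activities \(a_i\):

```python
import math

R = 8.314462618  # ideal gas constant, J mol^-1 K^-1

def total_gibbs(T, amounts, mu0, activities):
    """G_tot(T) = sum_i x_i * (mu_i°(T) + R*T*ln(a_i)), Eqs. (1)-(3)."""
    return sum(x * (mu + R * T * math.log(a))
               for x, mu, a in zip(amounts, mu0, activities))
```

Minimising this expression over the molar amounts, subject to elemental mass-balance constraints, yields the equilibrium composition at each temperature and pressure.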
Regarding the first term of Eq. (3), we use the relation of the standard Gibbs free energy, \(G^{\circ}\), to the standard enthalpy, \(H^{\circ}\), and the standard entropy, \(S^{\circ}\):
\[{\rm d}G^{\circ}={\rm d}H^{\circ}-T\,{\rm d}S^{\circ}. \tag{4}\]
If we use these variables as molar quantities, that is, enthalpy and entropy per mole of a substance, this equation holds for the standard chemical potential, \(\mu^{\circ}(T)\) (cf. Keszei 2012, Ch. 8.3).
The dependence of the enthalpy and entropy on changes in temperature is defined by the heat capacity, \(C_{p}^{\circ}\) (cf. Keszei 2012, Ch. 4.4.1, 4.4.2):
\[{\rm d}H^{\circ} =C_{p}^{\circ}(T)\,{\rm d}T \tag{5}\] \[{\rm d}S^{\circ} =\frac{C_{p}^{\circ}(T)}{T}\,{\rm d}T. \tag{6}\]
The heat capacity, \(C_{p}^{\circ}\), is well approximated by the Shomate polynomial (Chase 1998; Linstrom & Mallard 1997), allowing us to integrate the differentials analytically. With \(\tau=T\times 10^{-3}\) it takes the form
\[C_{p}^{\circ}(T)=A+B\tau+C\,\tau^{2}+D\,\tau^{3}+E\,\tau^{-2}. \tag{7}\]
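For reference, Eq. (7) maps directly onto a one-line function; the coefficient values used in the test are arbitrary placeholders, not JANAF data.

```python
def shomate_cp(T, A, B, C, D, E):
    """Heat capacity C_p°(T) in J mol^-1 K^-1 from the Shomate polynomial.

    T is the absolute temperature in K; A..E are the Shomate parameters.
    """
    tau = T * 1e-3  # reduced temperature, tau = T x 10^-3
    return A + B * tau + C * tau**2 + D * tau**3 + E * tau**-2
```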
The Shomate equation is only valid for temperatures larger than \(T=298.15\,\)K. This is also the reference temperature of the standard state of all thermochemical data used in our code. Therefore, it constitutes the lower limit of the integration. The upper limit is given by any temperature, \(T\):
\[H^{\circ}(T) -H^{\circ}(298.15\,{\rm K})=\int_{298.15\,{\rm K}}^{T}C_{p}^{ \circ}(T){\rm d}T \tag{8}\] \[=A\,\tau+\frac{1}{2}\,B\,\tau^{2}+\frac{1}{3}\,C\,\tau^{3}+\frac {1}{4}\,D\,\tau^{4}-E\,\tau^{-1}+F\] (9) \[S^{\circ}(T) -S^{\circ}(298.15\,{\rm K})=\int_{298.15\,{\rm K}}^{T}\frac{C_{p} ^{\circ}(T)}{T}{\rm d}T\] (10) \[=A\,\ln{(\tau)}+B\,\tau+C\,\frac{\tau^{2}}{2}+D\,\frac{\tau^{3}} {3}-\frac{E}{2\,\tau^{2}}+G, \tag{11}\]
where \(F\) and \(G\) denote the negative value of the integrated polynomials evaluated at \(T=298.15\,\)K. We can rearrange the equations and add the constants \(H^{\circ}(298.15\,\)K) and \(S^{\circ}(298.15\,\)K) to the constants \(F\) and \(G\), respectively, on the right-hand side. The new constants are denoted with a tilde sign.3
Footnote 3: This is a slight deviation from the definition of the constants in the NIST-JANAF web-book (Linstrom & Mallard 1997).
Combining the resulting polynomials for \(H^{\circ}(T)\) and \(S^{\circ}(T)\) gives us an equation for the standard chemical potential of each species defined by the Shomate parameters:
\[\mu_{i}^{\circ}(T) =H_{i}^{\circ}(T)-T\,S_{i}^{\circ}(T) \tag{12}\] \[=10^{3}\,\bigg{[}A\,\tau\,(1-\ln{\tau})-\frac{B\,\tau^{2}}{2}-\frac{C\,\tau^{3}}{6}-\frac{D\,\tau^{4}}{12}-\frac{E}{2\,\tau}+\tilde{F}-\tilde{G}\,\tau\bigg{]}. \tag{13}\]
The usage of \(\tau=T\times 10^{-3}\) entails that for consistent constants \(A\) to \(G\), the enthalpy, \(H^{\circ}\), is given in units of \([{\rm kJ\,mol^{-1}}]\), whereas the heat capacity, \(C_{p}^{\circ}\), and the entropy, \(S^{\circ}\), are given in \([{\rm J\,mol^{-1}\,K^{-1}}]\). The chemical potential, \(\mu^{\circ}(T)\), is given in \([{\rm J\,mol^{-1}}]\).
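As a consistency check of Eq. (13), the sketch below evaluates \(\mu^{\circ}(T)\) both directly and as \(H^{\circ}(T)-T\,S^{\circ}(T)\) from Eqs. (9) and (11); the parameter values in the test are arbitrary placeholders, and `F_t`, `G_t` denote the tilde-constants.

```python
import math

def enthalpy(T, A, B, C, D, E, F_t):
    """H°(T) in J mol^-1 (Eq. (9), converted from kJ mol^-1)."""
    tau = T * 1e-3
    return 1e3 * (A*tau + B*tau**2/2 + C*tau**3/3 + D*tau**4/4
                  - E/tau + F_t)

def entropy(T, A, B, C, D, E, G_t):
    """S°(T) in J mol^-1 K^-1 (Eq. (11))."""
    tau = T * 1e-3
    return (A*math.log(tau) + B*tau + C*tau**2/2 + D*tau**3/3
            - E/(2*tau**2) + G_t)

def mu_standard(T, A, B, C, D, E, F_t, G_t):
    """Standard chemical potential mu°(T) in J mol^-1 (Eq. (13))."""
    tau = T * 1e-3
    return 1e3 * (A*tau*(1 - math.log(tau)) - B*tau**2/2 - C*tau**3/6
                  - D*tau**4/12 - E/(2*tau) + F_t - G_t*tau)
```

For any parameter set, `mu_standard(T, ...)` agrees with `enthalpy(T, ...) - T * entropy(T, ...)` to floating-point precision, which is a useful sanity check when transcribing the fitted Shomate constants.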
In this version of the code, we treat the gas phase as ideal gas and only consider pure solid phases, that is, no solid solutions. We can therefore use a common approximation for the activity of a substance (second term of Eq. (3); see e.g. Eriksson et al.
1971): For solids, the activity is unity; for components in the gas phase, the activity is assumed to equal the partial pressure of this component. This also entails that we do not differentiate between stable and unstable condensed phases on the basis of the activity. The presence of a phase is solely determined by the product of its molar amount and chemical potential at standard state. The gas-phase approximation is valid as long as deviations from an ideal gas are negligible. This deviation can be quantified with the fugacity coefficient, which depends on the gas in question, the pressure, and the temperature. As a rule of thumb, the approximation of an ideal gas is better the lower the pressure and the higher the temperature (see e.g. Atkins & de Paula 2006, Ch. 1.2); we therefore do not expect significant influences on our results for our parameter range, where \(T>300\) K and \(p<1\) bar.
The partial pressure of a component, \(p_{i}\), can be expressed as the product of the total pressure with the fraction of the molar amount of the species in question, \(x_{i}\), of the total molar amount of all gaseous species, \(X\). In our case the total pressure is that of the protoplanetary disk \(p_{\rm disk}=p_{\rm tot}:=p\). The activity can be summed up as
\[a_{i}=\begin{cases}p_{i}=\frac{x_{i}}{X}\,p&\text{gas-phase species},\\ 1&\text{solid species}.\end{cases} \tag{14}\]
The number balance of the elements imposes a linear constraint on the molar amounts,
\[\mathbf{A}\,\mathbf{x}=\mathbf{b},\]
where \(\mathbf{A}\in\mathbb{N}^{o\times p}\) is the stoichiometry-matrix for the number balance of a system composed of initially \(o\) elements resulting in a final composition with \(p=m+n\) molecular species (see the example in Appendix A), and \(\mathbf{b}\in\mathbb{R}^{o}\) is the vector containing the total abundances of the elemental components. Typically, \(p>o\) because the most common molecular species are only made up of a handful of different elements.
The gradient of the target function is given by
\[\frac{\partial f}{\partial x_{i}}=\begin{cases}c_{i}+d\,\ln\frac{x_{i}}{X}&i\leq n,\ x_{i}\neq 0\\ c_{i}&i>n\text{ or }x_{i}=0.\end{cases} \tag{28}\]
The Hesse matrix of the target function is given by
\[\frac{\partial^{2}f}{\partial x_{i}\partial x_{j}}=\begin{cases}-\frac{d}{X}&i,j\leq n,\ i\neq j\text{ or }x_{i}=0,\\ \frac{d}{x_{i}}-\frac{d}{X}&i,j\leq n,\ i=j,\ x_{i}\neq 0,\\ 0&\text{else}.\end{cases} \tag{29}\]
In summary, our problem can be characterised as a convex minimisation problem subject to two linear constraints. For the numerical minimisation we can make use of the gradient and Hesse matrix of the target function, and can be sure of the uniqueness of a found solution due to the convexity of the target function.
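The convex structure can be verified numerically. The sketch below assumes a target function of the form \(f(\mathbf{x})=\sum_i c_i x_i + d\sum_{i\le n} x_i\ln(x_i/X)\) with \(X=\sum_{i\le n}x_i\), where the logarithmic term only runs over the \(n\) gas-phase species; the coefficients \(c_i\) and \(d\) and the species split are placeholders, chosen only so that the derivatives match the form of Eqs. (28) and (29).

```python
import numpy as np

def target(x, c, d, n):
    """Assumed target f(x) = c.x + d * sum_{i<=n} x_i ln(x_i / X)."""
    X = x[:n].sum()
    xg = x[:n][x[:n] > 0]
    return c @ x + d * np.sum(xg * np.log(xg / X))

def gradient(x, c, d, n):
    """Gradient of f, matching the form of Eq. (28)."""
    g = np.array(c, dtype=float)
    X = x[:n].sum()
    pos = x[:n] > 0
    g[:n][pos] += d * np.log(x[:n][pos] / X)
    return g

def hessian(x, c, d, n):
    """Hessian of f, matching the form of Eq. (29)."""
    H = np.zeros((len(x), len(x)))
    X = x[:n].sum()
    H[:n, :n] = -d / X                 # off-diagonal gas-gas entries
    for i in range(n):
        if x[i] > 0:
            H[i, i] = d / x[i] - d / X  # diagonal gas entries
    return H
```

At any strictly positive point, central finite differences of `target` reproduce `gradient`, and differences of `gradient` reproduce `hessian`.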
## 4 Data
### Data types and sources
There are different types of data used in this code. Regarding their function within the code, we can distinguish the thermochemical data of molecules (this term includes minerals; melts were not considered), on the one hand, and the stellar elemental abundance data, on the other hand. In terms of their mathematical usage, the former plays the main role in the target function of the minimisation procedure, whereas the latter is used in the number balance constraint. The thermochemical data describes laboratory-measured properties of molecules and is constant for all simulations; the stellar abundance data were derived from astronomical observations of particular stars and can be varied as an input parameter between simulations. We use ancillary atomic weight data to express the atomic composition of planets in terms of wt-%.
Our thermochemical database is limited to the most common species expected to form in a protoplanetary disk, and does not contain any charged species at the moment. It is likely that the lack of certain molecules and ions increases the expected error of the computed condensation temperatures, especially for high-\(T\) condensates containing Mg and Na. For the sake of formal comparability between the thermochemical data of different species (i.e. identical derivation, processing, and presentation of data), we used as few different data sources as possible. Most data, especially the gas-phase data, were taken from the comprehensive NIST-JANAF Thermochemical Tables.4 Most of the mineral data were taken from three bulletins of the U.S. Geological Survey (Robie et al. 1978; Robie & Hemingway 1995; Hemingway et al. 1982). The data were extracted from these sources in their given tabulated form. An overview of the included data is shown in Appendix D.
Footnote 4: [https://janaf.nist.gov/](https://janaf.nist.gov/)
The stellar elemental abundance data are given as the absolute number of atoms of each element, normalised to \(N_{\text{Si}}=10^{6}\). This is an arbitrary scaling commonly used in cosmochemistry (see e.g. Lodders 2003; Bond et al. 2010b; Lodders 2020). The exact normalisation is inessential for the code, as we only consider the element ratios in the disk. We included the data of 1617 F, G, and K stars from the Brewer et al. (2016) database in the code.
Further data can easily be added to all databases if considered useful for a simulation.
### Data processing, uncertainties, and extrapolation
The tabulated data are only available at discrete temperatures, with intervals of 100 K. To obtain continuous thermochemical data for any temperature, we used the tabulated enthalpies to fit Shomate parameters \(A\), \(B\), \(C\), \(D\), \(E\), and \(F\), via the respective Shomate equation (cf. Eq. (9)). Subsequently, we use the found values (\(A\) to \(E\)), the tabulated entropies and the respective Shomate equation (Eq. (11)) to find the last parameter \(G\).
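Since Eq. (9) is linear in the Shomate parameters, the fit reduces to ordinary least squares; the tabulated values in the test below are synthetic, generated from a known parameter set rather than taken from JANAF.

```python
import numpy as np

def fit_shomate_enthalpy(T_tab, dH_tab):
    """Least-squares fit of A..F in Eq. (9) to tabulated
    H°(T) - H°(298.15 K) values (in kJ mol^-1)."""
    tau = np.asarray(T_tab, dtype=float) * 1e-3
    # Design matrix: one column per Shomate enthalpy term.
    M = np.column_stack([tau, tau**2 / 2, tau**3 / 3, tau**4 / 4,
                         -1.0 / tau, np.ones_like(tau)])
    params, *_ = np.linalg.lstsq(M, np.asarray(dH_tab, dtype=float),
                                 rcond=None)
    return params  # A, B, C, D, E, F

def fit_shomate_G(T_tab, S_tab, A, B, C, D, E):
    """Given A..E, fit the remaining entropy constant G of Eq. (11)."""
    tau = np.asarray(T_tab, dtype=float) * 1e-3
    model = (A*np.log(tau) + B*tau + C*tau**2/2 + D*tau**3/3
             - E/(2*tau**2))
    return float(np.mean(np.asarray(S_tab, dtype=float) - model))
```

This mirrors the two-stage procedure described above: the enthalpy fit fixes \(A\) to \(F\), after which \(G\) follows from the tabulated entropies.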
As an example, we show the thermochemical data of Al\({}_{2}\)O\({}_{3}\)(s) in Fig. 1. The top and middle panels show the entropy and enthalpy values of the species, respectively. We see that our Shomate fit (black solid line) retraces the tabulated data (blue diamonds) well. The bottom panel shows the Gibbs free energy derived from the other two properties.
In cases where the tabulated data contains discontinuities in the form of jumps at certain temperatures (commonly in reference-state data including multiple phases of a species), we fitted separate Shomate parameters for either side of the discontinuity. If the tabulated data do not span our entire temperature range of 300 to 6000 K, we applied a linear extrapolation for the Shomate enthalpy equation (parameters \(A\) and \(F\)) using the last three tabulated enthalpy values. The \(G\) parameter was fitted subsequently.
The uncertainty in thermochemical data can be very large, especially for solids. The reason for this is a combination of the limited precision of experimental measurements and errors introduced in the subsequent data processing, for instance by extrapolating measured data beyond the temperature range of the experiment. Regarding the measurement uncertainty, our data sources state an uncertainty of the order of 10% for the entropy at reference state for many minerals; for gases the uncertainty
Figure 1: Example of graphical output of the molecule database for corundum (Al\({}_{2}\)O\({}_{3}\)(s)). _Top panel_: Entropy. _Middle panel_: Enthalpy. _Bottom panel_: Gibbs energy (chemical potential). Blue diamonds show tabulated values (upper two panels) and solid black lines the corresponding fitted Shomate function.
is generally much lower, often below 1% (for the exact values, we refer to the data sources themselves, Sect. 4.1). This uncertainty is propagated to the other values in the tables. Regarding the data processing uncertainty, Worters et al. (2018) and Woitke et al. (2018) studied the deviations between the thermochemical equilibrium constants of the species (\(k_{p}\)) as stated in different data collections as a function of temperature. They found that for only 65% of the species is the agreement between different sources good over the entire temperature range, that is, better than 0.1 dex at high temperatures and better than 0.4 dex at low temperatures.
We did not take the uncertainty in the thermochemical data into account in the assembling of our thermochemical database, and it is not considered in any way in our simulations. It should, therefore, be kept in mind for the interpretation of our simulation results.
There are also uncertainties in the stellar elemental abundance data. For most rock-building elements, the estimated error of the given abundances is smaller than \(\pm 0.03\) dex for the stars in our database (Brewer et al., 2016). We discuss the implications of these uncertainties in Sect. 6.2.3.
## 5 Computational solution
Our computational solution to the chemical equilibrium problem is provided as an open-source software on GitHub5. It is written in Python and only relies on the standard Python libraries numpy, scipy and pandas.6 Both the databases and minimisation code can be easily expanded to include more species or integrate a more sophisticated scientific approach, for instance by including an isolation fraction for the simulated solids or a thermochemical activity model.
Footnote 5: [https://github.com/AninaTimmermann/ECCoplanets](https://github.com/AninaTimmermann/ECCoplanets)
Footnote 6: We acknowledge the fact that Python is not a particularly efficient coding language (see our performance test in Table C.3 and compare to e.g. Woitke et al. (2018), who use Fortran-90 in their GGchem code), and understand the growing concern about the ecological impact of computational astrophysics (Portegies Zwart, 2020). Our choice is motivated by the conjecture that Python is most commonly taught and used in physics, astronomy, and geosciences, and our aim to also make our code accessible to an audience with limited coding experience.
### Scope of our simulation
We limited the scope of our simulation in several areas. Most importantly, it is a purely thermochemical simulation. We do not consider disk profiles, disk dynamics, dynamical planet formation models or planet migration. The temporal development of the disk is only considered indirectly, in that we simulate a decrease in temperature, but do not set an absolute timescale for its evolution. The thermochemical approach is limited to equilibrium condensation, that is, we assume that all components of the system stay in chemical equilibrium for the whole temperature range of 6000 to 300 K, or, in any case, down to the temperature at which the planet is extracted from the disk. This is a twofold approximation: firstly, we assume that the cooling timescale of the disk is large compared to the thermochemical equilibration timescale, and secondly, we assume that no part of the condensates becomes isolated from the equilibration process, for instance, by being integrated into larger bodies. Furthermore, our model only includes ideal gases and solids. We do not include the condensation of trace elements.
Our scientific objective is the analysis of the composition of rocky planets; thus, it is limited to the solid materials found after condensation. While we do simulate the evolution of gas-phase species in the disk, they are not considered to be part of a forming planet. We only look at relative amounts of species and make no assumptions regarding absolute planet sizes.
### Condensation simulation
The code's main function is to compute the temperature-dependent equilibrium composition of a protoplanetary disk and to allow for the subsequent analysis of the results. This is realised by performing a sequence of Gibbs free energy minimisations at decreasing temperatures. As a result, the minimisation routine returns the relative amounts of each species - gaseous and solid - at each temperature step. This result matrix allows the inference of the temperature ranges in which a species is stable, and, therefore, expected to be present in the protoplanetary disk.
The code requires the specification of a start and end temperature, as well as the temperature increment, the disk pressure, the elemental abundance pattern in the disk, and the list of molecular species to be considered. The start and end temperature, as well as the temperature increment have no physical meaning; their computational impact is discussed in Appendix C.
The disk pressure is kept constant for the entire simulation, which means it is not automatically adapted to a decrease of gaseous material or temperature. Its value should be chosen in accordance with disk profiles. The elemental abundance pattern needs to be specified in terms of the normalised absolute number of atoms per element (see Sect. 4.1).
#### 5.2.1 Molecule selection
A meaningful selection of the species to be included in the simulation is the most important, albeit most complex, aspect of the simulation. Our ideal is to include as few species as possible, in order to make the minimisation problem as numerically easy to solve as possible, thereby reducing the expected computation time and the likelihood of numerical errors. On the other hand, one wishes to find a stable mineral assemblage and hence would like to include all phases where thermodynamic data are available. Currently, there are \(>5000\) minerals known to occur in nature; including all would likely lead to computational problems. The difficulty is determining the set of crucial species, that is, those that have a notable influence on the thermochemistry of the disk, for instance, by controlling the total pressure or otherwise determining condensation temperatures.
The total pressure is generally controlled by He and H\({}_{2}\). On top of that, the simulation results will only be reliable if we include the most stable species at any sampled (\(T\),\(p\))-tuple for each element contained in the simulation. The most stable species carrying a particular element at a given (\(T\),\(p\))-value depends on the overall elemental abundance pattern; thus, compiling the list of species is typically an iterative process. The starting point of this process can be based on the extensive simulations of the Solar System, for instance by Lodders (2003), Woitke et al. (2018), and Wood et al. (2019).
#### 5.2.2 Minimisation procedure
The minimisation at each temperature step is done using a trust region method, due to its known stability when solving bounded and constrained non-linear problems (Conn et al., 2000). The Gibbs function \(G(\mathbf{x},T,p)\), as defined in Eq. (15), is passed to the minimisation function as the target function, in combination
with the non-negativity and number-balance constraints, the gradient (Eq. (28)) and Hessian matrix (Eq. (29)) in their respective capacity.
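A minimal, self-contained analogue of this step can be sketched with SciPy's trust-region solver. The toy system below (H\({}_{2}\) and H in the gas phase, a single H-balance constraint, \(d=1\), and made-up linear coefficients \(c\)) is an illustration only, not the code's actual setup:

```python
import numpy as np
from scipy.optimize import Bounds, LinearConstraint, minimize

c = np.array([0.0, 10.0])   # made-up dimensionless potentials: H2, H
A = np.array([[2.0, 1.0]])  # H-atom balance (2 per H2, 1 per H)
b = np.array([2.0])         # total H budget

def f(x):    # target: c.x + sum x_i ln(x_i / X), all species gaseous
    return x @ c + np.sum(x * np.log(np.maximum(x, 1e-300) / x.sum()))

def jac(x):  # gradient: c_i + ln(x_i / X)
    return c + np.log(np.maximum(x, 1e-300) / x.sum())

def hess(x):  # Hessian: diag(1/x_i) - 1/X
    return np.diag(1.0 / np.maximum(x, 1e-300)) - 1.0 / x.sum()

res = minimize(f, x0=np.array([0.9, 0.2]), jac=jac, hess=hess,
               method="trust-constr",
               constraints=[LinearConstraint(A, b, b)],
               bounds=Bounds(1e-10, np.inf))
```

With the H atom heavily penalised by \(c\), the minimiser keeps nearly all hydrogen molecular while satisfying the number balance \(2x_{\mathrm{H_2}}+x_{\mathrm{H}}=2\) at convergence.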
We derive the starting point \(\mathbf{x}_{0}\) for the minimisation at the highest temperature of the simulation by solving the linear part of the Gibbs equation (Eqs. (22), (23)) with a simplex method. We ignore the transcendental part of the function, but respect non-negativity and number-balance.
The vector \(\mathbf{c}\), as defined in Eq. (23), is constructed from the Shomate parameters of all included species, the specified temperature, and disk pressure. In all subsequent temperature steps, the solution at the previous temperature step is used as the initial guess input of the minimisation procedure.
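The warm start from the linear sub-problem (minimise \(\mathbf{c}\cdot\mathbf{x}\) subject to \(\mathbf{A}\mathbf{x}=\mathbf{b}\), \(\mathbf{x}\geq 0\)) can be sketched with `scipy.optimize.linprog`, here with the HiGHS backend rather than a textbook simplex; the species, coefficients, and elemental budgets are placeholders:

```python
import numpy as np
from scipy.optimize import linprog

c_lin = np.array([0.0, 10.0, -2.0])  # linear coefficients: H2, H, He
A_eq = np.array([[2.0, 1.0, 0.0],    # H-atom number balance
                 [0.0, 0.0, 1.0]])   # He number balance
b_eq = np.array([2.0, 0.1])          # elemental budgets

res = linprog(c_lin, A_eq=A_eq, b_eq=b_eq, bounds=(0, None),
              method="highs")
x0 = res.x  # non-negative, number-balanced starting point
```

The LP solution is a vertex of the feasible region, so it is non-negative and number-balanced by construction, which makes it a safe initial guess for the non-linear step.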
### Simulation analysis and definition of the condensation temperature
We provide several functions to analyse the results of the condensation simulation. These analysis functions roughly fall into three categories. First, the basic parameters of the simulation (temperature range, included molecules, abundance pattern, and disk pressure) can be retrieved. Second, temperature progression curves of species can be plotted to analyse their amounts as a function of temperature and, in the case of solid species, their condensation behaviour. And third, the projected composition of a rocky planet as a function of its formation temperature can be studied.
Temperature progression curves are a powerful tool for analysing different aspects of planet formation. As an example, we show a plot of the relative molar amounts (mol-%)7 of all the solid species included in a simulation as a function of decreasing temperature in Fig. 2. The amounts are relative to the total molar content of the disk, that is, both gas- and solid-phase species, \(\left(\dfrac{n_{\text{species}}}{n_{\text{solids}}+n_{\text{gas}}}\right)\), given in mol-%. From this figure, we can gather various pieces of information. For instance, the temperature of the first onset of condensation, the approximate total proportion of solids in the disk as a function of temperature, and the dominant contributors to a planet's bulk composition at any given temperature, especially the distribution between typical planetary metal-core and silicate mantle components.
Footnote 7: The specification ‘mol’ is used to distinguish it from the later used weight percentage (wt-%).
We implemented a computation of the condensation temperature of elements, as a common parameter to assess planet compositions. This is defined as the temperature at which 50% of the total amount of an element is bound in solid-phase species. As an example, we show the condensation of Ca in Fig. 3. The dark blue line denotes the fraction of Ca atoms bound in gas-phase species, \(\left(n_{\text{Ca,gas}}/n_{\text{Ca,tot}}\right)\), in %, and the light blue line the fraction bound in solid-phase species, \(\left(n_{\text{Ca,solid}}/n_{\text{Ca,tot}}\right)\). The curves intersect at \(T=1512\) K, signalling the 50% condensation temperature of Ca.
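The 50% condensation temperature of an element can be read off the two curves by locating the crossing of the solid-bound fraction with 0.5; a minimal sketch with synthetic data (not the Ca result shown in Fig. 3):

```python
import numpy as np

def t_cond_element(T, solid_fraction):
    """50 % condensation temperature of an element: the T at which the
    fraction bound in solids crosses 0.5, linearly interpolated between
    temperature steps. T is assumed monotonically decreasing."""
    T = np.asarray(T, dtype=float)
    fs = np.asarray(solid_fraction, dtype=float)
    if fs.max() < 0.5:
        return None                  # element never half-condensed
    if fs[0] >= 0.5:
        return float(T[0])
    i = int(np.argmax(fs >= 0.5))    # first step with >= 50 % in solids
    f0, f1 = fs[i - 1], fs[i]
    return float(T[i - 1] + (0.5 - f0) * (T[i] - T[i - 1]) / (f1 - f0))
```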
Additionally, we define a 'condensation temperature' of specific solid-phase molecules (synonymously used for minerals) in order to analytically assess the appearance of condensates and sequences of phases containing specific elements. This value is, however, far less distinct than the condensation temperature of an element. There are two reasons for this: firstly, the maximum amount of a particular phase is not known a priori and depends on the temperature range considered; secondly, phases often disintegrate at lower temperatures in favour of more stable phases. This behaviour is exemplified in the left panel of Fig. 4 for perovskite (CaTiO\({}_{3}\)(s)), which is superseded by Ti\({}_{4}\)O\({}_{7}\)(s). Some species show even more complex condensation behaviours, involving successions of increases and decreases in their relative amounts. This is demonstrated in the right panel of Fig. 4 for forsterite (Mg\({}_{2}\)SiO\({}_{4}\)(s)), which is part of the intricate Mg-Si-chemistry in the protoplanetary disk. Irrespective of the precise curve shape, we only compute one condensation temperature per species, based on the 50% level of the local maximum closest to the first onset of condensation. If a species is present in such small quantities that it cannot be distinguished from numerical noise, it is assumed to not have condensed in the simulation, and therefore no condensation temperature is reported.
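The species-level definition (50% of the local maximum closest to the onset of condensation, with a noise floor) can be sketched as follows; the data in the test are synthetic:

```python
import numpy as np

def t_cond_species(T, y, noise=1e-12):
    """Condensation temperature of a solid species: T at which its
    amount reaches 50 % of the local maximum closest to the first onset
    of condensation. T is assumed monotonically decreasing; returns
    None for species that never rise above the noise floor."""
    T = np.asarray(T, dtype=float)
    y = np.asarray(y, dtype=float)
    if y.max() <= noise:
        return None                        # never condensed
    onset = int(np.argmax(y > noise))      # first step with condensate
    i = onset                              # climb to the nearest local max
    while i + 1 < len(y) and y[i + 1] > y[i]:
        i += 1
    half = 0.5 * y[i]
    j = onset                              # first step at/above 50 % level
    while y[j] < half:
        j += 1
    if j == 0:
        return float(T[0])
    # linear interpolation between the bracketing temperature steps
    return float(T[j - 1] + (half - y[j - 1]) * (T[j] - T[j - 1])
                 / (y[j] - y[j - 1]))
```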
For the analysis of the composition of an exoplanet itself, we use the code's output of relative molar amounts of the chemical species that are stable in chemical equilibrium as a function of temperature. We convert this information into the relative amounts of the individual elements bound in solid species, using the structural formulae of the species. This procedure results in a temperature-series of the elemental composition of solid material in the disk. As default, we give the resulting composition in units of wt - %, meaning the total mass of one element bound in solid species relative to the total mass of all elements in solid species, using the atomic weights data specified in Sect. 4.1. To get the composition of an individual planet out of the temperature series, we have to specify its 'formation temperature'. This temperature defines the point, at which the planetary material is taken out of the chemical equilibrium of the disk. This does not necessarily mean that the planet is fully formed at this temperature, nor that it corresponds to the final disk temperature at the location of the planet. It is sufficient that the planetesimals have become so large that their interiors are effectively shielded from the disk chemistry. For instance, the depletion pattern of volatile elements in the Earth suggests a formation temperature between 1100 K and 1400 K (Wang et al. 2019a; Sossi et al. 2022). We discuss our different models connecting the formation temperature to the planet's composition in Sect. 7.1.
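The conversion from molar amounts of solid species to an elemental composition in wt-% follows directly from the structural formulae and atomic weights. The sketch below uses a two-species placeholder stoichiometry, not the code's actual databases (Sect. 4.1):

```python
# Placeholder stoichiometries and atomic weights for illustration.
STOICHIOMETRY = {"Mg2SiO4(s)": {"Mg": 2, "Si": 1, "O": 4},
                 "Fe(s)": {"Fe": 1}}
ATOMIC_WEIGHT = {"Mg": 24.305, "Si": 28.085, "O": 15.999, "Fe": 55.845}

def solid_wt_percent(moles):
    """Elemental composition of the condensate in wt-%, from the molar
    amounts of the stable solid species."""
    mass = {}
    for sp, x in moles.items():
        for el, nu in STOICHIOMETRY[sp].items():
            mass[el] = mass.get(el, 0.0) + x * nu * ATOMIC_WEIGHT[el]
    total = sum(mass.values())
    return {el: 100.0 * m / total for el, m in mass.items()}
```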
## 6 Condensation temperatures and their dependence on pressure and element abundance
We used our code to look at condensation temperatures and their variability. The condensation temperatures of rocky species give a first idea of the building blocks of a planet, because only material that has condensed at the formation temperature of a planet can be accreted from the protoplanetary disk. The condensation temperature of elements takes this idea to a slightly more abstract level, as we are not looking at the specific material that can be accreted onto a planet anymore, but rather think about the final elemental composition of the planet, even after the original planetesimals have undergone chemical changes in the formation and consolidation of the planet, for instance due to thermal processes. This relies on the assumption that while the specific molecules that have been accreted might not be found in the final planet in their original form, their elemental proportions will be retained. Finally, the variability of the condensation temperatures gives an indication as to how applicable our understanding of the connection between the formation and composition of the Earth is to different planet formation regions in the disk and to exoplanetary systems.
To validate the results of our code externally, we compared them against benchmark results from the literature. In Sect. 6.1 we show our computed condensation temperatures of common rocky species (Sect. 6.1.1) and of elements (Sect. 6.1.2) for a
Solar System elemental abundance pattern at a constant pressure, and compare them against the results by Lodders (2003) and Wood et al. (2019). In Sect. 6.2 we explore the variability of the condensation temperatures of elements as a function of the disk pressure and the stellar elemental abundance pattern.
### Condensation temperatures for a solar elemental abundance pattern and constant pressure
First, we analyse the condensation temperatures of species and elements that can be derived for the solar elemental abundance pattern at the disk pressure associated with the formation of Earth. In order to also use our results as a benchmark test for our code, we used the same system parameters that were used in the studies we compare our results against, even if these values are not necessarily in line with the currently most accepted ones.
Namely, we used the Solar System elemental abundance pattern as reported by Lodders (2003) and a disk pressure of \(1\times 10^{-4}\) bar, which was found to represent the total pressure in the solar nebula near 1 AU by Fegley (2000), and which was also used for the simulations of Lodders (2003) and Wood et al. (2019). Our simulation covered a temperature range from 2000 K to 300 K, with a resolution of 1 K. The number of species included in our simulation (47 gases + 25 solids) was much smaller than in the comparison studies. The simulation parameters are summarised in Table 2 in the Appendix.
We estimate very conservatively that different codes should return condensation temperatures of molecules within 100 K of each other and condensation temperatures of elements within 20 K for a given disk pressure and abundance pattern. This presumes that the most common elements and the majority of the most stable molecules are included in the simulation. This estimate is based on our experience regarding the response of our own code to variations in the molecule selection, the thermochemical data, and the definition of the condensation temperatures, in the case of molecular species.
Figure 3: Example of the condensation curve and condensation temperature of a specific element, here Ca. The dark blue curve shows the fraction of Ca atoms bound in gas-phase species; the light blue curve shows the fraction of Ca atoms bound in solid-phase species. The \(T\)-value of the intersection, at 50% of atoms in gas- and solid-phase, signifies the 50% condensation temperature.
Figure 2: Example of the progression of solid species of a simulation. The molar amount relative to the total molar content of the disk of each of the condensates included in the simulation is shown as a function of decreasing temperature in different line colours and styles, as denoted in the legend. See Appendix D for details on the included species. The simulation was run at a constant disk pressure of \(p=10^{-4}\) bar and is based on the solar elemental abundance pattern recommended by Lodders (2003). See Table 2 for a summary of the simulation parameters.
#### 6.1.1 Condensation temperatures of rocky species
For the validation of our simulated condensation temperatures of common rocky species, we used the benchmark results of the seminal work by Lodders (2003). Their results were computed with the CONDOR code, which is not publicly available. Their thermochemical database contains 2000 gas-phase species and 1600 solids, which are all considered for the simulation. In their code, the chemical equilibration is based on equilibrium constants of formation, rather than Gibbs free energy, and the reported condensation temperatures pertain to the point at which the computed activity of a solid species reaches unity (Lodders 2003). Visually, this point would typically correspond to the onset of condensation, that is, the initial sharp change in gradient seen in our condensation curves of molecules (see Fig. 4 as an example). Depending on the slope of the curve, this temperature might easily be 20 K higher than our 50% condensation temperature, as defined in Sect. 5.3.
Keeping this in mind, we found a high degree of agreement between the two sets of values, as shown in Table 1. Our condensation temperatures are mostly within \(\pm\)50 K of the literature values. Our values are on average lower than those of Lodders (2003), confirming our expectation based on the different definitions of the condensation temperature.
Regarding condensation sequences, we found that the order in which the molecules are expected to condense has been reproduced well with our code. The slight observed differences, as well as our failure to condense grossite (CaAl\({}_{4}\)O\({}_{7}\)), are likely due to the fact that our code does not include solid solutions. The solid solutions of the olivine and pyroxene mineral groups especially will lead to a shift in the Mg budget, potentially affecting the condensation of most of the shown species.
In conclusion, the found agreement between the two codes is much better than our conservative estimate. The combination of vagueness of the definition of the condensation temperature of a molecule and the large uncertainty in the thermochemical data itself (see Sect. 4.2) suggests that one should not attach great meaning to their exact simulated values.
#### 6.1.2 Condensation temperatures of elements
In contrast to the condensation of a rocky species, the 50% condensation temperature of the elements (the temperature at which 50% of the element is bound in solid species) is very sharply defined, since its maximum amount is known a priori from the given elemental abundance (cf. Sect. 5.3). Additionally, the selection of species is less influential in the equilibrium computation, since the elements tend to only be exchanged between different solid-phase species after the initial onset of condensation, but do not return to gas-phase species. Accordingly, not including a particular species does not affect the amount of the element being bound in solid-phase species, and consequently the 50% condensation temperature.
Furthermore, the condensation temperatures of elements enable us to easily estimate the composition of a rocky planet. Most elements condense (10% to 90%) within a \(T\)-interval smaller than 100 K. This implies that there is hardly any of the element in solid form at temperatures above the 50% condensation temperature, whereas all of it is in solid form at temperatures below. Hence, if the formation temperature of a planet is above the 50% condensation temperature of an element, it will not contain significant amounts of this element. Otherwise, the element will have a similar relative abundance in the planet as in its host star.
\begin{table}
\begin{tabular}{l l l l l} \hline \hline & & \multicolumn{3}{c}{Condensation Temperature in [K]} \\ & \multicolumn{1}{c}{Species} & \multicolumn{1}{c}{Lodders} & \multicolumn{1}{c}{our code} & \multicolumn{1}{c}{deviation} \\ & & (2003) & & \\ \hline Al\({}_{2}\)O\({}_{3}\) & Corundum & 1677 & 1644 & \(-\)33 \\ CaTiO\({}_{3}\) & Perovskite & 1593 & 1576 & \(-\)18 \\ CaAl\({}_{4}\)O\({}_{7}\) & Grossite & 1542 & no cond. & n.a. \\ Ca\({}_{2}\)Al\({}_{2}\)SiO\({}_{7}\) & Gehlenite & 1529 & 1512 & \(-\)17 \\ MgAl\({}_{2}\)O\({}_{4}\) & Spinel & 1397 & 1360 & \(-\)37 \\ CaAl\({}_{2}\)Si\({}_{2}\)O\({}_{8}\) & Anorthite & 1387 & 1295 & \(-\)92 \\ Fe & Iron & 1357 & 1331 & \(-\)26 \\ Mg\({}_{2}\)SiO\({}_{4}\) & Forsterite & 1354 & 1328 & \(-\)26 \\ CaMgSi\({}_{2}\)O\({}_{6}\) & Diopside & 1347 & 1359 & \(+\)12 \\ MgSiO\({}_{3}\) & Enstatite & 1316 & 1262 & \(-\)54 \\ \hline \end{tabular}
\end{table}
Table 1: Comparison of condensation temperatures of some common planetary species.
Figure 4: Examples of different types of condensation behaviours of solid species. We show the relative molar amount of the species present in the protoplanetary disk as a function of decreasing temperature to demonstrate our definition of 50% condensation temperatures. Red diamonds show the computed 50% condensation temperatures. _Left panel_: Progression of perovskite CaTiO\({}_{3}\)(s) (condensation and subsequent disintegration). _Right panel_: Progression of forsterite Mg\({}_{2}\)SiO\({}_{4}\)(s) (varying relative amounts).
If the formation temperature of the planet is close to the condensation temperature of an element, this element will likely be depleted to some degree in the planet.
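This step-function reading of elemental condensation temperatures can be sketched as follows; the 50% condensation temperatures are taken from the "our code" column of Table 2, while the relative abundances are illustrative only:

```python
# 50% condensation temperatures [K] from Table 2 ("our code" column)
T_C = {"Al": 1643, "Ca": 1512, "Mg": 1331, "Fe": 1329, "Si": 1305}

# Illustrative stellar number abundances (arbitrary units, NOT real data)
N_STAR = {"Al": 1.0, "Ca": 0.7, "Mg": 12.0, "Fe": 10.0, "Si": 11.0}

def planet_abundances(t_form, t_c=T_C, n_star=N_STAR):
    """Step model: an element condenses fully if its 50% condensation
    temperature lies above the formation temperature, and not at all
    otherwise; retained elements keep their stellar relative abundances."""
    kept = {el: n for el, n in n_star.items() if t_c[el] > t_form}
    total = sum(kept.values())
    return {el: n / total for el, n in kept.items()}

# At a formation temperature of 1400 K, only the refractories Al and Ca
# have condensed
comp = planet_abundances(1400.0)
```
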
For the comparison, we again used Lodders (2003), as well as the more recent study of Wood et al. (2019). The latter applied the PHEO code, developed by and described in Wood & Hashimoto (1993), which is very similar in its thermochemical approach to our code.
The agreement with Lodders (2003) and Wood et al. (2019) for the condensation temperatures of major rock-forming elements is excellent, as shown in Table 2. The average discrepancy from the mean condensation temperature of each element is below 5 K and the largest deviation is below 15 K (cf. Fig. 5).
We conclude that the 50% condensation temperatures of the most common planet-building elements are not sensitive to details of the used condensation algorithm. In other words, even our very simplistic approach with only a few included species will return results with a projected error below \(\pm 10\) K, for a given elemental abundance pattern and disk pressure.
### Dependence of the condensation temperatures of elements on system parameters
We explored the variability of the condensation temperature of elements as a function of disk pressure and elemental abundance pattern. While it has been widely acknowledged that the chemical processes in the protoplanetary disk are controlled by its elemental composition and especially its C/O and Mg/Si ratio (Bond et al., 2010; Carter-Bond et al., 2012; Thiabaud et al., 2014; Moriarty et al., 2014; Dorn et al., 2019; Bitsch & Battistini, 2020), the condensation temperatures of the elements are sometimes implicitly treated almost as material constants, at least within certain limits of system parameters (see e.g. Wang et al., 2019, 2019, 2020).
In Sect. 6.2.1 we explore the influence of the disk pressure, and in Sect. 6.2.2 we analyse the effect of a variation in the elemental abundance pattern. In Sect. 6.2.3 we briefly discuss implications of our findings.
#### 6.2.1 Dependence on disk pressure
The condensation temperatures of elements are often reported for a total disk pressure of \(1\times 10^{-4}\) bar. In general, however, the disk pressure depends on the radial distance from the central star, the vertical distance from the mid-plane, and the total material within the system (Fegley, 2000).
We varied the disk pressure logarithmically between \(1\times 10^{-6}\) bar and \(1\times 10^{-1}\) bar, while keeping all other parameters of the simulation constant. Depending on the disk model, this range of pressure values might correspond to a radial distance range from 0.1 AU to 5 AU in a Solar-System-like disk (Fegley, 2000), that is, distances within the water snow line, where we expect to find rocky planets.
As shown in the top panel of Fig. 6, we found an overall trend of a higher disk pressure corresponding to a higher condensation temperature of the elements. Raising the pressure in a system where nothing else is changed is equivalent to increasing the particle concentration. This implies an increase in the reaction rates. Since the pressure raises the effective concentration of all species equally, we found a quantitatively very similar relation between the disk pressure and the condensation temperature for all analysed elements, except for S.
This can be seen particularly clearly in the bottom panel of Fig. 6, where we show the deviation of the condensation temperature of an element at a given disk pressure from the element's mean condensation temperature within the analysed pressure range. For the analysed pressure range, all species have their mean condensation temperature at roughly the same pressure (between \(4\times 10^{-4}\) bar and \(5\times 10^{-4}\) bar). Also, the deviation from this mean is similar for all elements at each disk pressure. On average, the condensation temperature of all elements except for S increases by \((357\pm 57)\) K over the analysed five orders of magnitude in disk pressure. S behaves differently: its condensation temperature does not change at all over the pressure range.8
Footnote 8: The deviation is, however, not relevant for our argument, because the condensation temperature of S is much lower than those of the other main rock-forming elements.
The consistency in pressure versus condensation temperature relations between the different elements has implications for the analysis of planet formation at different radial locations within the protoplanetary disk. In disk models, both the midplane temperature and the disk pressure decrease with increasing distance from the central star. Generally, when simulating planet formation, both pressure and temperature are considered in combination. However, since the disk pressure affects the condensation temperatures of all elements very similarly, the variation in disk pressure does not change the equilibrium chemistry and equilibrium composition as a function of temperature qualitatively, but only shifts the equilibrium composition to higher temperatures (for an increased pressure) or to lower temperatures (for a decreased pressure). The small variations in the pressure response of the elements' condensation temperature will likely be rendered insignificant by the fact that a planet does not comprise material of one specific (\(T\)-\(p\))-equilibrium condition but rather a mixture over a range of conditions. As shown in Fig. 6, the greater the change in pressure, the larger the difference
\begin{table}
\begin{tabular}{l l l l} \hline \hline & \multicolumn{2}{c}{Condensation Temperature in K} \\ Element & Lodders (2003) & Wood et al. (2019) & our code \\ \hline Al & 1653 & 1652 & 1643 \\ Ti & 1582 & 1565 & 1575 \\ Ca & 1517 & 1535 & 1512 \\ Mg & 1336 & 1343 & 1331 \\ Fe & 1334 & 1338 & 1329 \\ Si & 1310 & 1314 & 1305 \\ \hline \end{tabular}
\end{table}
Table 2: Comparison of 50% condensation temperatures of major rock-forming elements.
Figure 5: Graphical comparison of deviation of 50% condensation temperatures of the major rock-forming elements, minus the mean over the three values for each element.
between the elements. Our argument is therefore only valid for small pressure ranges.
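The near-uniform pressure response reported above corresponds to an approximately log-linear relation of roughly 70 K per decade of pressure. A sketch with illustrative (not simulated) data points:

```python
import numpy as np

# Illustrative (log10 p [bar], T_c [K]) pairs for one element, following
# the reported average rise of (357 +/- 57) K over five decades in pressure
log_p = np.array([-6.0, -5.0, -4.0, -3.0, -2.0, -1.0])
t_c = np.array([1180.0, 1250.0, 1322.0, 1390.0, 1462.0, 1530.0])

# Least-squares fit T_c = a * log10(p) + b
a, b = np.polyfit(log_p, t_c, 1)
total_rise = a * (log_p[-1] - log_p[0])  # increase over the full range
```
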
#### 6.2.2 Dependence on the elemental abundance pattern
There have been many studies assessing the diversity of exoplanetary compositions as a result of the changed equilibrium chemistry in a protoplanetary disk, due to variations in its elemental abundance pattern (e.g. Bond et al. 2010; Carter-Bond et al. 2012; Thiabaud et al. 2014; Moriarty et al. 2014; Dorn et al. 2019; Bitsch & Battistini 2020; Jorge et al. 2022). Certain element ratios control which molecular species will form out of the available elements. Different molecules can have vastly different condensation temperatures even if they consist of similar elements. The species in which an element is predominantly bound generally determines its 50% condensation temperature.
To systematically analyse the influence of the elemental abundance pattern on the condensation temperatures of the elements, we ran condensation simulations for synthetic abundance patterns, only varying one key element ratio at a time. We explored the role of the overall metallicity and the element ratios C/O, Mg/Si, Fe/O, and Al/Ca. We specify metallicities logarithmically and normalised to solar values:
\[\mathrm{[M/H]}=\log\left(\frac{N_{M}}{N_{H}}\right)_{\mathrm{star}}-\log \left(\frac{N_{M}}{N_{H}}\right)_{\mathrm{sun}}, \tag{30}\]
where \(N_{M}\) is the sum of the relative number of atoms in the system of all elements heavier than He, and \(N_{H}\) the relative number of H atoms. All other element ratios are given as non-normalised number ratios, for instance, 'C/O' means
\[\mathrm{C/O}=\left(\frac{N_{C}}{N_{O}}\right)_{\mathrm{star}}. \tag{31}\]
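Equations (30) and (31) translate directly into code; the toy abundances below are illustrative and do not reproduce any real abundance catalogue:

```python
import math

def metallicity(n_star, n_sun):
    """[M/H] per Eq. (30): sum all elements heavier than He, reference to H,
    and take the star-minus-Sun difference in dex."""
    m_star = sum(n for el, n in n_star.items() if el not in ("H", "He"))
    m_sun = sum(n for el, n in n_sun.items() if el not in ("H", "He"))
    return (math.log10(m_star / n_star["H"])
            - math.log10(m_sun / n_sun["H"]))

def c_to_o(n_star):
    """Non-normalised number ratio C/O per Eq. (31)."""
    return n_star["C"] / n_star["O"]

# Toy atom counts: this 'star' has exactly twice the solar metal content
sun = {"H": 1e12, "He": 1e11, "C": 2.7e8, "O": 4.9e8, "Fe": 3.2e7}
star = {"H": 1e12, "He": 1e11, "C": 5.4e8, "O": 9.8e8, "Fe": 6.4e7}
```
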
As a basis for our analysis, we used the Brewer et al. (2016) catalogue of 1617 F, G, and K stars. To ensure the reliability of the abundance data, we only took stars into account whose spectra have a signal-to-noise ratio (S/N) larger than 100. Furthermore, to avoid giant stars, we excluded all stars with \(\log g\leq 3.5\) (compare Harrison et al. 2018). We used the remaining 964 stars to (1) generate a representative abundance pattern as a starting point for the element ratio variations, (2) determine the parameter ranges of the element ratios we were interested in, and (3) pick roughly 100 stars, covering the whole parameter space, to verify that any trends found in the analysis of the synthetic data are also followed by the real stellar data. Figure 7 shows the parameter ranges (\(T_{\mathrm{eff}}\) versus metallicity, C/O versus Al/Ca, and Mg/Si versus Fe/O) covered by the Brewer et al. (2016) stars, and the distribution of the sample of comparison stars.
For most variations of the element ratios, we kept the abundance of one element constant in our representative abundance pattern, and only varied the other. For the overall metallicity, only the H abundance was changed. For the Al/Ca, Mg/Si, and Fe/O ratios, Al, Mg, and Fe were varied, respectively. The C/O ratio was treated differently. In the studied stellar sample, there is a strong correlation between the C/O ratio and the ratio of the sum of other abundant elements, such as Mg, Si, and Fe, to O. In order to avoid a distortion of the analysis due to an unrealistic abundance pattern of the synthetic data, we approximated this correlation with a parabola, as shown in Fig. 8, and adapted both the C and O abundance accordingly.
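A minimal sketch of such a parabolic fit; the sample points are synthetic and placed exactly on a parabola, purely for illustration:

```python
import numpy as np

# Synthetic sample: C/O versus Sigma/O, with
# Sigma = N_Mg + N_Si + N_Fe + N_Ca + N_Al (values illustrative only)
c_o = np.array([0.3, 0.4, 0.5, 0.6, 0.7, 0.8])
sigma_o = 0.8 * c_o**2 + 0.1 * c_o + 0.35  # exactly parabolic toy data

# Quadratic least-squares fit, as used to couple the synthetic C and O
# abundances to a realistic overall pattern
coeffs = np.polyfit(c_o, sigma_o, 2)
predict = np.poly1d(coeffs)
```
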
Our tests show that an element ratio can affect the condensation temperatures in different ways. We can differentiate the effects in terms of the number of affected elements, and the curve shape of the correlation between the ratio and condensation temperature. Table 3 shows the summary of our findings. The overall metallicity and C/O ratio have the most profound impact on the condensation temperatures. These two variations stand out,
Figure 6: Dependence of the 50% condensation temperature (\(T_{c}\)) of elements on the disk pressure. Markers represent simulated condensation temperatures, and corresponding dashed lines are only to guide the eye. Colours, as denoted in the legend, are the same for both panels. All simulations use the solar elemental abundance pattern recommended by Lodders (2003). _Top panel_: 50% condensation temperature of major planet-building elements as a function of different disk pressures. _Bottom panel_: Deviation of the 50% condensation temperature from the respective mean condensation temperature.
because (1) they affect a large number of elements, (2) the magnitude of change in condensation temperatures is high, and (3) the correlation between the ratio and the condensation temperature of all affected elements is systematic. We therefore limit our discussion to these two parameters and only cover the others in a cursory fashion.
In Fig. 9, we show the influence of the overall metallicity of the system on the condensation temperatures of a selection of common elements. The coloured foreground markers represent the simulation result for the synthetically varied metallicity, the grey background markers show the simulation results of the random sample of comparison stars. The figure clearly demonstrates a linear correlation between the logarithmic metallicity and condensation temperature for all elements. We found an increase in condensation temperature between 52 K (Fe) and 117 K (Al) over the covered metallicity range. Despite the fact that the abundance patterns of the comparison stars are quite diverse (cf. Fig. 7), the log-linear correlation between metallicity and condensation temperatures can also be found there.9 The median deviation of the condensation temperatures from the interpolation curve of the synthetic simulation results is below 20 K for all the elements.
Footnote 9: The deviation of the Fe condensation temperatures at low metallicities is caused by the superposition of the effect of disproportionately low Fe-abundances in the stellar sample (‘\(\alpha\)-enhancement’; see e.g. Gebek and Matthee 2022).
The effect of the overall metallicity is reminiscent of the effect of disk pressure variations, seen in Sect. 6.2.1. This is unsurprising as both variations effectively change the relative number of rock-building particles per volume, that is, the chemical reaction rates.
We have found a similar log-linear correlation for the Fe/O ratio, which does, however, only affect the condensation temperature of Fe itself. Again, we suspect this to be explainable by the fact that we increased the partial pressure of Fe. Since its dominant solid species in our simulation is pure solid iron, this increased partial pressure does not shift the balance of any other chemical reaction. The result would apply analogously to similar condensation patterns, where the dominant solid species do not include any other elements, such as the Ni condensation.
In Fig. 10, we show the influence of the C/O ratio on the condensation temperature of some common elements. Again, the synthetic simulation results are depicted with coloured foreground markers, the comparison stars with grey background markers. It is important to note that in this figure the effect of the metallicity, as described above, has already been removed from the results. The order of magnitude of the change in condensation temperature caused by the variation of the C/O ratio is similar to that of the metallicity. There are, however, also several qualitative differences. The most obvious difference is that an increase in the C/O ratio causes a decrease in condensation temperatures, in contrast to the increase caused by a higher metallicity.
\begin{table}
\begin{tabular}{c c|c c} \hline \hline & & \multicolumn{2}{c}{curve shape} \\ & & log-linear & amorphous \\ \hline \multirow{2}{*}{\(\Delta\)O} & many elements & [M/H] & C/O \\ & few elements & Al/Ca (for Al) & Al/Ca (for Ca) \\ & & Fe/O (for Fe) & Mg/Si (for Mg \& Si) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Summary of the influence of the variation in different element ratios on the condensation temperature of elements.
Figure 8: Correlation between the C/O and \(\Sigma\)/O, where \(\Sigma=N_{\text{Mg}}+N_{\text{Si}}+N_{\text{Fe}}+N_{\text{Ca}}+N_{\text{Al}}\). Grey background markers represent the studied stellar sample, and the dotted line shows the quadratic fit to the data. The light blue foreground markers show the synthetic data for the C/O analysis.
Figure 7: Parameter range of the Brewer et al. (2016) stellar database. Grey background markers represent all stars with \(\log{g}>3.5\) and \(S/N>100\) of the database, and light blue foreground markers represent the stellar sample studied here. _Left Panel_: Effective temperature versus metallicity. _Middle Panel_: C/O ratio versus Al/Ca ratio. _Right Panel_: Mg/Si ratio versus Fe/O ratio.
Also, while there certainly seems to be a systematic effect of the C/O ratio on all condensation temperatures, the correlation is not log-linear. Finally, while the metallicity affected all condensation temperatures, the C/O ratio only affects elements whose dominant species contain O; for instance, the condensation of Fe and Ni is unaffected.
The expected correlation mapped out by the synthetic data is followed exceptionally well by the real data simulations, especially for the range \(0.3\leq\mathrm{C/O}\leq 0.7\). For the whole parameter range, the median deviation from the expected curve is below \(10\,\mathrm{K}\) for all elements. For C/O values below \(0.3\), the real data condensation temperatures of Al and Ca do not follow the synthetic data well. This is caused by the superposition of the influence of the Al/Ca ratio. The diverging systems coincidentally all feature a particularly low Al/Ca ratio. Our tests with synthetically varied Al/Ca ratios have shown that it causes a roughly log-linear increase of the Al condensation temperatures and a step-function increase for Ca (see Fig. 11, left panel). This explains why the condensation temperatures of Al shown in Fig. 10 gradually taper off from the expectation curve, whereas the Ca condensation temperatures suddenly jump to values more than \(100\,\mathrm{K}\) below the expectation.
These differences between the effect of the variations in metallicity and in the C/O ratio allude to different underlying mechanisms. We have argued that an increase in metallicity implies increasing the number of all reactants per volume, thereby increasing all reaction rates. In contrast, changing the C/O ratio tilts the reaction balances in the formation of many major species, by only changing the availability of one of the reactants or by changing them to different degrees. The effect of a changed reaction balance is far more difficult to predict than the effect of globally increased reaction rates. Reaction balances are particularly strongly affected when the involved element ratios in a system are typically close to unity, because then a change in the ratio can imply that the availability of one of the reactants is exhausted before the other, inhibiting this reaction. This is the case for the C/O, Mg/Si, and Al/Ca ratios.
Figure 9: Correlation between the overall metallicity of the system and the condensation temperature of some major planet-building elements. The coloured circles in the foreground show the simulation results of the synthetic abundance patterns, and the grey circles in the background show the simulation results of a representative subset of approximately 100 stars from the Brewer et al. (2016) database. All simulations were run at a constant disk pressure of \(p=10^{-4}\) bar.
#### 6.2.3 Implications of the variability in condensation temperatures
Our findings regarding the variation of the condensation temperatures of elements have several implications. Most importantly, a combination of variations in pressure and elemental abundance pattern, even over a moderate parameter range, can easily change the condensation temperatures of elements by more than 100 K. This needs to be taken into account when they are used to estimate planet compositions in other stellar systems, for instance when applying the elemental devolatilization pattern of the Earth to exoplanets (Wang et al., 2019; Spaargaren et al., 2023) or in the context of white dwarf pollution (Jura and Young, 2014; Farihi et al., 2016; Harrison et al., 2018; Wilson et al., 2019; Bonsor et al., 2020; Veras, 2021; Xu and Bonsor, 2021).
We have, however, seen that the overall metallicity and the disk pressure affect all elements very similarly. Neglecting those will likely not be of great consequence to any derived exoplanetary compositions. That is to say, the computed element ratios would agree well with a model taking these parameters into account over the whole simulation range, but these ratios would be predicted for shifted radial distances.
Other variations in the elemental abundance pattern, however, cause more unpredictable changes to the condensation temperatures of some elements. As a result, both the sequence in which the elements condense, as well as the difference in their condensation temperatures can be significantly altered. These changes entail substantial qualitative deviations in the most likely composition of a planet expected to form in a given system compared to its Solar System analogue. We explore this point further in the next section (Sect. 7).
Furthermore, our findings give us an idea of the potential influence of the uncertainty of stellar abundance measurements. While the uncertainties of the abundances of most planet-building elements might be lower than \(\pm 0.03\) dex for many well-studied F, G, and K stars (Brewer et al., 2016), and even an uncertainty at the \(\pm 0.01\) dex level seems feasible for these stars (Bedell et al., 2014), the situation is generally much worse. For M-dwarfs, where abundance measurements are in their infancy, typical errors exceed \(\pm 0.1\) dex (Souto et al., 2017). As we see in
Figure 10: Correlation between the C/O ratio and the condensation temperature of some major planet-building elements, corrected for the metallicity effect. The coloured circles in the foreground show the simulation results of the synthetic abundance patterns, and the grey circles in the background show the simulation results of a representative subset of approximately 100 stars from the Brewer et al. (2016) database. All simulations were run at a constant disk pressure of \(p=10^{-4}\) bar.
Fig. 10, a difference of \(\pm 0.1\) dex in the C/O ratio can signify a difference of some tens of Kelvins in the condensation temperature of certain elements, at least at the upper end of our tested range, that is, C/O \(\geq 0.5\). An in-depth analysis of the impact of uncertainty is available in Hinkel & Unterborn (2018).
## 7 Exoplanet compositions
We now look at the bulk composition of rocky planets around chemically different stars. To emulate the dynamical formation of planets we compare different methods of assembling the solid material from our equilibrium condensation simulation. To externally validate our results and qualitatively assess the merits of our composition models, we compare them against the \(n\)-body simulations of Bond et al. (2010b) (in this Sect. abbreviated as B10).
### Derivation of planet compositions
Our underlying disk model is vastly simplified. The only parameter changing within our disk is the temperature, which is a proxy for distance from the central star. For our study, we are not interested in an exact temperature-distance relation but only in qualitative tendencies. We keep the pressure constant at a value of \(1\times 10^{-4}\) bar. This pressure value is often assumed for the formation of Earth (Fegley, 2000; Lodders, 2003; Wood et al., 2019). In this context, however, the choice was arbitrary. As shown above in Sect. 6.2.1, variations in disk pressure affect the condensation of all species very similarly, especially if the expected variation in pressure is small.10
Footnote 10: Note that, in contrast to our simplified model, realistic disk models have a two-dimensional structure, show pressure gradients and pressure bumps; they likely have inhomogeneous element distributions, and they evolve in time.
To derive the bulk composition of a planet for any given formation temperature (see above, Sect. 5.3), we use three different methods of assembling planetary material to emulate planet formation via accretion. While our models are loosely rooted in the idea of planetary accretion from within the planet's Hill sphere (see e.g. Pollack et al., 1996; Kley, 1999), we only use them to bypass computationally expensive \(n\)-body simulations. That means there is no correspondence between the expected physical size of the planet and the diversity of the material assembled in our code; we instead made the latter match the results of the \(n\)-body simulation. As illustrated in Fig. 12, we compare two differently shaped planetary feeding zones (FZs) to a model without a FZ.
In the simplest approach, the planet is only made up of the solid material that is stable at the planet's formation temperature, with the relative amounts dictated by the thermochemical equilibrium at that temperature. This approach corresponds to taking an infinitesimally thin section of the elements-temperature-progression described in Sect. 5.3. It is illustrated in the bottom panel of Fig. 12, where the \(x\)-axis represents the temperature decreasing with distance from the central star; all material except that at \(T=T_{\rm central}\) is discarded.
The first FZ is an equal weights temperature band, illustrated in the middle panel of Fig. 12, later referred to as 'boxcar FZ'. We specify the width of the temperature band, add up the elemental equilibrium compositions within the temperature range, and normalise the result. This normalised result is taken to be the planetary composition at the central temperature of the band. Since a lower temperature generally entails a higher total amount of solids in the equilibrium, it follows that the lower-temperature edge of the band effectively has a stronger influence on the resulting planetary composition than the higher-temperature edge.
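The boxcar FZ can be sketched on a discrete temperature grid as follows; the arrays are hypothetical and do not reflect our actual data structures:

```python
import numpy as np

def boxcar_fz(temps, solid_mass, t_form, width):
    """Boxcar feeding zone: sum the equilibrium solid composition over a
    temperature band of the given width centred on t_form, then normalise.

    temps      : 1-D array of grid temperatures [K]
    solid_mass : 2-D array (n_temps, n_elements), solid mass per element
                 in equilibrium at each grid temperature
    """
    mask = np.abs(temps - t_form) <= width / 2.0
    total = solid_mass[mask].sum(axis=0)
    return total / total.sum()

# Two-element toy grid; more solid mass at lower temperatures, so the cool
# edge of the band dominates the normalised result
temps = np.array([1400.0, 1350.0, 1300.0, 1250.0])
solid_mass = np.array([[1.0, 0.0],
                       [1.0, 1.0],
                       [2.0, 2.0],
                       [3.0, 3.0]])
comp = boxcar_fz(temps, solid_mass, t_form=1325.0, width=100.0)
```
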
Figure 11: Correlation between element ratios and the condensation temperatures, corrected for the metallicity and C/O effect. The coloured circles in the foreground show the simulation results of the synthetic abundance patterns, and the grey circles in the background show the simulation results of a representative subset of approximately 100 stars from the Brewer et al. (2016) database. All simulations were run at a constant disk pressure of \(p=10^{-4}\) bar. _Left panel_: Influence of the Al/Ca ratio on the condensation temperature of Al and Ca. _Right panel_: Influence of the Mg/Si ratio on the condensation temperature of Mg and Si.
Figure 12: Illustration of the three different FZ models for creating a planet’s composition at a given temperature. The x-axis denotes the temperature, or equivalently the distance from the star. _Top panel_: Gaussian profile. _Middle Panel_: Boxcar profile. _Bottom Panel_: No FZ model.
The second type of FZ is a Gaussian profile, as illustrated in the top panel of Fig. 12. Here, the total material at each temperature is first multiplied by a normal distribution with a specified standard deviation, \(\sigma\), and subsequently added up. The argument regarding more solid material being present at lower temperatures also applies to this FZ. The width of the FZ is given by \(2\sigma\). The location of the peak of the normal distribution gives the planetary formation temperature.
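The Gaussian FZ differs from the boxcar only in its weighting; a sketch under the same hypothetical grid layout:

```python
import numpy as np

def gaussian_fz(temps, solid_mass, t_form, sigma):
    """Gaussian feeding zone: weight the solid composition at each grid
    temperature with a normal profile centred on t_form, then normalise.
    The peak location t_form is the planetary formation temperature."""
    w = np.exp(-0.5 * ((temps - t_form) / sigma) ** 2)
    total = (solid_mass * w[:, None]).sum(axis=0)
    return total / total.sum()

# Toy grid: element 1 condenses at lower temperatures than element 0, so
# it dominates the mix because more solid mass sits on the cool side
temps = np.array([1400.0, 1300.0, 1200.0])
solid_mass = np.array([[1.0, 0.0],
                       [1.0, 1.0],
                       [0.0, 2.0]])
comp = gaussian_fz(temps, solid_mass, t_form=1300.0, sigma=100.0)
```
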
For both of these types of FZ, the effect of a ring geometry on the amount of available material is not taken into consideration for the final planetary make-up. That means we neglected the fact that a lower temperature corresponds to a greater distance from the star. A larger radius of the ring implies more material with that particular composition being available for accretion onto the planet. To quantify this geometric effect, we would have to connect our temperature profile to specific distances, which is not part of our simplified model.
### Application and comparison study
Following the approach of B10, we analyse the predicted planetary compositions with our simplified disk model for chemically diverse stars, delineated primarily by differences in their C/O ratios. In particular, we explore the simulated compositions of a rocky planet formed around a low-carbon star (HD27442), around a medium-carbon star (HD17051), and around a high-carbon star (HD19994), using the three different planetary FZ models described above. The physical properties of the stars are listed in Table 4. We use the elemental abundances of these stars as reported by B10, in order to facilitate the comparison of the results. It should be noted, though, that these abundances have later been found to be inaccurate; in particular, the C/O ratios are vastly overestimated (Fortney, 2012; Nissen, 2013; Teske et al., 2014; Brewer & Fischer, 2016). Based on more recent studies of stars in the solar neighbourhood (e.g. Brewer et al., 2016), all of the C/O ratios analysed in this section would be classified as moderately high or high.
We compare our results against the results of B10. They simulated the composition of rocky planets by combining a chemical equilibrium condensation with a dynamical accretion simulation. The chemical equilibrium condensation was done with the HSC suite. The \(T\)-\(p\) input parameters were based on the Hersant et al. (2001) temperature-pressure-profile for the midplane of the protoplanetary disk.11 The solids formed in equilibrium at the specified temperature and pressure constitute the planetary building blocks for the dynamical \(n\)-body simulation, which was done using the SyMBA integrator (Duncan et al., 1998). They ran four accretion simulations for each of the studied stars, slightly varying the initial distribution of planetesimals, and recorded the composition of the final planets as the sum of all accreted material. Each simulation run returned between zero and three planets per star. For each star, we show the planet compositions in order of formation distance across the whole set of the B10 simulations. This illustrates representative compositional patterns as a function of distance, and captures their variability due to dynamics.
Footnote 11: The Hersant et al. (2001) disk profile is not considered state-of-the-art anymore, as it is a purely diffusive model. It has been shown that introducing radiative transfer to the model inverts the vertical temperature profile of the disk (i.e. \(T\) increases for increasing \(z\)) compared to a purely diffusive model in which the temperature decreases with distance from the midplane, and a shadowing effect results in an overall cooler midplane (Pinte et al., 2009; Woitke et al., 2009; Oberg et al., 2022). However, for the purpose of our analysis, the only essential aspect of the disk-profile is that the midplane temperature decreases with distance from the star.
### Results for chemically diverse systems
Figures 13 to 15 compare our simulated planet compositions for the three stars with the B10 results. We describe them in more detail in the following subsections. Each figure contains four panels, showing the bulk composition of a planet (in wt - %) simulated to form around the respective star. The top panel shows the discrete results of the B10 simulation as a function of distance, the second panel from the top shows our composition for a Gaussian FZ, the third panel shows our simulation for a boxcar FZ, and the bottom panel shows the composition without any FZ. Where possible, we group the B10 planets with similar composition and indicate roughly which section of our simulation best corresponds to them with arrows between the topmost and second panels.
#### 7.3.1 Low carbon abundance
In Fig. 13 we show the system of HD27442, which has the lowest carbon abundance of the three analysed systems (C/O = 0.61). The elemental abundance pattern of this star is similar to the solar values. This implies that the simulated planets can be expected to also resemble the inner planets of the Solar System in bulk composition. No S abundance is reported for HD27442. Since its C/O ratio is far below 1, S species are not expected to play a major role in the equilibrium chemistry (see footnote 8 and Sect. 7.3.3). We therefore excluded all species containing S from the simulation of this system.
Starting closest to the central star, two planets from the B10 simulations formed at 0.33 and 0.35 AU with similar compositions of mostly O and Al, and significant amounts of Mg, Ca, and Si. We find a very similar composition in our simulation without a FZ at or slightly below 1400 K. The boxcar FZ, with a width of 100 K, produces a slightly different composition for this formation temperature, with a reduced Mg-content. This is caused by mixing in material from the higher-temperature regions, where only Al-Ca-O species have condensed. We cannot reproduce the composition of the two innermost planets with our Gaussian profile, because it does not create a region that contains Mg but no Fe.
At intermediate distances, we find a group of five planets in the B10 simulation with similar O and Si contents as the innermost planets, but with ever increasing Fe amounts. These planets formed between 0.36 AU and 0.52 AU, which seems to correspond to the location of the Fe snow line in the B10 simulation. Due to the very abrupt condensation of Fe at \(T\approx 1360\) K, we can
| | HD 27442 | HD 17051 | HD 19994 |
| --- | --- | --- | --- |
| \(T_{\rm eff}\) [K] | 4825 | 6097 | 6188 |
| \(M_{*}\) [\(M_{\odot}\)] | 1.48 | 1.15 | 1.37 |
| \(R_{*}\) [\(R_{\odot}\)] | 3.43 | 1.18 | 1.75 |
| \(L_{*}\) [\(\log_{10}(L_{\odot})\)] | 0.838 | 0.250 | 0.626 |
| \([\rm Fe/H]\) [dex] | 0.42 | 0.11 | 0.19 |

Table 4: Physical parameters of the stars analysed in this section.
not reproduce this planetary composition without resorting to a FZ. The gradual change in planet composition can be reproduced with both FZ models. The boxcar model would profit from a larger FZ width than the one used here, though.
Figure 13: Predicted bulk composition (in wt-%) of a rocky planet simulated for the elemental abundance of HD27442 (low carbon system). _Top panel_: B10 planet composition results from four separate simulation runs. We also show our simulations: with a Gaussian FZ (_second panel_), a boxcar FZ (_third panel_), and no FZ (_bottom panel_). The arrows between the first two panels indicate roughly the location of the best correspondence between the Bond simulation and ours.
At greater distances, we find a group of three planets in the B10 simulation that can be characterised by their large content of Fe, Mg, O, and Si. These planets formed between 0.52 AU and 0.77 AU. As shown in our continuous simulations, the composition of the solids in the disk converges to these specific ratios, which are in accordance with the stellar elemental abundance ratios. This is due to the fact that we now look at planetary formation temperatures below the condensation temperatures of the main planetary components. Once we enter this region, the FZ model becomes obsolete, as the material to either side of the central temperature is identical.
There is one final planet left in the B10 simulation that has no correspondence with our continuous simulation. It formed at the greatest distance from the star, but its composition rather resembles the second group of planets. The formation of this planet requires substantial dynamical processes, likely in the form of planet migration, which cannot be emulated by our simple FZ model.
#### 7.3.2 Medium carbon abundance
In Fig. 14 we show the planets simulated to form around the star HD17051, which has an intermediate carbon abundance among the analysed systems (\(\rm C/O=0.87\)). Closest to the central star, we see five B10 planets with almost identical compositions. They formed between approximately 0.3 AU and 0.4 AU. They contain a large fraction of O, similar amounts of Al and Ca, and a small amount of Si. We find the same composition in our simulation for all three types of FZs between roughly 1500 K and 1400 K. For the Gaussian FZ, though, the region corresponding to the composition of the B10 planets is very narrow, which is difficult to reconcile with the consistent compositions returned by the \(n\)-body simulation. The model without a FZ has a broad plateau of the same composition as the B10 planets. The boxcar FZ does not have such a broad plateau, but shows a section with a sufficiently constant composition to be compatible with the formation of similar planets over an extended range of distances.
At greater distances, we identify two B10 planets with very similar composition that are vastly different from the first group. These planets are dominated by their high Fe content, exceeding 50% of the total weight of the planet, and suggesting a very extensive planetary core. The remaining composition is made up of O, Mg, and Si, with only small contributions of Ni, Ca, and Al. In contrast to the HD27442 system, this group of planets has not formed in the region of the convergence composition of the disk. We can clearly see in our simulations that the composition changes significantly all the way down to approximately 600 K, when S condenses.
Both the high similarity of composition over a fairly large distance range, as well as the deviation from it in the form of a slight increase in Mg, O, and Si, can be seen in our three continuous models in the temperature range from approximately 1300 K to 1200 K. Both the Gaussian and the boxcar profile reproduce the gradual changes in the composition of the B10 planets. The composition without a FZ compares less favourably, because at the onset of the Mg and Ni condensation, it changes very rapidly and strongly, and at lower temperatures, it stays constant.
#### 7.3.3 High carbon abundance
Finally, we show the planets simulated to form around the high-carbon star HD19994 in Fig. 15. Based on the elemental abundance data we use for this simulation, the star has the exceptional C/O ratio of 1.26. We expect a completely altered disk chemistry for systems with C/O ratios exceeding unity. All O-atoms are bound to C-atoms to form highly stable CO gas molecules (Molliere et al., 2015; Woitke et al., 2018). Accordingly, O is no longer available for the solid-phase chemistry, inhibiting the condensation of some of the most common species in planet formation, such as Al\({}_{2}\)O\({}_{3}\), CaAl\({}_{12}\)O\({}_{19}\), and MgSiO\({}_{3}\). Because all O is bound in CO, no O is available to bind with H\({}_{2}\) to form H\({}_{2}\)O, and the system becomes highly reducing. This means that all Fe is in a reduced state and that some Si occurs in metal instead of silicates. A C/O ratio exceeding 1 also means that excess C is available to form exotic phases like SiC, or free C in the form of, for example, graphite. In high C/O systems, S replaces O as the anion, which leads to a much higher condensation temperature of S, as it condenses with Ca into refractory phases like oldhamite (CaS). Although the solar C/O is approximately 0.5, some portions of the early Solar System apparently had C/O ratios close to 1, as is evident from the presence of CaS in reduced enstatite chondrites or exotic elemental ratio patterns of the rare Earth elements in ordinary chondrite chondrules (Pack et al., 2004). A planet with a bulk \(\rm C/O>1\) would certainly not allow the presence of liquid water and would thus likely be hostile to life.
From a practical, computational point of view, it should be noted that not many S species are taken into account in condensation simulations, due to their limited importance in solar-like systems. For instance, B10 only considers the solid S species FeS, MgS, and CaS; we only added Al\({}_{2}\)S\({}_{3}\) to this selection.12 This means that the simulations likely do not reflect the true disk chemistry of a C-rich system.
Footnote 12: The GGchem code (Woitke et al., 2018), on the other hand, contains thermochemical data for 12 different solid S species.
As expected, the composition of the simulated planets is completely different from the planets discussed so far. Starting again closest to the central star, we find two B10 planets only containing C and Si. They formed at 0.31 and 0.33 AU. In all our models, the same composition can be found for a large range of temperatures. The condensation of C and SiC in this system occurs at a much higher temperature than any of the other species, and in combination with the suppression of the Al-Ca-O species, this C-Si composition of solids is very stable in the disk for an extended temperature range. This implies that the FZ type has hardly any influence on the predicted composition of the innermost planets in the system.
At intermediate distances, we find two B10 planets with a small fraction of Fe. They formed at 0.35 and 0.37 AU. We cannot reproduce this composition without using a FZ. The very rapid condensation of Fe means that the disk composition changes from no Fe in solid form to all Fe in solid form within a few kelvins. This makes it difficult to form a planet with a small amount of Fe.
The next two B10 planets in the sequence, formed at 0.45 and 0.46 AU, show increasing amounts of Fe and traces of other elements. Despite their almost identical distance from the central star, the ratios of these elements are substantially different. While we can identify a section in our FZ models in which the Fe fraction increases rapidly, we do not find these exact compositions in any of our models. In particular, the Al traces cannot be reproduced, as we found Al to condense at a temperature that is too low to allow for mixing of the material into the region in which Fe has not yet fully condensed. One reason for this deviation might be the geometric effect we described in Sect. 7.1. This would increase the relative amount of the lower-temperature material, making it available for a redistribution to
the higher-temperature regions. It is also possible that this composition requires a more dynamical accretion of different types of materials than we can emulate with our FZ models.
Figure 14: Predicted bulk composition (in wt-%) of a rocky planet simulated for the elemental abundance of HD17051 (medium carbon system). _Top panel_: B10 planet composition results from four separate simulation runs. We also show our simulation: with a Gaussian FZ (_second panel_), with a boxcar FZ (_third panel_), and with no FZ (_bottom panel_). The arrows between the first two panels indicate roughly the location of the best correspondence between the Bond simulation and ours.
The most distant B10 planet, at 0.7 AU, has a much more diverse composition. This planet formed at a temperature at which CO starts to lose its role as the dominant C gas phase and is replaced by CH\({}_{4}\), removing C from the solid phase and freeing O
Figure 15: Predicted bulk composition (in wt-%) of a rocky planet simulated for the elemental abundance of HD19994 (high carbon system). _Top panel_: B10 planet composition results from four separate simulation runs. We also show our simulation with a Gaussian FZ (_second panel_), with a boxcar FZ (_third panel_), and with no FZ (_bottom panel_). The arrows between the first two panels indicate roughly the location of the best correspondence between the Bond simulation and ours.
for the condensation of more common rocky species. Accordingly, the relative C and Si contents are significantly reduced, but there is also a large fraction of the typical rock components O and Mg. Additionally, S becomes a more abundant trace element. Qualitatively, we find this composition in all our models; the only difference seems to be that our models predict a much lower relative S abundance at the location at which there is still a significant amount of C in solids.
### Implication for simplified planet formation models
We learned several things from the comparison between the combined thermochemical-dynamical model of B10 and our simplified continuous planet composition models, in which the only free parameter was the disk temperature.
Firstly, since the analysed B10 planets were confined to radial distances between approximately 0.3 AU and 0.8 AU from their central star, the variations in disk pressure in these simulations are only about one order of magnitude. As we show in Sect. 6.2.1, the condensation temperatures of the elements do not change significantly within one order of magnitude in pressure. This makes it unsurprising that we can recreate the B10 results so easily without a variable pressure input.
Regarding the emulation of dynamical planet formation, a FZ is generally able to reproduce the results of an \(n\)-body simulation. The continuum compositions of all three systems show that we can distinguish sections in which the element ratios are fairly constant over a large temperature range, and sections with rapid changes. The greatest variability in composition occurs in the vicinity of the condensation temperatures of the major planet-building elements Mg, Si, and Fe. At these temperatures, using a FZ is crucial to reproduce the gradual variations in planet composition found in \(n\)-body simulations. In regions where the element ratios are constant over a large temperature range, using a FZ is less relevant, or, in the case of the convergence composition, completely obsolete.
The exact shape of the FZ does not seem to be particularly significant, as their width can be adapted to generate the required effect on the final composition. For instance, Sossi et al. (2022) have shown that the measured elemental depletion pattern of Earth compared to the Sun can be achieved by using a Gaussian FZ with a standard deviation of approximately \(\sigma\approx 216\) K, whereas that of Vesta, with its mass of \(4\times 10^{-5}\)M\({}_{\oplus}\), requires a standard deviation of \(\sigma\approx 57\) K. There are, however, some arguments in favour of the boxcar model. On the one hand, it seems to be better at reproducing the composition of the innermost planets formed in the \(n\)-body simulation. At the onset of condensation, when there is no solid material at higher temperatures, a Gaussian profile results in a very asymmetric assemblage of material that is skewed towards low-temperature material. On the other hand, a boxcar profile seems to be more compatible with the physical concept of accretion from a region within the gravitational influence of the forming planet. This could also be achieved by cutting off the wings of the Gaussian profile, for example at \(2\sigma\) or \(3\sigma\).
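The FZ averaging discussed above can be condensed into a few lines. The function below is an illustrative sketch, not part of the ECCOplanets API (names and data layout are my own): it mixes the condensed solid composition across temperature bins using a Gaussian or boxcar weight centred on the formation temperature.

```python
import numpy as np

def feeding_zone_mix(T_grid, comp_grid, T_form, width, profile="gaussian"):
    """Mix solid compositions over a feeding zone (FZ) in temperature.

    T_grid    : 1D array of disk temperatures (one per radial bin)
    comp_grid : 2D array, comp_grid[i, j] = mass of element j condensed at T_grid[i]
    T_form    : central formation temperature of the planet
    width     : sigma (Gaussian) or half-width (boxcar), in kelvin
    Returns the FZ-averaged bulk composition in wt-%.
    """
    if profile == "gaussian":
        w = np.exp(-0.5 * ((T_grid - T_form) / width) ** 2)
    elif profile == "boxcar":
        w = (np.abs(T_grid - T_form) <= width).astype(float)
    else:  # no FZ: take the single nearest temperature bin
        w = np.zeros_like(T_grid)
        w[np.argmin(np.abs(T_grid - T_form))] = 1.0
    bulk = (w[:, None] * comp_grid).sum(axis=0)
    return 100.0 * bulk / bulk.sum()
```

Truncating the Gaussian wings at \(2\sigma\) or \(3\sigma\), as suggested in the text, amounts to zeroing `w` outside that window before the weighted sum.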
We have, however, seen some deviations from our continuum composition in the B10 planets, which we could not reproduce with any of our FZ models, and which must therefore be a result of the dynamical accretion simulation. This shows the limitation of our simplistic model. \(N\)-body simulations can help us explore the extent to which processes that entail large displacements of planetary building blocks from their formation region might affect the final composition of a planet. Taking this idea even further, these simulations would also allow us to study the composition of planets that are partly formed by accreting material from remote reservoirs ('pebble accretion'; see e.g. Kleine et al. 2020; Schneeberger et al. 2023; Gu et al. 2023).
## 8 Summary and conclusions
ECCOplanets is a simple, accessible, and versatile Python code that can be used to simulate the equilibrium condensation of the main building blocks of rocky planets in the protoplanetary disk of stars, as a function of the elemental abundance pattern and disk pressure, based on a Gibbs free energy minimisation. The performance of our code is stable and robust for a variety of starting conditions. The software package, which we make publicly available, includes a limited built-in (and extendable) library of thermochemical data representative of common problems in exoplanet formation.
In this paper we have used our code for two typical applications in planetary science: finding the condensation temperature of elements and condensates, and deriving the composition of rocky planets as a function of the stellar abundance pattern. Both these analyses were also used as a benchmark test for the results of our code against literature values.
The computed condensation temperature of a condensate is very sensitive to its exact definition and to the selection of molecules included in the simulation. In combination with the uncertainty in thermochemical data, this suggests that the exact value of simulated molecular condensation temperatures is not very meaningful. Nevertheless, under reasonably simple assumptions, we have shown that our code outputs condensation temperatures within 50 K of accepted literature values for most tested species.
The derived 50% condensation temperatures of elements are a far more robust measure of disk chemistry. They are unambiguously defined and less sensitive to the selection of molecules. Here, the agreement between our results and the literature values is of the order of 5 K. The condensation temperatures of elements are highly sensitive to physical variations in the system, that is, the disk pressure and elemental abundance pattern.
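The 50% condensation temperature lends itself to a compact, unambiguous definition. The helper below is a hypothetical sketch (not part of the ECCOplanets code) that linearly interpolates the first crossing of the fraction-condensed curve as the disk cools.

```python
import numpy as np

def t50(T, frac_condensed):
    """50% condensation temperature of an element.

    T              : temperatures, monotonically decreasing along the array
    frac_condensed : fraction of the element in the solid phase at each T
    Linearly interpolates the first crossing of frac = 0.5.
    """
    for i in range(len(T) - 1):
        f0, f1 = frac_condensed[i], frac_condensed[i + 1]
        if f0 < 0.5 <= f1:
            return T[i] + (0.5 - f0) / (f1 - f0) * (T[i + 1] - T[i])
    raise ValueError("element never reaches 50% condensation")
```

Because the crossing is interpolated on a smooth fraction curve, this measure is far less sensitive to the molecule selection than the condensation temperature of any single condensate.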
The disk pressure affects the condensation temperature of all elements in a similar way, with higher pressures corresponding to higher condensation temperatures. Over the analysed range \(10^{-6}\) to \(10^{-1}\) bar, we find an average increase in condensation temperatures of (\(357\pm 57\)) K for the studied elements.
To understand the influence of variations in the elemental abundance pattern, we performed simulations with synthetically altered key element ratios and compared them to a representative selection of stars. We identified different groups of systematic variations to the condensation temperatures, which hint at different underlying chemical processes. Regarding the number of affected elements and the magnitude of the change in condensation temperature, the metallicity and C/O ratio have the greatest impact. An increase in metallicity results in a log-linear increase in elemental condensation temperatures; in contrast, an increase in C/O lowers the condensation temperature exponentially. While not all elements are affected to the same degree, the condensation temperatures can easily vary by more than 100 K for the sampled parameter ranges of \(4\times 10^{-4}\) to \(2\times 10^{-3}\) in metallicity and 0.1 to 0.7 in C/O.
We conclude that the combined effect of the pressure and elemental abundance pattern on the condensation temperature of elements limits the applicability of the values derived in the context of the formation of the Earth to other planet formation locations within the Solar System, and especially other stellar systems.

Finally, we studied the composition of rocky planets forming around three exemplary stars, delineated by their C/O ratio. To explore the effects of profoundly limited model assumptions, we used a one-parameter (\(T\)) disk model and only emulated planetary accretion with FZ models. We compared our results against a study using a (\(T\)-\(p\)) disk model in a combined thermochemical and \(n\)-body simulation.
Our simple model was able to reproduce almost all compositions of the combined thermochemical-dynamical simulation. This serves as a further confirmation that the disk pressure has an almost uniform influence on the whole condensation regime, and that neglecting it does not affect the results qualitatively for small pressure ranges. It also provides insights into the effects of dynamical accretion. Dynamical accretion leads to gradual changes in the planetary composition as a function of distance from the star. As most elements condense abruptly, these gradual changes require the mixing of condensates from the equilibrium conditions of a large temperature range, that is, a FZ. The shape of the FZ appears to be insignificant, as any FZ can be tailored to achieve the required degree of redistribution of material by adjusting its width.
We conclude that the most likely main characteristics of rocky planet compositions can be determined with very simplified model assumptions. Adding further model parameters can give us invaluable insights into the variability and deviations from equilibrium conditions to be expected in a real exoplanet population.
---

arXiv:2310.14376 | Activity-driven emulsification of phase-separating binary mixtures
Javier Diaz, Ignacio Pagonabarraga | 2023-10-22 | http://arxiv.org/abs/2310.14376v1

# Activity-driven emulsification of phase-separating binary mixtures
###### Abstract
Systems containing active components are intrinsically out of equilibrium, whereas binary mixtures reach their equilibrium configuration only when complete phase separation is achieved. Active particles are found to stabilise non-equilibrium morphologies in phase-separating binary mixtures, arresting coarsening by exerting an active pressure that competes with the surface-tension driving forces. For moderate activities, an emulsion morphology is stabilised, where the droplet size is well defined and controlled by activity. Conversely, active particles are also shown to drive already phase-separated mixtures away from their equilibrium configuration. A rich co-assembly behaviour emerges due to the competing energy scales involved in the system.
**Introduction.** Emulsification of phase-separating binary mixtures (BM) is relevant for several industrial applications where precise control of the characteristic droplet size is desirable[1]. The coarsening behaviour of pure BMs phase-separating via Ostwald ripening is well understood within LSW theory[2; 3; 4], which predicts a power-law scaling of phase-separated domains \(R(t)\sim t^{\alpha}\), while the inclusion of fillers has been shown to modify the coarsening behaviour[5], or even arrest it altogether, as in the case of Pickering emulsions[6] and bijels[7]. Beyond BMs, examples of two-phase coarsening governed by LSW theory include motility-induced phase separation[8; 9] and planet formation[10].
Active particles (APs) are intrinsically out of equilibrium at the level of each particle, which consumes energy from the embedding medium to perform work, typically self-propulsion. This has been shown to lead to a rich self-assembly behaviour, exhibiting phase separation[11; 12; 13; 14] and polar order[15; 16]. APs are often found at fluid-fluid interfaces[17; 18]. Furthermore, living active matter, such as bacteria, is often dispersed within a complex fluid composed of several species, so APs are increasingly studied in the presence of heterogeneous media.
The out-of-equilibrium nature of systems containing active elements suggests the possibility of reaching non-equilibrium steady states different from the equilibrium counterparts. Non-equilibrium shape fluctuations have been encountered in droplets containing APs[19], where collective motion under confinement was observed. Similarly, a deformable, soft confining medium can drive the spontaneous emergence of chiral collective motion in active filaments [20]. The emergence of collective motion in systems involving APs under confinement has been reported for living[21] and inert particles[22]. Conversely, droplets containing APs can lead to droplet self-propulsion[23]. Furthermore, the ability to introduce space-dependent active velocity has led to emergent anomalous diffusivity[24]. The ability of APs to impact the equilibrium configuration of droplets motivates the study of their effect on the coarsening behaviour of passive phase-separating BMs.
**Model.** We use a mesoscopic model for a system containing \(N_{p}\) APs with diameter \(\sigma\) in a BM, described by the difference in concentration of A and B species \(\psi(\mathbf{r},t)=\phi_{A}(\mathbf{r},t)-\phi_{B}(\mathbf{r},t)\). The total free energy of the system has three contributions \(F=F_{BM}+F_{cpl}+F_{pp}\), where \(F_{BM}=\int d\mathbf{r}\left[-\frac{1}{2}\tau\psi^{2}+\frac{1}{4}u\psi^{4}+\frac{1}{2}D(\nabla\psi)^{2}\right]\) is a standard Ginzburg-Landau free energy, leading to a surface tension[25] of the BM \(\gamma_{AB}=(2\sqrt{2}/3)\sqrt{D\tau^{3}}/u\). The particle-particle interaction free energy \(F_{pp}\) is pairwise additive, repulsive, and penalises particle overlap. The coupling free energy is
\[F_{cpl}=\sum_{i=1,N_{p}}c\int d\mathbf{r}\psi_{c}(r)\left[\psi-\psi_{0}\right] ^{2} \tag{1}\]
where \(c\) specifies the scale of the particle-field interaction, the affinity parameter \(\psi_{0}\) specifies the selectivity of the BM towards the particle and \(\psi_{c}\) is a tagged function that determines the size of the particle. A characteristic energy scale can be extracted as \(\epsilon_{cpl}=c\sigma^{2}\psi_{eq}^{2}\) where \(\psi_{eq}=\sqrt{\tau/u}\) is the equilibrium value of the order parameter following minimisation of the local terms in the BM free energy.
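As a cross-check of the interface quantities above: for this Ginzburg-Landau functional the standard kink profile is \(\psi=\psi_{eq}\tanh(x/\xi)\) with \(\xi=\sqrt{2D/\tau}\), and the line tension follows from integrating the gradient term across the interface, giving the textbook closed form \((2\sqrt{2}/3)\sqrt{D\tau^{3}}/u\). A quick numerical verification (illustrative script, not part of the authors' code):

```python
import numpy as np

# Check the Ginzburg-Landau surface tension against the analytic kink result.
# Free energy density: f = -tau/2 psi^2 + u/4 psi^4 + D/2 (grad psi)^2
tau, u, D = 1.0, 1.0, 1.0
psi_eq = np.sqrt(tau / u)
xi = np.sqrt(2.0 * D / tau)            # interface width of the tanh profile

x = np.linspace(-20 * xi, 20 * xi, 200001)
dx = x[1] - x[0]
psi = psi_eq * np.tanh(x / xi)
dpsi = psi_eq / xi / np.cosh(x / xi) ** 2

# Excess free energy per unit interface length = integral of D (dpsi/dx)^2
gamma_numeric = np.sum(D * dpsi ** 2) * dx
gamma_analytic = (2.0 * np.sqrt(2.0) / 3.0) * np.sqrt(D * tau ** 3) / u

print(gamma_numeric, gamma_analytic)   # both ~0.9428 for tau = u = D = 1
```

The agreement confirms that the quoted \(\sqrt{D\tau^{3}}/u\) scaling of \(\gamma_{AB}\) follows directly from the three terms of \(F_{BM}\).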
The state of the system is given by \(\psi(\mathbf{r},t)\), the position of APs \(\mathbf{r}_{i}\) and their orientation \(\varphi_{i}\), with the dynamics of the system controlled by three coupled equations
\[\frac{\partial\psi}{\partial t}=M\nabla^{2}\left(\frac{\delta F}{\delta\psi} \right)+\eta_{BM}(\mathbf{r},t) \tag{2a}\] \[\frac{\partial\mathbf{r}_{i}}{\partial t}=v_{a}\mathbf{\hat{u}}_{i}+\mathbf{f}/ \gamma_{t}+\sqrt{2D_{t}}\xi_{t}\] (2b) \[\frac{\partial\varphi_{i}}{\partial t}=\sqrt{2D_{r}}\xi_{r} \tag{2c}\]
where equation 2a is the standard Cahn-Hilliard-Cook[26; 27; 28; 29] equation for the dynamics of diffusive phase-separating mixtures. The random fluctuation term \(\eta_{BM}\) satisfies the fluctuation-dissipation theorem [30]. Equations 2b and 2c constitute the Active Brownian Particle model, with the two being coupled by the unit vector \(\mathbf{\hat{u}}_{i}=(\cos\varphi_{i},\sin\varphi_{i})\) that dictates the direction of self-propulsion. The forces acting on the particles have two origins: repulsive particle-particle forces \(\mathbf{f}_{pp}=-\nabla F_{pp}\) and coupling forces \(\mathbf{f}_{cpl}=-\nabla F_{cpl}\) due to the embedding field. The active velocity is given by \(v_{a}\) and defines a swimming time scale \(t_{s}=\sigma/v_{a}\). Furthermore, the rotational diffusion time scale is \(t_{rot}=D_{r}^{-1}\). The Einstein relation applies for each diffusive constant, \(D_{t}=k_{B}T/\gamma_{t}\) and \(D_{r}=k_{B}T/\gamma_{r}\).
The dimensionless Péclet number can be defined as \(Pe=t_{rot}/t_{s}\propto v_{a}\), characterising the persistence of the active motion. Alternatively, it can be defined as the ratio of the active energy \(\epsilon_{a}=v_{a}\gamma_{t}\sigma\) and the thermal scale, \(Pe=\epsilon_{a}/k_{B}T\). We will use \(\sigma\), \(t_{rot}\) and \(k_{B}T\) as units of length, time and energy, respectively, while \(\psi\) is expressed in units of the equilibrium value \(\psi_{eq}\). We consider a 2D system with size \(L_{x}=L_{y}\approx 76\) and periodic boundary conditions, leading to an AP surface fraction \(\phi_{p}=N_{p}\pi\sigma^{2}/(4L_{x}L_{y})\). A standard cell dynamics simulation[31] scheme coupled with Brownian dynamics[32; 33] is used to numerically solve Eq. 2. A full description of the model, as well as a complete list of the parameters used, can be found in the Appendix.
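Equations (2b)-(2c) are overdamped Langevin equations and are typically advanced with an Euler-Maruyama step. The sketch below is an illustrative discretisation only (not the authors' cell-dynamics/Brownian-dynamics code of Refs. [31; 32; 33]); the function names and signature are my own.

```python
import numpy as np

def abp_step(r, phi, dt, v_a, f, gamma_t, D_t, D_r, rng):
    """One Euler-Maruyama step of Eqs. (2b)-(2c) for N active Brownian particles.

    r   : (N, 2) positions;  phi : (N,) orientations
    f   : (N, 2) sum of particle-particle and coupling forces on each particle
    """
    n_hat = np.stack([np.cos(phi), np.sin(phi)], axis=1)   # self-propulsion direction
    r_new = (r + dt * (v_a * n_hat + f / gamma_t)
               + np.sqrt(2.0 * D_t * dt) * rng.standard_normal(r.shape))
    phi_new = phi + np.sqrt(2.0 * D_r * dt) * rng.standard_normal(phi.shape)
    return r_new, phi_new
```

With \(D_t=D_r=0\) and no forces, a particle translates ballistically at \(v_a\) along \(\mathbf{\hat{u}}\), the persistent-motion limit of large \(Pe\).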
**Results** We consider a modest concentration of APs \(\phi_{p}=0.2\) on a symmetric phase-separating mixture \(\langle\psi\rangle=0\). The system is initialised from a random configuration of APs in a homogeneous distribution for the BM field \(\psi\), modelling a quench from a disordered state. In the passive limit (\(Pe\to 0\)) APs are energetically favoured to segregate within the white domains in Fig. 1 with \(\psi_{0}=1\). In Fig. 1 we monitor the characteristic length scale of the binary mixture in time \(R(t)\) calculated _via_ the scattering intensity of \(\psi\) (see Eq. S22).
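Eq. (S22) is not reproduced in this excerpt, but a common definition of \(R(t)\) from the scattering intensity is the inverse first moment of the circularly averaged structure factor of \(\psi\). The implementation below is a sketch under that assumption (my own names and normalisation, in lattice units):

```python
import numpy as np

def domain_size(psi):
    """Characteristic domain size from the structure factor of psi(r).

    Common choice: R = 2*pi / <k>, with <k> = sum k S(k) / sum S(k)
    over the fluctuation spectrum (the k = 0 mode is excluded).
    """
    N = psi.shape[0]
    S = np.abs(np.fft.fft2(psi - psi.mean())) ** 2
    k = 2.0 * np.pi * np.fft.fftfreq(N)               # wavenumbers, lattice units
    kx, ky = np.meshgrid(k, k, indexing="ij")
    kmag = np.sqrt(kx ** 2 + ky ** 2)
    mask = kmag > 0
    k_mean = (kmag[mask] * S[mask]).sum() / S[mask].sum()
    return 2.0 * np.pi / k_mean
```

For a single-mode pattern of wavelength \(\lambda\), this estimator returns \(R=\lambda\) exactly, and it tracks the mean domain spacing during coarsening.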
For small \(Pe\lesssim 5\), the BM undergoes coarsening, exhibiting a monotonic growth \(R(t)\sim t^{\alpha}\) over time, until complete phase separation is achieved on a long time scale. In the snapshot for \(Pe=2\), a large percolating domain has formed, with a flat interface, while smaller droplets will eventually coalesce until macrophase separation is complete. The effect of APs on the exponent \(\alpha\) will be the subject of future work.
However, for higher activity \(Pe\gtrsim 8\) the coarsening of the BM appears to be arrested, with a plateau reached in the droplet size \(R(t)\to R^{*}\) at late times, as shown in Fig. 1. The snapshot corresponding to \(Pe=13\) in Fig. 1 shows APs in the vicinity of the domain walls, preferentially pointing into the interface, which suggests an interplay between the active pressure and the surface-tension-driven coarsening of the interfaces. The snapshot for \(Pe=13\) clearly shows that the morphology of the mixture has changed from white droplets in a gray matrix to gray droplets in a white matrix, stabilised by the active pressure exerted by the particles. We note that, within this regime, the stabilised droplet size \(R^{*}\) decreases as the activity increases.
Finally, for higher activity \(Pe\gtrsim 60\), the BM is shown to recover its continuous growth and coarsening is resumed. We hypothesise that, for such high activity, APs carry a high enough active energy \(\epsilon_{a}\) compared with the characteristic wetting energy of the BM, so that the dynamics of the APs and BM appear to be decoupled. Therefore, the coarsening behaviour recovers its expected scaling \(R(t)\sim t^{1/3}\). However, the coarsening curve is not identical to the passive case, which motivates a closer study of the phase separation for high activity.
The apparent coarsening prevention shown in Fig. 1 can be explained by APs exerting active pressure on the BM interfaces. More broadly, particle activity prevents the BM from reaching equilibrium and instead stabilises non-equilibrium morphologies of the mixture. In order to better support these claims, we consider an initially equilibrated flat interface separating A-rich and B-rich domains, with all APs initially located within the white phase with \(\phi_{p}=0.2\). This is computationally advantageous, as it avoids the slow time scales associated with macrophase separation and gives better control over the geometry of the system. Therefore, at \(Pe=0\) the initial condition is also the equilibrium configuration
Figure 1: **Effect of APs on a phase separating mixture.** The characteristic length scale of the BM is shown in time, along with the ideal \(R(t)\sim t^{1/3}\) scaling, for different activity rates \(Pe\). At the bottom, two snapshots are shown, corresponding to different regimes, with the orientation of each AP displayed as a red arrow.
of the system. By doing so, we directly assess the effect of APs on the BM equilibrium morphology, _i.e._, how activity can drive the system away from equilibrium.
Fig. 2 shows the effect of APs on an equilibrated symmetric BM, quantified by the characteristic length scale \(R(t)\) in \(\mathbf{A}\) for selected values of \(Pe\). Again, for small \(Pe\), \(R(t)\) remains equal to the equilibrium size of the BM domain, which scales with the system size (see Fig. S8). However, for intermediate values of \(Pe\) a plateau is observed, \(R(t)\to R^{*}\), smaller than the system size, indicating that the steady-state morphology of the BM is not complete phase separation. In \(\mathbf{B}\) the steady-state behaviour is shown by \(R^{*}\) in terms of \(Pe\). Four different regimes can be identified visually, with the aid of the curve of \(R^{*}\) and the hexatic order parameter \(\Psi_{6}\), which characterises the global ordering of the BM domains into a hexagonal lattice. Furthermore, the pressure curves in Fig. 3 are used to identify critical points, shown as vertical bars in Fig. 2.
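The hexatic order parameter \(\Psi_{6}\) can be sketched as follows. This is an illustrative implementation (my own simplifications: fixed six nearest neighbours, periodic images ignored), using the standard bond-orientational definition \(\psi_{6,j}=\langle e^{6i\theta_{jk}}\rangle_{k}\) averaged over droplets.

```python
import numpy as np

def hexatic_order(centers, n_neigh=6):
    """Global hexatic order parameter Psi_6 of droplet centres.

    centers : (N, 2) droplet centre positions (periodic images ignored here).
    For each droplet, psi6 = mean over its n_neigh nearest neighbours of
    exp(6i theta); Psi_6 = |mean over droplets of psi6| is 1 for a perfect
    triangular lattice and ~0 for a disordered arrangement.
    """
    N = len(centers)
    psi6 = np.zeros(N, dtype=complex)
    for j in range(N):
        d = centers - centers[j]
        order = np.argsort(np.hypot(d[:, 0], d[:, 1]))[1:n_neigh + 1]
        theta = np.arctan2(d[order, 1], d[order, 0])
        psi6[j] = np.exp(6j * theta).mean()
    return abs(psi6.mean())
```

A peak of this quantity as a function of \(Pe\) signals the droplet lattice of regime II.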
In the low-activity regime (regime I), APs do not significantly disturb the equilibrium morphology, and the system behaves like APs confined within their preferred medium: APs accumulate at the soft confining walls, as shown in Fig. 4, where density peaks appear close to the confining walls for \(Pe<7.0\). In Fig. 3 the BM pressure is seen to be independent of the activity for \(Pe<7\), indicating that the flat-interface morphology remains unchanged.
For higher activity, \(7.0<Pe<21.1\), APs are able to drive the BM away from its equilibrium configuration and induce curvature (regime II), where an emulsion of droplets is stabilised by the APs. This is clearly marked by an increase in \(p_{BM}\) in Fig. 3, signaling the departure of the BM from its equilibrium configuration. The domain-size distribution, shown in Fig. 2 \(\mathbf{C}\), indicates that a well-defined droplet size \(R^{*}\) can be identified. The onset of the I-II transition results from the competition between the total active energy of active particles near interfaces, \(\epsilon_{a}N_{p}\), and the total energy associated with the BM interface, \(\gamma_{AB}\Gamma\sim\gamma_{AB}2L_{y}\), as shown in Fig. 5 \(\mathbf{A}\), where the surface tension of the BM is shown to be the controlling parameter for the transition between regimes I and II, i.e., for the ability of APs to drive the BM away from equilibrium. Moreover, the droplet morphology possesses hexatic order, quantified by a peak in \(\Psi_{6}\) in Fig. 2.
We can estimate the scaling behaviour of \(R^{*}(Pe)\) by balancing the total force due to surface tension \(\gamma_{AB}\) on one droplet, \(f_{\gamma}\sim 2\pi R^{*}\gamma_{AB}\), against the total active force exerted on a droplet, \(f_{a}^{1}\sim\gamma_{t}v_{a}N_{p}^{1}\), where \(N_{p}^{1}=8\phi_{p}(R^{*}/\sigma)^{2}\) is the average number of particles per droplet. Balancing these two forces acting on a single droplet, we find \(R^{*}/\sigma=\gamma_{AB}/(12k_{B}TPe\phi_{p})\). In Fig. 2 a yellow curve shows the \(R^{*}\propto Pe^{-1}\) scaling, which roughly predicts the behaviour of the simulation data points. To support this mechanical basis for the emulsification of the BM, the role of the BM time scale is examined in Fig. S5, where we find that a slower or faster BM time scale does not affect the emergence of the droplet morphology. Furthermore, the droplet morphology is independent of the initial conditions, as shown in Fig. S6, which further demonstrates the ability of APs both to arrest the phase separation of the BM and to drive it out of equilibrium.
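As a numerical illustration of the quoted force balance, the sketch below evaluates the closed-form prediction \(R^{*}/\sigma=\gamma_{AB}/(12k_{B}TPe\phi_{p})\) and checks the \(R^{*}\propto Pe^{-1}\) scaling; all parameter values are illustrative, not those of the simulations.

```python
# Sketch: steady-state droplet radius from the force balance
# f_gamma ~ 2*pi*R*gamma_AB  vs  f_a ~ gamma_t*v_a*N_p, N_p = 8*phi_p*(R/sigma)^2,
# using the closed form quoted in the text. Parameter values are illustrative.

def droplet_radius(gamma_AB, Pe, phi_p, kBT=1.0, sigma=1.0):
    """Steady droplet radius: R*/sigma = gamma_AB / (12 kBT Pe phi_p)."""
    return sigma * gamma_AB / (12.0 * kBT * Pe * phi_p)

# Doubling the activity halves the droplet size (R* ~ Pe^-1):
R1 = droplet_radius(gamma_AB=3.0, Pe=10.0, phi_p=0.2)
R2 = droplet_radius(gamma_AB=3.0, Pe=20.0, phi_p=0.2)
assert abs(R1 / R2 - 2.0) < 1e-12
```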
For intermediate activity, \(21.1<Pe<38.1\), APs continue to drive the system away from equilibrium, with the characteristic domain size \(R^{*}\) decreasing with increasing \(Pe\). However, in this regime APs do not stabilise droplets with an isotropic shape; instead, APs continue to push through the interfaces, resulting in isolated gray domains without a defined shape (regime III). In this regime the droplets are not stabilised. This translates into a plateau in the interaction pressure in Fig. 3, as APs have more real space available to explore and AP-AP collisions are less likely to occur. Furthermore, the droplet-size distribution \(PDF(R^{*})\) in Fig. 2 \(\mathbf{C}\) is considerably different from that of regime II: unstable droplets with amorphous shapes have a much broader distribution of sizes compared to the \(Pe\sim 10.1\) case. The onset of this transition is rationalised as the competition between the active energy per particle \(\epsilon_{a}\) (as opposed to regime II, where the total active energy is the controlling parameter) and the coupling energy \(\epsilon_{cpl}\), which is associated with the surface tension of an AP immersed in the incompatible (gray) phase. This is clear from the collapse of the interparticle pressure onto a single curve for three different values of the AP-BM coupling parameter \(c\), shown in Fig. 5 \(\mathbf{B}\) as a function of \(\epsilon_{a}/\epsilon_{cpl}\), which is reminiscent of the Bond number for gravitational forces.
Finally, for large activities, \(Pe>38.1\), the active energy of the APs is considerably larger than the coupling energy due to the embedding BM. Therefore, the dynamics of the APs and the BM appear to be largely decoupled (regime IV), with the BM increasingly recovering its equilibrium morphology towards complete phase separation. At the onset of the transition, in Fig. 2 \(\mathbf{C}\) for \(Pe=35\), it is possible to see the emergence of large domain sizes, albeit with low probability. On the other hand, as activity grows, APs are increasingly unaffected by the forces originating from the BM, except for a slightly reduced effective mobility when dispersed within the incompatible phase. This translates into more accumulation of APs within the gray region, which can be seen visually in Fig. 2 for \(Pe=88.6\) and quantified in the density profiles in Fig. 4, in sharp contrast with the low-activity regime for \(Pe<7\).
**Conclusions** APs have been shown to drive phase-separating mixtures away from their equilibrium configuration due to the competition between the active energy carried by the colloids and the surface tension driving phase separation. This is shown both in the ability of APs to prevent the BM from reaching its equilibrium morphology (Fig. 1) and in their ability to drive the system away from equilibrium (Fig. 2). In both cases APs can stabilise intermediate states characterised by droplets, composed of the species incompatible with the APs, within a matrix of the soluble phase. These droplets have a well-defined size and, on a slow time scale, can form organised hexagonal structures. The condition for droplet formation is that the active pressure exerted by the APs is comparable with the surface tension of the BM. In this sense, APs are surfactant-like despite being, at equilibrium, completely dispersed within one phase. Contrary to Pickering emulsions, where a high surface coverage is required, the size of the droplets is controlled by the activity. The wetting of the APs additionally selects the curvature of the droplets and determines the species confined in the droplets. On the other hand, a secondary transition is observed, where the active energy of each particle is considerably larger than the wetting forces on the AP. It is precisely at intermediate active energies, where the active forces are enough to deform interfaces but not large enough to penetrate through them, that the emulsion is stabilised.
Figure 2: **Effect of APs on an equilibrated BM** with \(\phi_{p}=0.2\). In **A** the time evolution of the characteristic length scale of the BM is shown for selected values of \(Pe\) for each of the regimes. In **B** the steady-state values of \(R^{*}=R(t\rightarrow\infty)\) and the hexatic order parameter for the droplets \(\Psi_{6}\) are shown. In **C** the probability distribution function of the droplet size is shown for representative values of \(Pe\), scaled with the mean value \(R^{*}\). Vertical dashed lines indicate the regime boundaries at \(Pe=7.1\), \(21.1\) and \(38.1\). At the bottom, four snapshots are shown, corresponding to each of the regimes.
Figure 3: **Pressure profiles** as a function of \(Pe\) for \(\phi_{p}=0.2\), in the steady state, scaled with the ideal pressure \(\rho k_{B}T\). The different contributions to the pressure are: the active pressure \(p_{a}\), the AP-AP interaction pressure \(p_{int}\), the coupling pressure \(p_{cpl}\) and the BM pressure \(p_{BM}\). Vertical dashed lines indicate regime boundaries. The inset shows a detail of the behaviour in the unstable droplet phase (regime III).
Figure 4: **Density profiles of APs** across the horizontal dimension of the system for selected values of \(Pe\). Density is averaged over 10 time steps after a steady state is reached.
Figure 5: **Characterisation of transitions**. In **A** the characteristic domain size \(R^{*}\) is shown for various values of the BM interfacial tension \(\gamma_{AB}\) in terms of the dimensionless energy ratio comparing the total active energy \(N_{p}\epsilon_{a}\) with the interfacial energy. In **B** the active energy per particle is compared with the coupling energy scale for various values of \(c\).
J.D. acknowledges financial support from the Spanish Ministry of Universities through the Recovery, Transformation and Resilience Plan funded by the European Union (Next Generation EU), and from Universitat de Barcelona. I.P. acknowledges financial support from Ministerio de Ciencia, Innovación y Universidades MCIU/AEI/FEDER under grant agreement PID2021-126570NB-100 AEI/FEDER-EU, and from Generalitat de Catalunya under the Icrea Academia Program and project 2021SGR-673.
|
2303.06892 | Direct tomography of quantum states and processes via weak measurements
of Pauli spin operators on an NMR quantum processor | In this paper, we present an efficient weak measurement-based scheme for
direct quantum state tomography (DQST) and direct quantum process tomography
(DQPT), and experimentally implement it on an NMR ensemble quantum information
processor without involving any projective measurements. We develop a
generalized quantum circuit that enables us to directly measure selected
elements of the density matrix and process matrix which characterize unknown
quantum states and processes, respectively. This generalized quantum circuit
uses the scalar J-coupling to control the interaction strength between the
system qubits and the metre qubit. We experimentally implement these weak
measurement-based DQST and DQPT protocols and use them to accurately
characterize several two-qubit quantum states and single-qubit quantum
processes. An extra qubit is used as a metre qubit to implement the DQST
protocol, while for the DQPT protocol, two extra qubits (one as a metre qubit
and the other as an ancilla qubit) are used. | Akshay Gaikwad, Gayatri Singh, Kavita Dorai, Arvind | 2023-03-13T06:40:19Z | http://arxiv.org/abs/2303.06892v1 | Direct tomography of quantum states and processes via weak measurements of Pauli spin operators on an NMR quantum processor
###### Abstract
In this paper, we present an efficient weak measurement-based scheme for direct quantum state tomography (DQST) and direct quantum process tomography (DQPT), and experimentally implement it on an NMR ensemble quantum information processor without involving any projective measurements. We develop a generalized quantum circuit that enables us to directly measure selected elements of the density matrix and process matrix which characterize unknown quantum states and processes, respectively. This generalized quantum circuit uses the scalar \(J\)-coupling to control the interaction strength between the system qubits and the metre qubit. We experimentally implement these weak measurement-based DQST and DQPT protocols and use them to accurately characterize several two-qubit quantum states and single-qubit quantum processes. An extra qubit is used as a metre qubit to implement the DQST protocol, while for the DQPT protocol, two extra qubits (one as a metre qubit and the other as an ancilla qubit) are used.
## I Introduction
In recent decades, the concepts of weak measurements and weak values have attracted immense attention in quantum information processing from the fundamental as well as the applications point of view [1; 2]. The weak value of a given observable obtained via a weak measurement, although is in general a complex number, has been shown to carry information about the system at times between pre and post-selection [3; 4]. Weak measurements allow us to sequentially measure incompatible observables so as to gain useful information from the quantum system and learn about the initial state without fully collapsing the state [5]. This is in complete contrast to conventional projective measurements, wherein the system collapses into one of the eigenstates resulting in maximum state disturbance [6]. This feature of weak measurements provides an elegant way to address several important issues in quantum theory including the reality of the wave function[7; 8], observation of a quantum Cheshire Cat in a matter-wave interferometer experiment[9], observing single photon trajectories in a two-slit interferometer[10], and the Leggett-Garg inequality[11]. Weak measurements are also actively exploited in the field of quantum information processing covering a wide range of applications including quantum state and process tomography[12; 13], state protection against decoherence[14; 15], quantum state manipulation[16], performing minimum disturbance measurements[17], precision measurements and quantum metrology[18], sequential measurement of two non-commuting observables[19] and tracking the precession of single nuclear spins using weak measurements[20].
Several techniques have focused on direct estimation of quantum states and processes including a method based on phase-shifting technique [21; 22] and direct measurement of quantum states without using extra auxiliary states or post-selection processes [23]. A selective QPT protocol based on quantum 2-design states was used to perform DQPT [24] and experimentally demonstrated on different physical platforms[25; 26; 27]. Conventional QST and QPT methods require a full reconstruction of the density matrix and are computationally resource intensive. On the other hand, weak measurement based tomography techniques have been used to perform state tomography and it was shown that for certain special cases they outperform projective measurements [28; 29]. An efficient DQST scheme was proposed which directly measured arbitrary density matrix elements using only a single strong or weak measurement [30]. Circuit-based weak measurement with post-selection has been reported on NMR ensemble quantum information processor [31].
In this work, we propose an experimentally efficient scheme to perform direct QST and QPT using weak measurements of Pauli spin operators on an NMR ensemble quantum information processor. The scheme allows us to compute desired elements of the density matrix and is designed in such a way that it does not require any ancillary qubits and has reduced complexity as compared to recently proposed weak measurement-based DQST and DQPT methods [32; 33]. Our scheme has three major advantages, namely, (i) it does not require sequential weak measurements, (ii) it does not involve implementation of complicated error-prone quantum gates such as a multi-qubit, multi-control phase gate and (iii) it does not require projective measurements. Furthermore, our proposed method is experimentally feasible as it requires a single experiment to determine multiple selective elements of the density/process matrix. Our scheme is general and can be applied to any circuit-based implementation. We experimentally implemented the scheme to
characterize several two-qubit quantum states and single-qubit quantum processes with high fidelity. Further, we fed the weak measurement experimental results as input into a convex optimization algorithm to reconstruct the underlying valid states and processes[34]. We compared the experimentally obtained state and process fidelities with theoretical predictions and with numerical simulations and obtained a good match within experimental uncertainties.
This paper is organized as follows: A brief review of weak measurements and the detailed schemes for DQST and DQPT are presented in Section II. The details of the experimental implementation of DQST and DQPT via weak measurements are given in Section III. Section III.1 describes how to use an NMR quantum processor to perform weak measurements of Pauli spin operators, while Sections III.2 and III.3 contain the details of a weak measurement of the Pauli operator \(\sigma_{1z}\) and the results of DQST and DQPT performed using weak measurements, respectively. Section IV contains a few concluding remarks.
## II General scheme for direct QST and QPT via weak measurements
Consider the system and the measuring device initially prepared in a product state \(|\psi\rangle|M\rangle\). The weak measurement of an observable \(A\) requires the evolution of the joint state \(|\psi,M\rangle\) under an operator of the form \(U_{SM}=e^{-igA\otimes B}\), where \(g\) is the coupling strength (\(|g|\ll 1\)) between the system and the measuring device. The operator \(B\) corresponding to the measuring device is chosen such that \(\langle M|B|M\rangle=0\). In the weak measurement limit (\(|g|\ll 1\)), the evolution operator can be approximated up to first order in \(g\) as \(U_{SM}^{weak}=(I-igA\otimes B)\), and the evolution of the joint state (system + measuring device) can be worked out as follows [31]:
\[|\psi,M\rangle_{\rm final} = e^{-igA\otimes B}|\psi,M\rangle \tag{1}\] \[\approx (I-igA\otimes B)|\psi,M\rangle\] \[= |\psi,M\rangle-igA|\psi\rangle\otimes B|M\rangle\]
We will see that the above equation can be used for QST and QPT by making appropriate measurements on the measuring device.
Quantum processes are generally represented either via (i) the corresponding \(\chi\) matrix (also referred to as the process matrix), using a Kraus operator decomposition[35], or (ii) the Choi-Jamiolkowski state, using the channel-state duality theorem [36]. For an \(N\)-qubit system, the \(\chi\) matrix and the Choi-Jamiolkowski state corresponding to a quantum channel \(\Lambda\) are given by [35; 36]:
\[\Lambda(\rho_{in}) = \sum_{i=0}^{4^{N}-1}K_{i}\rho_{in}K_{i}^{\dagger}=\sum_{m,n=0}^{4 ^{N}-1}\chi_{mn}E_{m}\hat{\rho}_{in}E_{n}^{\dagger} \tag{2}\] \[|\Phi_{\Lambda}\rangle = (I\otimes\Lambda)|\Phi\rangle=\frac{1}{2^{N/2}}\sum_{m=0}^{2^{N}- 1}|m\rangle\otimes\Lambda|m\rangle \tag{3}\]
where \(\chi_{mn}\) in Eq. (2) are elements of the \(\chi\) matrix and \(|\Phi_{\Lambda}\rangle\) in Eq. (3) is the Choi-Jamiolkowski state; \(\{K_{i}\}\)'s and the \(\{E_{i}\}\)'s in Eq. (2) are Kraus operators and fixed basis operators respectively, while the quantum state \(|\Phi\rangle\) in Eq. (3) is a pure maximally entangled state of \(2N\) qubits \(|\Phi\rangle=2^{-N/2}\sum_{m=0}^{2^{N}-1}|m\rangle|m\rangle\). The density matrix \(\rho_{\Lambda}=|\Phi_{\Lambda}\rangle\langle\Phi_{\Lambda}|\) corresponding to the Choi-Jamiolkowski state can be mapped to the \(\chi\) matrix using an appropriate unitary transformation \(U_{\chi}\) as \(\chi=U_{\chi}\rho_{\Lambda}U_{\chi}^{\dagger}\). The unitary transformation matrix \(U_{\chi}\) depends only on the fixed set of basis operators \(\{E_{i}\}\) (Eq. (2)) and does not depend on the quantum channel to be tomographed. To perform DQPT of a given quantum channel \(\Lambda\) in terms of the \(\chi\) matrix, we need to apply the unitary transformation \(U_{\chi}\) on \(|\Phi_{\Lambda}\rangle\) and then follow the direct QST protocol and estimate the desired elements \(\chi_{mn}\).
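To make Eq. (2) concrete, the following sketch builds the \(\chi\) matrix of a single-qubit unitary channel in the Pauli basis \(\{E_{m}\}=\{I,\sigma_{x},\sigma_{y},\sigma_{z}\}\) and verifies that \(\sum_{mn}\chi_{mn}E_{m}\rho E_{n}^{\dagger}\) reproduces \(U\rho U^{\dagger}\); the channel (\(R_{x}(\theta)\)), the input state and the basis choice are illustrative assumptions.

```python
import numpy as np

# Expand a single-qubit unitary U in the Pauli basis, U = sum_m a_m E_m with
# a_m = Tr(E_m^dag U)/2, so that chi_mn = a_m * conj(a_n) (rank-1 for a unitary),
# then check Eq. (2): sum_mn chi_mn E_m rho E_n^dag = U rho U^dag.
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
E = [I2, X, Y, Z]

theta = 0.7
U = np.cos(theta / 2) * I2 - 1j * np.sin(theta / 2) * X   # R_x(theta), illustrative

a = np.array([np.trace(Em.conj().T @ U) / 2 for Em in E])
chi = np.outer(a, a.conj())

rho = np.array([[0.7, 0.2 - 0.1j], [0.2 + 0.1j, 0.3]], dtype=complex)
out = sum(chi[m, n] * E[m] @ rho @ E[n].conj().T
          for m in range(4) for n in range(4))
assert np.allclose(out, U @ rho @ U.conj().T)
```

For a trace-preserving channel the \(\chi\) matrix is normalized, \(\mathrm{Tr}\,\chi=1\) in this convention, which the rank-1 unitary case satisfies since \(\sum_{m}|a_{m}|^{2}=1\).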
Consider the operators \(O_{x}^{\phi}=|\phi\rangle\langle\phi|\otimes\sigma_{x}\) and \(O_{y}^{\phi}=|\phi\rangle\langle\phi|\otimes\sigma_{y}\) where \(|\phi\rangle\) is a pure system state and \(\sigma_{x(y)}\) are single-qubit Pauli spin operators. The expectation values of the \(O_{x}^{\phi}\) and \(O_{y}^{\phi}\) operators in the weakly evolved joint state (Eq. (1)) turn out to be:
\[\langle O_{x}^{\phi}\rangle = ig\Big{[}\langle\psi|A^{\dagger}|\phi\rangle\langle\phi|\psi \rangle-\langle\phi|A|\psi\rangle\langle\psi|\phi\rangle\Big{]} \tag{4}\] \[\langle O_{y}^{\phi}\rangle = -g\Big{[}\langle\psi|A^{\dagger}|\phi\rangle\langle\phi|\psi \rangle+\langle\phi|A|\psi\rangle\langle\psi|\phi\rangle\Big{]} \tag{5}\]
which can be simplified to:
\[\frac{\langle O_{y}^{\phi}\rangle-i\langle O_{x}^{\phi}\rangle}{-2g}=\langle \phi|A|\psi\rangle\langle\psi|\phi\rangle \tag{6}\]
Straightforward algebra leads to
\[\frac{\langle O_{y}^{\phi}\rangle-i\langle O_{x}^{\phi}\rangle}{-2g}=\rho_{ mn}=\langle m|\rho|n\rangle \tag{7}\]
Similarly,
\[\frac{\langle O_{y}^{\phi}\rangle-i\langle O_{x}^{\phi}\rangle}{-2g}=\chi_{ mn}=\langle m|\chi|n\rangle \tag{8}\]
Using Eqs. (7) and (8), one can perform direct QST and QPT by measuring \(\langle O_{x}^{\phi}\rangle\) and \(\langle O_{y}^{\phi}\rangle\) for an appropriate choice of \(A\) and \(|\phi\rangle\).
It is interesting to note that
\[\langle\phi|A|\psi\rangle\langle\psi|\phi\rangle=\langle A\rangle_{w}^{\phi} \Pi_{\psi}^{\phi} \tag{9}\]
where \(\langle A\rangle_{w}^{\phi}\) is the weak value associated with a post-selection of the system into the state \(|\phi\rangle\) and \(\Pi_{\psi}^{\phi}\) is the post-selection probability [6]. We do not use this connection in our work; however, it connects our work with other schemes involving weak values and post-selection.
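The chain from Eq. (1) to Eqs. (6) and (9) can be checked numerically: evolve \(|\psi\rangle|0\rangle\) under the exact \(e^{-igA\otimes\sigma_{x}}\), measure \(O_{x}^{\phi}\) and \(O_{y}^{\phi}\), and compare \((\langle O_{y}^{\phi}\rangle-i\langle O_{x}^{\phi}\rangle)/(-2g)\) with \(\langle\phi|A|\psi\rangle\langle\psi|\phi\rangle\). A minimal single-system-qubit sketch; the choice \(A=\sigma_{z}\) and the test state are illustrative:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def expm_herm(H):
    """exp(-iH) for Hermitian H via eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return V @ np.diag(np.exp(-1j * w)) @ V.conj().T

g = 0.01
A = sz                                                     # illustrative observable
psi = np.array([np.cos(0.3), np.sin(0.3)], dtype=complex)  # system state
phi = np.array([1, 0], dtype=complex)                      # post-selection |0>

joint = np.kron(psi, np.array([1, 0], dtype=complex))      # |psi>|0>_meter
final = expm_herm(g * np.kron(A, sx)) @ joint              # U_SM = exp(-ig A (x) sx)

proj = np.outer(phi, phi.conj())
Ox = final.conj() @ np.kron(proj, sx) @ final              # <O_x^phi>
Oy = final.conj() @ np.kron(proj, sy) @ final              # <O_y^phi>

lhs = (Oy - 1j * Ox) / (-2 * g)                            # Eq. (6), left-hand side
rhs = (phi.conj() @ A @ psi) * (psi.conj() @ phi)          # <phi|A|psi><psi|phi>
assert abs(lhs - rhs) < 1e-3                               # agree up to O(g)
```

The residual discrepancy scales as \(g^{2}\), reflecting the first-order truncation of \(U_{SM}^{weak}\).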
## III NMR implementation of weak measurement scheme
### NMR weak measurements of Pauli spin operators
We used the three \({}^{19}\)F spins in the molecule trifluoroiodoethylene dissolved in acetone-D6 to realize three qubits, denoting F\({}_{1}\) and F\({}_{2}\) as the system qubits and F\({}_{3}\) as the meter qubit (Fig. 1(c)). The rotating-frame NMR Hamiltonian for a system of three spin-1/2 nuclei is given by [37]:
\[\mathcal{H}=-\sum_{i=1}^{3}\nu_{i}I_{iz}+\sum_{i,j=1,i>j}^{3}J_{ij}I_{iz}I_{jz} \tag{10}\]
where \(\nu_{i}\) and \(I_{iz}\) are the chemical shift and the \(z\)-component of the spin angular momentum of the \(i\)th spin, respectively, and \(J_{ij}\) is the scalar coupling between the \(i\)th and \(j\)th spins. The experimental parameters characterizing this system can be found in Reference [38].
We set the initial state of the meter qubit to be \(|M\rangle=|0\rangle_{m}\), with \(B=\sigma_{x}\) (Eq. (1)). In this case, the weak interaction evolution operator \(U_{SM}^{weak}\) is of the form:
\[U_{SM}^{weak}=I-igP_{k}\otimes\sigma_{x} \tag{11}\]
where \(I\) is an \(8\times 8\) identity matrix and \(P_{k}=\{I,\sigma_{x},\sigma_{y},\sigma_{z}\}^{\otimes 2}\) are two-qubit Pauli spin operators. The operator \(U_{SM}^{weak}\) given in Eq. (11) can be decomposed as:
\[I-igP_{k}\otimes\sigma_{x} = I-ig(U_{k}\sigma_{iz}U_{k}^{\dagger})\otimes(R_{y}(\frac{\pi}{2 })\sigma_{z}R_{y}^{\dagger}(\frac{\pi}{2})) \tag{12}\] \[= \mathcal{U}_{k}(I-ig\sigma_{iz}\otimes\sigma_{z})\mathcal{U}_{k} ^{\dagger}\]
where \(\sigma_{iz}\) is either \(\sigma_{1z}=\sigma_{z}\otimes I\) or \(\sigma_{2z}=I\otimes\sigma_{z}\) and \(\mathcal{U}_{k}=U_{k}\otimes R_{y}(\frac{\pi}{2})\); \(U_{k}\) is a two-qubit unitary operator acting on system qubits and is constructed such that \(P_{k}=U_{k}\sigma_{iz}U_{k}^{\dagger}\). To further simplify Eq. (12), consider the \(J\)-evolution operator \(U_{ij}^{J}(t)\):
\[U_{ij}^{J}(t)=e^{-i2\pi J_{ij}I_{iz}I_{jz}t} \tag{13}\]
If the evolution time \(t\) is sufficiently small such that \(g=\frac{\pi J_{ij}t}{2}\ll 1\), Eq. (13) can be approximated as:
\[U_{ij}^{J}(t)\approx I-ig\sigma_{iz}\otimes\sigma_{jz} \tag{14}\]
Hence using Eqs. (12) and (14):
\[U_{SM}^{weak}\approx\mathcal{U}_{k}U_{ij}^{J}(t)\mathcal{U}_{k}^{\dagger} \tag{15}\]
where \(t=\frac{2g}{\pi J_{ij}}\), \(i=1,2\) and \(j=3\).
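A quick numerical check of the approximation in Eqs. (13)-(14): for \(t=2g/(\pi J_{ij})\) the exact two-spin \(J\)-evolution is \(\cos g\,I-i\sin g\,\sigma_{iz}\otimes\sigma_{jz}\), so the first-order operator \(I-ig\sigma_{iz}\otimes\sigma_{jz}\) is accurate up to \(O(g^{2})\). The value of \(J\) below is illustrative:

```python
import numpy as np

# Exact J-evolution exp(-i 2*pi*J*t I_iz I_jz) with I_z = sz/2: the accumulated
# phase is pi*J*t/2 = g, so U = cos(g) I - i sin(g) sz(x)sz. Compare with the
# first-order (weak) operator I - i g sz(x)sz.
sz = np.diag([1.0, -1.0]).astype(complex)
szsz = np.kron(sz, sz)
I4 = np.eye(4, dtype=complex)

J = 50.0                        # Hz, illustrative
g = 0.05
t = 2 * g / (np.pi * J)         # evolution time chosen so that pi*J*t/2 = g

phase = np.pi * J * t / 2
U_exact = np.cos(phase) * I4 - 1j * np.sin(phase) * szsz
U_weak = I4 - 1j * g * szsz

err = np.linalg.norm(U_exact - U_weak, ord=2)
assert abs(phase - g) < 1e-12
assert err < g**2               # leading error ~ g^2/2 from the cos expansion
```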
Hence the weak measurement of a desired Pauli operator \(P_{k}\) can be performed by applying the sequence of unitary operations given in Eq. (15) on an initial joint state of the three-qubit system followed by the measurement of \(O_{x}^{\phi}\) and \(O_{y}^{\phi}\). The list of \(\mathcal{U}\)s corresponding to all \(P_{k}\)s is given in Table 2.
For the NMR implementation, the quantum circuit depicted in Fig. 1(a) is divided into six parts, each consisting of a set of unitary operations. Each of the six parts is implemented using optimized pulse sequences generated through GRAPE, which are represented graphically as colored Gaussian shapes in Fig. 1(b). For example, the first part of the circuit consists of a Hadamard gate followed by a CNOT gate, and this composite operation is implemented using a GRAPE pulse, as shown in Fig. 1(b). The approximate length of the GRAPE-optimized rf pulses corresponding to the given quantum states (or processes) and Pauli operators is given in Table 1. The power level was set to 28.57 W in all the experiments.
Figure 1: (Color online) (a) General quantum circuit for DQST of an initial unknown state \(|\psi\rangle_{s}\) and DQPT of a quantum channel \(\Lambda\) using weak measurements. The first (gray-shaded) block corresponds to DQPT performed on the initial state \(|0\rangle_{a}|0\rangle_{s}|0\rangle_{m}\). The unitary operator \(U_{\chi}\) depends on the operator basis in which DQPT is performed. The second (unfilled) block implements the weak interaction between the system qubits and the meter qubit, followed by measurement on the meter qubit. (b) NMR implementation of the quantum circuit given in panel (a). The Gaussian-shaped curves represent GRAPE-optimized pulses (G\({}_{i}\)) corresponding to the given unitary operations on all three qubits. (c) Structure of the molecule, trifluoroiodoethylene, used to realize the three NMR qubits F\({}_{1}\), F\({}_{2}\) and F\({}_{3}\).
### NMR Measurement of \(O_{x}^{\phi}\) and \(O_{y}^{\phi}\)
For simplicity, consider the post-selected state \(|\phi\rangle\) to be one of the computational basis vectors \(\{|00\rangle,|01\rangle,|10\rangle,|11\rangle\}\), which are required to perform DQST or DQPT (Eq. (17)). In this case, it turns out that the observables \(O_{x(y)}^{\phi}\) can be directly measured by acquiring the NMR signal from F\({}_{3}\) (the meter qubit). The NMR signal of F\({}_{3}\) consists of four spectral peaks (see the thermal spectrum in blue in Fig. 2(a)) corresponding to four transitions associated with density matrix elements (referred to as readout elements): \(\rho_{56}\), \(\rho_{12}\), \(\rho_{78}\) and \(\rho_{34}\)[39]. The first peak from the left in Fig. 2(a) corresponds to the post-selected state \(|\phi\rangle=|10\rangle\), while the second, third and fourth peaks correspond to the post-selected states \(|00\rangle\), \(|11\rangle\) and \(|01\rangle\), respectively. These peaks (from the left) are also associated with the readout elements \(\rho_{56}\), \(\rho_{12}\), \(\rho_{78}\) and \(\rho_{34}\), respectively. The line intensity of the absorption-mode spectrum (\(x\)-magnetization) is proportional to the real part of the corresponding readout element of the density matrix, while the dispersion-mode spectrum (\(y\)-magnetization) is proportional to the imaginary part of the corresponding readout element:
\[\langle O_{x}^{\phi}\rangle\propto\text{Re}(\rho_{ij})\quad\text{and}\quad \langle O_{y}^{\phi}\rangle\propto\text{Im}(\rho_{ij}) \tag{16}\]
where \(\rho_{ij}\) is the readout element of the three-qubit density matrix on which the observables \(O_{x(y)}^{\phi}\) are being
Figure 2: (Color online) (a) NMR spectra obtained by measuring the third qubit (F\({}_{3}\)), corresponding to the meter qubit. The spectrum in blue represents thermal equilibrium while the spectrum in red represents the reference spectrum. The other spectra (from the top) are obtained by implementing the quantum circuit for weak measurements for different values of \(g\), with the input state \(|00\rangle_{s}|0\rangle_{m}\) and the weak interaction unitary \(U_{\text{SM}}^{\text{weak}}\) corresponding to the Pauli operator \(\sigma_{1z}\), followed by a \(90^{\circ}\) phase shift on F\({}_{3}\). (b) The theoretically and experimentally obtained quantities \(\langle O_{x}^{00}\rangle,\langle O_{y}^{00}\rangle\) and \(\langle A\rangle_{w}^{00}\Pi_{00}^{00}\) are compared for different values of \(g\).
measured. The complete list of observables \(\langle O^{\phi}_{x(y)}\rangle\) with corresponding spectral transitions and readout elements are listed in Table 3. Note that in the case of an arbitrary post-selected state \(|\phi\rangle\), one has to decompose the observables \(O^{\phi}_{x(y)}\) into Pauli basis operators as \(O^{\phi}_{x(y)}=\sum_{i}a^{x(y)}_{i}P_{i}\), then measure \(\langle P_{i}\rangle\) corresponding to non-zero coefficients \(a^{x(y)}_{i}\) and finally compute \(\langle O^{\phi}_{x(y)}\rangle\) for the given \(|\phi\rangle\). An efficient way of measuring the expectation value of any Pauli observable is described in Reference [40].
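The decomposition \(O_{x(y)}^{\phi}=\sum_{i}a_{i}^{x(y)}P_{i}\) used for an arbitrary post-selected state follows from the orthogonality of Pauli strings, giving \(a_{i}=\mathrm{Tr}(P_{i}O)/2^{N}\) for \(N\) qubits. A minimal sketch; the post-selected state \(|\phi\rangle=|+\rangle\) with one system qubit and one meter qubit is an illustrative choice:

```python
import numpy as np
from itertools import product

# Decompose O = sum_i a_i P_i over N-qubit Pauli strings, a_i = Tr(P_i O)/2^N,
# and verify the expansion reconstructs O exactly.
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def pauli_strings(n):
    for combo in product([I2, X, Y, Z], repeat=n):
        P = combo[0]
        for M in combo[1:]:
            P = np.kron(P, M)
        yield P

phi = np.array([1, 1], dtype=complex) / np.sqrt(2)     # |phi> = |+>, illustrative
O = np.kron(np.outer(phi, phi.conj()), X)              # O_x^phi = |phi><phi| (x) sx

coeffs = [np.trace(P @ O) / 4 for P in pauli_strings(2)]
rebuilt = sum(c * P for c, P in zip(coeffs, pauli_strings(2)))
assert np.allclose(rebuilt, O)
```

Here only two coefficients are non-zero (\(I\otimes\sigma_{x}\) and \(\sigma_{x}\otimes\sigma_{x}\), each with weight 1/2), so only two Pauli expectation values need to be measured for this \(|\phi\rangle\).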
### Experimental weak measurement of \(\sigma_{1z}\)
As an illustration, we experimentally obtained the various relevant quantities (as described in Section II) for the two-qubit Pauli operator \(\sigma_{z}\otimes I_{2\times 2}=\sigma_{1z}\). The weak measurement of \(\sigma_{1z}\) allows us to measure all the diagonal elements of the density matrix in a single experiment. We experimentally implemented the proposed scheme and measured \(\langle O^{\phi}_{x(y)}\rangle\) and \(\langle A\rangle^{\phi}_{w}\Pi^{\phi}_{\psi}\) with varying weak interaction strength \(g\) for the case \(A=P_{k}=\sigma_{1z}\), with the initial state \(|\psi\rangle=|00\rangle\) and \(|\phi\rangle\) being one of the computational basis vectors. All the NMR spectra in Fig. 2(a) correspond to measurements on the F\({}_{3}\) qubit. The bottom spectrum, in blue, is the thermal-equilibrium spectrum, obtained by applying a readout pulse to the thermal state followed by detection on the F\({}_{3}\) qubit. As shown in Table 3, the first peak (from the left) in the thermal spectrum corresponds to \(\langle O^{10}_{x(y)}\rangle\), while the second, third and fourth peaks correspond to \(\langle O^{00}_{x(y)}\rangle\), \(\langle O^{11}_{x(y)}\rangle\) and \(\langle O^{01}_{x(y)}\rangle\), respectively.
The reference spectrum depicted in red is obtained by applying a readout pulse to an experimentally prepared pseudopure state (PPS), prepared using the spatial averaging technique[41; 37], followed by detection on the F\({}_{3}\) qubit. The intensity of the reference peak is set to 1. With respect to this reference, the observable \(\langle O^{\phi}_{x}\rangle\) can be directly measured by computing the spectral intensity, i.e., by integrating the area under the corresponding peak, while the observable \(\langle O^{\phi}_{y}\rangle\) can be measured by first applying a \(90^{\circ}\) phase shift and then computing the intensity. Note that the quantity \(\langle O^{\phi}_{y}\rangle\) is \((-1)\) times the spectral intensity.
For example, the third spectrum (depicted in green) in Fig. 2(a) corresponds to \(g=0.1\). The peak intensity with respect to the reference spectrum turns out to be \(0.1821\pm 0.003\), which gives \(\langle O^{00}_{y}\rangle=-0.1821\pm 0.003\), while the experimental value of \(\langle O^{00}_{x}\rangle\) (the intensity before the \(90^{\circ}\) phase shift) turns out to be \(0.0272\pm 0.0066\). Similarly, the other four spectra in Fig. 2(a) correspond to various values of \(g\). One can see from Fig. 2(a) that, for all values of \(g\), the spectral intensity of the first, third and fourth peaks, corresponding to the post-selected states \(|10\rangle\), \(|11\rangle\) and \(|01\rangle\), respectively, is negligible compared to the reference peak, which implies that the quantities \(\langle O^{10}_{x(y)}\rangle\), \(\langle O^{11}_{x(y)}\rangle\) and \(\langle O^{01}_{x(y)}\rangle\) are almost zero. This is to be expected, since theoretically \(\langle\phi|\sigma_{1z}\rho|\phi\rangle=0\) except for \(|\phi\rangle=|00\rangle\), whereas the spectral intensity of the second peak, corresponding to \(\langle O^{00}_{y}\rangle\), is non-zero and increases with \(g\).
The experimental values \(\langle O^{00}_{x(y)}\rangle\) and \(\langle\sigma_{1z}\rangle^{00}_{w}\Pi^{00}_{00}\) are compared with their theoretically expected values in Fig. 2(b), for different values of \(g\). Only the real part of \(\langle\sigma_{1z}\rangle^{00}_{w}\Pi^{00}_{00}\), i.e., \(\mathrm{Re}(\langle\sigma_{1z}\rangle^{00}_{w}\Pi^{00}_{00})=\frac{\langle O^{00}_{y}\rangle}{-2g}\), is plotted, as the imaginary part turns out to be almost zero (as seen from the \(\langle O^{00}_{x}\rangle\) values). The experimental quantity \(\langle\sigma_{1z}\rangle^{00}_{w}\Pi^{00}_{00}\) was calculated using Eq. (6); however, it can also be computed by rescaling the spectrum by the factor \(|\frac{1}{2g}|\) with
Figure 3: (Color online) (a) NMR spectra obtained after implementing the quantum circuit for weak measurements on the initial state \(|\psi\rangle_{s}=\cos{(\frac{n\pi}{20})}|00\rangle+\sin{(\frac{n\pi}{20})}|10\rangle\) for \(g=0.1\) and the observable \(\sigma_{1z}\). The spectra in green and red correspond to \(n=10\) and \(n=0\), respectively. (b) Plots comparing the experimentally measured \(\langle\sigma_{1z}\rangle^{\phi}_{w}\Pi^{\phi}_{\psi}\) with its theoretical value as a function of the initial state \(|\psi\rangle_{s}=\cos{(\frac{n\pi}{20})}|00\rangle+\sin{(\frac{n\pi}{20})}|10\rangle\).
respect to the reference spectrum. The expected value of \(\langle\sigma_{1z}\rangle_{w}^{00}\Pi^{00}_{00}\) is equal to 1, which is the density matrix element \(\rho_{11}\) of the initial state \(|\psi\rangle=|00\rangle\).
As the value of \(g\) increases, the experimental and theoretical values of \(\rho_{11}\) deviate more and more from 1, because the weak interaction approximation no longer holds for relatively large values of \(g\). At \(g=0.05\), the experimental value of \(\rho_{11}^{\rm exp}\) was \((0.9198\pm 0.0057)+i(0.0525\pm 0.0399)\), while at \(g=0.5\) the value of \(\rho_{11}^{\rm exp}\) was \((0.7971\pm 0.0021)+i(0.0989\pm 0.0439)\). We would also like to point out here that in real experiments an arbitrarily small value of \(g\) may not work, since the signal strength after the weak interaction may be too small to detect and may introduce large errors in the measurements.
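The growth of this deviation with \(g\) can be illustrated with a toy von Neumann model (a sketch under our own conventions, not the pulse-level NMR implementation of the paper: we assume a coupling \(U=e^{-igP\otimes\sigma_{y}}\) with the meter initialized in \(|0\rangle\), for which the post-selected meter signal is exactly \(\sin(2g)\,\langle\phi|P\rho|\phi\rangle\), so the estimate rescaled by \(1/2g\) picks up a factor \(\sin(2g)/2g\); in this convention the real part appears in the meter's \(\sigma_{x}\) channel):

```python
import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

P = np.kron(sz, I2)                       # sigma_1z on the two system qubits
K = np.kron(P, sy)                        # system-meter coupling generator
psi = np.zeros(4, dtype=complex); psi[0] = 1.0   # system state |00>
meter = np.array([1.0, 0.0], dtype=complex)      # meter state |0>
state = np.kron(psi, meter)

proj00 = np.zeros((4, 4)); proj00[0, 0] = 1.0    # post-selection on |00>
Ox = np.kron(proj00, sx)                  # post-selected meter observable

def estimate(g):
    # K @ K = identity, so exp(-i g K) = cos(g) I - i sin(g) K exactly
    U = np.cos(g) * np.eye(8) - 1j * np.sin(g) * K
    out = U @ state
    return (out.conj() @ Ox @ out).real / (2 * g)

# exact value of <00| sigma_1z rho |00> is rho_11 = 1
print(estimate(0.05))   # ~0.998: weak regime, nearly exact
print(estimate(0.5))    # ~0.841 = sin(1): the linear-in-g rescaling breaks down
```

In this toy model the systematic factor at \(g=0.5\) is \(\sin(1)\approx 0.84\), comparable in size to the deviation of \(\rho_{11}^{\rm exp}\) from 1 quoted above.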
We also implemented the weak measurement-based scheme for different initial states. The results shown in Fig. 3 were obtained by experimentally implementing the weak measurement-based scheme for a fixed interaction strength \(g=0.1\) and for different initial states of the form \(|\psi\rangle=\cos{(\frac{n\pi}{20})}|00\rangle+\sin{(\frac{n\pi}{20})}|10\rangle\). The NMR spectrum in blue color in Fig. 3(a) is the reference spectrum, while the other two spectra in red and green correspond to the states \(n=0\) and \(n=10\), respectively, which were obtained by implementing the weak measurement quantum circuit (Fig. 1) followed by a \(90^{\circ}\) phase shift. Note that since the value of \(g\) is fixed, the spectra corresponding to all \(n\) are rescaled by the factor \(\frac{1}{2g}=\frac{1}{2(0.1)}=5\) with respect to the reference spectrum, which directly yields \(\mathrm{Re}(\langle\sigma_{1z}\rangle_{w}^{\phi}\Pi^{\phi}_{\psi})\) and \(\mathrm{Im}(\langle\sigma_{1z}\rangle_{w}^{\phi}\Pi^{\phi}_{\psi})\) (the real and imaginary parts of the corresponding density matrix elements, respectively). For \(n=0\) (red spectrum), the observables \(\langle O^{10}_{y}\rangle\), \(\langle O^{00}_{y}\rangle\), \(\langle O^{11}_{y}\rangle\) and \(\langle O^{01}_{y}\rangle\) turned out to be \(-0.1370\pm 0.0008\), \(-0.9137\pm 0.0038\), \(-0.0267\pm 0.0010\) and \(-0.1243\pm 0.0085\), respectively. For \(n=10\) (green spectrum) the observables turned out to be \(0.8687\pm 0.0054\), \(0.1255\pm 0.0033\), \(0.0294\pm 0.0003\) and \(0.0237\pm 0.0030\), respectively. The experimentally obtained \(\mathrm{Re}(\langle\sigma_{1z}\rangle_{w}^{\phi}\Pi^{\phi}_{\psi})\) is compared with the theoretically expected values in Fig. 3(b), for \(|\phi\rangle=|00\rangle\) and \(|\phi\rangle=|10\rangle\) and for various initial states.
The experimental values are in very good agreement with the theoretical predictions in both Figs. 2 and 3, which clearly demonstrates the successful implementation of the weak measurement of \(\sigma_{1z}\).
### Experimental DQST and DQPT using weak measurements
We now proceed to experimentally demonstrate element-wise full reconstruction of the density and process matrices of several states and quantum gates using the proposed weak measurement-based DQST and DQPT schemes. To estimate a desired element \(\rho_{mn}\) of the density matrix or \(\chi_{mn}\) of the process matrix, one of the possible choices of the post-selected state \(|\phi\rangle\), together with the Pauli operator \(P_{k}\), is depicted as \((|\phi\rangle,P_{k})\) in the matrix:
\[\begin{pmatrix}(|00\rangle,\sigma_{1z})&(|01\rangle,\sigma_{2x})&(|10\rangle,\sigma_{1x})&(|11\rangle,\sigma_{1x}\sigma_{2x})\\ \rho_{12}^{\dagger}&(|01\rangle,\sigma_{1z})&(|10\rangle,\sigma_{1x}\sigma_{2x})&(|11\rangle,\sigma_{1x})\\ \rho_{13}^{\dagger}&\rho_{23}^{\dagger}&-(|10\rangle,\sigma_{1z})&(|11\rangle,\sigma_{2x})\\ \rho_{14}^{\dagger}&\rho_{24}^{\dagger}&\rho_{34}^{\dagger}&-(|11\rangle,\sigma_{1z})\end{pmatrix} \tag{17}\]
In this case, the full QST of a two-qubit quantum state requires weak measurements of only four Pauli operators: \(\{\sigma_{1z},\sigma_{1x},\sigma_{2x},\sigma_{1x}\sigma_{2x}\}\). The weak measurement of \(\sigma_{1z}\) allows us to directly estimate all the diagonal elements representing the populations of the energy eigenstates, while the weak measurements of \(\sigma_{1x}\), \(\sigma_{2x}\) and \(\sigma_{1x}\sigma_{2x}\) yield two off-diagonal elements, each representing a single- and a multiple-quantum coherence.
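The assignments of Eq. (17) can be verified numerically: for any two-qubit density matrix, the post-selected quantities \(\langle\phi|P_{k}\rho|\phi\rangle\) reproduce every independent element \(\rho_{mn}\). A minimal sketch (illustrative only; variable names are ours), with basis ordering \(|00\rangle,|01\rangle,|10\rangle,|11\rangle\):

```python
import numpy as np

# Pauli matrices and the two-qubit operators appearing in Eq. (17)
I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
s1z = np.kron(sz, I2)
s1x = np.kron(sx, I2)
s2x = np.kron(I2, sx)
s1x2x = np.kron(sx, sx)

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
rho = A @ A.conj().T
rho /= np.trace(rho).real          # random valid density matrix

def q(phi, P):
    """<phi| P rho |phi> -- the post-selected weak-measurement quantity."""
    e = np.zeros(4, dtype=complex); e[phi] = 1.0
    return e.conj() @ P @ rho @ e

# Diagonal elements from sigma_1z (note the sign for |10> and |11>)
assert np.isclose(q(0, s1z), rho[0, 0])
assert np.isclose(q(1, s1z), rho[1, 1])
assert np.isclose(-q(2, s1z), rho[2, 2])
assert np.isclose(-q(3, s1z), rho[3, 3])
# Off-diagonal elements, one per (post-selection, operator) pair of Eq. (17)
assert np.isclose(q(1, s2x), rho[0, 1])
assert np.isclose(q(2, s1x), rho[0, 2])
assert np.isclose(q(3, s1x2x), rho[0, 3])
assert np.isclose(q(2, s1x2x), rho[1, 2])
assert np.isclose(q(3, s1x), rho[1, 3])
assert np.isclose(q(3, s2x), rho[2, 3])
print("all independent elements of rho recovered")
```

The sign flips for the post-selections \(|10\rangle\) and \(|11\rangle\) correspond to the minus signs on the last two diagonal entries of Eq. (17).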
As an illustration, we experimentally performed DQST of the maximally entangled Bell states: \(|\psi_{1}\rangle=(|00\rangle+|11\rangle)/\sqrt{2}\) and \(|\psi_{2}\rangle=(|01\rangle+|10\rangle)/\sqrt{2}\) as well as DQPT of two quantum gates: the Hadamard gate \(H\) and a rotation gate \(R_{x}(\frac{\pi}{2})\). For both DQST and DQPT, the value of \(g\) is set to be 0.2.
The NMR readouts demonstrating the DQST of the Bell state \(|\psi_{1}\rangle=(|00\rangle+|11\rangle)/\sqrt{2}\) are shown in Fig. 4, where weak measurements of four Pauli operators were carried out. The NMR readouts corresponding to the weak measurement of \(\sigma_{1z}\), \(\sigma_{2x}\), \(\sigma_{1x}\) and \(\sigma_{1x}\sigma_{2x}\) are depicted in red, green, purple, and blue, respectively, while the spectrum in black represents the reference spectrum.
\begin{table}
\begin{tabular}{c c c} \hline \hline State(\(|\psi\rangle\))/Process(\(\Lambda\)) & \(\mathcal{F}(\rho_{\rm weak}^{\rm DQST})\) & \(\mathcal{F}(\rho_{\rm weak}^{\rm true})\) \\ \hline \hline \(|\psi_{1}\rangle=(|00\rangle+|11\rangle)/\sqrt{2}\) & \(0.9511\pm 0.0065\) & \(0.9791\) \\ \(|\psi_{2}\rangle=(|01\rangle+|10\rangle)/\sqrt{2}\) & \(0.9266\pm 0.0075\) & \(0.9739\) \\ \(\Lambda_{1}=H\) & \(0.9447\pm 0.0060\) & \(0.9703\) \\ \(\Lambda_{2}=R_{x}(\frac{\pi}{2})\) & \(0.9476\pm 0.0029\) & \(0.9729\) \\ \hline \end{tabular}
\end{table}
Table 4: Experimental state (\(|\psi\rangle\)) and process (\(\Lambda\)) fidelities obtained using weak measurement-based DQST and DQPT.
Figure 4: (Color online) Experimental readouts demonstrating DQST of the Bell state \(|\psi\rangle_{s}=\frac{1}{\sqrt{2}}(|00\rangle+|11\rangle)\) using weak measurements for a value of \(g=0.2\). NMR spectra for the observables \(\sigma_{1z}\) (red), \(\sigma_{2x}\) (green), \(\sigma_{1x}\) (purple), and \(\sigma_{1x}\sigma_{2x}\) (blue) were obtained by implementing the weak measurement quantum circuit, followed by a \(90^{\circ}\) phase shift on the initial Bell state.
It can be clearly seen that the non-zero spectral intensities of the NMR peaks corresponding to \((|00\rangle,\sigma_{1z})\), \((|11\rangle,\sigma_{1z})\), \((|11\rangle,\sigma_{1x}\sigma_{2x})\) yield the three density matrix elements \(\rho_{11}\), \(\rho_{44}\) and \(\rho_{14}\), respectively, whereas the other peak intensities (Eq. (17)) tend to zero as compared to the reference spectrum. The experimentally obtained real and imaginary parts of the density matrix corresponding to the Bell state \(|\psi_{1}\rangle\) are given in Eqs. (19)-(20), respectively. All the elements were measured with high accuracy and precision.
It is to be noted that the experimental density matrix is Hermitian by construction (the imaginary part of all diagonal elements can be ignored and set to zero) but may not satisfy positivity and trace conditions as all the independent elements \(\{\rho_{ij},i\leq j\}\) are computed individually and independently. For DQST of the Bell state \(|\psi_{1}\rangle\), the trace turns out to be 1.3435 and the eigenvalues are 1.2836, 0.2825, \(-0.1553\) and \(-0.0673\), which do not correspond to a valid density matrix. However, the true quantum state satisfying all the properties of a valid density matrix can be recovered from the experimental density matrix by recasting it as a constrained convex optimization problem [34]:
\[\min_{\overrightarrow{\rho}_{\text{weak}}^{\text{true}}}\ \|\overrightarrow{\rho}_{\text{weak}}^{\text{true}}-\overrightarrow{\rho}_{\text{weak}}^{\text{dqst}}\|_{l_{2}} \tag{18a}\] subject to \[\rho_{\text{weak}}^{\text{true}}\geq 0, \tag{18b}\] \[\mathrm{Tr}(\rho_{\text{weak}}^{\text{true}})=1. \tag{18c}\]
where \(\rho_{\text{weak}}^{\text{true}}\) is the variable density matrix corresponding to the true quantum state to be reconstructed, while \(\rho_{\text{weak}}^{\text{dqst}}\) is the experimentally obtained density matrix using the weak measurement-based DQST scheme. The \(\rightarrow\) arrow denotes the vectorized form of the corresponding matrix and \(\|.\|_{l_{2}}\) represents \(l_{2}\) norm, also known as the Euclidean norm of a vector. The valid density matrix \(\rho_{\text{weak}}^{\text{true}}\) representing the true quantum state was recovered from \(\rho_{\text{weak}}^{\text{dqst}}\) and is given in Eq. (21). We note here in passing that the experimentally obtained density matrices \(\rho_{\text{weak}}^{\text{dqst}}\) (or \(\rho_{\text{weak}}^{\text{true}}\)) corresponding to the states \(|\psi_{1}\rangle\) and \(|\psi_{2}\rangle\) can be interpreted as the Choi-Jamiolkowski state corresponding to the identity gate (\(\Lambda=I\)) and the bit flip gate (\(\Lambda=\sigma_{x}\)), respectively.
\[\text{Re}(\rho_{\text{weak}}^{\text{dqst}})=\begin{pmatrix}0&-0.0566\pm 0.0085&0.03 52\pm 0.0123&0.1126\pm 0.0374\\ 0.0566\pm 0.0085&0&0.0860\pm 0.0217&-0.1384\pm 0.0036\\ -0.0352\pm 0.0123&-0.0860\pm 0.0217&0&0.1367\pm 0.0139\\ -0.1126\pm 0.0374&0.1384\pm 0.0036&0.1367\pm 0.0139&0\end{pmatrix} \tag{19}\]
\[\text{Im}(\rho_{\text{weak}}^{\text{dqst}})=\begin{pmatrix}0&-0.0566\pm 0.008 5&0.0352\pm 0.0123&0.1126\pm 0.0374\\ 0.0566\pm 0.0085&0&0.0860\pm 0.0217&-0.1384\pm 0.0036\\ -0.0352\pm 0.0123&-0.0860\pm 0.0217&0&0.1367\pm 0.0139\\ -0.1126\pm 0.0374&0.1384\pm 0.0036&0.1367\pm 0.0139&0\end{pmatrix} \tag{20}\]
\[\rho_{\text{weak}}^{\text{true}}=\begin{pmatrix}0.4667&-0.0300-0.0333i&-0.02 17-0.0618i&0.4858-0.0811i\\ -0.0300+0.0333i&0.0043&0.0058+0.0024i&-0.0255+0.0399i\\ -0.0217+0.0618i&0.0058-0.0024i&0.0092&-0.0118+0.0681i\\ 0.4858+0.0811i&-0.0255-0.0399i&-0.0118-0.0681i&0.5198\end{pmatrix} \tag{21}\]
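The constrained problem of Eq. (18) can be handed to a general-purpose convex solver; for a Hermitian input, the same \(l_{2}\)-closest valid state can also be obtained in closed form, since the Frobenius norm is unitarily invariant and the constraint set only restricts the eigenvalues: it suffices to project the eigenvalue spectrum onto the probability simplex. A compact sketch (the function name is ours):

```python
import numpy as np

def nearest_density_matrix(rho):
    """Closest (Frobenius/l2) positive-semidefinite, trace-one matrix to a
    Hermitian input: project its eigenvalue vector onto the probability simplex."""
    w, V = np.linalg.eigh(rho)            # input must be Hermitian
    # Euclidean projection of w onto {p : p_i >= 0, sum_i p_i = 1}
    u = np.sort(w)[::-1]
    css = np.cumsum(u)
    ks = np.arange(1, len(u) + 1)
    k = ks[u - (css - 1) / ks > 0][-1]
    tau = (css[k - 1] - 1) / k
    p = np.clip(w - tau, 0, None)
    return (V * p) @ V.conj().T           # V diag(p) V^dagger

# Example with the eigenvalue pattern reported for the Bell-state DQST data
bad = np.diag([1.2836, 0.2825, -0.1553, -0.0673])
good = nearest_density_matrix(bad)
print(np.trace(good).real)                      # ~1.0
print(np.linalg.eigvalsh(good).min() >= -1e-12) # True: valid density matrix
```

A full convex solver handles the additional trace-preservation constraint needed for process matrices, but for the state case this projection already realizes Eqs. (18b)-(18c) exactly.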
For the DQPT implementation, the unitary \(U_{\chi}\) acts as a change-of-basis operation, transforming the Choi-Jamiolkowski state into the process matrix \(\chi\) in the chosen basis. The desired unitary operator \(U_{\chi}\) is set to:
\[U_{\chi}=\frac{1}{\sqrt{2}}\begin{pmatrix}1&0&0&1\\ 0&1&1&0\\ 0&-i&i&0\\ 1&0&0&-1\end{pmatrix} \tag{22}\]
which allows the estimation of the process matrix \(\chi\) in the Pauli basis. The real and imaginary parts of the process matrix \(\chi_{\text{weak}}^{\text{dqst}}\) in the Pauli basis corresponding to the Hadamard gate \(H\), obtained via the weak measurement-based DQPT protocol, are given in Eqs. (23)-(24), respectively, where the trace turned out to be 0.9589 and the eigenvalues are -0.1646, 0.0795, 0.1427 and 0.9014. The true quantum process \(\chi_{\text{weak}}^{\text{true}}\) can be recovered from \(\chi_{\text{weak}}^{\text{dqst}}\) by solving a similar convex optimization problem as given in Eq. (18a) with the additional constraint \(\sum_{m,n}\chi_{mn}E_{n}^{\dagger}E_{m}=I\), and is given in Eq. (25). The theoretical process matrix corresponding to the Hadamard gate contains only four non-zero elements \(\{\chi_{ij}=0.5\,|\,i,j=2,4\}\), and it can be seen from
Eqs. (23)-(24) that the weak measurement scheme is able to determine all these elements with very high accuracy. The theoretical and experimental density and process matrices corresponding to the quantum state \(|\psi_{2}\rangle\) and the gate \(R_{x}(\frac{\pi}{2})\) are graphically represented in Fig. 5. The experimental state (process) fidelity \(\mathcal{F}\) is computed using the normalized trace distance between the experimental and theoretical density (process) matrices [42]. The experimental fidelity of various quantum states and processes obtained via the weak measurement-based protocol is given in Table 4.
\[\mathrm{Re}(\chi^{\mathrm{dqst}}_{\mathrm{weak}})=\begin{pmatrix}-0.0010\pm 0.0005 &0.0422\pm 0.0041&0.0635\pm 0.0012&-0.0854\pm 0.0004\\ 0.0422\pm 0.0041&0.3964\pm 0.0099&-0.0827\pm 0.0004&0.4406\pm 0.0097\\ 0.0635\pm 0.0012&-0.08269\pm 0.0004&0.0789\pm 0.0036&0.0429\pm 0.0019\\ -0.0854\pm 0.0004&0.4406\pm 0.0097&0.0429\pm 0.0019&0.4846\pm 0.0035\end{pmatrix} \tag{23}\]
\[\mathrm{Im}(\chi^{\mathrm{dqst}}_{\mathrm{weak}})=\begin{pmatrix}0&0.0243\pm 0.0033&0.0554\pm 0.0155&-0.0195\pm 0.0012\\ -0.0243\pm 0.0033&0&-0.0664\pm 0.0081&0.0754\pm 0.0445\\ -0.0554\pm 0.0155&0.0664\pm 0.0081&0&0.0633\pm 0.0445\\ 0.0195\pm 0.0012&-0.0754\pm 0.0045&-0.0633\pm 0.0004&0\end{pmatrix} \tag{24}\]
\[\chi^{\mathrm{true}}_{\mathrm{weak}}=\begin{pmatrix}0.0319&-0.0145+0.0072i&0. 0144+0.0246i&-0.0389+0.0077i\\ -0.0145-0.0072i&0.4021&-0.0380-0.0542i&0.3992+0.0653i\\ 0.0144-0.0246i&-0.0380+0.0542i&0.0831&0.0008+0.0646i\\ -0.0389-0.0077i&0.3992-0.0653i&0.0008-0.0646i&0.4829\end{pmatrix} \tag{25}\]
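As a consistency check (a numerical sketch; we assume the row-major vectorization convention implicit in Eq. (22)), one can verify that \(U_{\chi}\) rotates the Choi-Jamiolkowski state of the Hadamard gate into a Pauli-basis process matrix with exactly the four elements \(\chi_{ij}=0.5\), \(i,j\in\{2,4\}\):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
# Choi-Jamiolkowski state of H: (H (x) I)|Phi+>, components H_ij / sqrt(2)
choi = H.reshape(-1) / np.sqrt(2)

U_chi = np.array([[1, 0, 0, 1],
                  [0, 1, 1, 0],
                  [0, -1j, 1j, 0],
                  [1, 0, 0, -1]]) / np.sqrt(2)      # Eq. (22)

v = U_chi @ choi                      # Pauli-basis coefficient vector of H
chi = np.outer(v, v.conj())           # process matrix in the Pauli {I,X,Y,Z} basis
print(np.round(chi.real, 3))
# nonzero entries: chi_22 = chi_24 = chi_42 = chi_44 = 0.5,
# consistent with H = (sigma_x + sigma_z) / sqrt(2)
```

The minus sign in the last row of \(U_{\chi}\) is what picks out the \(\sigma_{z}\) component; the third row corresponds to \(\sigma_{y}\), whose coefficient vanishes for \(H\).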
#### Extension to \(n\) qubits
For an \(n\)-qubit density (or process) matrix, all the independent elements \(\{\rho_{ij},i\leq j\}\) can be obtained as given in Eq. (17). All the \(2^{n}\) diagonal elements can be recovered using a weak measurement of the \(\sigma_{1z}=\sigma_{z}\otimes I^{\otimes n-1}\) operator, whereas all the \(2^{n-1}(2^{n}-1)\) off-diagonal elements can be obtained via weak measurements of \(n\)-qubit Pauli operators of the form \(\{I,\sigma_{x}\}^{\otimes n}\) (excluding \(I^{\otimes n}\)), each yielding \(2^{n-1}\) elements. For instance, the operator \(\sigma_{x}^{\otimes n}\) will measure the off-diagonal elements \(\{\rho_{ij},1\leq i\leq 2^{n-1},j=2^{n}+1-i\}\). The reconstruction of the full density matrix requires weak measurements of \(2^{n}\) Pauli operators, which is in stark contrast to standard tomographic protocols, which require the measurement of \(4^{n}-1\) operators. Hence, even for full reconstruction, the weak measurement-based tomography protocol turns out to be much more efficient than standard and selective tomography protocols. The quantum circuit given in Fig. 1(a) can be extended to \(n\) qubits, with DQST requiring one extra qubit as the meter qubit and DQPT requiring \(n\) extra ancillary qubits along with the meter qubit.

Figure 5: (Color online) Theoretical and experimentally reconstructed density matrices corresponding to (a) the quantum state \(|\psi_{2}\rangle=(|01\rangle+|10\rangle)/\sqrt{2}\) and (b) the rotation operation \(R_{x}(\frac{\pi}{2})\).
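The operator counting for \(n\) qubits can be checked by enumeration: an operator in \(\{I,\sigma_{x}\}^{\otimes n}\) with \(X\)'s on the bit positions of a mask \(m\) connects \(|j\rangle\) to \(|j\oplus m\rangle\), so post-selecting on \(|j\rangle\) reads out \(\rho_{j\oplus m,\,j}\). A short sketch (illustrative; names are ours):

```python
def covered_elements(n):
    """Off-diagonal index pairs (i, j), i < j, reached by the 2**n - 1
    operators in {I, X}^(tensor n); mask m = bit positions carrying an X."""
    covered = set()
    per_mask = {}
    for m in range(1, 2 ** n):
        pairs = {(min(j, j ^ m), max(j, j ^ m)) for j in range(2 ** n)}
        per_mask[m] = pairs
        covered |= pairs
    return covered, per_mask

for n in (2, 3, 4):
    covered, per_mask = covered_elements(n)
    d = 2 ** n
    # each operator yields 2**(n-1) distinct off-diagonal elements ...
    assert all(len(p) == d // 2 for p in per_mask.values())
    # ... and together they cover every off-diagonal element exactly once
    assert len(covered) == d * (d - 1) // 2
print("1 + (2**n - 1) = 2**n operators suffice, vs 4**n - 1 for standard QST")
```

Since the mask \(m=i\oplus j\) of the operator reaching a given pair \((i,j)\) is unique, every off-diagonal element is obtained from exactly one operator in the set.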
## IV Conclusions
In this work, an efficient scheme for direct QST and QPT based on weak measurements was proposed, a generalized quantum circuit implementing it was constructed, and the protocol was successfully tested on an NMR quantum processor. We used the scalar \(J\) coupling to control the strength of the interaction between the system and the meter qubits and hence were able to efficiently simulate the weak measurement process with high accuracy. Our protocol allows us to directly obtain multiple selected elements of the density matrix of an unknown quantum state, or of the process matrix of an unknown quantum process, in a single experiment, which makes it more attractive than other direct tomography methods. Furthermore, we employed the convex optimization method to recover the underlying true quantum states and processes from the experimental data sets obtained via the weak measurement-based scheme, which substantially improved the experimental fidelities.
Unlike other measurement-based DQST (or DQPT) methods, which require projective measurements on the system qubits and maximally disturb the state of the system, our protocol does not involve any measurements on the system qubits. Our experiments open up new research directions for various interesting weak measurement experiments on quantum ensembles that were not possible earlier.
###### Acknowledgements.
All experiments were performed on a Bruker Avance-III 400 MHz FT-NMR spectrometer at the NMR Research Facility at IISER Mohali. Arvind acknowledges funding from the Department of Science and Technology (DST), India, under Grant No DST/ICPS/QuST/Theme-1/2019/Q-68. K.D. acknowledges funding from the Department of Science and Technology (DST), India, under Grant No DST/ICPS/QuST/Theme-2/2019/Q-74.
---

# Einstein, Barcelona, Symmetry & Cosmology: The Birth of an Equation for the Universe

Emilio Elizalde

arXiv:2309.02477v1, 5 September 2023 (http://arxiv.org/abs/2309.02477v1)
###### Abstract
Albert Einstein visited Spain only once, precisely one hundred years ago. The circumstances, of a very different kind, of this visit will be explained here. In particular, some important events happened to Einstein during that period, which, eventually, were key for converting modern cosmology into a genuine physical theory. Among them is the famous Einstein-Friedmann controversy, first, on the mathematical validity of Friedmann's equations and, later, their possible usefulness as a reliable tool to describe the real world. A summary of the deepest ideas underlying Einstein's contributions to the theory of relativity, which he had already completed before his visit, will precede the discussion, also supplemented with a description, in very simple terms, of the three main relativistic theories, namely Galileo's, and Einstein's special and general theories. They pave the way towards a definitive theory of total relativity, so far unattainable. It will be recalled that the most general relativity principle, faithfully reflecting Ernst Mach's far-reaching ideas, might have much to do with the symmetry-breaking paradigm, a most crucial tool in quantum field theory and high energy physics.
Einstein's equations; Friedmann's equations; universe expansion; relativity principle; Mach's principle; symmetry breakdown

Journal: Physics Letters A
## 1 Introduction
Albert Einstein visited Spain only once, a visit that took place one century ago. This fact has been celebrated and extensively reported in the local and national media, in considerable detail, during the last couple of months. Surprisingly, however, some aspects of Einstein's visit went rather unnoticed, namely, the very important social and scientific circumstances surrounding it. When a journalist approached me last February, begging for brand new information about Einstein's stay in Barcelona and its surroundings, I had to think quite hard. What could I tell him that had not been written or said before in previous celebrations of the anniversary? "Nothing" was my first reaction. But a minute later, I began to reconsider the situation. No doubt, 1923 was a glorious year for cosmology: the year of the famous Einstein-Friedmann controversy, which had started with the publication of Friedmann's fundamental equations in _Zeitschrift für Physik_ just a few months before the visit [1], and which ended a couple of months after it, with Einstein's acceptance of Friedmann's formulas, which give a faithful description of the universe we live in. Friedmann's equations are now recognized as the formulation of Einstein's general theory of relativity [2] that corresponds to our universe, and they constitute the basis of all modern cosmology. More specifically, we have one general equation (Einstein's) with just one valid solution (Friedmann's) for ruling the whole cosmos: a unique, incredible achievement in the history of cosmology and, by extension, in all of Human History.
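In modern notation, the Friedmann equations referred to here read, for a homogeneous and isotropic universe with scale factor \(a(t)\), mass density \(\rho\), pressure \(p\), curvature index \(k\), and cosmological constant \(\Lambda\):

\[\left(\frac{\dot{a}}{a}\right)^{2}=\frac{8\pi G}{3}\rho-\frac{kc^{2}}{a^{2}}+\frac{\Lambda c^{2}}{3},\qquad\frac{\ddot{a}}{a}=-\frac{4\pi G}{3}\left(\rho+\frac{3p}{c^{2}}\right)+\frac{\Lambda c^{2}}{3}.\]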
This issue was paramount in establishing cosmology as a modern science [3; 4]. And all this happened around 1923, quite precisely, around the time of Einstein's visit to Spain. Then, having to prepare the introductory talk for the 4th Symmetry Conference, I was led to dig more deeply into Einstein's relativity theory--which was the main
subject of all of Einstein's talks at that time--and then suddenly recalled the following important fact: the relativity principle may indeed have much to do with the symmetry-breaking paradigm! Thus, I found a strong connection among all these elements and was ready to write a report offering a novel approach to the subject.
In what follows, an exposition will be made, albeit a necessarily limited one, of the scientific context and, more generally, of the historical, economic, and social circumstances corresponding to the epoch of Einstein's trip a century ago. In particular, the environment and the general circumstances of a very wide scope in which the visit took place will be considered. Not many new or more precise details of his stay will be given; only a few corrections to previously issued, inaccurate statements on the anniversary.
The contents of the paper are as follows. In Section 2, a summary of the main results obtained by Albert Einstein before 1923 will be discussed. Special emphasis will be made, in Section 3, on the essential principles conforming to his two famous theories of relativity, namely the special and the general one; both will be described as extensions of the pioneering relativity (or covariance) principle due to Galileo Galilei, and as (first- and second-order) attempts to crystallize the very ambitious ideas formulated by Ernst Mach. Section 4 will recall the main events that occurred in the world in 1923 and, subsequently, Einstein's six-month-long journey, which started on 6 October 1922. The essential scientific context surrounding Einstein during his trip, which ended with his visit to Spain will form the content of Section 5, including the famous Einstein-Friedmann controversy. The paper ends with Section 6, containing some conclusions and an outlook.
## 2 Who Was Einstein? What Had He Achieved by 1923?
As is well known, Einstein's most prolific year in his entire life was 1905, when he was just twenty-five and working at the patent office of the Swiss Federal Institute for the Intellectual Property in Bern. That year is sometimes described as his _annus mirabilis_ ('year of miracles', no wonder that 2005 was declared the World Year of Physics), during which Einstein published four extraordinarily momentous and groundbreaking papers [5]. Many scholars claim that each one of these papers could have deserved the Nobel Prize. In one of them, he established the theory of the photoelectric effect; in another, he explained the elusive concept of Brownian motion; and, in the last two papers, he introduced the theory of special relativity and demonstrated the daring mass-energy equivalence, which was to have so many implications for the future of humankind (although he did not yet explicitly write down his most famous formula in that paper).
Einstein observed that the laws of classical mechanics did not agree with those of the electromagnetic field, which led him to develop his particular theory of relativity. It took him, however, another ten years, and many more efforts, to extend that theory to the gravitational field; until he arrived at his theory of gravitation, or general theory of relativity. The consequences of the latter went far beyond those of Newton's laws, including his universal law of gravitation.
As verifiable evidence that Einstein's general relativity (and not Newtonian mechanics) was the correct theory--and as proofs that could serve to establish clear observational evidence of its validity--Einstein referred to the anomalous precession of Mercury's perihelion, to the deflection of light in gravitational fields, and to the gravitational redshift. His general relativity made precise numerical predictions about these three effects that differed from the results obtained in Newtonian gravitation.
Already in 1915, Einstein could calculate with his equations--although using them only approximately--the anomalous precession of Mercury's perihelion (an effect much easier to obtain from Schwarzschild's solution, which he did not yet have [3]). And he found a value that perfectly matched the observations of the anomaly, which was a significant issue at the time. Deeply excited, he hurried to communicate the good news to his friend Michele Besso. Einstein had already become fully confident that his theory was correct!
And this happened four years before the famous observation of the solar eclipse of 1919, which constituted for the rest of the world the definitive confirmation of Einstein's
theory, beating Newton's. This fact, almost incredible at that time for everyone, was published on the front pages of all newspapers and magazines everywhere and made Einstein a world-famous person. Until then, nobody had even imagined that Newton would ever be challenged. For Einstein himself, on the contrary, there was no surprise. When he was asked by some journalist in 1919 what his reaction would have been if the data obtained from the solar eclipse had not confirmed his calculations, he answered, without hesitation, that if that had been the case, then "the error would necessarily have been in the observations of the eclipse, since I am certain that my theory is correct". It must be noted that, in both cases, the relativistic contribution to the corresponding effect is not minor. On the contrary, it is of the same order of magnitude as the classical effect. This goes against the common belief that general relativistic effects are small with respect to Newtonian ones.
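For reference, the perihelion advance per orbital revolution that general relativity predicts is, in modern notation,

\[\Delta\phi=\frac{6\pi GM_{\odot}}{c^{2}a(1-e^{2})},\]

where \(a\) and \(e\) are the semi-major axis and eccentricity of the orbit; for Mercury this accumulates to the observed anomalous advance of about 43 arcseconds per century.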
Going back in time, it was in 1917 when Einstein first applied his general theory of relativity to the description of the Universe. And, as he immediately saw that he had the same difficulty as Newtonian physics in modeling a static universe, he had no choice but to introduce a universal constant: the cosmological constant. This can be done in a similar way in the Newtonian case, as he explains in full detail in his work [6]. And Robert Hooke, who has sometimes been termed "the genius in the shadow of Isaac Newton", had already considered such a possibility several centuries before [7]. Indeed, in that respect, the problems of Newtonian gravity and general relativity are the same, and so is the 'solution' to render a static universe (by means of the cosmological constant). But this solution turns out to be unstable and, therefore, not useful, as was later discovered (by Eddington and Lemaître, among others [3]).
The ones mentioned above had been Einstein's most important discoveries before he visited Spain, and they had already earned him several nominations for the Nobel Prize in physics. He had long been convinced he would be granted the prize [8]. When he was finally awarded it, in a thank-you letter he wrote to the Nobel Committee, he joked that "he was very happy to have received the award, in particular, because, from then on, he would get rid of so many boring people who kept asking him all the time, how it was that he had not yet got the award". He finally got it in 1922, although the prize corresponded to 1921, a year in which the objections of some members of the Committee had prevented it from being granted to him for his theory of relativity. As we will later see, when he visited Spain, he had not yet had the opportunity to deliver the prescribed speech at the formal acceptance ceremony, which did not take place until July 1923.
The Nobel Committee's task had not been simple, indeed. The creation of the Nobel Prize was still quite recent; there were few precedents, and Alfred Nobel's will established that the prize was to be awarded to "_those who, during the previous year, have conferred the greatest benefit on humanity"_. At that time, it was unclear how important Einstein's work was for humanity. And, as for the general theory of relativity, only a few true specialists understood what he had done. It is therefore not surprising that, finally, the Committee decided to award him the prize (on reconsidering the case in 1922) "_for his services to theoretical physics, and especially for his discovery of the law of the photoelectric effect"_. His law of the photoelectric effect had already been verified experimentally in 1916 by Robert Millikan, who would receive the prize the following year, 1923.
Einstein was very confident that he would eventually be awarded the prize. The surprising point is that in the diary he kept, quite schematic but very detailed, Einstein did not mention the day he knew he had won it! And another equally remarkable fact is that, in the separation agreement with his first wife, Mileva Marić, which took place in 1918, Einstein offered her a monthly income associated with the award, for her and their two children (one of whom required very expensive medical attention), in case he got it. Mileva accepted such an agreement, showing that both considered this a highly plausible possibility. The material value of the prize was equivalent to about fifty times Einstein's annual salary, which, in a short time, was significantly devalued (along with the monetary deposits he might have had) due to the economic situation in Germany after the Great
War. We will deal with this and other crucial circumstances of the epoch in Section 4 while approaching the time of Einstein's visit chronologically. But first, some important concepts.
## 3 Essential Principles Conforming the Theories of Relativity
On 25 November 1915, in his intervention at the Prussian Academy of Sciences session, entitled _"Die Feldgleichungen der Gravitation"_ ("The Field Equations of Gravitation"), Albert Einstein announced his General Theory of Relativity, on which he had been working tirelessly for nearly ten years. A year and a half later, on 8 February 1917, in another speech at the same Academy--this one with the title _"Kosmologische Betrachtungen zur allgemeinen Relativitätstheorie"_ ("Cosmological Considerations on the General Theory of Relativity")--he applied his new theory, for the first time, to the description of the universe. In this Section, the critical significance of these facts will be discussed, among other early episodes--also very relevant and corresponding to astronomical observations--which culminated in the birth of Modern Cosmology. This discipline eventually took, as a solid theoretical basis, the so-called field equations of the General Theory of Relativity.
The originality of the present discussion lies in the fact that it takes as a reference and guiding thread the very important scientific, social, and economic environment of the above-mentioned Einstein's visit to Catalonia and Spain, of which we are now celebrating the centenary. It will be noted, in particular, that one of the most important episodes of the conversion of Cosmology into a modern science occurred precisely around that trip, a crucial fact that tends to go completely unnoticed everywhere.
One of the questions I was asked at a round table held on the occasion, _"Giving curiosity a voice"_--an event commemorating the mentioned anniversary and organized by the Catalan Foundation for Research and Innovation (FCRi), with the collaboration of Divulcat and Astro Barcelona--was: please continue this sentence, _"The theory of relativity is based on..."_. A short answer had to be provided in just a few minutes, so I did it on the spot. Here, there is more room to complete the answer in greater detail.
### Galilean Relativity
The remarkable Galileo Galilei (1564-1642), considered the founder of modern science (Figure 1), was the first to formulate a principle of relativity, or covariance, in an evident and beautiful way. This principle clearly expresses the very important fact that "it makes sense to talk about laws of physics"; that is to say, these laws do not change. They are immutable when we move from here to any other place in the universe or get on board a vehicle that moves in a straight line and at a constant speed. Such a vehicle defines what is called an inertial frame, which will remain inertial forever if no force acts on it.
Galileo, in his famous book of 1632 _"Dialogo sopra i due massimi sistemi del mondo"_ masterfully expressed this principle in the words of Salviati, when he proposes (on the second of the four days of dialogues) the following experiment [silence, please, it is Galileo himself who speaks to us] [9]:
_"Lock yourself up with a friend in the main cabin, under the deck of a rather large ship, and bring flies, butterflies, and other small flying animals. Hang a bottle so that it drains, drop by drop, into a large container below. Make the ship go at the speed you prefer, but always the same: a smooth motion without fluctuations in one direction or the other. The drops will fall into this container without being diverted aft, even if the ship has moved forward while the drops are still in the air. The butterflies and flies will continue their usual flight from side to side as if they never tire of following the ship's course, however fast it may go, and it will never happen that they concentrate on the stern of it."_
It is certainly an accurate and most precious description of the principle of relativity. To be precise, although the laws of physics do not change when passing to a different system, their specific manifestation changes. As the law itself, the mathematical equations expressing the law remain the same, but the solutions look different in different frames and are connected by a Galilean transformation. In other words, the evolution of a particle, i.e., its world-line, is different since the initial conditions are different in different inertial reference frames. This is taken care of by applying a transformation of what is now known as the Galilean group, in order to go, in the case considered, from the reference system located on land to the reference system fixed on the ship, or vice versa. In the latter frame, the ship remains still, and it is the sea around it and the port from which it set sail that is constantly moving. This is where the name relativity comes from: the description of each reference system is different, although the physical law, the essence, is unaltered.
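Galileo's ship can be made concrete with a small numerical sketch (our own illustration, not part of the historical discussion): the shore and ship frames are connected by the Galilean transformation \(x' = x - vt\), \(t' = t\), and a drop released at rest on the moving ship falls straight down in the ship frame while tracing a parabola in the shore frame. All numerical values below are arbitrary example choices.

```python
# Illustrative sketch of a Galilean transformation x' = x - v*t.
# A drop released at rest on the ship shares the ship's velocity v,
# so on shore it traces a parabola, while in the ship frame it falls
# straight down -- the same law of free fall in both frames.

g = 9.81   # m/s^2, gravitational acceleration
v = 5.0    # m/s, ship speed (arbitrary example value)

def shore_frame(t, x0=0.0):
    """Position of the drop in shore coordinates at time t."""
    x = x0 + v * t          # horizontal drift, inherited from the ship
    y = -0.5 * g * t**2     # vertical fall, identical in both frames
    return x, y

def to_ship_frame(x, t):
    """Galilean transformation: x' = x - v*t (with t' = t)."""
    return x - v * t

t = 0.8
x, y = shore_frame(t)
x_ship = to_ship_frame(x, t)
print(x, y, x_ship)   # in the ship frame the drop stays at x' = 0
```

The drop never "lags aft": its horizontal coordinate in the ship frame stays exactly zero, just as Salviati describes.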
Galilean relativity is the simplest of all relativistic theories. Galileo had great intuitions that he linked with observations of nature and some experiments that he carried out personally (although there is still debate about how many of them he actually performed). He proclaimed that mathematics was the language in which the laws of nature should be written, yet his own mathematical toolkit was limited: he used geometry almost exclusively. In the opinion of the great Nobel laureate Steven Weinberg, if he had used more algebraic tools, he could have gone much further [10]. The fact that he could formulate so accurately, in plain words, the principle of relativity proves that this statement is not misguided.
Fifty years ago, Jean-Marc Levy-Leblond devoted his efforts to rescuing and polishing Galilean relativity, formulating it mathematically like Einstein's special relativity. An elaborate and beautiful theory emerged from his work [11; 12], parallel to Einstein's much more famous theory, to be discussed below. There is only one difference, apparently small but essential: the constancy of the speed of light, c, in any inertial frame of reference, implying that its value can never be exceeded (for transmitting information of any kind).
We will end this summary of Galileo's relativity with two comments. The first is that Weinberg's criticism of Galileo could also be extended to Isaac Newton (1642-1727) who, despite being the creator--together with Wilhelm Leibniz (1646-1716)--of the very powerful infinitesimal calculus [13; 14; 15], practically never used it in the formulation of the laws of his mechanics, which in the Principia are given in the form of endless paragraphs challenging to digest and to use.

Figure 1: Galileo Galilei (1564–1642). Oil portrait by an Italian painter, believed to be from the 18th Century. Reproduced with permission from Wellcome Trust, UK. Fair use.
The second comment concerns my work on this topic, particularly the one I carried out for my PhD thesis [16; 17; 18]. Starting from the papers of Levy-Leblond and other authors, we came to connect, both ways, the theory of Lie groups of the respective transformations: using techniques of contraction and dilation of groups in changing dimensions, we related the groups of Galileo and those of Lorentz and Poincare, which correspond to the special theory of relativity [19; 20; 21; 22] (Figure 2). It is not time to go deeper into these concepts. Still, it must be emphasized that all these developments have given even more relevance to the ideas and formulations of Galileo as the true pioneer of relativistic theories.
### The Special Theory of Relativity
Towards the end of 1905, in one of the four historical works written during what is often called his _annus mirabilis_, Albert Einstein published his special theory of relativity (Figure 3). Masterfully, in one of these works, he was able to derive the Lorentz transformations under only two assumptions: the principle of relativity or covariance (Galileo's, which we have already seen) and the constancy of the speed of light (under ideal vacuum conditions) in any inertial reference system--a fact that had already been checked in the famous experiment of Michelson and Morley--while at the same time abandoning the ether as simply unnecessary [23].
In this way, Einstein filled with meaning the Lorentz transformations (consisting of rotations and displacements at constant speed) and the Poincare ones (also including space translations), which had been previously considered by different physicists since 1887. All these transformations reduce to those of Galileo when the speed between the two reference systems is much smaller than that of light.
Figure 2: On the left, Hendrik Lorentz (1853–1928) and, on the right, Henri Poincaré (1854–1912)—Image: Wikimedia Commons. Public Domain.
Summing up the postulates: in his special theory of relativity, Einstein added to Galileo's principle of relativity only a second postulate, which states that the speed of light in a vacuum is the same in any inertial reference system. The consequences of these two simple postulates are amazing and very difficult to grasp for those of us who always move at insignificant speeds compared to light. Completely improbable phenomena appear, even seemingly absurd situations, such as the fact that the simultaneity of two events is relative (to the reference system), time dilation, length contraction, a relativistic contribution to the Doppler effect, and many other strange effects.
They are immediate consequences of the two postulates and are obtained simply by using the corresponding Lorentz transformation. It is true that they only manifest themselves when the speed at which one system travels relative to the other is close to that of light, but it must be observed that this condition already occurs nowadays in a multitude of laboratory experiments carried out with elementary or just very small particles, also in photonics, and at very different levels (think of the ubiquitous GPS signals, which we are using all the time [24]).
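The size of the time-dilation effect is easy to estimate with the Lorentz factor \(\gamma = 1/\sqrt{1 - v^2/c^2}\). As an illustrative sketch (the orbital speed of roughly 3.9 km/s is an assumed round number, and we deliberately ignore the general-relativistic blueshift that also enters the real GPS correction):

```python
import math

c = 299_792_458.0   # speed of light in vacuum, m/s (exact by definition)
v = 3.9e3           # ~GPS satellite orbital speed, m/s (assumed round value)

# Lorentz factor gamma = 1/sqrt(1 - v^2/c^2)
gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)

# Special-relativistic lag of the moving clock per day, in seconds.
# (The full GPS correction also needs the gravitational blueshift,
# which this sketch deliberately leaves out.)
seconds_per_day = 86_400.0
lag = (gamma - 1.0) * seconds_per_day
print(f"gamma - 1 = {gamma - 1:.3e}, clock lag ~ {lag * 1e6:.1f} microseconds/day")
```

A few microseconds per day seems negligible, but at the speed of light it corresponds to kilometers of positioning error, which is why the GPS system must correct for it.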
But perhaps the most extraordinary consequence of Einstein's special theory of relativity for human society was the realization of the equivalence between mass and energy, very simply expressed by his most famous formula: \(\mathrm{E=mc^{2}}\). Einstein took time to write it in this form; he did not do so in his already-mentioned work of 1905, in which he only expressed it indirectly. In principle, the formula describes the values that the magnitudes take in a reference system at rest, but it also extends to the values of relativistic mass and energy for a system in motion. Einstein clearly stated that the laws of energy conservation and conservation of mass were _"the same"_ [25].
Figure 3: Albert Einstein, around 1905, his “annus mirabilis”, in which he published four momentous articles. One of them: “Zur Elektrodynamik bewegter Körper”, the work in which he built the special theory of relativity. Public Domain.
Anyway, some physicists consider this formula to be overrated, as it has little real utility in designing how to carry out, in practice, nuclear fission processes. Be that as it may, it is nevertheless true that Einstein's formula served as a guide when evaluating this possibility for the first time in history, namely, to understand whether nuclear fission had occurred in a certain laboratory experiment. We have a magnificent, first-hand description of how this occurred, in the words of Otto Frisch, one of the main actors of this play.
During the Christmas holidays of 1938, Frisch spent some days in Stockholm at the invitation of his aunt, the great Lise Meitner. One day, both went out for a walk in the cold of the snowy city. While talking at length about the issue, they managed to understand the meaning of the experimental results of their colleagues Otto Hahn and Fritz Strassmann in Berlin. By bombarding uranium atoms with neutrons, the latter had obtained what appeared to be barium and an excess of neutrons, a completely unexpected and mysterious result. Frisch and Meitner understood that the uranium nucleus had been split and introduced the idea of what would later be called atomic fission. They directly used Einstein's equation to quantify the energy of a reaction that should be able to overcome forces such as the surface tension, which holds the nucleus together, to allow the fission fragments to drift apart a little, resulting in a configuration from which their charges could force them (by electrostatic repulsion) into an energetically favorable final fission. Using the packing fraction, or value of nuclear binding energy per nucleon, along with the formula E = mc\({}^{2}\), they realized that the basic process of fission "_was indeed energetically possible_". As Frisch described it [26]:
_"We walked up and down the snow, me on skis and Lise on foot...and little by little the idea took shape... based on Bohr's conception of the nucleus as a drop of liquid; the drop could stretch out and split apart... We knew there were very strong forces that would oppose it,... like the surface tension. But nuclei are different from normal droplets. At this point, we both sat on a tree trunk and started calculating on scraps of paper... the uranium nucleus could become a very unstable blob, ready to split... But... when the two blobs separated, they would be further separated by electrical repulsion, the equivalent (in energy) of about 200 MeV. Fortunately, Lise remembered by heart how the masses of nuclei were calculated... and discovered that the two nuclei formed... would be lighter by about one-fifth the mass of a proton. Now, every time mass disappears, energy is created, according to Einstein's formula \(E=mc^{2}\), and... the loss of mass was equivalent to 200 MeV! Everything fit!"_
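The back-of-envelope arithmetic Frisch and Meitner did on their scraps of paper is easy to reproduce. Using the proton rest energy in MeV (a standard value), a mass loss of about one-fifth of a proton mass converts, via \(E = mc^2\), into roughly the 200 MeV quoted above:

```python
# Reproducing the Frisch-Meitner estimate quoted above: a mass defect
# of about one fifth of a proton mass, converted via E = m c^2.
# Working directly in energy units (MeV) makes c^2 implicit.
proton_rest_energy_MeV = 938.272   # standard CODATA-style value

mass_defect = proton_rest_energy_MeV / 5.0
print(f"E = m c^2 ~ {mass_defect:.0f} MeV per fission")
```

The result, about 188 MeV, is indeed "about 200 MeV", matching the electrostatic-repulsion estimate for the two separating fragments. Everything fit.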
Fission could therefore take place! When, sometime later, Einstein found out that it had been carried out and its dramatic consequences, it is said that he exclaimed: _Woe is me!_
### The General Theory of Relativity
Compared with the special theory, the general theory contains only one additional postulate: the principle of equivalence. Einstein formulated it one day after having what he would later describe as _"the happiest idea of my entire life"_ (other authors place this famous sentence at the moment he found the formula we discussed above).
It was Einstein who explained (although there is no written record of it) that the idea came to him in 1907 while working at the Patent Office in Bern. He was sitting in his usual chair in front of his desk when suddenly he was very startled by a thought that occurred to him about what would happen if, at that very moment, he fell upright from the roof of his house. He continued to reason slowly... At that instant, while he was falling, no gravitational field would exist for him as an observer, at least not in his surroundings. Indeed, if he had an object in his hand, say an apple or a coin, and simply let it go, the object would not fall at his feet; it would always remain next to his hand without separating from it: it would not experience, therefore, any gravity! If he could not see anything except himself and the object, he might reasonably conclude that he was in a zero-gravity place. Later, the example of an elevator in free fall with a person inside has been commonly used
as an alternative to illustrate the same idea (in this case, the elevator walls already isolate the experimenter from the rest of the world).
Expressed in another way, the conclusion is that the force of gravity is not special at all: it is just like any other mechanical force that sets an object in accelerated motion. Another alternative version of the same principle is to consider that the mass of a body involved in Newton's formula of universal gravitational attraction (the gravitational mass, \(\mathrm{m_{g}}\)) is the same as the one which appears in the formula \(\mathrm{F=m_{i}a}\) (where \(\mathrm{m_{i}}\) is called the inertial mass), the mass which determines the acceleration that the body acquires when a mechanical force is applied to it. In short, \(\mathrm{m_{g}=m_{i}}\). All these formulations of the principle of equivalence are equally valid.
That thought of Einstein (one of his most famous _gedanken_ experiments) was quite a happy one, since it led him to build a whole new theory of gravitation, which he called the general theory of relativity and which has gone much further than Newton's universal gravitation. In it, as was already the case for special relativity, space and time are united in a continuous "fabric" of space-time; but, as a great novelty, the presence of matter now results in a local curvature of this fabric, similar to what happens when a child jumps on an elastic bed at a fair, making the fabric, originally flat, sag under the child's weight. In this theory, the curvature of space-time gives rise to the effect we call gravity.
In more technical terms, Einstein's equivalence principle for a uniform gravitational field states that the motion of an object in that field, as described from an inertial frame of reference, is indistinguishable from the motion of the same object in the absence of the field, as described from a suitably uniformly accelerated reference system. In his own words (Einstein, 1907) [27]:
_"We assume the complete physical equivalence of a gravitational field and a corresponding acceleration of the reference system"._
Continuing with the "happiest thought of his life", Einstein also referred to two reference systems, K and K'. K has a uniform gravitational field, while K' has no gravitational field but is uniformly accelerated so that the objects in the two systems experience identical forces (Figure 4). Again, in his own words (Einstein, 1911) [28]:
Figure 4: On the (**left**), a ball falls to the ground in a suitably accelerated rocket in the absence of gravity. On the (**right**), the ball falls to the ground in the usual way. The effect is identical in both situations, completely indistinguishable in the chamber that isolates the observer from the outside world. Fair use.
_"We arrive at a very satisfactory interpretation of this law of experience if we assume that the systems K and K' are, physically, completely equivalent; that is, if we admit that we can also consider the system K as a space free of gravitational fields, but at the same time as a uniformly accelerated system. This assumption of exact physical equivalence makes it impossible for us to talk about the absolute acceleration of the reference system, just as the theory of special relativity forbids us to talk about the absolute speed of a system. And this makes the equal fall of all bodies in a gravitational field seem then to be a most natural thing."_
However, we should not be fooled by the apparent simplicity of all these concepts. Here we are talking only about the fundamental principles of the general theory of relativity, but it is necessary to mention that it took Einstein ten full years of his life, working without rest, to arrive at his final field equations from these principles. That story would require another article, quite long and much more complex. Here, we will limit ourselves to writing down his field equations:
\[\mathrm{R}_{\mu\nu}-\frac{1}{2}\mathrm{R}\,\mathrm{g}_{\mu\nu}+\Lambda\mathrm{g}_{\mu\nu}=\frac{8\pi\mathrm{G}}{\mathrm{c}^{4}}\,\mathrm{T}_{\mu\nu}\;,\]
where \(\mathrm{R}_{\mu\nu}\) is the Ricci curvature tensor, which represents the curvature of spacetime caused by matter; R is the scalar curvature, a measure of the overall curvature of spacetime; \(\mathrm{g}_{\mu\nu}\) is the metric tensor, which describes the geometry of spacetime; \(\Lambda\) is the cosmological constant, which in its modern conception represents the energy density of the vacuum state of space; and \(\mathrm{T}_{\mu\nu}\) is the stress-energy tensor, which describes the distribution of matter and energy in spacetime. The Ricci curvature tensor is a geometric object obtained by contraction of the first and third indices of the Riemann curvature tensor, \(\mathrm{R}^{\rho}{}_{\sigma\mu\nu}=\partial_{\mu}\Gamma^{\rho}{}_{\nu\sigma}-\partial_{\nu}\Gamma^{\rho}{}_{\mu\sigma}+\Gamma^{\rho}{}_{\mu\lambda}\Gamma^{\lambda}{}_{\nu\sigma}-\Gamma^{\rho}{}_{\nu\lambda}\Gamma^{\lambda}{}_{\mu\sigma}\), namely \(\mathrm{R}_{\mu\nu}=\mathrm{R}^{\rho}{}_{\mu\rho\nu}\), where \(\partial_{\mu}\) stands for \(\partial/\partial\mathrm{x}^{\mu}\) and the \(\Gamma\) are the Christoffel symbols of the Levi-Civita connection corresponding to the spacetime metric, to wit: \(\Gamma^{\rho}{}_{\mu\nu}=\frac{1}{2}\,\mathrm{g}^{\rho\sigma}\,(\partial_{\mu}\mathrm{g}_{\sigma\nu}+\partial_{\nu}\mathrm{g}_{\sigma\mu}-\partial_{\sigma}\mathrm{g}_{\mu\nu})\). One must observe that Einstein only introduced the cosmological constant term in his paper of 1917, "_Kosmologische Betrachtungen..._" [6], where he used his field equations for the first time to obtain a static model for the universe.
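The Christoffel formula \(\Gamma^{\rho}{}_{\mu\nu}=\frac{1}{2}\,g^{\rho\sigma}(\partial_{\mu}g_{\sigma\nu}+\partial_{\nu}g_{\sigma\mu}-\partial_{\sigma}g_{\mu\nu})\) can be checked numerically on a case where the answer is known in closed form. As a purely illustrative sketch (not part of Einstein's derivation), take the flat plane in polar coordinates, \(g=\mathrm{diag}(1, r^2)\), whose only nonzero symbols are \(\Gamma^{r}{}_{\theta\theta}=-r\) and \(\Gamma^{\theta}{}_{r\theta}=1/r\):

```python
# Numerical check of the Christoffel formula
#   Gamma^rho_{mu nu} = 1/2 g^{rho sigma} (d_mu g_{sigma nu}
#                        + d_nu g_{sigma mu} - d_sigma g_{mu nu})
# for the flat plane in polar coordinates (r, theta), g = diag(1, r^2).

def metric(x):                       # x = (r, theta)
    r, _ = x
    return [[1.0, 0.0], [0.0, r * r]]

def inverse_metric(x):
    r, _ = x
    return [[1.0, 0.0], [0.0, 1.0 / (r * r)]]

def d_metric(x, mu, h=1e-6):
    """Central-difference partial derivative of g along coordinate mu."""
    xp, xm = list(x), list(x)
    xp[mu] += h
    xm[mu] -= h
    gp, gm = metric(xp), metric(xm)
    return [[(gp[i][j] - gm[i][j]) / (2 * h) for j in range(2)] for i in range(2)]

def christoffel(x, rho, mu, nu):
    ginv = inverse_metric(x)
    dg = [d_metric(x, k) for k in range(2)]      # dg[k][i][j] = d_k g_{ij}
    return 0.5 * sum(
        ginv[rho][s] * (dg[mu][s][nu] + dg[nu][s][mu] - dg[s][mu][nu])
        for s in range(2)
    )

x = (2.0, 0.3)                       # arbitrary test point with r = 2
print(christoffel(x, 0, 1, 1))       # Gamma^r_{theta theta} = -r -> -2.0
print(christoffel(x, 1, 0, 1))       # Gamma^theta_{r theta} = 1/r -> 0.5
```

Since the plane is flat, contracting these symbols further gives a vanishing Riemann (and hence Ricci) tensor; curvature only appears for metrics that cannot be flattened by a coordinate change.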
We cannot refrain from bringing up a passage where Einstein commented on some aspect of math necessary to translate his principles into useful formulas and equations. In a letter to Arnold Sommerfeld from the year 1912 (that is, about sixty years after Bernhard Riemann's famous habilitation work) [29], Einstein commented on the efforts he was making to learn Riemannian geometry. He says:
"Aber eines ist sicher, dass ich mich im Leben noch nicht annähernd so geplagt habe und dass ich große Hochachtung vor der Mathematik eingeflößt bekommen habe, die ich bis jetzt in ihren subtileren Teilen in meiner Einfalt für puren Luxus gehalten habe!"
Which translates to:
"But one thing is certain: never in my life have I toiled anywhere near as hard, and I have acquired a great respect for mathematics, whose subtler parts, in my naivety, I had until now regarded as pure luxury!"
Going a little deeper into the principle of equivalence, three forms of it are currently being considered: the weak (or Galilean), the Einsteinian, and the strong equivalence principles. In the weak equivalence principle, also known as the universality of free fall or the Galilean equivalence principle, the universality of free fall is restricted to ordinary bodies (the only ones Galileo had in mind), which are bound just by non-gravitational forces (for example, a stone, a piece of metal, a block of wood, etc.). We thus see that Galileo was also a pioneer in the conception of an equivalence principle, which Einstein would extend in his general theory of relativity.
The Einsteinian form is the one we have already considered above, in Einstein's own precise words (Figure 5). And the principle of strong equivalence is a generalization of the two,
which also includes as bodies the astronomical objects, such as pulsars and black holes, which are very unusual in that a very important part of the energy that holds them together is of gravitational type. Strong equivalence can be tested by looking for a variation in Newton's gravitational constant, G, or a variation in the masses of the fundamental particles over the universe's lifetime. Observations of an independent nature, such as precision measurements of Solar System orbits and studies of Big Bang nucleosynthesis, have shown that G cannot have varied by more than 10% over time. The strong equivalence principle can also be checked by looking for some kind of fifth force, in terms of deviations from the gravitational law predicted by general relativity, such as searching for deviations from the inverse-square law, in terms of Yukawa forces, or for violations of Birkhoff's theorem.
And with that, we have reached the more than reasonable ceiling of what may be explained in a short introduction, such as this one, to the theories of relativity.
Just for completeness, as has been mentioned already, the first exact solution to Einstein's field equations was found by Karl Schwarzschild in 1916, only a few weeks after Einstein presented them to the Prussian Academy. The Schwarzschild solution reads:
\[\mathrm{ds}^{2}=(1-2\mathrm{GM/rc^{2}})\;\mathrm{c}^{2}\mathrm{dt}^{2}-\mathrm{ dr}^{2}/(1-2\mathrm{GM/rc^{2}})-\mathrm{r}^{2}(\mathrm{d}\theta^{2}+\mathrm{ sin}^{2}\theta\;\mathrm{d}\varphi^{2}).\]
It is an exact solution that describes the gravitational field outside a spherical mass on the assumption that the electric charge of the mass, its angular momentum, and the cosmological constant are all zero. It is a useful approximation for describing slowly rotating astronomical objects such as usual stars and planets, including our Earth and the Sun.
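The factor \(1-2\mathrm{GM/rc^{2}}\) in the Schwarzschild metric vanishes at the Schwarzschild radius \(r_{s}=2\mathrm{GM/c^{2}}\), the horizon radius of a non-rotating black hole. A quick numerical sketch with standard values for G, c, and the solar and terrestrial masses shows how tiny this radius is for ordinary bodies:

```python
# Schwarzschild radius r_s = 2GM/c^2, where the metric factor
# (1 - 2GM/rc^2) vanishes. Standard constants, SI units.
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8         # speed of light, m/s

def schwarzschild_radius(M):
    """Horizon radius, in meters, of a non-rotating mass M (kg)."""
    return 2.0 * G * M / c**2

M_sun = 1.989e30    # kg
M_earth = 5.972e24  # kg
print(f"Sun:   r_s ~ {schwarzschild_radius(M_sun) / 1e3:.2f} km")
print(f"Earth: r_s ~ {schwarzschild_radius(M_earth) * 1e3:.1f} mm")
```

About 3 km for the Sun and about 9 mm for the Earth: since both bodies are vastly larger than their Schwarzschild radii, the exterior Schwarzschild solution describes them as ordinary, weakly curved gravitational fields.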
Figure 5: Albert Einstein in 1916, in the house library by Paul Ehrenfest, in Leiden, where he stayed for a few days. Public Domain.
For future reference in the present article, let us also write down the Friedmann-Lemaitre-Robertson-Walker (FLRW) metric, which describes a homogeneous and isotropic universe. In reduced-circumference polar coordinates, the metric has the form:
\[\mathrm{ds}^{2}=-\mathrm{c}^{2}\mathrm{dt}^{2}+\mathrm{a}(\mathrm{t})^{2}\left[\frac{\mathrm{dr}^{2}}{1-\mathrm{kr}^{2}}+\mathrm{r}^{2}(\mathrm{d}\theta^{2}+\mathrm{sin}^{2}\theta\,\mathrm{d}\phi^{2})\right].\]
Here, \(\mathrm{a}(\mathrm{t})\) is known as the scale factor, \(\mathrm{k}\) is a constant representing the curvature of the space, and \(\mathrm{r}\) is sometimes called the reduced circumference, which is equal to the measured circumference of a circle (at that value of \(\mathrm{r}\)), centered at the origin, divided by \(2\pi\) (like the \(\mathrm{r}\) of Schwarzschild coordinates).
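The role of the curvature constant \(k\) can be made tangible with a small numerical sketch (our own illustration): at fixed time, the proper radial distance is \(\int_{0}^{r}\mathrm{a}\,\mathrm{dr}/\sqrt{1-\mathrm{kr}^{2}}\), which for \(k=+1\) gives \(a\arcsin r\) (longer than the flat result \(ar\)) and for \(k=-1\) gives \(a\,\mathrm{arcsinh}\,r\) (shorter):

```python
import math

def proper_radial_distance(r_max, k, a=1.0, n=100_000):
    """Midpoint-rule integration of ds = a dr / sqrt(1 - k r^2) at fixed t."""
    dr = r_max / n
    total = 0.0
    for i in range(n):
        r = (i + 0.5) * dr
        total += a * dr / math.sqrt(1.0 - k * r * r)
    return total

r = 0.5
flat   = proper_radial_distance(r, k=0)    # = a*r
closed = proper_radial_distance(r, k=+1)   # = a*asin(r)  > a*r
open_  = proper_radial_distance(r, k=-1)   # = a*asinh(r) < a*r
print(flat, closed, open_)
```

The same reduced circumference thus corresponds to different proper distances depending on the sign of \(k\), which is precisely the geometric meaning of a closed, flat, or open universe in the FLRW description.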
Before ending this Section with a telegraphic summary of everything exposed above, the following, most relevant observations are in order.
### Two Important Observations
First. The first and third principles, general relativity or covariance and equivalence, are the two basic postulates of the theory of general relativity (apart from the second, equally key but already inherited from special relativity). There has been a lot of discussion about whether or not they are independent. The answer is not so immediate. The two principles are usually presented as independent, and they are indeed so in their strictest formulation. But it happens that, in practice, they are connected by inaccuracies that come from the approximations made in formulating the equations of general relativity, which, for simplicity, Einstein reduced to second order. This means that the equivalence principle is approximate (in its implementation in Einstein's equations) and valid only for second-order terms (accelerations). Indeed, the accelerations are indistinguishable at any given point; but their differentials and the gradients of higher orders are not identical. And this error in the equivalence (although very small in practical situations) introduces problems with the covariance, because the terms higher than second order are truncated. In short, the curvature of space is well represented in the equations, but higher-order spacetime deformations are not.
Einstein was the first to admit that his final theory was approximate and incomplete. He hoped other scientists would improve it soon, which has not happened, not even now, although many have tried, and many alternative theories have certainly appeared in the meantime. General relativity works very well up to high values for the energy: it has recently done so to high accuracy in describing black hole collisions of about thirty solar masses and more. But if the kinetic energy were even much larger, high enough to fold space-time into layers, then the problems would already appear in a clear way. Among several others, a candidate aiming to improve the theory at much higher energies is topological geometrodynamics (TGD), a theory involving highly complex mathematics [30].
Second. Einstein's attempt to somehow materialize the ideas of Ernst Mach in his construction of general relativity was undoubtedly a very important stimulus for creating his theory. However, it does not appear as one of its fundamental principles (Figure 6). The name Einstein gave to his theory came from his conviction that he could do justice to Mach's criticism of Newton's notion of absolute space. According to Mach, space had to be relativistic (or covariant) with respect to the most general possible transformations of the spacetime coordinates.
which the metric field takes the simplest possible form. But we should be able to place ourselves further back, before choosing this field, before the actual specification of space and time. This is demanded, in particular, if we want to approach the very origin of the universe. But, again, this would take us too far.
To conclude, when the problem is cast in this way, it is very similar to the concept of symmetry breaking, so crucial in the modern formulations of theoretical physics. Recall, for example, that in the electroweak standard model, a Higgs field appears that breaks the symmetry of the primary equations and gives mass to the elementary particles; on the other hand, in quantum chromodynamics, a quark-antiquark condensate arises that also breaks the symmetry; and in grand unification schemes, several generalizations of the same idea are used. The symmetry perspective would contemplate the possibility of primary theories enjoying greater symmetries than those realized in the equivalence principle of general relativity. In this context, Mach's principle would be the hypothesis that the most general primary theory should include the principle of total relativity, that is, the physical equivalence among all possible coordinate systems (Wilczek, 2004) [31]. Again, we have flown too far: no one has yet advanced substantially, in practice, along this path.
### Summary of the Relativity Theories
In summary, the three fundamental principles of the General Theory of Relativity are as follows (Figure 7):
1. The principle of relativity or general covariance
* Galileo: it makes sense to formulate laws of physics (inertial systems)
* Laws for inertial or accelerated systems. Form of eqs changes (Galilean, Lorentz, Poincare transf.)
Figure 6: Ernst Mach, Austrian physicist, and philosopher (1838–1916)—Public domain.
* No total relativity (Mach's principle). Truncated to 1st-/2nd-order eqs.
2. The principle of the constancy of the speed of light
* The speed of light in a vacuum is constant, c, in all inertial systems
* Together with Galileo's relativity principle (inertial systems) \(\rightarrow\) Special Theory of Relativity
3. The principle of equivalence
* Gravity is like all other forces. Equiv. of inertial mass and gravitational mass: \(\rm m_{i}=m_{g}\)
* Spacetime is a mathematical manifold, locally Minkowskian
These are completed with two more conditions of a technical nature, which refer to the field equations of the theory.
* Zero torsion hypothesis (\(\nabla_{\rm X}\rm Y-\nabla_{\rm Y}\rm X=[X,Y]\))
* Christoffel symbols are symmetrical. You can relax it (Einstein-Cartan, string theory)
* Reduction to Newton's laws (at small speeds as compared to c)
* To define the universal constants of the new theory
* And a summary of the two important observations:
* Are the 1st and 3rd principles independent?
* The answer is tricky: yes and no
* They are in their presentation
Figure 7: Albert Einstein giving the 11th Josiah Willard Gibbs lecture at the American Association for the Advancement of Science meeting on 28 December 1934—Public domain.
* But the approximations made in the math formulation of the GTR (cut to 2nd order) render the equivalence principle also approximate
* Higher-order differentials and gradients do differ
* This will become noticeable in extremely high-energy processes
* Einstein's theory is not the final one (AE _dixit_)
* Mach's principle of general relativity is not fulfilled
* Einstein was the first to admit his theory was approximate and was convinced that someone would soon perfect it
* We are trying to do this now, a century later: S-T and f(R) theories, QG? etc.
* The symmetry-breaking paradigm could be useful
## 4 Events of 1923 and Einstein's Six-Month Journey
### Notable Events in the World, in 1923
Developing further the proposal sketched in Section 1, we shall now put into context Einstein's visit to our country. To start, we summarize the most notable events of the year 1923. On 9 January, Juan de la Cierva made the first flight in his autogiro (a precursor of the helicopter). On the 11th, despite strong protests from the British, troops from France and Belgium occupied the Ruhr region to force Germany to pay them the war reparations that had been agreed upon in the Versailles Treaty. Already entering the month of February, we find in the international press the worrying news that, in Germany, inflation seems to have no ceiling: one dollar is exchanged for 57,500 marks. On 23 February, just as Einstein set foot in Barcelona, the German Parliament approved a decree law against speculators. However, hyperinflation in the Weimar Republic (Germany, at the time) continued to rise. In July, the number of marks needed to buy one American dollar reached 353,000, more than 200 times the amount needed at the start of 1923. And, on 15 November, hyperinflation became dramatically present in Germany and reached its peak: one dollar was exchanged for 4,200,000,000,000 marks (4.2 trillion!). These unbelievable figures have gone down in history as a record. This information must be kept in mind when we talk about Einstein's trip and visit to Spain and other circumstances that will be exposed next.
On 13 February 1923, Tutankhamun's tomb was discovered in Egypt. On 10 March of that year, the anarchist Salvador Segui was murdered in Barcelona and, on 20 July, in Mexico, the popular leader Pancho Villa. On 13 September, the _coup d'etat_ led by General Primo de Rivera took place in Spain, which suspended the Constitution, dissolved the Parliament, and established the country's first dictatorship of the 20th century. On 16 October, Walt Disney and his brother Roy, with animator Ub Iwerks, founded Disney Bros. And, already towards the end of 1923, on 9 November, in Germany, the attempted _coup d'etat_ in Munich, known as the Beer Hall Putsch, failed. For this, Adolf Hitler and Rudolf Hess were shortly after prosecuted and sentenced to prison. We know well what happened later. This was the atmosphere, the air that was breathed in that year of 1923.
### Einstein's Long Journey
Focusing now again on Einstein's activity, several very valuable publications record in full detail the long journey he undertook, accompanied by his wife at that time, Elsa Einstein, from the beginning of October 1922 to the end of March 1923, and which brought him to lecture in the Far East, Palestine, and Spain [32]. Those were places that the by then very renowned physicist had never visited before. Einstein's long itinerary included stops in Hong Kong and Singapore, two short stays in China, a six-week lecture tour of Japan, a twelve-day tour of Palestine, and a three-week visit to Spain. Much more than a simple curiosity to see the world and make himself known personally by giving talks everywhere, Einstein's trip responded to the purpose of getting away for a prudent time from Berlin, where German nationalists had murdered, shortly before, the philosopher and Jewish diplomat Walther Rathenau. The brutality of his death, which happened while he was sitting in his car, in the street, due to the explosion of a hand grenade, had greatly impressed Einstein. He knew, moreover, that he and his wife were on "a list" and that it suited them to leave the country no matter what. We have the complete diary that Einstein wrote during those days [33]. To see the character of his narrative, here is a short extract from its first page:
**Travel diary for Japan, Palestine, and Spain [6 October 1922-12 March 1923].**
6 October _Night trip in overfilled train after reunion with Besso and Chavan. Lost wife at the border._
7 October _Sunrise shortly before arrival in Marseille. Silhouettes of austere flat houses surrounded by pines. Marseille, narrow alleyways. Voluptuous women. Vegetative living. We were taken in tow by a seemingly honest youth and dropped off at a ghastly inn by the railway station. Bugs in morning coffee. Made our way to the shipping company and the old harbor near the old city quarter. At the ship..._
According to Walter Isaacson's well-documented biography, entitled "_Einstein: His Life and Universe_" [34], Mr. Koshin Morobushe Kaizosha, who was Einstein's Japanese host and publisher at the time, offered him the equivalent of two thousand pounds sterling (which would be about 150,000, at current exchange) for a series of lectures. They finally numbered fifteen: eight scientific and six public, plus a memorable, previously unplanned talk with students at Kyoto University.
It was on board the ship, during the trip that took him and his wife to Asia, that Einstein, then 43 years old, learned he had been awarded the Nobel Prize in physics. He could not accept the prize in person at the Nobel ceremony in Stockholm in December 1922. On his behalf, the banquet speech was delivered by the German ambassador, who praised Einstein not only as a scientist but also as a man of peace and an international activist. It was after returning from his long journey that, finally, on 11 July 1923, in Gothenburg, Einstein was able to deliver his speech in person. The occasion was the meeting of the Scandinavian Scientists, in an impressive auditorium and with the King of Sweden, Gustav V, sitting in the first row (Figure 8). Einstein chose to speak on the theory of relativity, even though the prize had not been formally awarded to him for this subject.
Figure 8: On 11 July 1923, Einstein spoke at the Congress Hall in Gothenburg, Sweden, at the Scandinavian Congress of Naturalists. Public domain.
He and his wife Elsa visited Japan from 17 November to 29 December 1922. His trip, meticulously organized by Kaizosha Publishing, made big international news. Japan was, in fact, the most important stop on his full tour, a tour that involved considerable effort for him. His thoughts, reflected in the meticulous personal diary Einstein kept, reveal a man trying to understand cultures very different from his own (Figure 9). His observations begin as early as they boarded the ship S.S. Kitano Maru, manned by a predominantly Japanese crew. It is clear from his notes that Einstein did not have much sensitivity when describing people from other cultures. On the ship, he sees, in his description, _"Japanese women crawling on deck with children"_; he sees them _"adorned and bewildered, almost as if they were sketchy, stylized, black-eyed, black-haired, big-headed, running..."_. Just before docking in Kobe, the ship stopped in Shanghai, and Einstein describes in his diary his frustration with Asian cuisine: _"The food, extremely sophisticated, endless. One fishes constantly, with sticks, from common bowls placed on the table in great numbers"_.
Upon arriving in Japan, Einstein was given a literal hero's welcome, and at times the fame and excessive attention overwhelmed him. One day on his tour, he was looking out the window just before sunrise. Below, thousands and thousands of Japanese were gathered, in vigil in front of the hotel. He shook his head and commented to Elsa: _"No living person deserves a reception like this. I see us as scammers. We will end up in prison"_. Both spent Christmas in Fukuoka, but mostly they toured the island of Honshu, making stops in Kobe, Kyoto, Tokyo, Sendai, Nikko, Nagoya, Osaka, Nara and Hiroshima. In Tokyo, Einstein felt, once again, suffocated by the attention: _"Arriving at the hotel, completely exhausted, among gigantic crowns of bouquets. Still to come: visit by the Berliners and live burial"_. To complete the day, he had been invited to the burial ceremony of a Japanese personality.
His scientific lectures were held at the Todai Institute of Physics and Tokyo Imperial University. Lasting four hours each, their content was challenging for many aspiring Japanese scientists, as it had been before and would always be wherever he lectured; this was also true, in particular, of the several lectures he gave later in Barcelona, Madrid and Zaragoza. In a letter dated 17 December 1922, he confessed to his sons:
_"The Japanese people attract me... even more than all the peoples I have met so far: quiet, modest, intelligent, appreciative of art and considerate. Nothing is about appearances, but everything is about substance..."_
Figure 9: Albert and Elsa Einstein in Japan, November–December 1922. Author unknown, courtesy of Meiji Seihanjo. Public domain.
Since his comments were purely personal and not meant to be published, the reader now gets a clear look at Einstein's thinking process. The diary is a revelation of Einstein's mind at that time because it reflects his way of thinking outside of physics. However, it also contains several revealing notes on this matter, which we will discuss later.
Another example: on Christmas Day, in Fukuoka, Einstein traveled to the Moji YMCA (Figure 10), where he was photographed _"ten thousand"_ times and felt lifeless: _"I was dead and my corpse rode back to Moji, where it was dragged to a children's Christmas party and had to play the violin"_. As he describes in his diary, exhausted, Einstein played _"the Ave Maria before collapsing at ten o'clock at night"_.
His last stop in Japan was the great city of the south, Hiroshima, which deserves special consideration. He arrived there by train on 19 December, close to the end of his stay. The previous day, the Einsteins had been sightseeing in Nara, the country's old capital, from where they had taken the train for a twelve-hour journey. The next day, after recovering, Einstein took a _"fascinating walk along the coast"_ of Miyajima and saw Itsukushima Shrine, one of Japan's three holiest sites. In the afternoon, he walked _"to the top of the mountain that gives the island its main shape"_, Mount Misen. It takes a few hours to reach the top, where, he wrote, he saw _"the subtlest of colors"_, as well as
_"...countless small temples, dedicated to natural deities. Stone figures are often delightful. Up steps cut into granite rocks (altitude around 700 m). Memorial to the Japanese love of nature and all kinds of endearing superstitions."_
On the beautiful summit of Mount Misen, Einstein was surrounded by _"pure souls like nowhere else in the world. One cannot but love and admire this country"_.
One may imagine that, from there, he could have glimpsed the center of Hiroshima and the Prefectural Industrial Promotion Hall, now known as the Peace Memorial or Genbaku Dome (for the atomic bomb). Twenty-two years later, on 6 August 1945, when the atomic bomb detonated over the city center of Hiroshima, the explosion wave must have shattered the windows of the houses on the island that Einstein had admired. His famous formula, E = mc\({}^{2}\), had led Lise Meitner, Otto Hahn, and other scientists [35], as discussed before, to investigate whether it could have real physical consequences. This later led to advances in creating a chain reaction of uranium and plutonium and its use in the Second World War.
Figure 10: Einstein at the YMCA in Moji, Japan, December 1922. Courtesy of Kenji Sugimoto. Einstein Archives Collection, Hebrew University of Jerusalem. Fair use.
Although this had never been his intention, the moment Einstein heard the tragic news, he exclaimed: _"Woe is me!"_.
After a very demanding journey through Japan (Having personally been, thirty-five years ago, to the south of Japan (specifically, to the University of Hiroshima) as a visiting scientist, it is not difficult for me to imagine Einstein's feelings and the overwhelming reception he was given. In my case, I was greeted with a spectacular firework display that ended with a thunderous crash, followed by a banquet with very select and plentiful food and drinks. The students, above all, were very happy about my visit, which gave occasion to such a great celebration, since these took place very sporadically at that time. I have been back there several times, but now everything has changed a lot. During my stay in Japan, I followed an itinerary almost identical to Einstein's and I identify deeply with what he says, with the deep impression Miyajima made on him, for example. Those are enchanted sites and places that touch your heart.), [36] on 29 December 1922, Einstein and his wife set sail for the British protectorate of Palestine, where they arrived a month later (Figure 11). He had already started planning the visit to the Jewish community in those territories in 1921, but it had not been confirmed until shortly before he left Berlin for Japan. He would therefore combine the two stays, which became three, with the subsequent visit to Spain, also planned in 1921 and scheduled as the last stage of the long journey.
In Palestine, Einstein stayed for twelve days, visiting the entire territory, the most important cities, and various agricultural settlements, as well as the main economic, social, and cultural institutions of the Jewish community. He also interviewed Arab leaders and the Christian community. In this case, Einstein's diary offers a priceless insight into the Jewish community in Palestine then. During those twelve days, Einstein visited Jerusalem, Tel-Aviv, and Haifa, traveled to the Dead Sea and Galilee, and lectured on relativity at the site that would later become the home of the Hebrew University.
Figure 11: The Einsteins at the Government House in Jerusalem, with the British High Commissioner, February 1923. Einstein Archives Collection, Hebrew University of Jerusalem. Fair use.
### Einstein in Spain
Einstein had been invited to visit our country by Esteve Terradas, a physicist like himself, working in Barcelona, and Julio Rey Pastor, a mathematician, working in Madrid. Independently, he had received an invitation to visit Zaragoza as well. Terradas had offered him 7000 pesetas--in those days, twice the annual salary of a university professor--to give lectures in Barcelona and Madrid (Figure 12).
Returning from their stay in Palestine, the ship docked at Marseilles where, as he carefully explains in his diary, Einstein had trouble checking the main part of their luggage through to Berlin or, failing that, to Zurich. After resolving the mishap, they set off by train for Barcelona, where they arrived on 22 February 1923. They had been unable to tell their hosts which train they had taken, and no one came to meet them at the station. With his wife Elsa, he headed to the modest hotel Cuatro Naciones, at the end of the Ramblas, on the left going down. Numerous anecdotes refer to Einstein's humility, and this one is among the most famous. But they did not spend a single night there: when the owner of the hotel found out who Einstein was, he immediately told him that such was no place for him and his wife and sent them to the higher-class Colon hotel, located then in Catalunya Sq., at the corner with Passeig de Gracia, where they spent the seven days of their visit. The stay at the Colon cost 692 pesetas. It was paid for, together with other expenses (including food and a bouquet) amounting to 883 pesetas, by the city council of Barcelona. The entrance to each of his conferences cost 25 pesetas, quite a high price at the time; despite that, the classrooms were always full. We leave more details like those to expert historians with whom we do not intend to compete. It is interesting to read first-hand Einstein's notes corresponding to the days of his visit [37].
**Doc. 379. Travel diary [March 1923], p. 325-326**
_"17, 18, 19 February. Indigestion from bad food. High seas and rain. 19 in the morning, Stromboli well in sight. Afternoon, 6 o'clock, Naples. Vesuvius with gray clouds cloudy sky. So cold and unpleasant that one is glad to stay on the boat. An Englishman from Australia turns out to be from Mecklenburg. News of a rail strike in France and more and more retaliation in the Ruhr, how will things go? In Toulon, friendly people. In Marseille,
Figure 12: Albert Einstein in front of the Fonda Iberica, in Espluga de Francoli, on 25 February 1923. What attracted the children’s interest was not Einstein’s charm but the magnificent automobile in which he had arrived, from Casa Elizalde, a type 29 torpedo, unmistakable (in 1922, its price was 33,000 pesetas)—public domain.
dangerous to speak German. The manager of the freight depot refuses to send our baggage to Berlin or even to Zurich._
22-28 February. _Stop in Barcelona. We are tired, but the people are friendly (Terradas, Campalans, Lana, Tirpitz's daughter). Popular songs, dances. Refectory. How beautiful it was!_ (Figure 13)
1 March _Arrival in Madrid. Departure from Barcelona, farewell. Terradas, German consul with Tirpitz's daughter, etc._ (Figure 14)
3 March _First lecture at the university._
4 March _Car ride with Kocherthaler--answer to Cabrera. I wrote the academy speech. Academy session in the afternoon chaired by the king. Magnificent speech by the president of the acad. Afterwards, tea in the society of artists. Ladies. One felt at home, but in a very Catholic atmosphere._
5th _In the morning. Honorary member of the Mathematical Society. Debate on general relativity. Lunch at Kuno's. Visit with Kuchal. A wonderful old thinker. Very sick, good conversation. Invitation to dinner in the afternoon from Mr. Vogel. He has a good heart, humorous pessimism._
6th _Excursion to Toledo hidden through many lies. One of the best days of my life. Radiant skies. Toledo is like a fairy tale. We were guided by an old enthusiast, who supposedly had written something important about [El Greco]. Streets and market, city views, the Tagus with stone bridges, stone covered hills, lovely level cathedral, synagogue, sunset on the return trip with brilliant colors. Small gardens with views near the synagogue. Greco's magnificent fresco in a small church (burial of a nobleman) is one of the most profound images I have ever seen. Wonderful day._
7th _Audience at noon with the King and Queen Mother. The latter shows that she knows about science. You realize that no one tells her what they think. The king, simple and dignified, I admire him for his ways. In the afternoon, the third university conference, a devoted audience that probably couldn't understand practically anything because the latest problems were being discussed. In the evening, a great reception at the home of the German envoy. The envoy and family are magnificent and modest people. Socializing is as heavy as ever._
8th _Honorary doctorate. Spanish speeches with associated firecrackers. Long, but with good content, that of the envoy on German-Spanish relations, in genuine German. No rhetoric. Then, in the afternoon, visit with the technical students. Speeches and nothing more than speeches, but very meaningful. Talk in the evening. Then playing music at Kuno's. A professional (director of the conservatory), [Bordas], played the violin exquisitely._
9th _Excursion to the mountain and El Escorial. Glorious day. An evening reception in the student residence with talks by Ortega and me._
10th _Prado (mainly looking at paintings by Velazquez and El Greco). Farewell visits. Lunch at the home of the German envoy. A night with Lina and the Ullmanns in a small and primitive dance venue. Fun evening._
11th _Prado (splendid masterpieces by Goya, Raphael, Fra Angelico)._
12th _Trip to Zaragoza."_
Finally, Einstein also visited Zaragoza, where he stayed from the 12th to the 14th of March and gave two lectures. The day he took the train back to Berlin, he turned 44.
Regarding Catalonia, it should be mentioned that he visited (in addition, obviously, to Barcelona) Sant Cugat del Valles [38], Terrassa, Espluga de Francoli and Poblet. In his diary, the contrast between the only three or four lines dedicated to the stay in Catalonia and the more than forty that occupy the notes corresponding to the rest of the trip is evident. It must
be said, however, that after the notes on Catalonia, he had left a blank page. Everything suggests that he intended to fill it later, which, unfortunately, he never did.
After the three-week visit, Einstein and his wife returned by train to Berlin, where they arrived on 21 March 1923, thus putting an end to their long journey. They had been out of Germany for nearly six months.
Historians, such as Thomas Glick, author of the highly recommendable reference book [39]_"The Spaniards and Einstein"_ or also Ana Romero de Pablos, co-author of the book [40]_"Einstein en Espana"_, point out in a very similar way, that Einstein's visit did not serve to Europeanize Spanish science, nor did it open up new lines of research; they agree that people were left with, at the very least, a great feeling of admiration for the genius. Regarding its impact in Barcelona, in particular, it is good to read the article by Antoni Roca Rossell, _"Albert Einstein in Barcelona,"_ in the work _"History, politics, society and Culture of the Catalan Countries"_ and other publications (all in Catalan). More information about the visit to Catalonia and much of what has been written here can be found on the websites [41; 42; 43; 44; 45; 46; 47]. There is also an interesting "Einstein trail" through Barcelona [48].
One of the main objectives of the commemoration of the centenary of Einstein's visit was to show how radically this situation has changed. In a few words, in the century that has elapsed, Spanish scientists have transitioned from being passive admirers of scientific advances [39] to becoming actors at an international level who actively participate in cutting-edge scientific projects. Reliable evidence has been presented at conferences and scheduled events and can be obtained from international databases.
Figure 13: Einstein at the event held at the Industrial School of Barcelona on 28 February 1923, where he attended the performances of the Barcelona sardanist couple and the Penya de la Dansa of the New University Student Association. Public domain.
## 5 The Important Scientific Context Around Einstein at the Time of His Visit
As it becomes clear from reading Einstein's travel diary, his mind was regularly occupied with questions having to do with physics. For example, on 9 October 1922, while on the ship that was taking him to Japan, he wrote in his diary that he was reading Ernst Kretschmer's book "Physique and Character" [49], as well as one of Henri Bergson's on relativity [50]. He explains his reflections on the matter in detail, filling more than one page with brief annotations. In particular, he says he is comparing the approaches of Riemann and Weyl to the problem of the unification of gravity with electricity, among other issues. He spoke about this subject at one of the conferences he gave, both in Barcelona and Madrid [39]. And it is quite well known that, already at that time and for several years of the rest of his life, Einstein devoted much of his time to trying to find a theory to unify gravitation and electromagnetism.
But it is not, in any case, this issue (although very important and still to be resolved) that will be described here, but rather, scientific events in a very different area, which took place in those days, around Einstein himself and his general theory of relativity (although he was not, this time, the hero of the confrontation). Those discoveries revolutionized our understanding of the Universe radically and eventually resulted in the creation of modern cosmology.
Alexander Friedmann (also sometimes spelt Alexandr Fridman) was born in 1888 in Saint Petersburg, where he remained for much of his short life. His father was a composer, and his mother was a dancer and pianist. He earned his bachelor's degree from St. Petersburg State University in 1910 and later became a professor at the city's Mining
Figure 14: Einstein inside the train, at the France station in Barcelona, before leaving for Madrid on 1 March 1923. Public domain
Institute. Friedmann had acquired a great interest in the mathematics used in Einstein's general theory of relativity. Although published a few years earlier, this theory was still not widely known in Russia due to the upheavals of the First World War and, subsequently, the bloody revolution.
Friedmann was a friend of Paul Ehrenfest; they had known each other during the latter's five-year stay in St. Petersburg. Towards the end of 1920, Friedmann wrote Ehrenfest a letter in which he said:
_"...I have been working on the axiomatics of the principle of relativity, starting from two propositions: a) uniform movement continues to be uniform for all observers; b) the speed of light is constant (the same for both a static and a moving observer). Moreover, I have obtained formulas for a Universe with only one spatial dimension, which are more general than the Lorentz transformations..."_
The Ehrenfest archive at the Lorentz Institute in Leiden (The Netherlands) also contains other letters and manuscripts that Friedmann sent to Ehrenfest, starting in early 1922. The translation of a letter he wrote to him, in Russian, in April of that year says:
_"...I am sending you a short note on the shape of a possible Universe, more general than Einstein's cylinders and De Sitter's spheres. Apart from these two cases, a world also arises whose space has a radius of curvature that varies with time. I thought this question might be of interest to you. As soon as I can, I will send you a German translation of this note. And, if you think the matter is interesting, please be so kind as to endorse me with a view to its publication in a scientific journal..."_
This paper, "K voprosu o geometrii krivykh prostranstv" ("On the Question of the Geometry of Curved Spaces"), dated 15 April 1922, does not appear in the surviving list of Friedmann's publications, which suggests that it was never published. Ehrenfest is known to have sent the manuscript--along with an (undated) letter Friedmann had written to Hermann Weyl--to the mathematician Jan Schouten, who was working in Delft. Schouten replied to Ehrenfest in a letter dated 29 June 1922, in which he criticized Friedmann's analysis (which did not prevent Friedmann and Schouten from collaborating, a few years later, on another purely mathematical subject).
In the same year, 1922, and while all this was happening, Friedmann translated his article into German. He elaborated it further and changed the title to "O krivizne prostranstva" ("On the Curvature of Space"). Now he introduced more clearly the idea of a possible curvature and expansion of space and decided to send it directly to the important journal Zeitschrift fur Physik for publication.
The article was received by the journal on 29 June 1922. Friedmann demonstrated in it that the radius of curvature of the Universe could be an increasing or periodic function of time. He commented on the results of this paper in a book he wrote later, explaining them as follows:
_"...The case of a stationary universe includes only two possibilities, which have already been considered by Einstein and De Sitter. The case of a variable universe admits, on the other hand, many possible situations. In some cases, the radius of curvature of the universe increases steadily with time. And other situations correspond to a radius of curvature that changes periodically..."_
Einstein analyzed Friedmann's paper quite quickly, as evidenced by the fact that Zeitschrift fur Physik received his reply on 18 September 1922 [51] (a few weeks before embarking on the six-month-long journey described above):
_"...As for the non-stationary universe, the results contained in the work seem suspicious to me. The solution given for this case turns out not to satisfy the field equations..."_
Friedmann learned of Einstein's criticism through his friend Yurii Krutkov, who was visiting Berlin then. And, on 6 December, Friedmann wrote a letter to Einstein responding to his objections:
_"...Considering that the possible existence of a non-stationary universe is of interest, I would like to present the calculations I have made here so that you can verify and critically evaluate them. [He details all mathematical operations]. If you find the calculations I present in this letter to be correct, please be so kind as to inform the editors of Zeitschrift fur Physik about this conclusion. Perhaps in that case, you would like to publish a correction to the statement you have made, or at least allow me to publish the calculations part of this letter..."_
However, when Friedmann's letter reached Berlin, Einstein had already embarked on the long journey that took him and his wife to Japan, Palestine, and Spain. He did not return to Berlin, as we have seen before, until March of the following year. But, even after returning, Einstein did not read Friedmann's letter for the time being (or perhaps deliberately ignored it altogether, this is unknown).
Anyway, in May 1923, Krutkov and Einstein met again in Leiden, where both had gathered for the last master class of Hendrik Lorentz, who was retiring as a professor of his own volition. They met, face to face, in the house of Ehrenfest, who was precisely the one who would succeed Lorentz in his chair. There, Krutkov could explain to Einstein the details of Friedmann's letter.
The result of the scientific discussion that subsequently took place is known through two short paragraphs from letters that Krutkov wrote to his sister in St. Petersburg a few days later. In the first, he explains:
_"...On Monday, May 7, I was with Einstein, reading Friedmann's article in Zeitschrift fur Physik in detail..."_
And finally, in the other letter, written on 18 May 1923, he states:
_"...I managed to defeat Einstein in the argument of Friedmann's work. Petrograd's honor is saved!"_
Einstein had finally admitted his error and immediately wrote a note to Zeitschrift fur Physik retracting his earlier observation:
_"...In my previous note, I criticized Friedmann's work on the curvature of space. However, a letter from Mr. Friedmann, which Mr. Krutkov handed me, has convinced me that my criticism was based on an error in my calculations. Now I consider that the results of Mr. Friedmann are correct and bring new light. It is shown that the field equations and the static solution also admit dynamic solutions (i.e., with a variable time-coordinate), with central symmetry for the spatial structure."_
The retraction note [52] was received in Zeitschrift fur Physik on 31 May 1923 (Figure 15). In any case, this did not mean, at all, that Einstein had become convinced that Friedmann's equations were of any use, that they could have anything to do with physical reality (although they appeared to be mathematically correct, at least).
Friedmann's equations, coming from Einstein's and corresponding to a homogeneous and isotropic universe, are given in terms of two expressions. The first one is derived from the 00 component of Einstein's field equations:
\[H^{2}\equiv\left(\frac{\dot{a}}{a}\right)^{2}=\frac{8\pi G\rho+\Lambda c^{2}}{3}-\frac{kc^{2}}{a^{2}}\,.\]
The second one reads:
\[3\frac{\ddot{a}}{a}=\Lambda c^{2}-4\pi G\left(\rho+\frac{3p}{c^{2}}\right)\!,\]
and comes from the first, together with the trace of Einstein's field equations (the dimension of the two equations being T\({}^{-2}\)).
In these equations, \(a\) is the scale factor; \(G\), \(\Lambda\), and \(c\) are universal constants: namely, \(G\) is the Newtonian constant, \(\Lambda\) the cosmological constant with a dimension of L\({}^{-2}\), and \(c\) is the speed of light in vacuum. Moreover, \(\rho\) and \(p\) are the volumetric mass density and pressure, respectively; \(k\), corresponding to the curvature, is constant throughout a particular solution but may vary from one solution to another.
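As an aside for the technically minded reader, the qualitative behavior of Friedmann's dynamic solutions is easy to explore numerically. The short Python sketch below is purely illustrative (it is not from Friedmann's papers; the function names and the unit choice G = c = 1 are ours): it integrates the first Friedmann equation for a flat (k = 0), matter-dominated universe with \(\Lambda = 0\) and recovers the classic expanding solution, in which the scale factor grows as a(t) ∝ t^(2/3).

```python
import math

def friedmann_rhs(a, rho0, G=1.0, Lam=0.0, k=0.0, c=1.0):
    """da/dt = a * H from the first Friedmann equation, for pressureless
    matter (rho = rho0 / a^3), in illustrative units where G = c = 1."""
    rho = rho0 / a**3
    H2 = (8 * math.pi * G * rho + Lam * c**2) / 3 - k * c**2 / a**2
    return a * math.sqrt(H2)

def evolve(a0, t0, t1, rho0, steps=100_000):
    """Forward-Euler integration of the scale factor a(t) from t0 to t1."""
    a, dt = a0, (t1 - t0) / steps
    for _ in range(steps):
        a += friedmann_rhs(a, rho0) * dt
    return a

# Flat matter-dominated case: the analytic solution is a(t) = (3t/2)^(2/3).
rho0 = 3 / (8 * math.pi)          # chosen so the Hubble rate is H = a^(-3/2)
t0, t1 = 1.0, 8.0
a_start = (1.5 * t0) ** (2 / 3)   # analytic value at t0, used as initial condition
a_end = evolve(a_start, t0, t1, rho0)
print(a_end, (1.5 * t1) ** (2 / 3))  # numerical vs analytic; they should agree closely
```

With \(\Lambda > 0\) or \(k \neq 0\) the same routine reproduces the other families of behavior Friedmann described: steadily accelerating expansion, or (for positive curvature and no cosmological term) a radius that grows, halts, and recontracts periodically.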
Figure 15: Einstein's retraction note, "Notiz zu der Arbeit von A. Friedmann 'Uber die Krummung des Raumes'", by A. Einstein in Berlin, received by Zeitschrift fur Physik on 31 May 1923.
In 1924, Friedmann published a second paper, _"Uber die Moglichkeit einer Welt mit konstanter negativer Krummung des Raumes"_ ("On the possibility of a world with constant negative space curvature"). This work completed the previous one from 1922, obtaining all possible cases for the values of the curvature of the Universe: positive, negative or null. Ten years later, Howard Robertson and Arthur Walker rigorously showed that if the Universe is homogeneous and isotropic (i.e., it satisfies the cosmological principle), the only family of Friedmann solutions that survives is the one that originates with a singularity; the curvature can still be positive, negative or zero.
To conclude, after spending years completely unnoticed, the dynamic cosmological model of general relativity, originating in Einstein's equations and completed by Friedmann, would become the only possible solution for our Universe. It can be affirmed that, in this way, the magnificent framework of general relativity finally became the theory _par excellence_ for the description of our Universe. To the particular beauty of its conception from seemingly natural principles (which we will never tire of underlining) was now added the extremely important fact that it is the only possible theory (within the framework of the postulates, as established by Einstein) (Figure 16) and that it has a unique solution: the one found by Friedmann, now commonly called the Friedmann-Lemaitre-Robertson-Walker (FLRW) solution.
A last note on Friedmann. In June 1925, he obtained the position of director of the Main Geophysical Observatory in Leningrad (the new name of his city), and, in July, he took part in a balloon flight that set an altitude record for the time of 7400 m. He died soon after, on 16 September 1925, at the age of 37, of a misdiagnosed typhoid fever, which he had contracted on his way back from his honeymoon in Crimea after eating an unwashed pear he had bought at a railway station.
## 6 Summary and Conclusions
What we have described in this article is a crucial development in the middle of what is now known as the first revolution in modern cosmology. A revolution that the author has framed in the twenty years from 1912 to 1932. It started with the momentous astronomical discoveries made by Henrietta Leavitt and Vesto Slipher, extended up to those of Edwin Hubble, and included the no less important theoretical advances of Albert Einstein, Alexander Friedmann, Willem de Sitter, and Georges Lemaitre [34]. There is a consensus that it peaked in 1929 with the publication of Hubble's results. It is too often forgotten that it was Lemaitre who, in 1927, had published Hubble's law, together with his perfectly reasoned and documented conclusions on the fact that the Universe was expanding. And
Figure 16: Albert Einstein in Princeton, in 1947. Public Domain.
this is what he told Einstein in person that same year, at the famous Fifth Solvay Conference, held in Brussels. However, Einstein did not accept his conclusions and let him know that his physical intuition was "abominable". Eventually, the theory of the expansion of the Universe, having an origin in the past, was adopted by all the main specialists and was crowned with the famous model by Einstein and De Sitter in 1932.
Notwithstanding that, the final confirmation of the Universe's expansion as a true scientific theory still had to wait for a very elaborate formulation of the Big Bang model, its definitive verification through the detection of the cosmic microwave background (CMB) radiation, and yet a major and crucial reshaping (inflation), which would only arrive fifty years later. This was already the prelude to a second revolution (1985-2005) [3]: the expansion of the Universe is accelerating. To wit, according to the most recent and accurate astronomical observations, it is very likely that our universe had an origin from (almost) nothing (e.g., from a vacuum state of a tiny quantum system including space-time and a scalar field or two) some 13.8 billion years ago and is currently in accelerated expansion. No one has been able to explain this last fact convincingly yet, which has turned into one of the main open problems of present-day cosmology. All attempts to keep Einstein's general relativity and explain the acceleration using a (possibly running) cosmological constant--that would come from feasible contributions of quantum vacuum fluctuations--have failed to date.
Anyway, Einstein's theory will have to be modified, this is for sure. We noted already that it was Einstein himself who was the first to recognize this fact, having failed to incorporate Mach's ideas properly. He even ventured to predict that some of his colleagues would soon improve his equations. More than a century later, theoreticians are still working on this issue, mainly because of the need to cope with experiments and observations on two different playgrounds, which are, in fact, closely connected. On one side, on small distances, e.g., the realm of quantum physics and beyond, up to the GUT scale or the inflationary one. And, on the other, in the realm of black-hole collisions and other extremely energetic processes in the universe, such as GRBs and others. In the absence of a quantum theory of gravity, perturbation terms in the form of powers and other functions of the curvature are being added to the original Einsteinian, second-order theory, using convincing arguments of different sorts (which we can also understand as going in the direction of trying to fulfill Mach's principle). A lot of work is being done in those directions, and our research group in Barcelona and different collaborators have issued pioneering papers on some of these subjects (see, e.g., Refs. [55; 56; 57; 58; 59; 60; 61; 62; 63; 64]).
What we have just described above constitutes, without a doubt, a pivotal episode in the history of physics, cosmology and, even further, in all human history. And, as we can appreciate--this was the purpose of the last section--a crucial act of this episode occurred in 1922-1923, around the time of Einstein's visit to our country.
Although we cannot say that his contribution to this issue was as brilliant as in many other cases (Section 2), we should appreciate that the fundamental basis for the whole discussion continues to be the field equations of his general theory of relativity, conveniently investigated further by other researchers of great insight and intuition.
Finally, it is a fact that Einstein himself could not grasp all the consequences of the exceptional theory he had created, starting from a few very basic and natural principles. It has taken more than a century and the dedication of thousands of researchers worldwide to get an extended idea of them. This shows us, palpably, that, despite the importance of the great geniuses at times may appear to be infinite, progress in knowledge is always, without exception, a collective task.
This research was funded by the Spanish State Research Agency program AEI/10.13039/501100011033, project number PID2019-104397GB-I00, by AGAUR, Catalan Government, project 2017-SGR-247, and by the program Unidad de Excelencia Maria de Maeztu CEX2020-001058-M.
**Data Availability Statement:** Not applicable.
**Acknowledgments:** This paper is based on the author's opening talk at the Fourth International Conference on Symmetry and two more talks at the Royal Academy of Sciences and Arts of Barcelona and the Institute of Space Sciences in Bellaterra. Comments from the participants in these events and very helpful observations from two manuscript referees are gratefully acknowledged.
**Conflicts of Interest:** The author declares no conflict of interest.
|
2308.05031 | High-fidelity simulation of pebble beds: Toward an improved
understanding of the wall channeling effect | Wall channeling is a phenomena of interest for Pebble Bed Reactors (PBRs)
where flow is diverted into high-porosity regions near the wall. This diversion
of flow can have a significant impact on maximum fuel temperatures and core
bypass flow. Porous media models that are currently used to model PBRs for
design scoping and transient simulation are lacking in their capabilities to
model the wall channel effect. Recent efforts at Penn State have produced an
improved porous media pressure drop equation that is more capable of modeling
the velocity variations caused by the wall channel effect in a porous media
model. Several pebble beds were divided into concentric rings of $0.05D_{peb}$,
and average flow quantities and porosities were extracted for the ring. A
correlation between the form loss coefficient and the local ring porosity was
found, allowing for the addition of a correction factor to the form loss term
of the KTA equation. The developed correlation was purely empirical, and thus a
more thorough understanding of the underlying flow phenomena is desired. This
study investigates geometric and flow features that can explain the observed
correlation between the form coefficient and the local porosity that was used
to generate the improved pressure drop equation. The solid surface area to
volume ratio $S_v$ along with the production of Turbulent Kinetic Energy (TKE)
is analyzed. A relationship between $S_v$ and the local porosity and an inverse
relationship between the negative TKE production and the local porosity were
found, pointing to the idea that inertial effects caused by different pore
geometry in each ring contribute to the variation of the form constant with the
local porosity. | David Reger, Elia Merzari, Saya Lee, Paolo Balestra, Yassin Hassan | 2023-08-09T15:59:13Z | http://arxiv.org/abs/2308.05031v1 | High-fidelity simulation of pebble beds: Toward an improved understanding of the wall channeling effect +
###### Abstract
Wall channeling is a phenomenon of interest for Pebble Bed Reactors (PBRs) where flow is diverted into high-porosity regions near the wall. This diversion of flow can have a significant impact on maximum fuel temperatures and core bypass flow. Porous media models that are currently used to model PBRs for design scoping and transient simulation are lacking in their capabilities to model the wall channel effect. Recent efforts at Penn State have produced an improved porous media pressure drop equation that is more capable of modeling the velocity variations caused by the wall channel effect in a porous media model. Several pebble beds were divided into concentric rings of \(0.05D_{peb}\), and average flow quantities and porosities were extracted for each ring. A correlation between the form loss coefficient and the local ring porosity was found, allowing for the addition of a correction factor to the form loss term of the KTA equation. The developed correlation was purely empirical, and thus a more thorough understanding of the underlying flow phenomena is desired. This study investigates geometric and flow features that can explain the observed correlation between the form coefficient and the local porosity that was used to generate the improved pressure drop equation. The solid surface area to volume ratio \(S_{v}\) along with the production of Turbulent Kinetic Energy (TKE) is analyzed. A relationship between \(S_{v}\) and the local porosity and an inverse relationship between the negative TKE production and the local porosity were found, pointing to the idea that inertial effects caused by different pore geometry in each ring contribute to the variation of the form constant with the local porosity.
Wall Channel Effect Porous Media Pebble Beds CFD
## 1 Introduction
The Pebble Bed Reactor (PBR) design has seen a resurgence in interest in recent years. PBRs are currently being developed in the United States, and China is currently constructing several. As these systems approach more widespread deployment, fast and accurate simulation tools are necessary for design scoping and the simulation of accident scenarios.
Porous media modeling is the current state-of-the-art for intermediate-fidelity simulation of packed beds. Especially in the case of randomly organized porous media like pebble beds, resolution of the complex void regions and all fluid-solid interfaces is incredibly expensive, requiring billions of gridpoints [1]. It is worth noting, however, that these pebble-resolved computations are not impossible, as recent increases in computing power have made such calculations feasible [2]. Regardless, the immense cost involved with pebble-resolved simulations makes them unrealistically expensive for design scoping and plant-level simulation. Porous media models sidestep this issue by homogenizing the porous media with spatial averaging, reducing the computational cost by several orders of magnitude. Closure models are also required to approximate the effects of small flow features on the macroscale behavior of the flow. These closure models provide estimations of drag coefficients, effective conductivities, and interphase heat transfer coefficients among
other parameters [3]. The accuracy of the closure models will directly influence the accuracy of the porous media model as a whole, and thus it is important to ensure that the models effectively represent real-world physics.
One area where current closure models are insufficient is in the near-wall region of PBRs. In this region, the presence of the wall influences the packing of the pebbles, causing them to form more orderly structures and increasing the porosity. This effect can be seen in the projection of the pebble centers in a PBR found in Figure 1. Many existing closure models have been developed to predict the average behavior of the pebble bed as a whole, leading to inaccuracies when applied to the near-wall region that differs greatly from the bed interior. Accurate prediction of the flow in the near-wall region is critical, as it can have a significant impact on bypass flow where coolant is diverted through gaps between reflector blocks. Additionally, depending on the specific PBR design, there may be a peak in the fuel power near the wall where neutrons are reflected and thermalized by the graphite reflector. **Accurate modeling of the near-wall region is therefore critical to ensure accurate prediction of fuel temperature maximums and provide confidence in predicted safety margins.** For these reasons, improving the understanding and modeling capabilities of the near-wall region has been identified by the United States Nuclear Regulatory Commission (NRC) as an issue of high importance [4].
The near-wall region of pebble beds has been a topic of research interest for many years, with many researchers looking to improve understanding of this region to enhance modeling of the near-wall porosity, pressure drop, and heat transfer. The phenomenon has been studied experimentally by Amini [5]. Their study used hot wire anemometry probes to measure the near-wall flow velocities in several differently-shaped near-wall gaps. Their results experimentally confirmed the existence of the high-velocity flow channels that form in the gaps near the wall. They then examined the different behaviors observed in the two different near-wall gap geometries. Nguyen has also experimentally studied the flow behavior in the wall region through the use of Particle Image Velocimetry [6]. They examined the cross flows between pebbles and the bypass flow in the near-wall region through the use of Proper Orthogonal Decomposition (POD). Computationally, the near-wall region has been studied by Fick and Merzari [7]. Their study employed a regular arrangement of pebbles with a confining wall. They performed a Direct Numerical Simulation (DNS) and examined the 2nd and 3rd order flow statistics near the wall. Their study began to reveal some of the defining flow characteristics present in the near-wall region. Modeling of the near-wall region has been improved through the use of porosity, pressure drop, and heat transfer equations. De Klerk developed a porosity correlation that is capable of accurately modeling the oscillatory porosity variation near the wall [8]. Reichelt was one of the first to investigate the effects of wall-channeling on the bed pressure drop. He found significant errors in the Ergun equation, one of the widely
Figure 1: (left) Instantaneous velocity field for a bed of 7,000 pebbles (\(D/d_{peb}=30\)). (right) Projection of the 7,000 pebble centers in a PBR onto an axial plane. The wall-channeling effect is visible in the organized ring of pebbles near the wall.
used pressure drop equations at the time, when applied to slender beds where the near-wall effects make up a large portion of the bed [9]. He then suggested a new correlation that accounts for the ratio between pebble diameter and bed diameter. Eisfeld and Schnitzlein [10] identified the effects that wall-channeling may have on the pressure drop by comparing experimental results to correlations available at the time. They identified the approach by Reichelt as being promising in effectively modeling the influence of the near-wall region and developed their own improved correlation based on the findings from their study. With regards to heat transfer, work by Achenbach has studied the near-wall region and developed a correlation to model the near-wall heat transfer coefficient in packed beds [11].
This work reviews the current findings by our group at Penn State with regards to near-wall flow behavior in a pebble bed reactor [12; 13]. Additional investigation into the local geometry and Turbulent Kinetic Energy (TKE) production is then presented to better explain the observations that have been made thus far. Section 2 details the methods used to study the near-wall flow, and results and analysis are then presented in Section 3.
## 2 Codes and Methods
### Introduction to NekRS
Argonne National Laboratory's spectral-element CFD code NekRS [14] was chosen as the high-fidelity code for this study. NekRS is a GPU-oriented variant of the well-established open-source code Nek5000 [15]. It demonstrates excellent scalability [16], and is capable of linking to Nek5000 to utilize its existing pre- and post-processing features.
The simulations performed in this work use the incompressible, constant-properties Navier-Stokes equations in dimensionless form:
\[\frac{\partial\vec{v_{i}}}{\partial t}+\vec{v_{i}}\cdot\nabla \vec{v_{i}}=-\nabla P+\frac{1}{Re}\nabla^{2}\vec{v_{i}} \tag{1}\] \[\nabla\cdot\vec{v_{i}}=0 \tag{2}\]
where \(v\) is the fluid velocity, \(P\) is the fluid pressure, and \(Re\) is the Reynolds number based on the pebble diameter and inlet velocity (\(\frac{\rho v_{inlet}D_{peb}}{\mu}\)). The variables are nondimensionalized according to the following scheme:
\[x^{*}=\frac{x}{D_{peb}} \tag{3}\]
\[v^{*}=\frac{v}{v_{inlet}} \tag{4}\]
\[t^{*}=\frac{t\,v_{inlet}}{D_{peb}} \tag{5}\]
\[P^{*}=\frac{P}{\rho\,{v_{inlet}}^{2}} \tag{6}\]
where \(D_{peb}\) is the pebble diameter, \(v_{inlet}\) is the inlet velocity, \(t\) is the time, and \(\rho\) is the fluid density. The simulations performed in NekRS are wall-resolved LES, where an explicit filter was used to approximate the effects of dissipation on the subgrid scales [17]. Simulations in NekRS were run with a polynomial order of 7.
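As a concrete illustration, the scaling of Eqs. (3)-(6) can be applied in a few lines of Python. All property values below are assumed for demonstration only; they are not the conditions of the NekRS runs.

```python
# Illustrative sketch of the nondimensionalization scheme in Eqs. (3)-(6).
# Every numeric value here is an assumption made for demonstration.
D_peb = 0.06    # pebble diameter [m] (assumed)
v_inlet = 1.5   # inlet velocity [m/s] (assumed)
rho = 5.36      # fluid density [kg/m^3] (assumed)
mu = 3.6e-5     # dynamic viscosity [Pa*s] (assumed)

def nondimensionalize(x, v, t, P):
    """Map dimensional (x, v, t, P) to (x*, v*, t*, P*) per Eqs. (3)-(6)."""
    return (x / D_peb,
            v / v_inlet,
            t * v_inlet / D_peb,
            P / (rho * v_inlet**2))

# Reynolds number based on pebble diameter and inlet velocity
Re = rho * v_inlet * D_peb / mu
```

With these assumed values, the pebble-diameter Reynolds number evaluates to 13,400.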
### Mesh Creation
The pebble beds created for high-fidelity simulation in this work were generated using the Discrete Element Method (DEM) in the open-source code Project Chrono [18].
The DEM simulations used to generate the beds for this work used the material properties of graphite, found in Table 1[19; 20; 21]. Additional information about the contact model used in Project Chrono can be found in [22].
The beds were generated by randomly sampling sheets of pebbles separated by \(2D_{peb}\) of axial distance. These sheets were then dropped into a cylindrical vessel to randomly pack. Several thousand pebbles were used for the packing, and then a section of pebbles was extracted from the center of the resulting bed to avoid any influence of the cylinder bottom or top.
A meshing script is then used to generate an all-hexahedral mesh for simulation with NekRS. Developed as part of the Cardinal multiphysics project [2], the script uses a novel Voronoi-cell approach. It receives the pebble center
coordinates along with the pebble and cylinder diameters as inputs and begins by generating a Voronoi cell for the void region around each pebble. The faces of each Voronoi cell are equidistant between pebbles. At the top and bottom of the bed, additional pebble centers are provided to determine the top and bottom Voronoi faces, although these pebbles are not included in the resulting mesh. The Voronoi faces are then modified to improve the resulting mesh quality by collapsing small edges or dividing very long edges. Quad elements are then generated on the faces and are projected down onto the pebble surfaces, producing an all-hexahedral mesh. Pebble-pebble and pebble-wall contacts are handled by adding a small chamfer at the point of contact. This simply inserts a small cylinder through the contact point to widen it slightly. This method has been shown to have a minimal effect on the resulting porosity and pressure drop compared to other methods, such as shrinking each pebble to avoid contacts [23]. An example of the resulting mesh is pictured in Figure 2.
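The defining property of the Voronoi decomposition described above, namely that each cell face lies equidistant between two neighboring pebble centers, can be illustrated with a toy discrete partition. This is not the actual meshing script; the pebble centers below are random stand-ins.

```python
import numpy as np

# Toy illustration of the Voronoi property used by the meshing script:
# void-space points are assigned to their nearest pebble center, and the
# boundary between two cells lies equidistant from both centers.
rng = np.random.default_rng(0)
centers = rng.uniform(0.0, 4.0, size=(20, 3))  # made-up pebble centers, D_peb = 1

def voronoi_label(points, centers):
    """Index of the nearest pebble center for each query point."""
    d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
    return d.argmin(axis=1)

# The midpoint between two centers is equidistant from both, so it lies on
# the shared Voronoi face (before any quality-driven edge modifications).
mid = 0.5 * (centers[0] + centers[1])
```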
We note that we designed the mesh to have resolution sufficient to resolve the Taylor micro-scales based on estimates from a previous work [24] and additional calculations computed for this work with RANS. We also design the mesh to resolve the boundary layer and have the first grid point at \(y^{+}<1\) and a sufficient number of points in the viscous sub-layer.
A flat-profile inlet velocity condition and a stabilized outflow condition [25] are then applied along with no-slip conditions on the pebble and cylinder walls.
### Turbulent Kinetic Energy Budgets
The Turbulent Kinetic Energy (TKE) represents the mean kinetic energy that is carried by the eddies found in turbulent flow. TKE is typically produced through shear, forcing, or friction. It is then transferred down the energy cascade, where it is eventually dissipated by viscous forces in the smallest eddies. The evolution of the TKE can be decomposed into the individual contributing mechanisms to produce the TKE equation:
\begin{table}
\begin{tabular}{|c|c|} \hline Property & Value \\ \hline Density & 2260 \(\mathrm{kg/m^{3}}\) \\ Elastic Modulus & 8 GPa \\ Poisson’s Ratio & 0.12 \\ Coefficient of Restitution & 0.6 \\ Sliding Friction Coefficient & 0.3 \\ Rolling Friction Coefficient & 0.1 \\ Simulation Timestep & \(5\times 10^{-5}\) s \\ \hline \end{tabular}
\end{table}
Table 1: Graphite material properties used in the DEM simulation. Properties were used for both sphere-sphere and sphere-wall contacts. [19; 20; 21]
Figure 2: Example of the high-fidelity model meshes used for this work showing the chamfer between pebbles.
\[\underbrace{\frac{\partial k}{\partial t}}_{\text{TKE Derivative}}+\underbrace{\overline{u_{j}}\frac{\partial k}{\partial x_{j}}}_{\text{Advection}}=-\underbrace{\frac{1}{\rho}\frac{\partial\overline{u_{i}^{\prime}p^{\prime}}}{\partial x_{i}}}_{\text{Pressure Diffusion}}-\underbrace{\frac{1}{2}\frac{\partial\overline{u_{j}^{\prime}u_{j}^{\prime}u_{i}^{\prime}}}{\partial x_{i}}}_{\text{Turbulent Transport}}+\underbrace{\nu\frac{\partial^{2}k}{\partial x_{j}\partial x_{j}}}_{\text{Molecular Transport}}-\underbrace{\overline{u_{i}^{\prime}u_{j}^{\prime}}\frac{\partial\overline{u_{i}}}{\partial x_{j}}}_{\text{Production}}-\underbrace{\nu\overline{\frac{\partial u_{i}^{\prime}}{\partial x_{j}}\frac{\partial u_{i}^{\prime}}{\partial x_{j}}}}_{\text{Dissipation}} \tag{7}\]
These terms are commonly referred to as the budgets of the TKE. Analysis of the production term for a packed bed is carried out in section 3.
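As a sketch of how the production term, \(P=-\overline{u_{i}^{\prime}u_{j}^{\prime}}\,\partial\overline{u_{i}}/\partial x_{j}\), is evaluated from averaged fields, consider a synthetic 1-D mean shear profile with an assumed Reynolds stress. The values are illustrative stand-ins, not NekRS output.

```python
import numpy as np

# Minimal sketch of evaluating the TKE production term
#   P = -<u_i' u_j'> d<u_i>/dx_j
# from averaged statistics, for a 1-D mean shear with assumed Reynolds stress.
y = np.linspace(0.0, 1.0, 101)       # wall-normal coordinate
u_mean = 2.0 * y                     # mean streamwise velocity, d<u>/dy = 2
uv = -0.1 * np.ones_like(y)          # Reynolds stress <u'v'> (assumed constant)

dudy = np.gradient(u_mean, y)        # mean shear
production = -uv * dudy              # only nonzero contraction for this profile
```

For this profile the production is uniformly positive; negative production, as discussed in Section 3, corresponds to regions where \(\overline{u^{\prime}v^{\prime}}\) and the mean shear share the same sign.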
### Validation of NekRS
Validation of NekRS's ability to reproduce the velocity and pressure drop in a packed bed has been performed with experimental data from Texas A&M University. Velocity validation was performed on a bed of 146 pebbles in work by Yildiz [24]. Additional validation has been performed for a bed of 67 pebbles; a comparison of velocity profiles between experiment and simulation can be seen in Figure 3. Validation of the pressure drop was performed by comparing experimental and simulated pressure gradients over five Reynolds numbers for a second bed of 789 pebbles. This comparison is shown in Figure 4, where it can be seen that NekRS falls within the range of experimental uncertainty at all points of comparison.
## 3 Results
### Near-Wall Form Coefficient Trends
Current work by our group has investigated the effect of the cylinder wall on the form loss coefficients [12; 13]. An understanding of this effect is critical to properly model the radial variation in the streamwise velocity of a PBR. The goal of this study was to improve the capabilities of the KTA drag correlation [27] to more accurately model the near-wall velocity profile in a porous media model. NekRS simulation was used to produce an LES flow dataset for two pebble beds of roughly 1,568 and 1,700 pebbles at aspect ratios (\(D_{bed}/D_{peb}\)) of 13 and 14. This analysis was then performed by separating the beds into concentric rings of \(0.05D_{peb}\) width. The ring-volume-average porosity, fluid velocity, and pressure drop could then be calculated for each ring. This information can then be used to calculate the form loss coefficient in each ring. An overview of the data extraction method is shown in Figure 5 with more information available in Reference [13].
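A minimal sketch of this ring-based extraction follows, with synthetic radial samples in place of the NekRS fields. The ring width of \(0.05D_{peb}\) comes from the text; everything else (sample count, the cosine stand-in field) is made up.

```python
import numpy as np

# Sketch of the ring-based extraction: radial samples are binned into
# concentric rings of width 0.05*D_peb and a field is averaged per ring.
# The study itself averaged porosity, velocity, and pressure drop per ring.
D_peb = 1.0
R_bed = 6.5 * D_peb                  # aspect ratio 13 bed (for illustration)
dr = 0.05 * D_peb
n_rings = int(round(R_bed / dr))     # 130 rings

rng = np.random.default_rng(1)
r = rng.uniform(0.0, R_bed, size=20000)      # radial sample positions
field = np.cos(r)                            # stand-in for a flow quantity

ring = np.minimum((r / dr).astype(int), n_rings - 1)
ring_avg = np.array([field[ring == i].mean() for i in range(n_rings)])
```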
The form loss terms were then plotted against the Reynolds number in each ring, and the constant in the numerator of the KTA form term was calculated for each ring. These constants were then plotted against the ring porosity, shown in Figure 6. It was determined that there is a correlation between the form constant and the ring porosity. A fourth-order polynomial was fit to the data to describe this correlation:
\[f(\epsilon)=253.9\epsilon^{4}-499.3\epsilon^{3}+364.7\epsilon^{2}-115.6 \epsilon+14.21 \tag{8}\]
Figure 3: In-Plane velocity magnitude over one sampling line for the 67-pebble validation study. The shaded gray region indicates the location of a pebble. Retrieved from [26]
Figure 4: Comparison of pressure gradients between experiment and corresponding simulation.
Figure 5: Data extraction method used to generate the improved pressure drop equation from [13].
where \(\epsilon\) is the local ring porosity. This correction term may be added to the KTA equation to produce an improved drag correlation:
\[\frac{\Delta P}{L}=\left(\frac{320}{Re_{m}}+\frac{6f(\epsilon)}{Re_{m}{}^{0.1}} \right)\left(\frac{1-\epsilon}{\epsilon^{3}}\right)\left(\frac{\rho^{2}{v_{s}} ^{2}}{D_{p}}\right)\left(\frac{1}{2\rho}\right) \tag{9}\]
It should be noted that this correlation can currently be considered valid for \(10^{2}<Re_{m}<10^{4}\) and \(0.2<\epsilon<0.9\). Additional data may help to further quantify the uncertainties of this correlation at the high and low limits of the porosity where the data is currently sparse.
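For reference, Eqs. (8) and (9) translate directly into code. The function names below are hypothetical; only the coefficients of Eq. (8) and the structure of Eq. (9) come from the text.

```python
# Sketch of the corrected KTA correlation, Eqs. (8)-(9).
# f_correction and dP_dL are hypothetical names for this illustration.
def f_correction(eps):
    """Empirical form-constant correction f(eps), Eq. (8), fit for 0.2 < eps < 0.9."""
    return (253.9 * eps**4 - 499.3 * eps**3
            + 364.7 * eps**2 - 115.6 * eps + 14.21)

def dP_dL(Re_m, eps, rho, v_s, D_p):
    """Pressure gradient from the corrected KTA equation, Eq. (9)."""
    drag = 320.0 / Re_m + 6.0 * f_correction(eps) / Re_m**0.1
    return drag * (1.0 - eps) / eps**3 * (rho**2 * v_s**2 / D_p) / (2.0 * rho)
```

Near a typical bed-average porosity the correction stays close to unity (for example, \(f(0.5)\approx 1.04\)), so the corrected equation reduces approximately to the original KTA form there and departs from it mainly in the high-porosity near-wall rings.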
After the form analysis was performed and a new correlation was determined, several knowledge gaps still persisted. Most importantly, it was not immediately clear _why_ this correlation between the region porosity and the form loss exists. Similarly, it was also unclear whether this variation is truly a correlation with porosity or rather with wall distance. The next section presents some preliminary analysis into possible explanations for the dramatic increase in the form loss constant as the ring porosity increases.
### Toward an improved understanding of the wall-channeling effect
Among previously derived equations for the pressure loss in a packed bed of spheres, the pebble surface to volume ratio \(S_{v}\) is commonly used. Carman and Kozeny [28] used this definition in their derivation of the viscous and inertial losses in a packed bed. It was also used by Ergun [29], who built off of the work of Carman and Kozeny. In these previous derivations, \(S_{v}\) of a sphere is used, which can easily be calculated as \(6/D_{p}\). The actual surface to volume ratio was calculated for concentric rings of \(0.05D_{peb}\) width for several computational pebble beds of different sizes. These values were then plotted against the porosity in each ring, shown in Figure 7. From this plot, it can be seen that \(S_{v}\) is roughly equal to \(6/D_{peb}\) in low and medium porosity regions. The average porosity of most beds falls around 0.35-0.5 depending on the bed aspect ratio, meaning that the \(6/D_{p}\) value that many correlations have assumed for \(S_{v}\) is fairly accurate when applied to an averaged bed. As the porosity increases, however, this value increases significantly. Also shown in Figure 7 is the trend of the form constant \(6f(\epsilon)\) for comparison. The trend in \(S_{v}\) is significantly more linear than that of \(f(\epsilon)\), although this increase in the surface-to-volume ratio may be a contributor to the additional dependency of the form constant on the porosity.
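A simple geometric argument suggests why the ring-wise \(S_{v}\) can climb far above \(6/D_{p}\): a full sphere has exactly \(S_{v}=6/D_{p}\), but a thin spherical cap (the sliver of a pebble that intrudes into a high-porosity ring) carries much more curved surface per unit solid volume. The cap formulas below are standard geometry, used here only as a hedged illustration and not as the actual \(S_{v}\) computation performed in this study.

```python
import math

# Surface-to-volume ratios for a full sphere and for a spherical cap,
# illustrating why thin pebble slivers inflate the ring-wise S_v.
def sphere_Sv(D):
    """Surface-to-volume ratio of a full sphere: exactly 6/D."""
    return (math.pi * D**2) / (math.pi * D**3 / 6.0)

def cap_Sv(D, h):
    """Curved-surface-to-volume ratio of a spherical cap of height h."""
    r = D / 2.0
    area = math.pi * D * h                        # curved (pebble) surface
    volume = math.pi * h**2 * (3.0 * r - h) / 3.0
    return area / volume
```

For a cap of height \(0.05D\) this ratio is roughly \(41/D\), on the order of the largest ring-wise values reported in this section.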
Pressure drop equations such as the Ergun, Carman-Kozeny, KTA, and our improved KTA equation separate the porosity effects from the viscous and inertial loss coefficients. This has previously allowed for the representation of the losses with a quantity that represents the losses per pore and a porosity-dependent quantity that represents the number of pores.
Figure 6: Form loss term constants plotted against the porosity of each respective ring. A fourth order fit is applied to the data.
The multiplication of these two terms then yields a drag coefficient. The result from Figure 7 suggests that this previous approach may not be accurate, as the loss per pore changes as a function of the pore shape (which is itself a function of the porosity). This result suggests higher losses per pore as the porosity is increased. This at first may seem counterintuitive, as one would expect a more open and uniform flow geometry to have lower pressure loss. This intuition remains correct even with the findings from Figure 7, as although the losses per pore are suggested to be higher at high porosities, the total loss coefficients will still remain lower than in a lower-porosity region, as the \(\frac{1-\epsilon}{\epsilon^{3}}\) term that represents the number of pores will be small for high-porosity regions.
#### 3.2.1 Investigation of the Turbulent Kinetic Energy production
A better understanding of the near-wall flow physics can be obtained by examining the budgets of the TKE in the near-wall region. The budget terms were calculated from a Direct Numerical Simulation (DNS) of a small bed of 67 pebbles at Re = 1,460 that was previously used for experimental velocity validation in section 2.4. Previous work examined line samples of the fluid velocity at three locations [26], one of which is shown in Figure 3. For the DNS in this work, the bed was simulated at a polynomial order of 9 for 50 convective units to converge the 3rd-order statistics. The bed is shown alongside a centerplane slice of the TKE in Figure 8. Some of the notable features are the peaks in the TKE, typically in the wake regions behind the pebbles. There is also a noticeable left-side bias in the TKE at the bed outlet. This can be attributed to the distribution of the top layer of pebbles, causing a turbulent plume in this region. Analysis of the TKE production term, as was done in previous work [7], can reveal additional information on the inertial effects throughout the bed.
Figure 9 shows the isosurfaces of highly positive and negative TKE production in the bed. The areas of negative production are particularly interesting. In many canonical cases, such as channel flow, the production term is positive and acts as a source of TKE. A complex case such as a pebble bed, however, can see this term change sign and become negative, suppressing the TKE rather than strengthening it. There are large areas of negative production on the bottom surfaces of the pebbles. Flow accelerates around the pebble in these regions, causing negative production and suppression of the TKE. Additionally, there are some elongated negative-production structures in the large void regions near the wall. The areas of high TKE production exist in the wake regions behind pebbles where the velocity gradients and covariances are high.
Further investigation into the TKE production was also performed to determine if there is any relationship to the observed form correction factor. It was theorized that differences in the amount of negative production could potentially
Figure 7: Ratio between pebble surface area and pebble volume \(S_{v}\) calculated for rings of \(0.05D_{peb}\) width versus ring porosity for several pebble beds. The form constant term with correction factor, \(6f(\epsilon)\) is also shown. All beds are nondimensionalized to \(D_{peb}=1\) with the bed aspect ratio shown in parentheses in the legend.
Figure 8: 67 pebble bed used for the DNS simulation (left) and centerplane slice of the TKE (right).
Figure 9: 3D Visualization of isosurfaces enclosing areas where the TKE production is less than or equal to -5 and greater than or equal to 30.
explain the correlation between the form constant and the porosity that was observed previously. As has been done with the other investigations, the 67-pebble bed was separated into rings of \(0.05D_{peb}\) width, and the average negative TKE production (\(\langle|P|\rangle\) for \(P<0\)) was calculated for each ring. An inverse relationship between the negative TKE production and the porosity was found that closely matches the relationship between the form constant and the porosity. Figure 10 presents this relationship, along with the correction factor \(f(\epsilon)\) for reference. It can be seen that the trends of \(1/\langle|P|\rangle\) with the porosity exhibit many similarities to the trends of \(f(\epsilon)\), as there are increases in both the high and lower porosity regions with a slight linear increase in the medium-porosity regions. This points to the idea that inertial flow effects vary greatly based on the porosity as a result of the different void geometries. These inertial effects will have an effect on the form loss coefficient, which can perhaps explain the trends described with the \(f(\epsilon)\) correction factor. Regions with high negative TKE production experience more laminarization and a lower form loss per pore. Meanwhile, areas with lower negative TKE production exhibit the opposite behavior, with less laminarization and higher form loss per pore, aligning with the observed trend in the form constant found in Figure 6. The data point nearest to the wall in Figure 10, however, still requires additional investigation, as it does not follow the trend seen with the rest of the data. The bed used to extract this data is fairly small, leading to small averaging volumes and poor averaging statistics. Although the results presented from the investigation of the negative production suggest a possible explanation for the trend in the form constant, it remains difficult to decisively confirm this explanation.
Additional data from larger beds are needed to further reinforce this claim.
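For readers reproducing the ring-binning described above, the averaging of negative TKE production can be sketched as follows. This is a minimal NumPy sketch under our own assumptions: the function name is hypothetical, and the cell-wise radii and production values are assumed to be available as flat arrays (the actual study used a CFD post-processing pipeline not shown here).

```python
import numpy as np

def ring_average_negative_production(r, P, D_peb, r_max, ring_width=0.05):
    """Average |P| over cells with P < 0, binned into concentric rings.

    r      : radial distance of each cell from the bed axis (same units as D_peb)
    P      : TKE production in each cell
    D_peb  : pebble diameter; rings are ring_width*D_peb wide
    r_max  : outer radius of the bed
    Returns (ring_centers, <|P|> for P<0 per ring); rings containing no
    negative-production cells get NaN.
    """
    edges = np.arange(0.0, r_max + ring_width * D_peb, ring_width * D_peb)
    centers = 0.5 * (edges[:-1] + edges[1:])
    avg = np.full(centers.shape, np.nan)
    neg = P < 0
    idx = np.digitize(r, edges) - 1   # ring index of each cell
    for i in range(len(centers)):
        sel = neg & (idx == i)
        if sel.any():
            avg[i] = np.abs(P[sel]).mean()
    return centers, avg
```

Plotting \(1/\langle|P|\rangle\) from this output against the local ring porosity reproduces the comparison made in Figure 10.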
## 4 Conclusion
This study presented a review of current efforts at Penn State University to gain a better understanding of the near-wall region of pebble bed reactors. Work has thus far studied the flow in a PBR by dividing the bed into multiple concentric regions and examining temporally and spatially-averaged flow characteristics in each region. Examination of the form-loss coefficients has revealed a correlation between the form loss term and the porosity of each ring region. This information has been used to generate an improved pressure drop correlation that is capable of more accurately reproducing the radial velocity profile in a PBR with a porous media code [13].
Although this improved correlation has proven to be a promising result, additional work is currently being conducted to better understand the near-wall flow phenomena that may be behind the observed correlation between the form term and the porosity. This study investigated the geometry of the concentric rings, particularly the ratio between the solid surface area and the solid volume. This ratio (denoted \(S_{v}\)) has been used as part of the derivation of many past pressure drop equations, where it has been assumed to be the theoretical value of \(6/D_{p}\). Calculation of \(S_{v}\) for several
Figure 10: Inverse of the average negative TKE production (\(1/\langle|P|\rangle\) for \(P<0\)) versus local ring porosity. The correction factor \(f(\epsilon)\) is also included on a separate axis for reference.
computational beds revealed that \(S_{v}\) actually varies with the porosity, ranging from \(3/D_{p}\) to greater than \(30/D_{p}\). This helps to explain the increase in the form constant term with the porosity; however, because the relationship between \(S_{v}\) and \(\epsilon\) is fairly linear, \(S_{v}\) likely plays a role in the increasing form constant but is not its only contributor.
The TKE production in each ring was then investigated to further examine potential flow phenomena that may contribute to the varying form constant. A relationship was found between the inverse of the negative TKE production and the porosity that closely matches the trend observed in the form constant. This suggests that the observed trend in the form constant with the porosity may be related to inertial effects that are caused by the different pore geometries at different porosities. The bed used to generate the TKE budgets was rather small, at only 67 pebbles, and thus further data for larger beds is needed to reinforce the hypothesis on the effect of the negative production.
## Acknowledgments
This material is based upon work supported under an Integrated University Program Graduate Fellowship.
This research used resources of the Oak Ridge Leadership Computing Facility at the Oak Ridge National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC05-00OR22725.
This research made use of Idaho National Laboratory computing resources which are supported by the Office of Nuclear Energy of the U.S. Department of Energy and the Nuclear Science User Facilities under Contract No. DE-AC07-05ID14517.
2304.03795 | Constraints on the Inner Regions of Lensing Galaxies from Central Images using a Recent AGN Offset Distribution | Derek Perera, Liliya L. R. Williams, Claudia Scarlata | 2023-04-07T18:00:17Z | http://arxiv.org/abs/2304.03795v1 | Constraints on the Inner Regions of Lensing Galaxies from Central Images using a Recent AGN Offset Distribution
###### Abstract
In gravitational lensing, central images in quads can serve as a powerful probe of the inner regions of lens galaxies. The presence of an offset central supermassive black hole (SMBH) has the potential to distort the time-delay surface in a way such that 3 central images form: a strongly demagnified image near the SMBH, and two less demagnified (and potentially observable) images at a central maximum and saddle point. Using a quad-lens macro-model, we simulate the constraints that could be placed on various lens galaxy parameters based on their central images' probability of detection or non-detection. Informed by a recent low-redshift distribution of off-nucleus AGN, we utilize Bayesian inference to constrain the mean SMBH off-nucleus distance and galactic core radius for a sample of 6 quads. In general, we find that a detection of the central image in any quad would favor larger SMBH off-nucleus distances and galaxy core sizes. Assuming a linear relationship between core radii and velocity dispersion \(r_{c}=b\sigma\), these results similarly imply strong constraints on \(b\), with the likely case of a central image non-detection in each quad constraining \(b\) to \(3.11^{+2.72}_{-2.26}\times 10^{-4}\) kpc km\({}^{-1}\) s. Our results show that tight constraints on lens galaxy parameters can be made regardless of a detection or non-detection of a central image. Therefore, we recommend observational searches for the central image, possibly using our suggested novel detection technique in UV filters, to formalize stronger constraints on lens galaxy parameters.
keywords: gravitational lensing: strong - galaxies: general - quasars: supermassive black holes
## 1 Introduction
Gravitational lensing theory predicts that the number of multiple images formed by non-singular mass distributions must always be odd. Following Fermat's Principle, the locations of these images correspond to stationary points on the time-delay surface. Three-image systems (known as "doubles") form visible images at a minimum and a saddle point, where the saddle-point image is usually demagnified relative to its minimum counterpart. Five-image systems (known as "quads") form 2 images at minima and 2 images at saddle points. In both cases, a demagnified central image is formed at a maximum near the center of the lens. The central image is usually demagnified beyond visibility due to steep central lens density profiles causing sharply peaked behavior of the time-delay surface. Almost all existing searches for the central image rely on optical or radio wavelengths. In the optical, the central image is drowned by the light of the lensing galaxy, while detections in the radio are limited because most QSO sources are radio quiet.
Out of about 200 known doubles, only 2 have observed central images: PMN J1632-0033, demagnified to 0.004 (by 6 mag.) compared to the brightest image (Winn et al., 2004), and PKS 1830-211, demagnified to 0.007 (by 5.4 mag.; Muller et al., 2020). Of the \(\sim\)50 known quads lensed by an isolated galaxy, no detections exist in the optical, radio, or at ALMA wavelengths (Wong et al., 2015; Tamura et al., 2015; Wong et al., 2017). The more recently discovered systems from _Gaia_ (Lemon et al., 2022) do not appear to contain central images, but that is yet to be confirmed with further observations.
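For reference, the demagnifications quoted above follow directly from the flux ratios via \(\Delta m=-2.5\log_{10}\mu_{\rm rel}\); a quick numerical check:

```python
import math

def demag_in_mag(flux_ratio):
    """Magnitude difference for an image with flux `flux_ratio`
    relative to the brightest image: Delta m = -2.5 log10(flux_ratio)."""
    return -2.5 * math.log10(flux_ratio)

print(round(demag_in_mag(0.004), 1))  # PMN J1632-0033 -> 6.0 mag
print(round(demag_in_mag(0.007), 1))  # PKS 1830-211   -> 5.4 mag
```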
Even though the central image is seldom detected, its study is of great interest because it can serve as a useful probe of the central regions of lens galaxies and Supermassive Black Hole (SMBH) properties. Most, and probably all galaxies have SMBHs at their centers (Kormendy and Richstone, 1995; Ferrarese and Merritt, 2000). The black holes are tied to two important aspects of galaxies: SMBH growth is closely linked to galaxy formation, including galaxy mergers (Di Matteo et al., 2005; Koss et al., 2018) and formation of density cores (Nasim et al., 2021), and inspiralling and mergers of binary SMBH (Begelman et al., 1980) leading to the emission of gravitational waves.
An SMBH at kpc distances from the galaxy center will slowly spiral inwards through dynamical friction from collisionless particles, namely stars and dark matter, as well as gas (Chen et al., 2022). The observationally determined distribution of galaxy host-SMBH separations will constrain these dynamical friction timescales, and test hydrodynamical cosmological simulations (Volonteri et al., 2020; Katz et al., 2020). In massive elliptical galaxies, this process will carve out a density core. To get a better physical understanding of the dynamical friction within \(\sim\)1 kpc of the galaxy center it is crucial to constrain the sizes of galaxies' density cores. Nearby (\(\lesssim\) 100 Mpc) massive galaxies with large (\(\sim\)700 pc) density cores can be resolved (e.g. Rantala et al., 2018; Thomas et al., 2016), but this is more difficult for more distant galaxies and smaller cores. Since the central lensed image is affected by the SMBH and the galaxy mass density near the center (Mao et al., 2001; Rusin et al., 2005; Mao and Witt, 2012), a detection or upper limit on its brightness can place constraints on SMBH mass, distance from center, and lens galaxy core size.
Using lens system CLASS B1030+074, where no central image was detected, Quinn et al. (2016) finds that \(\sim\)45% of galaxies should yield observable central images, assuming demagnifications \(\leq\)10 magnitudes relative to the brightest image. Hezaveh et al. (2015) propose that 10-hr ALMA observations in the \(mm\) band can detect a central image at high significance for a lens galaxy core size \(\geq\)0.2 kpc, allowing for strong constraints on central density slope, core size, and mass of the central SMBH. Recent ALMA observations have yielded upper limits on the central image flux and SMBH mass (Wong et al., 2015; Tamura et al., 2015; Wong et al., 2017).
If measured or constrained through observations, properties of SMBH, such as their mass and distance from the host galaxy center will provide invaluable clues for the understanding of the central regions of galaxies, and SMBH merger rates. The masses of SMBH in many nearby galaxies have been measured using a range of other methods (Dullo et al., 2021; Gultekin et al., 2009). However, their distances from host galaxy centers are less well known (Skipper and Browne, 2018).
A recent examination of \(z=0.3-0.8\) Active Galactic Nuclei (AGN) led to a determination of the characteristic probability density function of their offsets' upper limits from the host galaxy center (Shen et al., 2019). Their results and sample are given the name of VODKA, which we accordingly adopt. The employed methodology, known as "varstrometry", is described lucidly in Hwang et al. (2020). (This technique can be applied to single off-nucleus AGN, or generalized to dual AGN.)
Observationally, a sub-kpc off-nucleus AGN and the center of its host galaxy appear as a single-source photocenter (photometric center) for \(z>0.5\) (Hwang et al., 2020). The vast majority of AGN exhibit aperiodic photometric variability on day-year timescales of \(\gtrsim\) 0.03 mag (Sesar et al., 2007), with \(\sim\)30% varying \(\geq\)0.1 mag (Sesar et al., 2007; Rengstorf et al., 2006). For a single off-nucleus variable AGN and a constant-flux host galaxy, it is expected that the AGN variability will lead to astrometric variability of the photocenter of the AGN-galaxy system, which will be strongly correlated with the total detected flux (Hwang et al., 2020). This allows for a measurement of the distance separation between the AGN and host galaxy center with linear regression (see eq. 4 of Hwang et al., 2020). When applied to low-redshift (\(0.3<z<0.8\)) AGN and host galaxy pairs observed with _Gaia_ DR2, it is found that there are strong constraints on AGN separation. Nearly all AGN are at \(<\) 1 kpc, 90% are at \(<\) 500 pc, and 40% are at \(<\) 100 pc (Shen et al., 2019). While this result is a significant improvement over previous determinations, it is important to realize that the distribution of separations of AGN may differ from that of SMBH. All SMBH, not just AGN, affect the central regions of galaxies by reshaping their central mass distributions, and also lead to the formation of gravitational wave emitting inspirals.
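The varstrometry idea can be illustrated with a deliberately simplified one-dimensional toy (this is not eq. 4 of Hwang et al. 2020; the offset and flux values below are arbitrary): for a variable AGN at offset \(d\) and a constant host at the origin, the photocenter \(x_{\rm phot}=f_{\rm AGN}\,d/F\) is exactly linear in the inverse total flux \(1/F\), so a linear regression recovers \(d\) from the intercept.

```python
import numpy as np

rng = np.random.default_rng(1)
d, f_gal = 0.3, 10.0                           # AGN offset (mas), constant host flux
f_agn = 2.0 + 0.5 * rng.standard_normal(200)   # variable AGN flux (arbitrary units)
F = f_agn + f_gal                              # total detected flux
x_phot = f_agn * d / F                         # 1-D photocenter, host at x = 0

# x_phot = d - d*f_gal*(1/F): linear in 1/F, so the intercept recovers d
slope, intercept = np.polyfit(1.0 / F, x_phot, 1)
print(round(intercept, 3))  # -> 0.3, the injected offset
```

The real measurement adds photometric and astrometric noise, a two-dimensional photocenter, and the _Gaia_ scanning law, but the flux-correlated jitter exploited here is the same.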
Offset SMBH are of particular importance for studying and potentially detecting the central lensed image. Ideally, one wants the distribution of host galaxy-SMBH separations, but such information does not exist. Instead, we take the AGN-galaxy host separation distribution from VODKA and use it as a prior in our analysis. The present paper is the first to incorporate the distribution of AGN offsets.
Gravitational lensing theory predicts that a sufficiently offset SMBH can produce extra central images. This is displayed in Figure 2. The inclusion of an offset SMBH creates two additional stationary points (and thus two new images) in the time-delay surface: a steep maximum very near to the location of the SMBH and a saddle between the original central maximum and the offset SMBH. The formation of the two new images and their properties depend on the offset distance of the central SMBH, the azimuthal location of the SMBH in the case of non-circular lenses, and the lens galaxy core size. If the offset distance is too short, then only one image forms at the SMBH maximum. The SMBH (maximum) image is always strongly demagnified, regardless of whether the two other central images are formed. When additional central images form in the case of a large offset distance, the central image near the maximum of the lens density profile, and the saddle image are not as demagnified, and can potentially be bright enough to be observed. It is important to emphasize that if the SMBH is not offset sufficiently, then no extra central images will form.
Here we present a new modelling framework that will allow constraints to be made on various galaxy properties with future observations. To make predictions about these properties, we proceed in three sequential steps. In the first step we create the macro-model of the galaxy using the positions and time delays of the four images of its quad. In the second step, we simulate central images for each of the macro-models. We treat the SMBH as a point mass and sample its offset according to the aforementioned VODKA distribution. Lastly, we use Bayesian inference to simulate constraints on galaxy parameters for specific observation scenarios.
Our analysis is the first one to study several (6) quad lens systems. We restrict our sample to quads because they provide more constraints for the galaxy macro-model, which affects the central region through its ellipticity and the location of the source. We assume two main possible scenarios for each quad: (i) a non-detection of central images (ii) detection of at least one of the central images. This allows us to place statistical constraints on the SMBH offset (\(\Delta r\)) and core radius (\(r_{c}\)) of individual lenses.
Additionally, we want to constrain the global distribution of these parameters for our galaxies. However, the distributions of core radii and SMBH distances may depend on the galaxy's mass, or velocity dispersion \(\sigma\) (and probably other parameters), and will not be the same for all galaxies in our sample. If one wants a ballpark value of a parameter of a typical lens galaxy, one can ignore these differences, and estimate the parameter by combining the results for individual quads. We also consider a separate scenario combining detection in some quads and non-detection in others.
This analysis assumes \(r_{c}\not\propto\sigma\). Alternatively, if we assume that \(r_{c}=b\sigma\), then \(b\) is the same for all galaxies, and can be estimated using all quads combined. This analysis plan is outlined in Figure 1.
Importantly, each scenario yields independent statistical constraints on the SMBH offset distribution and central density profile based on prior distributions informed from the VODKA analysis and our lens macro-model sample. Assuming that lenses from ongoing and future surveys will be analogous to the sample we use, our analysis can be used to forecast how well lens galaxy properties can be constrained with future data.
In Section 2 we describe our sample of lensed QSOs; in Section 3 we discuss our lens macro-modelling, simulation of the central images, and statistical inference analysis; in Section 4 we present the results of our analysis; in Section 5 we describe a novel technique to detect central images; and in Section 6 we discuss the implications of our constraints.
## 2 Data
Table 1 presents our sample of gravitationally lensed QSOs. This sample was chosen to roughly span the redshift range of the VODKA population, \(0.3\!<\!z\!<\!0.8\)(Shen et al., 2019), allowing direct comparison of our results with their AGN offset distribution. Most of our systems also have time delay information, which aids in lens macro-modeling. The SMBH masses we use are found by applying the \(M-\sigma\) relation from Dullo et al. (2021), as explained in further detail in Section 3.2. Four of the lens galaxies have measured central velocity dispersions. For SDSSJ0924+0219, MacLeod et al. (2015) does not directly measure \(\sigma\), and instead estimates it with lens modelling. We assume 10% uncertainty on their result, and record that value in Table 1. MG0414+0534 and RXJ0911+0551 have no measured \(\sigma\) in the literature. Therefore, we assume a \(\sigma\) for these two systems equal to the average of the measured \(\sigma\) in the rest of the sample. From this we obtain their \(M_{\rm SMBH}\approx 8.0\times 10^{8}M_{\odot}\) from the same \(M-\sigma\) relation. For the uncertainty on this \(M_{\rm SMBH}\), we assume a factor of 10, so as to span a wide enough range for possible \(M_{\rm SMBH}\).
RXJ0911+0551 presents itself as an exception in our sample. The radial distribution of the 4 observed quad images is highly asymmetric, probably due to the presence of a nearby galaxy cluster in the field, providing external shear (Burud et al., 1998; Tortora et al., 2004). As a result, the core size
Figure 1: Summary of our statistical analysis plan. We consider two assumptions for inference: (1) \(r_{c}\not\propto\sigma\) and (2) \(r_{c}\propto\sigma\). In the first case (1), we constrain the SMBH offset and galaxy core radius for individual quads depending on a hypothetical central image non-detection (blue and Figure 6) or detection (yellow and Figure 7). The cases in the red box, and Figures 10 and 11 make an assumption that all galaxy lenses have the same SMBH offset and core radius, which is unlikely to be true. Therefore these estimates should be treated only as “back-of-envelope” for this general type of galaxies. In the second case (2), we only make an inference on the \(r_{c}-\sigma\) proportionality constant \(b\), which is assumed to be the same for all galaxies. The constraints on \(b\) from individual quads are given for each detection scenario (green and Figure 8). Combining these results for a global constraint on \(b\) is similarly given for each detection scenario (purple and Figure 9).
distribution of our lens macro-models skews to larger values beyond the range of the other 6 lenses in our sample. Therefore, we treat this lens separately from the rest of the sample of 6 and derive independent constraints from it (see Table 2 and Section A). Henceforth, we refer to "our sample" as those QSOs mentioned in Table 1 not including RXJ0911+0551.
## 3 Analysis
The goal of the analysis is to obtain constraints on the galaxy core radii and SMBH offset using observational constraints on the central image.
Our analysis can be summarized in three sequential distinct steps: (1) We constrain galaxy macro-models based on the 4 non-central images. The result of this step are 1000 macro-models per lens, each described by 16 parameters, including the QSO source \((x,y)\) position. The lens galaxy
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline QSO & z\({}_{\rm QSO}\) & z\({}_{\rm lens}\) & \(\sigma\) [km s\({}^{-1}\)] & M\({}_{\rm SMBH}\) [M\({}_{\odot}\)] & m\({}_{o}\) [AB], Filter & Flux [\(10^{-17}f_{\lambda}\)] & Time Delays & References \\ \hline HE0435-1223 & 1.693 & 0.454 & 222 \(\pm\) 15 & 3.7 \(\pm\) 1.2 \(\times 10^{8}\) & 20.33, F275W & 10.36 \(\pm\) 0.07 & \(\Delta\tau_{12},\Delta\tau_{13},\Delta\tau_{14}\) & 1, 8 \\ PG1115+080 & 1.735 & 0.311 & 287 \(\pm\) 18 & 1.3 \(\pm\) 0.4 \(\times 10^{9}\) & 18.60, F218W & 101.30 \(\pm\) 0.50 & \(\Delta\tau_{14},\Delta\tau_{[23]4},\Delta\tau_{[23]}\) & 2, 8, 10 \\ RXJ1131-1231 & 0.657 & 0.295 & 323 \(\pm\) 20 & 2.3 \(\pm\) 0.7 \(\times 10^{9}\) & 19.62, F218W & 32.96 \(\pm\) 0.29 & \(\Delta\tau_{14},\Delta\tau_{24},\Delta\tau_{23}\) & 3, 8 \\ SDSSJ0924+0219 & 1.523 & 0.393 & 215 \(\pm\) 21.5 & 3.2 \(\pm\) 1.6 \(\times 10^{8}\) & 19.41, F275W & 25.40 \(\pm\) 0.12 & — & 4 \\ WFI2033-4723 & 1.662 & 0.658 & 250\({}^{+15}_{-21}\) & 6.7 \(\pm\) 2.3 \(\times 10^{8}\) & 18.56, F467M & 18.27 \(\pm\) 0.23 & \(\Delta\tau_{12(23)},\Delta\tau_{14},\Delta\tau_{[23]4}\) & 5, 8, 11 \\ MG0414+0534 & 2.640 & 0.958 & N/A & 8.0 \(\times 10^{8\pm 1}\) & 23.58, F621M & 0.11 \(\pm\) 0.01 & — & 6, See Section \\ RXJ0911+0551 & 2.763 & 0.769 & N/A & 8.0 \(\times 10^{8\pm 1}\) & 19.45, F547M & 6.11 \(\pm\) 0.10 & \(\Delta\tau_{12(234)}\) & 7, 9, See Section \\ \hline \end{tabular}
\end{table}
Table 1: Sample of gravitationally lensed QSOs
Figure 2: Visualization of the Fermat lens potential distortion by the SMBH. In both panels, the view is restricted to the central 256 mas\({}^{2}\) of the lens galaxy such that the 4 quad images are out of view. The _left panel_ shows the resulting Fermat potential contours when a \(3\times 10^{8}M_{\odot}\) SMBH is introduced at \((\Delta{\rm RA},\Delta{\rm Dec})=(6.0,-6.0)\) mas. In this case 3 images form (for a total of 7 in the system): a central image at the central maximum, a saddle image between the central maximum and the SMBH, and a SMBH image very close to the SMBH maximum. The spacing of contours shows the steepness of the potential increase due to SMBH, indicating that the SMBH image is strongly demagnified. The _right panel_ shows the lens potential contours when the same SMBH is offset a shorter distance at \((\Delta{\rm RA},\Delta{\rm Dec})=(3.6,-3.6)\) mas. The same 3 images form, with the saddle images forming closer to the midpoint of the central and SMBH images due to the less offset SMBH. The images in this case (right panel) are all magnified relative to the images in the first case (left panel), illustrating the dependence of central image magnification \(\mu\) on SMBH offset \(\Delta r\).
core radius is derived from the 14 lens parameters. These macro-models form a prior for the next step of the analysis. (2) For each macro-model, we simulate the central, saddle, and SMBH image locations and magnifications using a large sample of SMBH offsets from the VODKA distribution and SMBH masses from the \(M-\sigma\) relation. Image properties depend primarily on the SMBH masses (\(M_{\rm SMBH}\)), offsets (\(\Delta r\)), and galaxy's core radius (\(r_{c}\)). (3) The final step is the statistical inference for the two main observational scenarios we consider: (i) non-detections of the central image, and (ii) detection of at least one central image. Additionally, when we combine these constraints for all galaxies, we also consider a third case of some detections and some non-detections. With this framework, we can write the posterior probability as:
\[\begin{split} P(\Delta r,r_{c}|D)\propto\int P(D|M_{\rm SMBH}, \Delta r,r_{c})\,P(\Delta r)\,P(r_{c})\\ \times P(M_{\rm SMBH})\,dM_{\rm SMBH}\end{split} \tag{1}\]
where \(D\) are the input image positions and time delays (where available; see Table 1).
As we explain later in this section, the 16 parameter priors going into the macro-modeling in step (1) are flat. The resulting distributions of these parameters emerging from this step are not flat anymore, as the galaxy properties have been constrained by the quad images. These distributions become the priors for step (2), from which we obtain the prior for the galaxy core radius \(P(r_{c})\). The distributions of galaxy parameters (\(\Delta r\) and \(r_{c}\)) get further constrained as a result of step (3), and the constraints are different for non-detections vs. detections. The priors on SMBH properties going into step (2) are described in more detail in Section 3.2. Step (3) further constrains these properties, which we present as the main result of the paper.
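Schematically, the marginalization over \(M_{\rm SMBH}\) in eq. (1) can be carried out by Monte Carlo on a grid of \((\Delta r, r_c)\). The sketch below assumes a user-supplied likelihood function and prior samples; it illustrates the structure of the integral, not the exact implementation used in this work.

```python
import numpy as np

def posterior_grid(like, dr_grid, rc_grid, p_dr, p_rc, m_samples):
    """Normalized posterior P(dr, rc | D) on a grid, marginalizing the
    SMBH mass by Monte Carlo over draws from its prior (eq. 1).

    like(m, dr, rc) : likelihood P(D | M_SMBH, dr, rc)
    p_dr, p_rc      : prior densities evaluated on dr_grid / rc_grid
    m_samples       : samples drawn from P(M_SMBH)
    """
    post = np.zeros((len(dr_grid), len(rc_grid)))
    for i, dr in enumerate(dr_grid):
        for j, rc in enumerate(rc_grid):
            # Monte Carlo estimate of the integral over M_SMBH
            marg = np.mean([like(m, dr, rc) for m in m_samples])
            post[i, j] = marg * p_dr[i] * p_rc[j]
    return post / post.sum()
```

For a non-detection scenario, `like` is the probability that all central images fall below the flux limit given \((M_{\rm SMBH},\Delta r,r_c)\); for a detection, it is the complementary probability.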
### Constraining Galaxy Macro-Model
We generate galaxy-scale macro-models based on the 7 lens systems we consider in this paper. Since the central images are currently not observed in any of these, and we envision that our analysis can be extended to future lens systems, our modelling need not be tailored exactly to these systems. Instead, we use these 7 as approximate examples of realistic lenses. Because of that, the many macro-models we generate per lens fit the observed lensing data (image positions and time delays, where available) only to \(\chi^{2}\leq 9\), and in the systems where satellite galaxies are detected in addition to the main lens galaxy, their positions are not fixed at the observed position. Loosening the criteria to accept macro-models with \(\chi^{2}\leq 9\) results in lens plane image rms between \(<0.005"\) and \(\sim 0.035"\).
We represent lens galaxies by a superposition of two softened power-law ellipsoid potentials, called alphapot(Keeton, 2011):
\[\Psi_{\rm gal}=b\left(s^{2}+x^{2}+\frac{y^{2}}{q^{2}}+K^{2}xy\right)^{\frac{ \alpha}{2}} \tag{2}\]
where \(b\) is the normalization, \(\alpha\) is the power-law exponent\({}^{1}\), \(s\) is the core radius, \(K\) and \(q\) determine the ellipticity and the position angle of the ellipsoid (Ghosh et al., 2020; Barrera et al., 2021). In addition to being a reasonably good representation of elliptical galaxies, it allows for analytical calculations of the deflection angles, normalized projected surface density \(\kappa\), and shear \(\gamma\) from the first and second derivatives of \(\Psi_{\rm gal}\). We do not include the SMBH at this point because it does not affect the QSO image positions, and therefore does not affect the lens galaxy macro-model.
Footnote 1: Not to be confused with the deflection angle.
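As an illustration of how deflections follow from eq. (2), the sketch below evaluates \(\nabla\Psi_{\rm gal}\) numerically. The analytic first derivatives exist, as noted above; central finite differences just keep the example short, and the function names are ours.

```python
import numpy as np

def alphapot(x, y, b, s, q, K, alpha):
    """Softened power-law ellipsoid potential of eq. (2)."""
    return b * (s**2 + x**2 + y**2 / q**2 + K**2 * x * y) ** (alpha / 2.0)

def deflection(x, y, h=1e-6, **p):
    """Deflection angle, grad(Psi), via central finite differences."""
    ax = (alphapot(x + h, y, **p) - alphapot(x - h, y, **p)) / (2 * h)
    ay = (alphapot(x, y + h, **p) - alphapot(x, y - h, **p)) / (2 * h)
    return ax, ay
```

A convenient check: for a circular isothermal case (\(\alpha=1\), \(q=1\), \(K=0\), \(s=0\)), \(\Psi=br\) and the deflection magnitude equals \(b\) everywhere.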
The two mass components were allowed to have a non-zero offset between their centers, \((x_{\rm off},y_{\rm off})\). The reason for the offset is that a single component, or two co-centered components sometimes cannot reproduce QSO images to astrometric precision or other observables of the system (Rusu et al., 2020). Offsets break the elliptical symmetry of the lens and result in lopsided galaxy mass distributions, which apparently help to model some lenses and populations of lenses (Bruderer et al., 2016; Gomez and Williams, 2018; Nightingale et al., 2019; Williams and Zegeye, 2020; Barrera et al., 2021).
The two alphapots have a total of 10 parameters, i.e., two sets of \(b\), \(\alpha\), \(s\), \(q\), and \(K\). Combined with \((x_{\rm off},y_{\rm off})\), external shear amplitude and direction, and the QSO source position the total number of macro-model parameters is 16. We use downhill simplex to find solutions. The starting ranges for the macro-model parameters are the same for all 7 lenses. The density slopes \(\alpha=1.0\pm 0.2\), core radii \(s=200\pm 199\)pc, external shear amplitude, \(\gamma=0.1\pm 0.1\), and offsets of the secondary mass component, \(x_{\rm off}=y_{\rm off}=0\pm 40\)pc. We made exceptions for 3 systems that have visible nearby satellite galaxies: WFI 2033, MG 0414, and RXJ 0911, where the second mass components were given larger initial offsets to represent the satellite. For RXJ 0911+0551 we used a larger initial external shear, \(\gamma=0.1+0.1\pm 0.1\), to account for the nearby galaxy cluster.
Downhill simplex search is free to modify these initial values, so many of the final values of all parameters were outside of the starting ranges. This is also true of the location of the satellite galaxy; we did not fix it at the observed position but allowed simplex to find different solutions for each run. Because the number of macro-model parameters exceeds the data constraints, we generate many models for each observed quad. From these we reject macro-models that have two density peaks, and those where the single density peak is not coincident with the light peak, i.e., the center of the main lens. We also restrict the ellipticity of the final macro-models to disallow very elliptical or unphysically shaped galaxies: the lens potential of each of the two mass components was restricted to an axis ratio \(\geq 0.7\). The surviving macro-models--about 1000 per lens system--sample the model space allowed by our assumptions and lensing degeneracies.
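The fit-and-filter loop can be sketched with SciPy's Nelder-Mead implementation of downhill simplex. The two-parameter "model", observations, and uncertainty below are placeholders standing in for the 16-parameter fit to image positions and time delays; only the loop structure mirrors the procedure described above.

```python
import numpy as np
from scipy.optimize import minimize

def chi2(params, obs, model, sigma):
    """Chi^2 between observed and model quantities."""
    return float(np.sum(((model(params) - obs) / sigma) ** 2))

# Toy stand-in: two parameters mapped to two "image coordinates".
model = lambda p: np.array([p[0] ** 2, p[0] + p[1]])
obs, sigma = np.array([1.0, 3.0]), 0.003

accepted = []
for start in np.random.default_rng(0).uniform(-2.0, 2.0, size=(20, 2)):
    res = minimize(chi2, start, args=(obs, model, sigma), method="Nelder-Mead")
    if res.fun <= 9.0:                     # keep macro-models with chi^2 <= 9
        accepted.append(res.x)
```

Random restarts of the simplex, as here, are what lets the search populate the degenerate model space rather than converge on a single solution.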
For each lens, the fitted macro-models have a range of central density profile slopes, and hence core radii. In general, steeper central density profiles imply smaller core radii. For each lens macro-model we calculate the galaxy density core size as the radius where the log-log density becomes steeper than \(-0.425\). While this value is somewhat arbitrary, it works well to estimate core sizes of mass distributions that do not have a core radius explicitly incorporated in their analytical form, as is the case with our two-component macro-models. Since we generate a range of macro-models for each observed lens, we also have a range of core sizes. Six of the 7 systems have core radii distributions that peak at \(\lesssim 100\) pc. RXJ 0911 is an exception: its distribution is broad with a peak at
\(\sim 400\) pc. These values are consistent with the range determined based on local ellipticals, 50-500 pc (Ferrarese et al., 2006; Hezaveh et al., 2015).
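The core-size definition above can be written down directly: find the radius where the logarithmic density slope first drops below the threshold. This is a sketch assuming the radial density profile is available as arrays; the function name is ours.

```python
import numpy as np

def core_radius(r, rho, slope_cut=-0.425):
    """Radius where the log-log density slope first becomes steeper
    (more negative) than slope_cut; returns None if it never does."""
    logr, logrho = np.log(r), np.log(rho)
    slope = np.gradient(logrho, logr)          # d ln(rho) / d ln(r)
    steep = np.where(slope < slope_cut)[0]
    return r[steep[0]] if steep.size else None
```

For a cored profile such as \(\rho\propto[1+(r/r_c)^2]^{-1}\), the returned radius scales with \(r_c\), which is why it serves as a core-size proxy for the two-component macro-models that lack an explicit core parameter.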
The 16 parameters for every one of the 1000 macro-models per system are passed to the next step of our analysis. These parameters do not have flat distributions, as they have been constrained by the quad images. Galaxy core size is calculated based on these parameters; its distribution is shown as yellow curves in the Figures 10 (for the 6 in our sample) and A3 (for RXJ0911) of this paper.
### Simulating Central Images
With the galaxy macro-models generated, the next focus is to find the locations and magnifications of the corresponding central images. Since we are most concerned with the central region, we re-scale the galaxy window to 0.3"\(\times\)0.3" about the center of the lens. We generate lensing potentials \(\Psi_{\rm gal}\) from the galaxy macro-models parameters (see Section 3.1) and SMBH parameters.
For the SMBH, we assume it to be a point mass with lens potential:
\[\Psi_{\rm SMBH}=\theta_{E}^{2}\ln\sqrt{x^{2}+y^{2}}, \tag{3}\]
where \(\theta_{E}\) is the Einstein radius of the SMBH:
\[\theta_{E}=\sqrt{\frac{4GM_{\rm SMBH}}{c^{2}}\frac{D_{ds}}{D_{s}D_{d}}}. \tag{4}\]
As mentioned earlier, the perturbation of the outer 4 images in each system is negligible, since the SMBH adds only \(\sim\)10\({}^{-3}\) of the mass enclosed within the Einstein radius of the galaxy. From this setup we simply sum the two lens potential components to obtain the total lensing potential of a particular system: \(\Psi_{\rm tot}=\Psi_{\rm gal}+\Psi_{\rm SMBH}\). We then scan the re-scaled window to find the locations and magnifications of the images.
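Equation 4 is easy to evaluate numerically; the sketch below uses placeholder masses and distances, not the values for any lens in the sample:

```python
import math

G = 6.674e-11        # m^3 kg^-1 s^-2
C = 2.998e8          # m s^-1
M_SUN = 1.989e30     # kg
MPC = 3.086e22       # m
ARCSEC = 206264.8    # arcsec per radian

def einstein_radius_arcsec(m_smbh_msun, d_d_mpc, d_s_mpc, d_ds_mpc):
    """Einstein radius of a point-mass SMBH (eq. 4), in arcseconds."""
    dist_ratio = (d_ds_mpc / (d_s_mpc * d_d_mpc)) / MPC   # D_ds/(D_s D_d), in 1/m
    theta = math.sqrt(4 * G * m_smbh_msun * M_SUN / C**2 * dist_ratio)
    return theta * ARCSEC

# placeholder example: 1e9 Msun SMBH, D_d = 1000 Mpc, D_s = 2000 Mpc, D_ds = 1200 Mpc
print(einstein_radius_arcsec(1e9, 1000, 2000, 1200))   # ~0.07 arcsec
```

For SMBH masses of \(10^{8}\)-\(10^{9}M_{\odot}\) at cosmological distances, \(\theta_{E}\) is tens of milliarcseconds, consistent with the SMBH perturbing only the innermost region of the lens.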
Within this framework, we displace the SMBH from the galaxy center according to the results from VODKA. To do this we generate a large sample of SMBH offset positions according to the probability distribution obtained by VODKA. From the right panel of Figure 1 of Shen et al. (2019), we transform the presented CDF into a PDF and fit it with a Gaussian profile to obtain the best-fit average and standard deviation of their sample. We find \(0.131\pm 0.008\) kpc and \(0.163\pm 0.007\) kpc for the average and standard deviation, respectively. In practice, this represents a truncated Gaussian, since the fitted distribution would otherwise extend into the unphysical regime of negative SMBH offsets. Using a Box-Muller transform, we use these results to generate a distribution of SMBH-galaxy center offsets. The azimuthal locations of the SMBH with respect to the galaxy center are picked randomly. It is important to note that VODKA provides the distribution of SMBH offset upper limits, and we assume that the offset positions are equal to these upper limits. This is supported by recent simulation results, which find that 60% of SMBHs in brightest cluster galaxies at \(z=0\) are offset by \(>0.1\) kpc, and at \(z=2\) about 80% are offset by \(>1\) kpc (Chut et al., 2022). While the lens galaxies in our sample are not BCGs, their stellar masses are comparable to those in that study.
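The offset sampling might be sketched as follows, using the fitted mean and width quoted above (0.131 and 0.163 kpc) and truncating at zero; the seed and sample size are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_offsets(n, mu=0.131, sigma=0.163):
    """Draw n SMBH-galaxy center offsets [kpc] from a Gaussian truncated
    at zero, plus uniformly random azimuthal angles [rad]."""
    offsets = []
    while len(offsets) < n:
        # Box-Muller transform: two uniforms -> one standard normal
        u1 = 1.0 - rng.random(n)          # in (0, 1], avoids log(0)
        u2 = rng.random(n)
        z = np.sqrt(-2.0 * np.log(u1)) * np.cos(2.0 * np.pi * u2)
        draws = mu + sigma * z
        offsets.extend(draws[draws >= 0.0])   # reject unphysical negatives
    dr = np.asarray(offsets[:n])
    phi = rng.uniform(0.0, 2.0 * np.pi, n)    # random azimuth about the center
    return dr, phi

dr, phi = sample_offsets(10000)
```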
Using the \(M-\sigma\) relation for Sersic plus Core-Sersic galaxies from Dullo et al. (2021), and the velocity dispersion \(\sigma\) (Table 1) of each QSO lens galaxy, we obtain a SMBH mass range for each lens. This range is determined by propagating the measured uncertainty in \(\sigma\) through the \(M-\sigma\) relation. For each macro-model generated, we pick the SMBH mass randomly from within this range.
Finally, for each of the 1000 macro-models per system we generate images based on the parameters described above: SMBH mass, SMBH offset distance, SMBH azimuthal position, and lens galaxy macro-model (each with a corresponding galaxy core size). The central image is always produced and is always demagnified. However, for certain SMBH masses, offsets, and macro-models, two additional images can be produced: a similarly demagnified saddle image, and a more strongly demagnified image very near the SMBH (see Figure 2). The saddle image always forms between the central and SMBH images. Using the magnification and SMBH offset of each macro-model, we can place statistical constraints on the galaxy lens core size and SMBH offset.
### Statistical Inference
In this section we model the analysis one would carry out in the likely scenario of a non-detection of the central image. We also consider the less likely, but more interesting, case of a central image detection. Deep observations, whether or not they result in a central image detection, can constrain the galaxy core size and SMBH offset.
To place constraints on the galaxy core size \(r_{c}\), and SMBH offset \(\Delta r\), we employ Bayesian inference based on the results from our analysis described in Sections 3.1 and 3.2. Our first analysis assumes that \(r_{c}\) is independent of the galaxy's measured velocity dispersion \(\sigma\). Later, when we combine all the quads in our sample for a general constraint, we assume that \(r_{c}\) is proportional to \(\sigma\), which is the simplest, yet physically plausible, relation that these two parameters can have. For each of these two analyses we consider two cases to inform our likelihood function: non-detection of all central images and detection of at least one of the central images, for example, using the technique outlined in Section 5. In the latter analysis, we consider a case where 2 systems have detections, and 4 have non-detections.
Focusing first on results for individual QSO quads with no \(r_{c}-\sigma\) relation, we assume truncated Gaussian prior probabilities on the SMBH offset \(\Delta r\) according to the VODKA distribution (Shen et al., 2019), and prior probability distributions of the galaxy core sizes \(r_{c}\) from the galaxy lens macro-models described in Section 3.1, which resemble Gaussians. For the SMBH mass \(M_{\rm SMBH}\) we assume a uniform prior \(P(M_{\rm SMBH})\) within the \(M_{\rm SMBH}\) range defined in Section 3.2. With this, we can solve for the 2D posterior probability using equation 1.
To get a posterior on \(r_{c}\) for each quad, we can marginalize \(P(\Delta r,r_{c}|D)\) over \(\Delta r\):
\[P(r_{c}|D)\propto\int_{\Delta r}P(\Delta r,r_{c}|D)d\Delta r \tag{5}\]
Marginalizing over \(r_{c}\) instead yields the posterior on \(\Delta r\). This intrinsically assumes that there is no relation between \(r_{c}\) and \(\sigma\). We apply this analysis individually to each quad to estimate their \(\Delta r\) and \(r_{c}\).
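On a grid, the marginalization in equation 5 reduces to a one-dimensional sum; the joint posterior below is a made-up Gaussian stand-in, not an actual lens result:

```python
import numpy as np

dr = np.linspace(0.0, 0.5, 251)    # SMBH offset grid [kpc]
rc = np.linspace(0.0, 0.3, 151)    # core size grid [kpc]
ddr, drc = dr[1] - dr[0], rc[1] - rc[0]
DR, RC = np.meshgrid(dr, rc, indexing="ij")

# stand-in joint posterior P(dr, rc | D): a single Gaussian blob
joint = np.exp(-0.5 * (((DR - 0.15) / 0.05) ** 2 + ((RC - 0.08) / 0.03) ** 2))
joint /= joint.sum() * ddr * drc            # normalize on the grid

p_rc = joint.sum(axis=0) * ddr              # eq. 5: integrate out the offset
p_dr = joint.sum(axis=1) * drc              # and likewise for P(dr | D)
```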
Alternatively, we can expand the above method to account for a potential correlation between velocity dispersion and
core size. As stated above, we assume that \(r_{c}\) and \(\sigma\) are proportional, therefore:
\[r_{c}=b\sigma \tag{6}\]
where \(b\) is a fit parameter we wish to extract. Under this model, the parameter \(b\) is assumed to be the same for all quads, and we do not attempt to constrain \(\Delta r\). From this, we assume priors for \(b\), \(P(b)\), with the same shape as \(P(r_{c})\). We continue to use the same priors for \(M_{\rm SMBH}\) and \(\Delta r\) as before. Therefore, we can write a separate 2D posterior:
\[\begin{split} P(\Delta r,b|D)\propto\int P(D|M_{\rm SMBH}, \Delta r,b)\,P(\Delta r)\,P(b)\\ \times P(M_{\rm SMBH})\,dM_{\rm SMBH}\end{split} \tag{7}\]
Repeating the same marginalization as in equation 5 gives individual posteriors on \(b\) for each lens. To get a global constraint on \(b\), we multiply the resulting marginal posteriors of all quads together:
\[P(b|D)\propto\prod_{\rm quads}\left(\int_{\Delta r}P(\Delta r,b|D)d\Delta r\right) \tag{8}\]
This allows us to constrain the \(r_{c}-\sigma\) relation.
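On a grid, equation 8 is an element-wise product of the per-quad marginal posteriors followed by renormalization; the three Gaussian inputs below are placeholders:

```python
import numpy as np

b = np.linspace(0.0, 15e-4, 3001)    # grid for b [kpc km^-1 s]
db = b[1] - b[0]

# placeholder per-quad marginal posteriors P(b | D)
per_quad = [np.exp(-0.5 * ((b - mu) / sig) ** 2)
            for mu, sig in [(4e-4, 2e-4), (5e-4, 3e-4), (6e-4, 2.5e-4)]]

combined = np.prod(per_quad, axis=0)   # eq. 8: multiply across quads
combined /= combined.sum() * db        # renormalize the global P(b | D)
```

A product of Gaussians peaks at the precision-weighted mean of the inputs, so the combined constraint is tighter than any individual one.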
We emphasize that the constraints for \(\Delta r\) and \(r_{c}\) are results applicable to individual lenses, thus providing a robust method to derive these properties for observed quads. For \(b\), however, the constraint is intended to be a single result for all lenses assuming that equation 6 is obeyed.
Combining the distributions for \(\Delta r\) and \(r_{c}\) presents a separate challenge. As a ballpark estimate, we can use the simplistic assumption that \(\Delta r\) and \(r_{c}\) are the same for all lenses. With this, we can derive these constraints in the following way:
\[P(\Delta r|D)\propto\int_{r_{c}}\left(\prod_{\rm quads}P(\Delta r,r_{c}|D) \right)dr_{c} \tag{9}\]
and:
\[P(r_{c}|D)\propto\int_{\Delta r}\left(\prod_{\rm quads}P(\Delta r,r_{c}|D) \right)d\Delta r \tag{10}\]
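Equations 9 and 10 can be sketched as a product of gridded per-quad 2D posteriors followed by a 1D integration; the three posteriors here are synthetic placeholders:

```python
import numpy as np

dr = np.linspace(0.0, 0.5, 251)    # shared SMBH offset grid [kpc]
rc = np.linspace(0.0, 0.3, 151)    # shared core size grid [kpc]
ddr, drc = dr[1] - dr[0], rc[1] - rc[0]
DR, RC = np.meshgrid(dr, rc, indexing="ij")

def toy_posterior(mu_dr, mu_rc):
    """Synthetic per-quad joint posterior P(dr, rc | D)."""
    return np.exp(-0.5 * (((DR - mu_dr) / 0.06) ** 2 + ((RC - mu_rc) / 0.04) ** 2))

quads = [toy_posterior(0.12, 0.06), toy_posterior(0.16, 0.08), toy_posterior(0.14, 0.10)]
product = np.prod(quads, axis=0)   # multiply the joint posteriors across quads

p_dr = product.sum(axis=1) * drc   # eq. 9, up to normalization
p_rc = product.sum(axis=0) * ddr   # eq. 10, up to normalization
p_dr /= p_dr.sum() * ddr
p_rc /= p_rc.sum() * drc
```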
For the scenarios of a central image non-detection and detection, we follow the outlined inference procedure and simply define different likelihood functions \(P(D|M_{\rm SMBH},\Delta r,r_{c})\) for each case, as explained below.
#### 3.3.1 Central Image Non-Detection
For the likely case of a non-detection, we define a limiting specific flux \(f_{\rm crit}\) below which we assume any central image, regardless of its position in the lens plane, will not be detected. While this flux limit can be varied at will, we set it to be equivalent to 10 magnitudes fainter than the brightest quad image in each lens system, corresponding to a magnification\({}^{2}\) \(\mu_{\rm crit}\) of \(10^{-4}\) relative to the brightest quad image (the same condition as used in Quinn et al., 2016). Since, by definition, the magnification \(\mu=f/f_{\rm BI}\), where \(f_{\rm BI}\) is the flux of the brightest quad image (see Table 1), the limiting flux is \(f_{\rm crit}=\mu_{\rm crit}f_{\rm BI}\). Under this assumption, we define the likelihood of non-detection \(P(D|M_{\rm SMBH},\Delta r,r_{c})\) as the integral, up to \(f_{\rm crit}\), of a Gaussian probability density centered on each predicted image flux \(f_{i}\):
Footnote 2: Magnifications can be converted into magnitudes using: \(m_{i}=-2.5\log_{10}(\mu_{i})+m_{\rm BI}\)
\[P(D|M_{\rm SMBH},\Delta r,r_{c})\propto\int_{-\infty}^{f_{\rm crit}}\exp\left(-\frac{1}{2}\left(\frac{f-f_{i}}{\sigma_{f}}\right)^{2}\right)df \tag{11}\]
The dependence of the likelihood on \(M_{\rm SMBH}\) enters through \(f_{i}\). We set the image flux uncertainty to \(\sigma_{f}=0.4f_{i}\) to allow for a generous range in the predicted flux. Furthermore, we note that the predicted image positions \(\vec{r}_{i}\) are absent from this likelihood, as this information would be unknown in the case of a central image non-detection.
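Equation 11 is a Gaussian CDF evaluated at the flux threshold; a minimal sketch (the 0.4 fractional uncertainty follows the text, the fluxes are arbitrary):

```python
import math

def nondetection_likelihood(f_image, f_crit, sigma_frac=0.4):
    """Probability that an image with predicted flux f_image scatters
    below the detection threshold f_crit (eq. 11)."""
    sigma_f = sigma_frac * f_image
    z = (f_crit - f_image) / sigma_f
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))   # Gaussian CDF

# a strongly demagnified image is almost certainly missed,
# while a bright one is almost certainly caught:
print(nondetection_likelihood(1e-6, 1e-4))   # ~1.0
print(nondetection_likelihood(1e-2, 1e-4))   # ~0.007
```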
#### 3.3.2 Central Image Detection
For the case of a detection of the central image, we define a hypothetical observed central image with specific flux \(f_{o}\) and position in the lens plane \(\vec{r}_{o}\). For each image formed for a given \(M_{\rm SMBH}\), \(\Delta r\), and \(r_{c}\), the likelihood of detection is assumed to be Gaussian relative to the hypothetical observation:
\[P(D|M_{\rm SMBH},\Delta r,r_{c})\propto\exp\left(-\frac{1}{2}\left(\frac{\left(f_{i}-f_{o}\right)^{2}}{\sigma_{f}^{2}}+\frac{(\vec{r}_{i}-\vec{r}_{o})^{2}}{\sigma_{r}^{2}}\right)\right) \tag{12}\]
Here, \(f_{i}\) and \(\vec{r}_{i}\) are the predicted image flux and position for a given \(M_{\rm SMBH}\), \(\Delta r\), and \(r_{c}\), while \(\sigma_{f}\) and \(\sigma_{r}\) are the uncertainties on \(f_{o}\) and \(\vec{r}_{o}\). This requires us to choose the location and flux of the hypothetical detection in each system's lens plane. The detection can in principle lie anywhere in the lens plane; for simplicity, we choose the predicted central image location \(\vec{r}_{o}\) to be at the center \((0,0)\) of each galaxy lens macro-model. We set \(\sigma_{r}\) based on the resolution of an image of this detection. Assuming an HST image, we adopt an astrometric precision of \(\sim\)0.03". We convert this \(\sigma_{r}\) into kpc units using the measured \(D_{d}\) for each lens. This choice of \(\sigma_{r}\) turns out to be relatively large, thereby minimizing the importance of the positional dependence of the likelihood of detection in equation 12. However, since a hypothetical detection of the central image depends more on brightness than on position\({}^{3}\), and our choice of \(\sigma_{f}\) is more restrictive, we consider this choice realistic. For \(f_{o}\) we simply choose the equivalent of 9 magnitudes fainter than the brightest quad image of each lens system. This choice is consistent with our choice of 10 magnitudes fainter for non-detections. We set \(\sigma_{f}=0.4f_{o}\), equivalent to \(\sim\)1 magnitude in uncertainty. This allows us to encompass a wide range of produced image magnitudes and to account for magnification dispersion from stellar microlensing (Dobler et al., 2007).
Footnote 3: This assumption is valid only because our detection technique (Section 5) assumes the lens galaxy is invisible in the observing filters.
Using this likelihood function in equation 1 and marginalizing over \(M_{\rm SMBH}\) gives us the posterior probability density function for \(\Delta r\) and \(r_{c}\).
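Equation 12 can be evaluated directly; the fluxes, positions, and uncertainties below are illustrative:

```python
import math

def detection_likelihood(f_model, xy_model, f_obs, xy_obs, sigma_f, sigma_r):
    """Unnormalized Gaussian likelihood that a predicted image matches the
    hypothetical observed central image (eq. 12)."""
    dx = xy_model[0] - xy_obs[0]
    dy = xy_model[1] - xy_obs[1]
    chi2 = ((f_model - f_obs) / sigma_f) ** 2 + (dx * dx + dy * dy) / sigma_r ** 2
    return math.exp(-0.5 * chi2)

# a perfect match gives likelihood 1; a 1-sigma flux offset gives exp(-1/2)
L_match = detection_likelihood(1.0, (0.0, 0.0), 1.0, (0.0, 0.0), 0.4, 0.03)
L_off = detection_likelihood(1.4, (0.0, 0.0), 1.0, (0.0, 0.0), 0.4, 0.03)
```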
## 4 Results
### Lens Macro-model Maps
In this section we present examples of maps for some individual galaxy macro-models (see Section 3.1), and properties of the corresponding central images (see Section 3.2), before obtaining statistical results in Section 4.2.
Following the modelling procedure outlined in Section 3, we obtain maps for each lens galaxy macro-model depicting the SMBH locations that allow extra central images to form. For illustrative purposes, the left panel of Figure 3 shows contours of one of these mass maps, computed for RXJ1131-1231 with a \(2.11\times 10^{9}M_{\odot}\) SMBH\({}^{4}\) displaced according to the VODKA distribution, with random azimuthal positions. The central and saddle images are presented in these maps as greenish and reddish points, respectively. The SMBH image is typically extremely demagnified, far below observability, so it is not included in the map. Similarly, to avoid cluttering the map, the SMBH locations are not shown. Our probability calculations include SMBH locations where no extra images form.
Footnote 4: As shown in Table 1, this SMBH mass is roughly near the center of the \(M_{\rm SMBH}\) range found from the \(M-\sigma\) relation.
The map shows an elliptical 'barrier' of radius \(\sim\)0.02" between the saddle-point images (reddish points) and the central images (greenish points). This barrier is the critical curve formed by the presence of the SMBH. Images closest to this barrier are the brightest generated by the model, while images further away become dimmer. The exact radius of this critical curve depends on the galaxy macro-model used. The right panel of Figure 3 shows the same image locations plotted on the same macro-model's Fermat potential \(\Psi_{\rm Fermat}=\frac{1}{2}r^{2}-\Psi_{\rm gal}\). The asymmetric Fermat potential of this galaxy macro-model favors SMBH positions in the right of the field for the formation of saddle and SMBH images, as indicated by the density of saddle images in that region. In general, the shape of the Fermat potential determines which regions of the field disfavor the formation of saddle and SMBH images, with more elliptical potentials strongly favoring SMBH locations along the semi-major axis for producing central images. The asymmetry of the mass distribution arising from quad-scale macro-modeling is important for central image properties, yet is often ignored in the literature.
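As a one-dimensional illustration of how stationary points of the Fermat potential mark image positions (a toy circular lens with an on-axis source, not one of the fitted macro-models):

```python
import numpy as np

r = np.linspace(1e-4, 0.3, 30001)   # radius [arcsec]
b = 0.05                            # toy deflection scale [arcsec]
psi_gal = b * r                     # isothermal-like circular potential
fermat = 0.5 * r**2 - psi_gal       # Fermat potential for an on-axis source

# d(fermat)/dr = r - b vanishes at r = b, the image (Einstein-ring) radius
i = np.argmin(np.abs(np.gradient(fermat, r)))
print(r[i])   # ~0.05
```

In two dimensions, an asymmetric \(\Psi_{\rm gal}\) additionally produces the saddle points discussed above, whose locations track the shape of the potential.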
For this model, the mean SMBH offset distance that produces extra central images is 0.282 \(\pm\) 0.113 kpc. In general, the production of extra central images requires SMBH offsets greater than the mean \(\Delta r\) from VODKA. This trend is important because the failure to create extra central images means that the single central image at the maximum of \(\Psi_{\rm Fermat}\) will be strongly demagnified.
Magnification statistics for this model are shown in Figure 4. Here \(M_{\rm SMBH}\) is the same for all the models, in order to showcase the magnification variability at fixed \(M_{\rm SMBH}\); our general procedure instead varies \(M_{\rm SMBH}\) randomly for each model according to the distribution outlined in Section 3.2. As with the map in the left panel, SMBH locations where no central images form are excluded. In general, SMBH locations with no central images make up \(\sim\)50% of all offset distances, meaning roughly half of all SMBH offsets will not yield observable images. The vertical dashed lines correspond to image magnitudes of \(m_{\rm AB}=30\) and 34 in the F218W HST filter. For this particular model we find that most (\(\sim\)62%) of the central images are brighter than \(m_{\rm AB}=30\). The corresponding saddle images have a much broader distribution and are generally much fainter than the central images. Combining these maps and magnification histograms for all the models of each QSO allows us to place statistical constraints on \(r_{c}\) and \(\Delta r\).
Our statistical inference method (see Section 3.3) requires a likelihood function dependent on the magnitude and position of the predicted central image. As a visualization of a predicted detection, we define a small circular region within the lens plane such that any central images falling within the region are considered to be detected with magnitudes according to their respective magnification. The left panel of Figure 5 depicts one example region for a subset of 100 HE0435-1223 galaxy macro-models, centered at \(\vec{r}_{o}=(0.0020",0.0108")\) with a radius of 0.0050". For this lens, this radius corresponds to \(\sim\)0.03 kpc. The radius of the region can be thought of as the uncertainty \(\sigma_{r}\) of a detection at \(\vec{r}_{o}\). All central images found within the region and their corresponding saddle images and SMBH locations are shown. The streak pattern is due to the fact that saddle images form on the line connecting the center of the lensing potential and the SMBH, shown here as black points. The SMBH offset distribution average is measured to be 0.187 kpc with a corresponding standard deviation of 0.017 kpc.
The right panel of Figure 5 presents the information for the same circular region shown in the left panel, but plots the central and saddle image distances from the lensing potential peak against the corresponding SMBH distance. It is clear from the figure that the central and saddle images that form closer together, and hence closer to the critical curve, are brighter than those that form further away. Additionally, there is a general positive correlation between the SMBH position and its corresponding central and saddle image distances.
### Constraints on Lens Galaxy Parameters
The previous section highlights how individual lens galaxy macro-models from our central image analysis can yield constraints on \(\Delta r\) and \(r_{c}\). This is largely the outcome of applying the analysis described in Section 3.2. The main goal of this work is to combine all models for each QSO in our sample to obtain general constraints on each lens galaxy's parameters. For this we follow the analysis procedure described in Section 3.3. For each QSO, 1000 lens galaxy macro-models were generated. For each of these, with a SMBH offset \(\Delta r\) randomly picked from the VODKA distribution and a SMBH mass randomly picked from the appropriate distribution, we determined how many central images were produced, and each image's position and magnification. This allows us to create contour probability maps in \((\Delta r,r_{c})\) space based on a predicted non-detection (see equation 11) or detection (see equation 12). We consider two outcomes in our statistical inference: (i) non-detections and (ii) detections. A summary of our constraints on lens galaxy parameter distributions for each quad is presented in Table 2. The posterior PDFs \(P(\Delta r,r_{c}|D)\) for the cases of a non-detection and detection in
each quad are shown in Figures 6 and 7. Individual marginal posterior PDFs for each lens are shown in Figure 8.
Let us first consider the most likely case of central image non-detections. Our inference method in this case simply evaluates the probability of an image being brighter than the \(f_{\rm crit}\) threshold (see equation 11). The posterior PDFs (see equation 1) for each lens in this case are shown in Figure 6. The case of non-detections prefers smaller \(\Delta r\) values than the case of detections. In general, non-detections of central images imply that the central SMBH is very likely to be well centered on the galaxy's lensing potential. As with \(\Delta r\), smaller values of \(r_{c}\) and \(b\) are strongly favored in the case of non-detections. Qualitatively, this conclusion is similar to those of previous works (e.g. Quinn et al., 2016).
Next, we consider the unlikely, but more interesting, hypothetical outcome that the central image is detected in all 6 lens systems in our sample. The resulting posterior PDFs are shown in Figure 7. In the event of a detection at the chosen \(\vec{r}_{o}\) with flux \(f_{o}\) in each lens system, \(\Delta r\) is most likely to be \(>\)0.1 kpc; the case of detections favors larger \(\Delta r\). Similarly, detections favor larger core sizes \(r_{c}\) and larger \(b\) values.
Lastly, since the \(b\) parameter is assumed to be a global parameter for all lenses, we can multiply the 2D posteriors together and marginalize according to equation 8 to get a general constraint on \(b\). We do this for three scenarios: (i) all non-detections in the sample, (ii) all detections in the sample, and (iii) a combination of non-detections and detections. For this third case, we assumed that the lenses with the two brightest observed quad images, PG1115+080 and WFI2033-4723, had hypothetical central image detections. With this assumption, we multiplied the individual non-detection posteriors for the rest of the lens systems in our sample with the detection posteriors for the two lens systems promoted to detections. Each scenario yields independent constraints on \(b\). A summary of these constraints is given in Table 3 and Figure 9. In general, central image detections favor a steeper \(r_{c}-\sigma\) relation (\(b=6.28^{+1.59}_{-1.81}\times 10^{-4}\) kpc km\({}^{-1}\) s), while the inverse is true for non-detections (\(b=3.11^{+2.72}_{-2.26}\times 10^{-4}\) kpc km\({}^{-1}\) s). The combination case
Figure 4: Histogram of relative magnifications in log space of central and saddle images for a SMBH of \(M_{\rm SMBH}\approx 2.11\times 10^{9}M_{\odot}\) for the case of RXJ1131-1231 shown in Figure 3. \(\mu\) and \(\mu_{\rm BI}\) are the magnifications of the modeled central or saddle image, and the brightest image in RXJ1131-1231, respectively. The two vertical dashed lines correspond to image magnitudes of \(m_{\rm AB}=30\) and \(34\).
Figure 3: Central image modelling for RXJ1131-1231 quad lens. We used 1576 SMBHs (not shown) randomly offset from the galaxy host center according to the distribution in Shen et al. (2019). In this figure all SMBH have \(M_{\rm SMBH}\approx 2.11\times 10^{9}M_{\odot}\). (Other maps for this system use different \(M_{\rm SMBH}\) randomly chosen from its range shown in Table 1). Offset SMBHs that produce central images (green stars), and saddle images (inverted parity; orange and red squares) are plotted on the mass density (_left panel_), and Fermat potential \(\Psi_{\rm Fermat}\) (_right panel_) of one model of RXJ1131-1231 (\(\Psi_{\rm SMBH}\) is not included so as to avoid cluttering the plot). Cases where only one central image is produced at the location of the offset SMBH are excluded from this plot, but included in subsequent calculations. The left and top colorbars indicate the magnitude of the central and saddle images, respectively. The maps have scales of 0.0035 kpc pix\({}^{-1}\). All our maps have 1250 pix arcsec\({}^{-1}\). The asymmetry in the image distribution visible in both the panels comes from modeling of the four quad images.
yields \(b=4.47^{+2.49}_{-2.26}\times 10^{-4}\) kpc km\({}^{-1}\) s, between the detection and non-detection constraints. In fact, in each of the three scenarios, the resulting distribution for \(b\) has a credible interval smaller than that of the prior, indicating that the search for central images with this analysis can yield stronger constraints regardless of outcome.
### Rough Constraints for \(\Delta r\) and \(r_{c}\)
Our results so far have shown that the non-detection or detection of a central image can constrain the lens galaxy SMBH offset and core radius. To place a general constraint on these parameters, we assumed a proportionality with \(r_{c}\) and \(\sigma\) and marginalized over \(\Delta r\) to obtain the posterior for this proportionality constant \(b\). We are able to do this since \(b\) is a parameter that is assumed to be the same for all lens galaxies. In this section, we similarly assume that all lens galaxies share the
\begin{table}
\begin{tabular}{l c c c c c c}
\hline QSO (Det./Non-det.) & \(\widehat{\Delta r}\) [kpc] & 95\% CI (\(\Delta r\)) [kpc] & \(\widehat{r_{c}}\) [kpc] & 95\% CI (\(r_{c}\)) [kpc] & \(\widehat{b}\) [\(10^{-4}\) kpc km\({}^{-1}\) s] & 95\% CI (\(b\)) [\(10^{-4}\) kpc km\({}^{-1}\) s] \\ \hline
HE0435-1223 (Non-det.) & 0.066 & \(0.015<\Delta r<0.367\) & 0.042 & \(0.010<r_{c}<0.179\) & 1.90 & \(0.459<b<8.069\) \\
HE0435-1223 (Det.) & 0.154 & \(0.081<\Delta r<0.397\) & 0.095 & \(0.042<r_{c}<0.165\) & 4.26 & \(1.90<b<7.41\) \\
PG1115+080 (Non-det.) & 0.118 & \(0.015<\Delta r<0.353\) & 0.044 & \(0.010<r_{c}<0.167\) & 1.54 & \(0.365<b<5.832\) \\
PG1115+080 (Det.) & 0.242 & \(0.081<\Delta r<0.426\) & 0.081 & \(0.025<r_{c}<0.145\) & 2.84 & \(0.886<b<5.051\) \\
RXJ1131-1231 (Non-det.) & 0.110 & \(0.022<\Delta r<0.367\) & 0.056 & \(0.013<r_{c}<0.181\) & 1.73 & \(0.407<b<5.616\) \\
RXJ1131-1231 (Det.) & 0.162 & \(0.059<\Delta r<0.419\) & 0.126 & \(0.022<r_{c}<0.206\) & 3.91 & \(0.692<b<6.374\) \\
SDSSJ0924+0219 (Non-det.) & 0.073 & \(0.015<\Delta r<0.367\) & 0.037 & \(0.008<r_{c}<0.213\) & 1.72 & \(0.349<b<9.298\) \\
SDSSJ0924+0219 (Det.) & 0.176 & \(0.088\leq\Delta r<0.404\) & 0.125 & \(0.032<r_{c}<0.174\) & 5.82 & \(1.49<b<8.10\) \\
WFI2033-4723 (Non-det.) & 0.073 & \(0.015<\Delta r<0.360\) & 0.071 & \(0.013<r_{c}<0.230\) & 2.86 & \(0.518<b<9.041\) \\
WFI2033-4723 (Det.) & 0.162 & \(0.081<\Delta r<0.404\) & 0.117 & \(0.034<r_{c}<0.209\) & 4.70 & \(1.35<b<8.37\) \\
MG0414+0534 (Non-det.) & 0.147 & \(0.022<\Delta r<0.375\) & 0.089 & \(0.019<r_{c}<0.234\) & 3.45 & \(0.728<b<9.020\) \\
MG0414+0534 (Det.) & 0.184 & \(0.037<\Delta r<0.404\) & 0.132 & \(0.033<r_{c}<0.273\) & 5.08 & \(1.27<b<10.5\) \\
RXJ0911+0551 (Non-det.) & 0.162 & \(0.022<\Delta r<0.375\) & 0.485 & \(0.084<r_{c}<0.868\) & 18.68 & \(3.23<b<33.46\) \\
RXJ0911+0551 (Det.) & 0.184 & \(0.022<\Delta r<0.389\) & 0.310 & \(0.119<r_{c}<0.641\) & 11.96 & \(4.57<b<24.72\) \\
\hline \end{tabular}
The columns list the QSO and constraint scenario (based on whether or not a central image is detected), modes for SMBH offset \(\widehat{\Delta r}\), core size \(\widehat{r_{c}}\), and \(b\) parameter \(\widehat{b}\), and 95\% credible intervals for SMBH offset, core size, and \(b\) parameter. All modes and credible intervals presented are for the posterior PDF distributions for each parameter. RXJ0911+0551 is treated independently since its core size prior distribution is skewed to large core sizes due to the presence of large external shear.
\end{table}
Table 2: Constraints on Individual Lens Galaxy Parameters
Figure 5: _Left_: SMBH (all have \(M_{\rm SMBH}\approx 3.73\times 10^{8}M_{\odot}\)) locations (black dots) corresponding to central images (green stars) and saddle images (orange squares) generated from a subset of 100 mass models for HE0435-1223, falling within the \(r=0.0050\)” region (blue circle) centered at \((0.0020\)”, \(0.0108\)”) from the mass density peak. Orange squares depict simultaneously produced saddle images. The right and top colorbars indicate the magnitude of the central and saddle images, respectively. _Right_: Central (green stars) and saddle (orange squares) image distances from the mass density peak as a function of their corresponding SMBH offset distance, for central images falling within the \(r=0.0050\)” region of the left panel. All our maps have 1250 pix arcsec\({}^{-1}\). The vertical streak pattern indicates the range of image locations for each SMBH distance given by the 100-model subset. The right and top colorbars show the magnitude of the central and saddle images, respectively.
Figure 6: Individual Joint Posterior PDFs \(P(\Delta r,r_{c}|D)\) for the hypothetical case of a non-detection of the central image in all QSOs in our sample. Marginalizing over each axis gives individual posterior PDFs for core size \(P(r_{c}|D)\) and SMBH offset distance \(P(\Delta r|D)\). The vertical red dashed lines indicate the percentage of SMBH offset by \(<\)0.1 and \(<\)0.5 kpc from VODKA (Shen et al., 2019).
Figure 7: Individual Joint Posterior PDFs \(P(\Delta r,r_{c}|D)\) for the hypothetical case of a central image detection in all QSOs in our sample. The central image detection is assumed to be located at the center of each lens galaxy macro-model with a brightness of 9 magnitudes fainter than the brightest quad image in each lens. Marginalizing over each axis gives individual posterior PDFs for core size \(P(r_{c}|D)\) and SMBH offset distance \(P(\Delta r|D)\). The vertical red dashed lines indicate the percentage of SMBH offset by \(<\)0.1 and \(<\)0.5 kpc from VODKA (Shen et al., 2019).
same SMBH offset and core radius in order to directly constrain \(\Delta r\) and \(r_{c}\). Since this is an unrealistic assumption, we treat these results as a back-of-the-envelope estimate rather than a strict constraint. In fact, since the estimated \(\Delta r\) and \(r_{c}\) for individual lens galaxies are approximately similar to one another (see Table 2), this exercise serves as a useful estimate of the rough scale of SMBH offsets and core radii that one can expect in future analyses.
With this assumption established, we follow equations 9 and 10 to derive posterior PDFs for \(\Delta r\) and \(r_{c}\), respectively.
Figure 8: Individual posterior PDFs for SMBH offset distance \(\Delta r\) (_top row_), core size \(r_{c}\) (_middle row_), and \(r_{c}-\sigma\) proportionality constant \(b\) (_bottom row_) for all QSOs in our sample. The case of a central image non-detection (_left column_) and detection (_right column_) yield independent constraints on lens galaxy parameters. The prior distributions for each parameter are shown in yellow.
We do this for the same three scenarios as for the constraint on \(b\): all non-detections, all detections, and a combination of detections and non-detections (as before, the third case assumes a detection in PG1115+080 and WFI2033-4723 but non-detections in all others). A summary of all the posterior PDF constraints for \(\Delta r\) and \(r_{c}\) is shown in Figure 10 and Table 3. Similarly, the multiplied 2D posteriors for each case are shown in Figure 11.
For all non-detections in our sample, the mode of the posterior \(P(\Delta r|D)\) is 0.132 kpc with a 95% credible interval of \(0.037<\Delta r<0.228\) kpc. This distribution is skewed to smaller \(\Delta r\), consistent with the trend that non-detections of central images imply that the central SMBH is very likely to be well centered on the galaxy's lensing potential. For the core size, the \(r_{c}\) posterior mode is 0.049 kpc. For all detections, the \(\Delta r\) posterior mode is 0.242 kpc with a 95% credible interval of \(0.154<\Delta r<0.286\) kpc; this is a stronger constraint than in the case of all non-detections. The \(r_{c}\) posterior mode for all detections is 0.107 kpc. In the combination case, the constraints on \(\Delta r\), \(r_{c}\), and \(b\) lie between those found in the cases of non-detections and detections. This intuitively
Figure 10: Combined posterior PDFs for SMBH offset distance \(\Delta r\) (_top panel_) and core size \(r_{c}\) (_bottom panel_), assuming these parameters are the same for all lens galaxies.
Figure 9: Combined posterior PDFs for \(r_{c}-\sigma\) proportionality constant \(b\). The constraints from a hypothetical central image detection in each QSO are shown in blue. The constraints from non-detections of the central image in each QSO are shown in red. Constraints from a central image detection in PG1115+080 and WFI2033-4723 but non-detections in all others are shown in black. The prior distribution is shown in yellow. In general, non-detections of the central image favor smaller \(b\) values.
| Constraint Scenario | \(\widehat{\Delta r}\) [kpc] | 95% CI (\(\Delta r\)) [kpc] | \(\widehat{r_{c}}\) [kpc] | 95% CI (\(r_{c}\)) [kpc] | \(\widehat{b}\) [\(10^{-4}\) kpc km\({}^{-1}\) s] | 95% CI (\(b\)) [\(10^{-4}\) kpc km\({}^{-1}\) s] |
| --- | --- | --- | --- | --- | --- | --- |
| All Detections | 0.242 | \(0.154<\Delta r<0.286\) | 0.107 | \(0.083<r_{c}<0.136\) | 6.28 | \(4.47<b<7.87\) |
| All Non-detections | 0.132 | \(0.037<\Delta r<0.228\) | 0.049 | \(0.019<r_{c}<0.107\) | 3.11 | \(0.849<b<5.830\) |
| Some Det./Non-det. | 0.191 | \(0.110<\Delta r<0.264\) | 0.068 | \(0.033<r_{c}<0.102\) | 4.47 | \(2.21<b<6.96\) |

The columns list the constraint scenario (based on whether or not a central image is detected), the modes for SMBH offset \(\widehat{\Delta r}\), core size \(\widehat{r_{c}}\), and \(b\) parameter \(\widehat{b}\), and the 95% credible intervals for each parameter. All modes and 95% credible intervals are for the marginalized posterior PDF distributions of each parameter, assuming the entire sample is governed by a single value of \(\Delta r\), \(r_{c}\), and \(b\). The Some Det./Non-det. scenario assumes a central image detection in PG1115+080 and WFI2033-4723 and non-detections in the rest.

Table 3: Constraints on Lens Galaxy Parameters
makes sense as overall, central image detections favor larger SMBH offsets and core sizes and steeper \(r_{c}-\sigma\), while the inverse is true for non-detections. In addition, the posterior means for each parameter are similar to the prior means, but with smaller credible intervals.
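The combination step behind these posteriors can be sketched numerically: per-lens likelihoods evaluated on a common \((\Delta r, r_{c})\) grid are multiplied together and then marginalized along each axis. The grid ranges and Gaussian toy likelihoods below are illustrative stand-ins, not the paper's actual lens models.

```python
import numpy as np

# Common (Delta r, r_c) grid in kpc -- ranges are illustrative
dr = np.linspace(0.0, 0.5, 201)
rc = np.linspace(0.0, 0.3, 201)
DR, RC = np.meshgrid(dr, rc, indexing="ij")

def toy_likelihood(mu_dr, mu_rc, sig=0.08):
    """Gaussian stand-in for one lens's likelihood on the grid."""
    return np.exp(-0.5 * (((DR - mu_dr) / sig) ** 2 + ((RC - mu_rc) / sig) ** 2))

# Multiply independent per-lens likelihoods (flat prior), as when a single
# (Delta r, r_c) pair is assumed to govern the whole sample
joint = np.ones_like(DR)
for mu in [(0.25, 0.11), (0.22, 0.10), (0.26, 0.12)]:
    joint *= toy_likelihood(*mu)
joint /= joint.sum()

# Marginalize the joint posterior along each axis
p_dr = joint.sum(axis=1)   # P(Delta r | D)
p_rc = joint.sum(axis=0)   # P(r_c | D)

mode_dr = dr[np.argmax(p_dr)]
mode_rc = rc[np.argmax(p_rc)]
```

With more lenses multiplied in, the joint posterior narrows, which is why even non-detections tighten the combined credible intervals relative to the prior.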
## 5 A new detection technique
The results of our analyses can be applied to future observations of the elusive central image in hopes of constraining various galaxy properties. Here we outline a novel observing strategy that can help improve the chances of central image detection.
In addition to being significantly demagnified, the central image is often superimposed by the lens galaxy in optical wavelengths. Therefore, to increase the likelihood of detecting the central image, it is important to increase the brightness contrast between it and the lens galaxy, that is, to maximize the central image flux and minimize the lens galaxy flux. To achieve that, the observed wavelength of the galaxy should be shorter than the rest frame wavelength of the Balmer break (3646 Å). The lens galaxies will be largely invisible in these wavelengths, because they are predominantly old, massive ellipticals, and thus have spectral energy densities that peak in red wavelengths and drop off toward the blue and ultraviolet. Furthermore, QSO sources have a blue power-law continuum, with high flux densities in bluer wavelengths. Additionally, to avoid the wavelength range absorbed by intervening intergalactic medium (Ly-\(\alpha\) forest), the observed wavelength of the QSO should be greater than the rest frame wavelength of \(\sim 1200\) Å, which means the source QSOs should have redshifts above \(\sim 1.5\)(Francis et al., 1991; Hewett and Wild, 2010). In practice, this only becomes important for sources at redshifts greater than \(\sim\)2. This implies that the ideal scenario for which this technique can be utilized is the highest possible redshift for the lens galaxy for any given source redshift.
In order to accomplish this, we suggest observing in the bluest wavelengths achievable with the UV filters on the Hubble Space Telescope (HST). In blue filters, the contrast of the lens galaxy and central image can be increased to the point where the central image can become detectable.
In addition to brightness considerations, the positioning of the central image is better constrained for compact lens sources, which is another reason why QSOs are ideal sources.
Figure 11: Posterior PDFs \(P(\Delta r,r_{c}|D)\) for the hypothetical case of central image non-detections in all QSOs in our sample (_top panel_), central image detections in all QSOs in our sample (_middle panel_), and a central image detection in PG1115+080 and WFI2033-4723, and non-detections in all other QSOs in our sample (_bottom panel_). For detections, the central image detection is assumed to be located at the center of each lens galaxy macro-model with a brightness of 9 magnitudes fainter than the brightest quad image in each lens. Marginalizing over each axis gives individual posterior PDFs for core size \(P(r_{c}|D)\) and SMBH offset distance \(P(\Delta r|D)\), assuming \(r_{c}\) and \(\Delta r\) are the same for all lens galaxies. The vertical red dashed lines indicate the percentage of SMBH offset by \(<\)0.1 and \(<\)0.5 kpc from VODKA (Shen et al., 2019).

As a preliminary illustration of this technique, we consider the HST UV filter F275W and optical filter F555W, which have effective wavelengths of 2713.86 Å and 5326.96 Å, respectively. Observing the gravitationally lensed QSO quad HE0435-1223 (properties presented in Table 1) in F555W corresponds to rest frame QSO and lens emission at \(\sim 1980\) Å and \(\sim 3664\) Å, respectively. In this filter, the lens emission is longward of the Balmer break limit, so the lens galaxy would obscure the central image. However, observing in F275W would correspond to rest frame emission at \(\sim 1008\) Å and \(\sim 1866\) Å for the QSO and lens, respectively. This wavelength corresponds to a regime where the QSO continuum flux is large. Similarly, the lens rest frame is in the regime that should have minimal flux from the lens galaxy. Therefore, our central image detection technique can be potentially viable for this system, as illustrated in Figure 12.
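The filter arithmetic above can be checked with a few lines. The redshifts used here are an assumption inferred from the worked numbers in the text for HE0435-1223 (Table 1 is not reproduced in this excerpt):

```python
# Rest-frame wavelength conversion: lambda_rest = lambda_obs / (1 + z)
def rest_frame(lam_obs_angstrom, z):
    return lam_obs_angstrom / (1.0 + z)

# Redshifts inferred from the worked numbers in the text (assumption)
Z_LENS, Z_QSO = 0.455, 1.69
BALMER_BREAK = 3646.0  # Angstrom

# HST filter effective wavelengths quoted in the text, in Angstrom
filters = {"F555W": 5326.96, "F275W": 2713.86}

results = {
    name: {"lens": rest_frame(lam, Z_LENS), "qso": rest_frame(lam, Z_QSO)}
    for name, lam in filters.items()
}

# F555W probes the lens longward of the Balmer break (lens bright, central
# image obscured); F275W probes it far shortward (lens faint, better contrast)
lens_bright_in_f555w = results["F555W"]["lens"] > BALMER_BREAK
lens_faint_in_f275w = results["F275W"]["lens"] < BALMER_BREAK
```

The same two-line check applies to any candidate lens: the strategy works whenever the lens rest-frame wavelength in the chosen UV filter falls well below the Balmer break.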
## 6 Discussion and Conclusions
Using a sample of 7 quad lenses, we modelled the central image and placed constraints on lens galaxy parameters for various scenarios of central image detection. We estimated 3 lens galaxy parameters: \(b\) (the \(r_{c}-\sigma\) proportionality constant), SMBH offset \(\Delta r\), and galaxy core radius \(r_{c}\). Constraints on these parameters for individual lenses are presented in Table 2.
Here we concentrate on the combined constraints for all lenses, roughly assuming that \(\Delta r\) and \(r_{c}\) are the same for all galaxies, while \(b\) is common to all galaxies by definition. Our main results and constraints are presented in Table 3, and are as follows:
* All the cases we considered are hypothetical. However, the one that comes closest to the current observational constraints is the case with no detections of the central image in any of the lenses (red curves in Figure 10). For these, the SMBH offset from the center of the galaxy host is \(132^{+96}_{-95}\) pc, the galaxy core radius is \(49^{+58}_{-30}\) pc, and the constant of proportionality \(b\) (eq. 6), relating the core radius and the galaxy line-of-sight velocity dispersion is \(3.11^{+2.72}_{-2.26}\times 10^{-4}\) kpc km\({}^{-1}\) s. Each of the constraint distributions yields much tighter constraints for each parameter than their respective priors, and favors smaller values.
* The more exciting, but unlikely, case of a central image detection in each lens also yields tight constraints on lens galaxy parameters (blue curves in Figure 10). For these, the SMBH offset from the galaxy host center is \(242^{+44}_{-48}\) pc, the galaxy core radius is \(107^{+29}_{-24}\) pc, and the average constant of proportionality \(b\), relating the core radius and the galaxy line-of-sight velocity dispersion is \(6.28^{+1.59}_{-1.81}\times 10^{-4}\) kpc km\({}^{-1}\)s. A central image detection will therefore imply that the galaxy core sizes and SMBH offsets are larger than the average of their respective priors.
* The case of a combination of detections and non-detections in our sample yields independent constraints on lens galaxy parameters. With our assumed hypothetical detection in PG1115+080 and WFI2033-4723, justified by their bright quad images, the average SMBH offset, galaxy core radius, and constant of proportionality relating the core radius and the galaxy line-of-sight velocity dispersion are \(191^{+73}_{-81}\) pc, \(68^{+34}_{-35}\) pc, and \(4.47^{+2.6}_{-2.26}\times 10^{-4}\) kpc km\({}^{-1}\)s, respectively. In general, even just a single detection of the central image will strongly shift the constraints toward larger SMBH offset, core radius, and \(b\).
* Regardless of whether or not the central image is detected in a lens, tighter constraints can be placed on SMBH offset, galaxy core radius, and the \(r_{c}-\sigma\) proportionality constant. A notable exception to this result is exemplified in RXJ0911+0551 (see Section A). Due to the presence of a nearby galaxy cluster, which leads to a wide radial range of its image distribution and requires a large external shear, RXJ0911+0551 does not constrain any lens galaxy parameter regardless of detection. From this, it is likely that lenses that require large external shear due to the presence of nearby external mass will not be useful in constraining lens galaxy parameters in future studies.
* Our analysis demonstrates that the quad-scale lens macro-model is important for deriving properties of the central images. This is especially true in the case of very asymmetric systems, like RXJ0911+0551.
* While our results are not restricted to any particular observational filter, we recommend a novel observing strategy utilizing UV wavelength filters (see Section 5). Given that the lens galaxy peaks in red wavelengths and the QSO source peaks in blue wavelengths, UV filters have a potential use in searches for central images since they will obscure the lens galaxy and possibly allow for easier detection of the elusive demagnified blue central image.
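The quoted constraints above (e.g., \(132^{+96}_{-95}\) pc) are modes with asymmetric 95% credible intervals. Extracting such a summary from posterior samples can be sketched as follows; the gamma draws are purely illustrative stand-ins, not the paper's actual posterior chains:

```python
import numpy as np

def mode_and_ci(samples, level=0.95, bins=100):
    """Histogram mode with an equal-tailed credible interval."""
    hist, edges = np.histogram(samples, bins=bins)
    i = int(np.argmax(hist))
    mode = 0.5 * (edges[i] + edges[i + 1])
    lo, hi = np.percentile(samples, [50 * (1 - level), 50 * (1 + level)])
    return mode, lo, hi

# Illustrative posterior draws for an SMBH offset in kpc (not the real chains)
rng = np.random.default_rng(42)
samples = rng.gamma(shape=2.0, scale=0.07, size=100_000)

mode, lo, hi = mode_and_ci(samples)
plus, minus = hi - mode, mode - lo   # quoted as mode^{+plus}_{-minus}
coverage = np.mean((samples >= lo) & (samples <= hi))
```

For a skewed posterior like this one, the mode sits well below the interval midpoint, which is why the published plus/minus errors are asymmetric.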
In this paper we considered how the fluxes of central QSO images are affected by the presence of an SMBH, which can be offset from the center of its galaxy host. Because the central galaxy region is dominated by stars, the central image flux can also be affected by stellar microlensing (Dobler et al., 2007), which would broaden the predicted flux distribution by \(\sim\)1 mag. Future studies should continue to account for this in central image modelling.
While our rough calculations for a single value of \(\Delta r\) and \(r_{c}\) for all galaxies (see Section 4.3) are useful in understanding the general scale of their underlying distribution, a more insightful result would be direct constraints on the distributions of \(\Delta r\) and \(r_{c}\). This would require a more complicated theoretical and statistical model on top of our outlined framework, and would be the subject of future study.
Future observations with the Vera C. Rubin Observatory, Gaia, and Pan-Starrs will discover many new lensed systems (Marshall et al., 2010; Lemon et al., 2019; Canameras et al., 2020) allowing the extension of our analysis to a larger sample, giving tighter constraints, and increasing the probability of detecting a central image. Similarly, combining our constraints with recent results from gravitational lensing of source AGN at \(z>1\)(Millon et al., 2022; Spingola et al., 2022) can extend our constraints to the inner regions of higher redshift galaxies. Furthermore, our results can be compared with simulations (Tremmel et al., 2018; Volonteri et al., 2020; Katz et al., 2020).
In this work we have shown that searches for the central image in quads can yield tight constraints on lens galaxy parameters regardless of detection or non-detection. Therefore, we recommend commencing new observation programs for the central image to formalize new constraints for SMBH offset, galaxy core radius, and \(r_{c}-\sigma\) proportionality constant.
## Acknowledgements
The authors would like to thank Lindsey Gordon, Galin Jones, John Hamilton Miller Jr., and Sarah Taft for useful suggestions and discussions.
## Data Availability
Data generated from this article will be shared upon reasonable request to the corresponding author. |
2303.08667 | ZTBus: A Large Dataset of Time-Resolved City Bus Driving Missions | This paper presents the Zurich Transit Bus (ZTBus) dataset, which consists of
data recorded during driving missions of electric city buses in Zurich,
Switzerland. The data was collected over several years on two trolley buses as
part of multiple research projects. It involves more than a thousand missions
across all seasons, each mission usually covering a full day of operation. The
ZTBus dataset contains detailed information on the vehicle's power demand,
propulsion system, odometry, global position, ambient temperature, door
openings, number of passengers, dispatch patterns within the public
transportation network, etc. All signals are synchronized in time and include
an absolute timestamp in tabular form. The dataset can be used as a foundation
for a variety of studies and analyses. For example, the data can serve as a
basis for simulations to estimate the performance of different public transit
vehicle types, or to evaluate and optimize control strategies of hybrid
electric vehicles. Furthermore, numerous influencing factors on vehicle
operation, such as traffic, passenger volume, etc., can be analyzed in detail. | Fabio Widmer, Andreas Ritter, Christopher H. Onder | 2023-03-15T14:54:23Z | http://arxiv.org/abs/2303.08667v4 | # ZTBus: A Dataset of 1000+ Complete,
###### Abstract
This paper presents the Zurich Transit Bus (ZTBus) dataset, which consists of recorded driving missions of electric city buses in Zurich, Switzerland. The data was collected over several years on two trolley buses as part of multiple research projects. It includes more than a thousand missions throughout all seasons, each usually covering a full day of real operation. The ZTBus dataset contains detailed information on the vehicle's power demand, propulsion system, odometry, global position, ambient temperature, door openings, number of passengers, dispatch patterns within the public transportation network, etc. All signals are synchronized in time and are provided with an absolute timestamp in tabular form. The dataset can be used as a foundation for a variety of studies and analyses. For example, the data can serve as a basis for simulations to estimate the performance of different public transit vehicle types, or to evaluate and optimize control strategies of hybrid electric vehicles. Furthermore, numerous influencing factors on vehicle operation, such as traffic, passenger volume, etc., can be analyzed in detail.
## Background & Summary
Public transportation is an effective solution for reducing traffic in growing cities. It significantly reduces the number of vehicles on the road, resulting in less congestion, shorter travel times, minimal ecological footprint, and reduced overall energy consumption. The need for such efficient urban transportation systems is likely to increase, as an estimated two-thirds of the world's population is expected to live in cities by 2050 [1].
In this context, detailed driving and operational data are of great value to assist cities and transportation operators in making informed decisions on the vehicles' ideal propulsion technology and charging strategy for the respective public transportation network. Furthermore, during the development and tuning of intelligent vehicle state estimation algorithms or energy management strategies, time-resolved data of the traction system is necessary for both vehicle manufacturers and the research community. While there are publicly available datasets describing urban traffic conditions and human mobility [2, 3, 4], time-series data of personal cars [5, 6, 7] or taxis [8], publicly available time-series data of urban transit buses is lacking.
The goal of this publication is to fill this gap by presenting the ZTBus dataset, which is composed of data recorded throughout the course of the projects «SwissTrolley plus» [9] and ISOTHERM [10], both of which were collaborations between industry partners and public research institutions that are financially supported by the Swiss Federal Office of Energy (SFOE). The dataset covers more than a thousand driving missions of two trolley buses that were in operation between April 2019 and December 2022. It consists of detailed time series that represent the power demand, propulsion system, odometry, global position, ambient temperature, door openings, number of passengers, and the dispatch patterns within the public transportation network of the two vehicles. The time series are provided in a synchronized form and are sampled every second. Aggregated quantities for each of the missions are provided in a metadata table. A schematic overview of the data acquisition and curation procedure, which is explained in greater detail below, is shown in Fig. 1. Figure 2 presents the full extent of the dataset.
This data offers the potential to be used in a broad variety of fields. For example, the time-resolved GNSS data can be used in combination with odometry signals, such as the wheel speeds and the steering angle, by means of sensor fusion approaches. Such algorithms can significantly improve the raw pose estimates provided by the GNSS sensor, and allow dead reckoning approaches to deal with momentary signal outage. Additionally, the large amount of data on a set of given routes allows for the examination of algorithms for trajectory filtering and map matching in machine learning contexts.
Machine learning may also be utilized to predict various influence factors in public transportation, such as the number of passengers that travel a certain distance at a given time, the traffic levels on specific roads and at specific times of the day, or the expected speed profiles of the vehicles in the near future or in general on certain road segments.
Figure 1: Data acquisition and curation. Signals from three different sources, i.e., the vehicle control unit (VCU), the global navigation satellite system (GNSS) antenna, and the passenger counting system, are used in the definition of the driving missions. Various filtering steps are added to reject erroneous and unrepresentative data. Finally, time synchronization and sampling is performed to present the data in a tabular format.

Figure 2: Visualization of the extent of the ZTBus dataset, which includes a total of 1394 driving missions over the period of over 3.5 years between April 2019 and December 2022.

Finally, the aggregated data enables the examination of long-term correlations such as the impact of COVID-19 mitigation measures on passenger numbers, the effect of weather conditions on energy consumption, etc.
The dataset presented in this manuscript has been used in several of our own publications in the context of the research activities mentioned above:

1. The position, odometry, and velocity data served to develop and evaluate a real-time incremental graph construction algorithm [11].
2. Time-resolved speed, torque, and braking pressure signals were used for the development of the model-based vehicle mass and road grade estimation method [12].
3. The spatio-temporal nature of the power request signal was used to quantify the relation between grid and battery energy usage on certain road segments, which then served to derive a stochastic model predictive control approach [13].
4. The optimal design and control of a thermal energy buffer in an electric city bus was studied based on the passenger loads, velocity and elevation profiles [14].
5. A set of 16 representative all-day driving missions served to optimize the battery degradation throughout the vehicle lifetime [15].
6. Hourly-averaged data was used to conduct a large-scale sensitivity analysis of the thermal comfort systems, allowing a comparison of various heating, ventilation and air conditioning (HVAC) systems [16].
## Methods
### Data Collection
The ZTBus dataset was recorded on two trolley buses during regular operation by Verkehrsbetriebe Zurich (VBZ) on various bus routes in Zurich's public transportation network. Both buses are single-articulated, have an overall length of about 19 m, a curb weight of about 19 t, and a maximum passenger capacity of about 160. They are equipped with traction batteries, which allow them to run for a few kilometers without the overhead power grid.
Onboard logging systems developed for that purpose allow us to record various data streams, which originate from the three different systems as follows:
1. The majority of the data is provided by the VCU to which the raw measurement data is directly transmitted via multiple controller area network (CAN) buses from the various vehicle components. As this data is used during the normal operation of the bus, these signals are always available if the attached logging system works as intended.
2. The data related to the global positioning of the vehicles is provided by a GNSS antenna mounted on their roofs. The GNSS data may be temporarily unavailable if no reliable connection to the satellites can be established, which may be the case during bad weather, between tall buildings, or in underpasses, for instance.
3. The passenger counts are estimated by onboard infrared-based passenger counting systems that transmit their estimates to the public transportation operator's server computer via the local cellular network. This data is then automatically synchronized and augmented with the data from the intermodal transport control system (ITCS), i.e., the corresponding bus route number and stop names. We refer to this combined data as "ITCS data".
The data is organized in "driving missions", which we define as the entire period from the moment the bus is switched on until the moment it is switched off.
### Selection of Data Records
To make sure the dataset is of high quality, we include only those records that represent complete driving missions in regular public transport operation. For example, we reject test drives, short trips within depots, and missions that are completely missing any data of the three systems. The details of this selection step are described in the section on technical validation below.
### Processing
We aim to reduce the processing of data to a minimum. In particular, instead of applying sophisticated filtering and smoothing techniques, we publish the raw measurement data received from the sensory devices, which allows its use also for the development or tuning of such algorithms. The processing steps that were nevertheless considered necessary and were carried out are listed as follows:
* On our vehicles, the most accurate indicator of the vehicle speed is provided by the rotational speed sensors mounted on the motor shafts. As we aim to present our data in a manner that is independent of the specific drivetrain used, we use an estimate of the "compound" transmission ratio \(\gamma\) to convert the rotational speed measurements \(\omega\) to the longitudinal vehicle speed \(v\): \[v=\frac{\omega}{\gamma}\,.\] (1)
The compound transmission ratio thus combines the effects of the transmission, final drive, and wheel radius. For estimating \(\gamma\), we analyze measurement data of perfectly straight driving sections, where the traveled distance according to filtered GNSS data is compared to the total angle covered by the electric machine. The value thus obtained is also used to calculate the traction force \[F_{\text{trac}}=\gamma\cdot T_{\text{trac}}\,,\] (2) where \(T_{\text{trac}}\) represents the total torque provided by the electric machines.
* To focus on the valuable information recorded between the initial departure and the final arrival of each driving mission, we discard any data recorded more than 1 min before and more than 1 min after the actual driving.
* Finally, the data from the three different sources introduced above, i.e., the VCU, GNSS, and the ITCS, is synchronized and resampled. For this purpose, we first generate a new date-time vector in coordinated universal time (UTC) with a uniform sampling period of 1 s covering the time window identified above. The signals are then mapped to this time vector as follows:
* The ITCS data is only given at discrete time events approximately matching the moments the bus is leaving a stop. As the interpretation of the raw data is to be kept to a minimum, the ITCS data is not interpolated. Instead, the nearest sample times of the new date-time vector are found and the discrete values are mapped accordingly.
* All binary (status) signals are interpreted as piecewise constant signals and are thus resampled via previous neighbor interpolation.
* All other signals are linearly interpolated.
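A minimal sketch of these three mapping rules follows; the timestamps, signal names, and values are toy examples, not taken from the dataset:

```python
import numpy as np

def resample_linear(t_new, t, x):
    """Continuous signals: linear interpolation onto the uniform 1 s grid."""
    return np.interp(t_new, t, x)

def resample_previous(t_new, t, x):
    """Binary/status signals: piecewise constant (previous-neighbor)."""
    idx = np.searchsorted(t, t_new, side="right") - 1
    return x[np.clip(idx, 0, len(x) - 1)]

def map_nearest_events(t_new, t_events, labels):
    """ITCS events: place each discrete event at the nearest grid sample."""
    out = [None] * len(t_new)
    for te, lab in zip(t_events, labels):
        out[int(np.argmin(np.abs(t_new - te)))] = lab
    return out

t_grid = np.arange(0.0, 10.0, 1.0)                        # uniform 1 s grid [s]
t_speed = np.array([0.0, 4.0, 9.0]); v = np.array([0.0, 8.0, 3.0])
t_door = np.array([0.0, 2.5, 6.1]); door = np.array([0, 1, 0])
t_itcs = np.array([3.4, 7.8]); stops = ["Stop A", "Stop B"]

v_grid = resample_linear(t_grid, t_speed, v)
door_grid = resample_previous(t_grid, t_door, door)
stop_grid = map_nearest_events(t_grid, t_itcs, stops)
```

Note that the ITCS values are deliberately not interpolated: each stop-departure event is attached to exactly one grid sample, and all other samples remain empty.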
## Data Records
The ZTBus dataset is organized in two different types of comma-separated values (CSV) text files, the first of which describes the 1394 individual driving missions, while the second contains metadata of all driving missions.
The names of the files that describe the individual driving missions are based on the vehicle identification number (either 183 or 208) and the time period in which the data was collected. For example, the data collected on the bus numbered 183 between 16 Oct 2019 02:52:43 and 16 Oct 2019 07:10:12, both given in UTC, is available in the following file:
`B183_2019-10-16_02-52-43_2019-10-16_07-10-12.csv`
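A mission file name following this convention can be decomposed with a regular expression; this is a sketch mirroring the naming scheme described above:

```python
import re
from datetime import datetime

# Pattern: B<bus-id>_<start-UTC>_<end-UTC>.csv
PATTERN = re.compile(
    r"^B(?P<bus>\d{3})"
    r"_(?P<start>\d{4}-\d{2}-\d{2}_\d{2}-\d{2}-\d{2})"
    r"_(?P<end>\d{4}-\d{2}-\d{2}_\d{2}-\d{2}-\d{2})\.csv$"
)

def parse_mission_name(name):
    m = PATTERN.match(name)
    if m is None:
        raise ValueError(f"not a mission file: {name}")
    fmt = "%Y-%m-%d_%H-%M-%S"
    return {
        "bus": int(m["bus"]),
        "start": datetime.strptime(m["start"], fmt),  # timestamps are UTC
        "end": datetime.strptime(m["end"], fmt),
    }

info = parse_mission_name("B183_2019-10-16_02-52-43_2019-10-16_07-10-12.csv")
```

This makes it straightforward to filter the 1394 files by bus, season, or mission duration before loading any time-series data.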
The metadata describing all driving missions is provided as metaData.csv.
All files are published at the repository for publications and research data of ETH Zurich and are available at [https://doi.org/10.3929/ethz-b-000600108](https://doi.org/10.3929/ethz-b-000600108).
### Detailed Description of the Time-Resolved Measurement Data
The ZTBus dataset consists of 1394 driving missions, each of which is described in a separate CSV file. All files have the same structure and format, where the first row contains the headers of the corresponding columns and the remaining rows describe the set of data samples recorded at a specific moment in time. This time index is represented in the first column as absolute UTC time, expressed according to ISO 8601.
The columns are described in Table 1, where NaN represents unavailable data, unless specified otherwise.
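Loading such a file can be sketched with pandas. The miniature CSV below is a stand-in: the first column is the absolute UTC timestamp in ISO 8601, but the remaining column names here are illustrative, not the dataset's actual headers (those are listed in Table 1):

```python
import io
import pandas as pd

# Toy stand-in for a mission file; real files have many more columns and rows
csv_text = """time,vehicle_speed,door_open,passenger_count
2019-10-16T02:52:43Z,0.0,1,12
2019-10-16T02:52:44Z,0.4,0,12
2019-10-16T02:52:45Z,1.1,0,
"""

df = pd.read_csv(io.StringIO(csv_text), parse_dates=["time"], index_col="time")

# NaN marks unavailable data, e.g. a momentary sensor outage
missing = int(df["passenger_count"].isna().sum())
```

Replacing `io.StringIO(csv_text)` with a path to one of the mission files yields a time-indexed frame sampled at 1 s, ready for resampling or joining with the metadata table.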
### Detailed Description of the Metadata
The metadata of the driving missions is tabulated as described in Table 2. The first row contains the headers of the corresponding columns. The remaining rows contain metadata of the driving missions, indexed via the corresponding file name in the first column.
## Technical Validation
In this section, we explain the various measures we have taken and the types of data visualization that we have conducted to ensure a high quality of the ZTBus dataset. In particular, we iteratively developed a few simple selection criteria that are able to consistently remove all data records that contain any artifacts of software malfunction or that are not representative of a regular public transportation operation, such as drives within a garage.
2304.12840 | Spatiotemporal gender differences in urban vibrancy | Urban vibrancy is the dynamic activity of humans in urban locations. It can
vary with urban features and the opportunities for human interactions, but it
might also differ according to the underlying social conditions of city
inhabitants across and within social surroundings. Such heterogeneity in how
different demographic groups may experience cities has the potential to cause
gender segregation because of differences in the preferences of inhabitants,
their accessibility and opportunities, and large-scale mobility behaviours.
However, traditional studies have failed to capture fully a high-frequency
understanding of how urban vibrancy is linked to urban features, how this might
differ for different genders, and how this might affect segregation in cities.
Our results show that (1) there are differences between males and females in
terms of urban vibrancy, (2) the differences relate to `Points of Interest` as
well as transportation networks, and (3) that there are both positive and
negative `spatial spillovers` existing across each city. To do this, we use a
quantitative approach using Call Detail Record data--taking advantage of the
near-ubiquitous use of mobile phones--to gain high-frequency observations of
spatial behaviours across the seven most prominent cities of Italy. We use a
spatial model comparison approach of the direct and `spillover` effects from
urban features on male-female differences. Our results increase our
understanding of inequality in cities and how we can make future cities fairer. | Thomas Collins, Riccardo Di Clemente, Mario Gutiérrez-Roig, Federico Botta | 2023-04-25T14:12:58Z | http://arxiv.org/abs/2304.12840v2 | # Spatiotemporal gender differences in urban vibrancy
###### Abstract
Urban vibrancy is the dynamic activity of humans in urban locations. It can vary with urban features and the opportunities for human interactions, but it might also differ according to the underlying social conditions of city inhabitants across and within social surroundings. Such heterogeneity in how different demographic groups may experience cities has the potential to cause gender segregation because of differences in the preferences of inhabitants, their accessibility and opportunities, and large-scale mobility behaviours. However, traditional studies have failed to capture fully a high-frequency understanding of how urban vibrancy is linked to urban features, how this might differ for different genders, and how this might affect segregation in cities. Our results show that (1) there are differences between males and females in terms of urban vibrancy, (2) the differences relate to 'Points of Interest' as well as transportation networks, and (3) there are both positive and negative 'spatial spillovers' existing across each city. To do this, we use a quantitative approach based on Call Detail Record data, taking advantage of the near-ubiquitous use of mobile phones, to gain high-frequency observations of spatial behaviours across the seven most prominent cities of Italy. We use a spatial model comparison approach of the direct and 'spillover' effects from urban features on male-female differences. Our results increase our understanding of inequality in cities and how we can make future cities fairer.
_Keywords:_ urban vibrancy, urban gender segregation, mobile phone data, spatial data science
## 1 Introduction
As the world continues to urbanize at an unprecedented rate, the lives of city inhabitants are transforming, with both unprecedented opportunities but also growing challenges and complexities that cannot be ignored. The United Nations reported that, by 2050, \(68\%\) of the world's population will be living in cities [15] and that while urban populations are increasing rural populations are in decline. This trend toward urban life is thought to be related to economic development alongside changes in social organisation [1, 13], how humans use land [16], and the drastic changes in the patterns of collective human behaviour [14, 15]. Rapid urban growth is thought to make cities more innovative and generate wealth but can cause large-scale social issues for people and communities. These include reduced housing affordability [20], environmental degradation [11], high crime rates with negative effects on economics, education, and health [17], greater disease incidence [18, 19] and traffic congestion [21, 22].
Rapid city growth has also been reported to increase segregation and inequality in urban areas [23]. Indeed, in cities in the United States, life expectancy has generally increased in the middle classes whereas, in poorer classes, it has remained the same [1]. Within the spatial structure of cities, some neighbourhoods have become differentially desirable. More expensive locations force lower-income inhabitants away, and in some cases, to the fringes of cities or areas with increased levels of criminality or poverty [23]. This can generate a powerful reinforcement loop thought to block those wishing to move to the area and hinder social mobility further.
One way to understand and quantify socio-spatial segregation in cities has been to use traditional data, like a census.
However, by being based only on where people live, such data only ever 'scratch at the surface' regarding the quantification of the fascinating details of urban environments and the relationship to our social lives and our 'quality-of-life' [Entwisle, 2007, King, 2013]. Thus, city planners increasingly look to new technologies to study collective human behaviour and, especially, to characterise mobility patterns [Steenbruggen et al., 2015]. Data on broad movement behaviours are now accessible due to widespread interaction with technological systems [Gonzalez et al., 2008] and computer technology can help to reveal patterns in human behaviour. The world's near-ubiquitous uptake of mobile phone technology and social media generates huge amounts of data on our behaviour and mobility [Lazer et al., 2009, Vespignani, 2009, Salesses et al., 2013, Botta et al., 2015, Seresinhe et al., 2016, Preis et al., 2020]. From shopping habits [Di Clemente et al., 2018, Bannister and Botta, 2021] to transportation [Su et al., 2022]; there are an unlimited array of uses afforded to us due to this new ability to track and record the movements of citizens. This new direction has provided an extraordinary new understanding of urban environments and cities [Batty, 2013, Pan et al., 2013, Botta et al., 2015, Barthelemy, 2016, Botta et al., 2020].
Mobile phone data can support the study of _urban vibrancy_ or _urban vitality_, which measures the energetic activity of urban environments [Sulis et al., 2018, Botta and Gutierrez-Roig, 2021, Wang et al., 2021]. Urban vibrancy has been a concept that has been extensively theorised. Jane Jacobs was hugely influential in highlighting how urban design could encourage urban vibrancy and her arguments often focused on the maintenance and provision of social interactions in cities [Jacobs, 1961]. Her greatest addition to theory is an understanding that density and diversity in the physical structure of an urban place might affect its functional use [Moroni, 2016] and that locations that are more diverse, or more concentrated, in terms of their street networks, buildings, or 'Points of Interest', may be the most vibrant locations. Thus, city planners should consider diversity and social accessibility because diversity provides social cohesion and supplies opportunities for spontaneous interactions, subsequently allowing high levels of creativity and activity that are accessible to the inhabitants and also maintain the community with a diverse socioeconomic background [Perrone, 2019].
We follow on from previous work by Botta and Gutierrez-Roig [2021] that found that _third places_-i.e., places that humans use that are neither workplaces nor home places and are specific in that they are used for social interactions-are important predictors of urban vibrancy levels across age groups. Here, we study segregation and urban inequality through the lens of urban vibrancy. We explore the link between urban features and urban vibrancy and whether this differs across social groups, resulting in spatial segregation. We use large data sets that capture male-female 'space use' via mobile phone activity data, _OpenStreetMap_ geographical data, and residential census data. We use mobile phone activity data as a proxy measurement for urban vibrancy and analyse which urban features contribute to urban vibrancy for different social groups, particularly males and females. We find that there are differences between males and females in terms of urban vibrancy. Indeed, the differences relate to 'Points of Interest' and transportation networks; moreover, there are both positive and negative spatial 'spillovers' that exist across each city. We discuss how these differences could be accounted for in urban planning and design and how human interaction with large technological systems provides a wealth of data that can complement that derived from more traditional methods of monitoring populations. This could allow social problems, such as spatial segregation, to be measured more accurately, and at faster rates, so that they might be solved more easily by policymakers and urban planners for the cities of the future. We hope to extend the understanding of gender differences and segregation, as it has never been so important to do so.
## 2 Data
We use three main data sources: (1) Italian census data that contains detailed information on where people live, (2) Call Detail Record (CDR) data, derived from mobile phones, containing information on where people are at a high temporal granularity, and (3) _OpenStreetMap_ [OSM, 2017]. We use OSM data because it provides measurable features of urban environments. For each data type, we gather data only for the 'metropolitan areas' of each city because these are the most densely populated areas of cities (see Supplementary Materials for summary statistics, Table SI 1). Metropolitan cities are areas that are linked to the city in terms of culture and economy as well as geographic proximity. Data for the metropolitan area boundaries were gathered from the Italian Office for National Statistics ('ISTAT') [ISTAT, 2023]. Data can be made available on request. We outline the data sources below.
### Italian census data
The _ISTAT_ census [ISTAT, 2023] is conducted every decade in Italy. Census sections are small, typically with 250 households, and provide total population and gender counts per section. We use 2011 census data downloaded from the ISTAT to confirm resident locations during key times of the day.
### Call Detail Record data
We use mobile phone Call Detail Record (CDR) data from _Gruppo TIM_ (formerly _Telecom Italia_) that was made available as part of their '_Big Data Challenge 2015_' (GruppoTIM, 2015). The CDR data are available for seven different cities: _Milan_, _Rome_, _Turin_, _Naples_, _Venice_, _Palermo_, and _Bari_, covering approximately two months (from 23:00 GMT on 2015-02-28 to 21:45 GMT on 2015-04-30). Each CDR data set has a corresponding grid designed by _Telecom Italia_ (GruppoTIM, 2015). Each grid takes into account the topology of each city and the potential communication load. Each spatial grid was also designed to maintain the privacy of the inhabitants of each city. Subsequently, the grid polygons change shape in relation to the underlying mobile cells: cells typically get smaller the closer they are to the centre of each city. The activity data have a granularity of fifteen-minute intervals; however, there are time points in the CDR data that contain no records. This may occur because the number of users drops below three, in which case no data are recorded to preserve privacy, but there may also be further issues such as cutouts or problems in the collection of the data. The data contain a value for each gender, derived from the registration of the SIM card of male and female mobile phone users. Therefore, the data show male and female use across time (see CDR summary statistics, Table SI 2).
A related data set has already been used to understand how different age groups interact with cities (Botta and Gutierrez-Roig, 2021). This is possible because this type of data allows us to analyse and investigate the existence of differences across social groups instead of aggregating across a population, where information concerning the differences between social groups is neglected or reduced. We utilize the mobile phone CDR data set as a proxy for measuring urban vibrancy in the cities under investigation, consistent with prior research which has employed alternative forms of data to approximate urban vibrancy. We intend to use these data to understand differences between social groups to understand how urban features contribute to a vibrant environment with respect to gender. We have aggregated the cell-user data across an array of time periods such that we get metrics pertaining to urban vibrancy in each grid cell.
### _OpenStreetMap_ data
To understand cities, we must first create representations of their characteristics. Creating such representations has only recently been made easier thanks to collaborative projects such as _OpenStreetMap_ (OSM, 2017). OSM is an open-source data repository generated and collected by volunteer collaborators. It consists of large-scale geographic data that is made freely available to users. It is possible to download data on an array of city attributes including the networks, systems, and features of urban landscapes. Here, we retrieve data for each study area; note that the data downloaded were the most up-to-date version of the urban features at the time of collection. What we derive from these data is explained systematically in the next section.
## 3 Methods
### A proxy for urban vibrancy: Call Detail Records
As a proxy measure for urban vibrancy, we use CDR data as it indicates the presence of inhabitants throughout the day. We calculate gender differences by subtracting the female value from the male value:
\[\Delta_{i}=M_{i}-F_{i} \tag{1}\]
where \(\Delta_{i}\) is the vector of differences, \(M_{i}\) the vector of male users, and \(F_{i}\) the vector of female users.
Gender identity was only available for users who disclosed it when acquiring their SIM cards. Users who did not disclose their gender were excluded from the analysis. As described in Section 2.2, gaps in the data occur when the number of users falls below three to protect their anonymity. To fill in these gaps, we assumed a value of zero for these time points. The grid consists of cells of varying sizes, so we normalized the raw data by the area of each grid cell to obtain a population density that accounts for the usage and topology of each cell in relation to the city.
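As a minimal sketch of this preprocessing (function and variable names are hypothetical, not from the paper's code), the gap filling, the differencing of Eq. (1), and the normalization by cell area can be combined as:

```python
import numpy as np

def gender_difference(male, female, cell_areas_km2):
    """Area-normalised male-female activity differences per grid cell.

    `male` and `female` hold CDR activity counts per cell; anonymity gaps
    are assumed to be encoded as NaN and are treated as zero, as in the
    paper. `cell_areas_km2` gives each cell's area.
    """
    male = np.nan_to_num(np.asarray(male, dtype=float))    # fill gaps with 0
    female = np.nan_to_num(np.asarray(female, dtype=float))
    delta = male - female                                  # Eq. (1)
    return delta / np.asarray(cell_areas_km2, dtype=float) # per-km^2 density

# Toy example: three grid cells, one with a privacy gap (NaN)
delta = gender_difference([10, np.nan, 4], [8, 3, 4], [2.0, 1.0, 0.5])
# delta -> [1.0, -3.0, 0.0]
```

Normalizing by area makes values comparable across the irregular grid, since central cells are much smaller than peripheral ones.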
### Cities as networks: Independent variables
We downloaded, for each static grid cell of each city, a range of features that previous research has shown to be related to urban vibrancy. We processed these features in two ways, as outlined below.
#### 3.2.1 _Density_ in urban features
According to Jacobs (1961), increased feature density promotes urban vibrancy because of the increased activity it concentrates. The density of buildings, highways, networks, intersections, or 'Points of Interest' in a place all have the potential to provide more opportunities for activities because of the increased number of users of those locations, or the increased vehicular or pedestrian access; however, importantly, these might differ across genders. Here, we define density as the concentration of a feature type within a given area. To arrive at the density value, we first download features from the free geographical database _OpenStreetMap_ (OSM; see Section 2). We construct feature collections of the buildings, transport networks, and 'Points of Interest' found in each cell of each city. We took the total count per cell and divided it by the total area of that cell to give a value of the feature density. For the networks, we calculated the average length of transport networks or the average number of intersections that are accessible for (1) pedestrians, (2) cyclists, and (3) drivers. We used the following calculation to determine feature density:
\[\rho=\frac{N}{A} \tag{2}\]
where \(\rho\) is the density, \(N\) is the total number of features in the geometry, and \(A\) is the area of the geometry. The values were added to the grid cells for each city (see Figure SI 1).
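In code, Eq. (2) is a single division per cell; a toy sketch (hypothetical helper name):

```python
def feature_density(n_features, cell_area_km2):
    """Eq. (2): rho = N / A, the count of a feature type per unit cell area."""
    if cell_area_km2 <= 0:
        raise ValueError("cell area must be positive")
    return n_features / cell_area_km2

# e.g. 18 buildings in a 0.25 km^2 grid cell -> 72 buildings per km^2
rho = feature_density(18, 0.25)
```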
#### 3.2.2 _Diversity_ in urban features
According to Jacobs, it is the diversity in features that increases and encourages a location's vibrancy (Jacobs, 1961). Similarly, the diversity of buildings, highways, or 'Points of Interest' in a place all have the potential to provide more opportunities for activities because of the increased usage of those locations; however, these might differ across genders. To gather data on urban feature diversity, we use the same downloaded OSM feature collections and calculate the diversity of features with the Shannon-Wiener diversity index (Shannon, 1948). The diversity index is calculated as follows:
\[H=-\sum_{i=1}^{M}P_{i}\log_{2}P_{i} \tag{3}\]
where \(H\) is the diversity index, \(M\) is the total number of categories in the geometry, and \(P_{i}\) is the relative frequency (proportion) of the \(i^{\text{th}}\) category. The values were added to the grid cells for each city (see Figure SI 1). We detail the 'Points of Interest' variables below.
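The index in Eq. (3) can be computed directly from a cell's category labels; the sketch below (hypothetical helper name) uses the convention that \(P_{i}\) is the proportion of features in category \(i\):

```python
import math
from collections import Counter

def shannon_diversity(category_labels):
    """Eq. (3): H = -sum_i P_i log2 P_i over a cell's feature categories."""
    counts = Counter(category_labels)
    total = sum(counts.values())
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# A cell whose 'Points of Interest' split evenly across 4 categories
# reaches the maximum diversity for 4 categories: log2(4) = 2 bits.
H = shannon_diversity(["Food", "Shopping", "Sports", "Work"] * 5)
```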
'Points of Interest': 'Points of Interest' are important features that directly relate to urban vibrancy. We collected all 'Points of Interest' from OSM found under the amenity, building, leisure, shop, and sport tags in the OSM database to construct a 'Points of Interest' collection for each city. We manually labelled the points using the same label collection as Moro et al. (2021) and Fan et al. (2022), based on the Foursquare classification system, which includes 14 categories. This taxonomy of labels is as follows: (1) _Arts / Museum_, (2) _City / Outdoors_, (3) _Coffee / Tea_, (4) _College_, (5) _Entertainment_, (6) _Food_, (7) _Grocery_, (8) _Health_, (9) _Residential_, (10) _Service_, (11) _Shopping_, (12) _Sports_, (13) _Transportation_, and (14) _Work_. We considered these labels because they represent the most frequently visited locations and are likely to be important for segregation (Moro et al., 2021).
'Third Places': We constructed a collection of '_Third Places_' (Oldenburg and Brissett, 1982). Third places are locations that are neither home nor workplaces and are considered to be vitally important in terms of urban vibrancy because they allow for impromptu everyday gatherings in urban locations that result in positive effects for communities (Jeffres et al., 2009; Botta and Gutierrez-Roig, 2021) and because people spend a significant fraction of their free time in third places. For this analysis, we considered that there may be differences between genders in how they use amenities like shops, pubs, cafes, or community centres, and that these amenities are likely to be used differently depending on general discrepancies in urban mobility and, amongst other things, socioeconomic characteristics.
Here, using the same 'Points of Interest' tags used in OSM (i.e., 'amenity', 'building', 'leisure', 'shop', and 'sport'), we calculated density and diversity for third places (as defined in Sections 3.2.1 and 3.2.2) across all grid cells and cities. We manually labeled third places based on the categorization of Jeffres et al. (2009) into (1) _eating and drinking_, (2) _organized activities_, (3) _outdoor_, and (4) _commercial venues_, and added a fifth label of _commercial services_. Commercial services are locations where people might receive a service or go with the intent of buying something, but where there may also be opportunities for social interactions that are brief compared to the other groupings. We included this label to capture the potential for brief social interactions in locations like banks and pharmacies that would otherwise have been excluded under Jeffres et al.'s (2009) definition. Only those locations that fit within these categories were considered third places.
### Statistical approach
In this analysis, the Call Detail Record (CDR) data and census data have different spatial grids. We use the geometry of CDR data as our main reference and extract OSM data for each cell. We interpolate census data to the same spatial grid using areal interpolation (Compher and Zeng, 2019; Bergroth et al., 2022) to enable correlation analysis and spatial linear regression at the grid cell level.
We perform a correlation analysis to test the data's representativeness by comparing the nighttime CDR data with census counts that have been converted to density estimates, matching the CDR data in terms of spatial scale and intensive property. We compare both male and female nighttime CDR values with their respective interpolated census data and use Kendall's correlation coefficient as it is distribution-free and more suitable for spatial data (Hamed, 2011). We apply this analysis to all cities. We aim to model male-female differences for each city while also building an aggregated model to identify common trends. We refer to this aggregated model in many of the analyses below for clarity. To ensure comparability, we standardize all variables. We then construct an ordinary least squares (OLS) model as a baseline method for estimating the regression \(\beta\) coefficients and evaluating the importance of spatial extensions to OLS. We use the following linear function to explain male-female differences as a function of a set of separate features denoted by \(X\) (See Section 3.1):
\[Y=\beta_{i}X_{i}+\epsilon \tag{4}\]
where \(Y\) represents the male-female differences as a response variable, consisting of a value proportional to the users per gender, used here as a proxy for activity and a measure of vibrancy (see Section 3.1); \(\beta_{i}\) are the regression coefficients, \(X_{i}\) are the independent variables, and \(\epsilon\) is the error.
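A minimal baseline of Eq. (4) via ordinary least squares, sketched with NumPy (the standardization helper and synthetic data are illustrative, not the paper's pipeline):

```python
import numpy as np

def standardize(a):
    """Zero-mean, unit-variance scaling, so coefficients are comparable."""
    a = np.asarray(a, dtype=float)
    return (a - a.mean()) / a.std()

def ols_coefficients(X, y):
    """Eq. (4): least-squares estimate of beta for y = X beta + eps.

    X is an (n, k) matrix of standardized urban features; y holds the
    standardized male-female differences. Returns the k coefficients.
    """
    beta, *_ = np.linalg.lstsq(np.asarray(X, float),
                               np.asarray(y, float), rcond=None)
    return beta

# Toy check: a response built exactly from two features recovers (2, -1)
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = 2 * X[:, 0] - 1 * X[:, 1]
beta = ols_coefficients(X, y)
```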
Our data may have spatial autocorrelation, which violates some assumptions of a basic regression model. We address this issue in the following sections.
We diagnosed spatial dependence by analyzing OLS model residuals. The error terms may not be independent due to a spatial relationship, so we checked for spatial structure and the need for spatial models using Moran's \(I\) analyses from Ward and Gleditsch (2019). Spatial clustering implies spatial dependence, and so requires spatial models.
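For concreteness, the global Moran's \(I\) statistic has the standard closed form \(I=\frac{n}{S_{0}}\frac{z^{\top}Wz}{z^{\top}z}\), with \(z\) the mean-centred values and \(S_{0}\) the sum of all weights. The sketch below implements this directly (it is not necessarily the exact routine of Ward and Gleditsch (2019)):

```python
import numpy as np

def morans_i(values, W):
    """Global Moran's I of `values` under spatial weights matrix W.

    Positive values indicate spatial clustering of similar values;
    negative values indicate alternation between neighbours.
    """
    z = np.asarray(values, float) - np.mean(values)
    W = np.asarray(W, float)
    n, s0 = len(z), W.sum()
    return (n / s0) * (z @ W @ z) / (z @ z)

# Four cells on a line, neighbours share an edge
W = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
I_clustered = morans_i([1.0, 2.0, 3.0, 4.0], W)       # smooth gradient: positive
I_alternating = morans_i([1.0, -1.0, 1.0, -1.0], W)   # checkerboard: negative
```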
We use maximum likelihood estimation to create spatial lag and spatial error models (Anselin et al., 2006; Ward and Gleditsch, 2019). The Lagrange multiplier and AIC difference from OLS were calculated. We utilized _Queen's contiguity_ based on grid-cell geometry for the spatial weights' matrix, connecting centroids of observations to those with shared vertices (Rey and Anselin, 2010). We row-standardized the weights so that the spatial lag of a variable equals the average of its values in each observation's neighbourhood. We chose Queen's contiguity due to the irregular grid size, but note it may impact model results.
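The weights construction can be illustrated on a regular grid (the paper's grid is irregular, so this is a simplification; names are hypothetical):

```python
import numpy as np

def queen_weights(nrows, ncols, row_standardize=True):
    """Queen-contiguity weights for a regular nrows x ncols grid.

    Cells are neighbours if they share an edge or a vertex. With row
    standardization each row sums to 1, so W @ y gives the mean of each
    cell's neighbourhood (the convention described in the paper).
    """
    n = nrows * ncols
    W = np.zeros((n, n))
    for r in range(nrows):
        for c in range(ncols):
            i = r * ncols + c
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    rr, cc = r + dr, c + dc
                    if (dr, dc) != (0, 0) and 0 <= rr < nrows and 0 <= cc < ncols:
                        W[i, rr * ncols + cc] = 1.0
    if row_standardize:
        W /= W.sum(axis=1, keepdims=True)
    return W

W = queen_weights(3, 3)
# The centre cell of a 3x3 grid has all 8 others as neighbours,
# each with weight 1/8 after row standardization.
```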
We consider the spatial error model (SEM) as our first model, which incorporates spatial dependence using a spatially lagged error term:
\[Y=X_{i}\beta_{i}+u,u=\rho Wu+\epsilon \tag{5}\]
where \(Y\) is the male-female differences as a response variable, \(X_{i}\) represents the explanatory variables, \(\beta_{i}\) are the regression coefficients, \(u\) is the spatially dependent error term, \(\rho\) is the scalar spatial parameter of the lagged error, \(W\) is the weights' matrix (Queen's contiguity), and \(\epsilon\) is the spatially independent error term.
We also utilized spatial lag models (SAR) where dependent variables were spatially lagged, providing coefficients for both _direct_ and _indirect_ effects of independent variables on the response and mean activity of neighbouring grid cells (Lesage and Fischer, 2008). The SAR model terms are as follows:
\[Y=X_{i}\beta_{i}+\rho WY+\epsilon \tag{6}\]
where \(Y\) is the vector of the response variable, \(X_{i}\) represents the explanatory variables, \(\beta_{i}\) are the regression coefficients, \(\rho\) is the scalar spatial lag parameter, \(W\) is the weights' matrix (Queen's contiguity) so that \(WY\) is the spatially lagged response, and \(\epsilon\) is the spatially independent error term.
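To see where the spillovers come from, note that solving Eq. (6) for \(Y\) gives the reduced form \(Y=(I-\rho W)^{-1}(X\beta+\epsilon)\). The sketch below simulates this reduced form (illustrative only; the paper estimates \(\rho\) and \(\beta\) by maximum likelihood rather than simulating):

```python
import numpy as np

def sar_simulate(X, beta, rho, W, eps):
    """Draw Y from the spatial lag model Y = rho*W*Y + X*beta + eps.

    Solving for Y gives Y = (I - rho*W)^{-1} (X*beta + eps), which is why
    a change in one cell's features 'spills over' into neighbouring cells.
    """
    n = len(eps)
    return np.linalg.solve(np.eye(n) - rho * W, X @ beta + eps)

# 4 cells on a line (row-standardized weights), one feature
W = np.array([[0, 1, 0, 0],
              [.5, 0, .5, 0],
              [0, .5, 0, .5],
              [0, 0, 1, 0]], float)
X = np.array([[1.0], [0.0], [0.0], [0.0]])   # feature present in cell 0 only
Y = sar_simulate(X, np.array([1.0]), 0.5, W, np.zeros(4))
# Y is positive in every cell and decays with distance from cell 0:
# the shock propagates through W.
```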
For each model, we calculated the direct effects, indirect effects, and total effects due to the challenges associated with interpreting a predictor's unit change in the presence of 'spillover' effects (Anselin and Rey, 2014). Spatial spillover effects are indirect effects and refer to secondary impacts that result from the direct effects. To compute these values, we first derived estimated coefficients for exogenous variables in the model, which yielded the direct effect that shows the influence of one spatial unit on another (Lesage and Fischer, 2008). Direct effects are calculated as:
\[DE=\beta \tag{7}\]
where \(DE\) are the direct effects, \(\beta\) are the coefficients of the SAR model without the spatial lag term.
The total effects comprise both the direct and the indirect effects (GuoMeng et al., 2021). To separate them, we extract the spatial lag term from the coefficients and divide the coefficients by \(1-\) the spatial lag term multiplied by the largest eigenvalue of the spatial weights' matrix (Lesage and Fischer, 2008; Bivand and Piras, 2015). Total effects take into account the full range of impacts that a particular change may have on the spatial system as a whole (Lesage and Fischer, 2008). Total effects are calculated as:
\[TE=\beta/(1-\rho*\lambda) \tag{8}\]
where \(TE\) are the total effects, \(\beta\) are the coefficient of the exogenous variable in the spatial lag model, \(\rho\) is the spatial lag term, and \(\lambda\) is the maximum eigenvalue from the spatial weights matrix. Finally, we calculate the indirect effects. These refer to the secondary impacts that result from the direct effects. Indirect effects are a measure of any spillover effects that occur beyond the immediate spatial units (Lesage and Fischer, 2008). Indirect effects are calculated as:
\[IE=TE-DE \tag{9}\]
where \(IE\) are the indirect effects, \(DE\) are the direct effects, and \(TE\) are the total effects.

Using the above methods and data, we created a hierarchy of models that aggregated CDR data at different time periods. These included: (i) all-day data (08:00-20:00), (ii) weekdays (Monday-Thursday) versus weekends (Friday-Sunday) within the all-day time period, and (iii) twenty-four-hour data divided into Morning (06:00-12:00), Afternoon (12:00-18:00), Evening (18:00-00:00), and Night (00:00-06:00) categories. Models were run for individual cities and for all cities combined. This approach enabled us to study how vibrancy relates to gender and urban features at different times and investigate variations over time.
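Eqs. (7)-(8) and the decomposition of total into direct and spillover parts can be combined in a few lines. This is a sketch of the standard LeSage-style scalar approximation (where the total effect splits into a direct part and an indirect, spillover part); for a row-standardized \(W\), the largest eigenvalue is 1:

```python
import numpy as np

def sar_effects(beta, rho, W):
    """Approximate direct, total, and indirect effects for a SAR model.

    DE = beta (Eq. 7); TE = beta / (1 - rho * lambda_max) (Eq. 8) with
    lambda_max the largest eigenvalue of the weights matrix; the indirect
    (spillover) effect is the remainder of the total effect.
    """
    lam = np.max(np.linalg.eigvals(W).real)
    de = beta
    te = beta / (1.0 - rho * lam)
    return de, te, te - de

# Row-standardized weights have lambda_max = 1, so with rho = 0.5 the
# total effect doubles the coefficient and half of it is spillover.
W = np.array([[0, 1], [1, 0]], float)   # already row-standardized
de, te, ie = sar_effects(beta=0.4, rho=0.5, W=W)
# de = 0.4, te = 0.8, ie = 0.4
```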
## 4 Results
First, we compare the census and the Call Detail Record (CDR) data to check representativeness. Across all cities, the correlations are positive and significant at the \(5\%\) level (see Figure 1). The lowest \(\tau\) for females is 0.55 (Bari), whereas the highest is 0.72 (Turin). For males, the lowest \(\tau\) is 0.56 (Bari), whereas the highest is 0.7 (Turin). These strong correlations between the CDR data-when selecting only the nighttime values-and the census data show that the CDR data are representative of the census data. This suggests that the daytime values should also be representative of the movements of the general population.
Next, we use Kendall's rank correlation coefficient to compare each urban feature with the male-female differences. We apply a false discovery rate correction (Benjamini and Hochberg, 1995) to control the rate of type I errors. We found that, except for a few instances, all of our results were positive and highly significant at the \(5\%\) level (\(P<0.05\)). Between the density and diversity metrics, diversity often had a smaller association. In contrast with previous work on age groups (Botta and Gutierrez-Roig, 2021), we found that smaller cities had larger associations with male-female differences. See Figure SI 3 for the full correlation analysis results.
We use Moran's spatial autocorrelation analysis to determine the global Moran's \(I\) of the data and to further understand the data in terms of its spatial dependence. We extract residuals of an ordinary least squares (OLS) model at each level of the model hierarchy. We calculate Moran's \(I\) statistics for the male-female differences in each city. For clarity, we report only the minimum and maximum in the lowest level of the hierarchy (this does not include the aggregated model, which is excluded from this analysis due to its obvious spatial clustering in different cities), i.e. the daytime data (08:00-20:00; see Section 3.3). The lowest value for Moran's \(I\) was 0.013, the maximum was 0.8, and the mean was 0.2. Of the seven cities in the analysis, two (Bari and Napoli) were non-significant at the \(5\%\) level; all the rest were significant (\(P<0.05\)). These values confirm the presence of spatial clustering and spatial dependence. On their own, they cannot fully characterise the spatial structure of the data, but they do provide evidence that spatial models may be more appropriate. See Supplementary Materials Figure SI 2 and Table SI 4 for full Moran's \(I\) analysis results.
To further test the presence of spatial clustering, we calculate the Lagrange multiplier test statistic (both non-robust and robust) for the OLS model of each city and at each level of the analysis. We did this to indicate the type of model most suitable for the analysis: a simple linear model, a spatial lag model (SAR), or a spatial error model (SER) (LeSage and Pace, 2009). At the same time, we calculate the Akaike information criterion (AIC) of each of the models. We find that the Lagrange multiplier statistics are most often highest for the SAR models (minimum LMerr = 0.062, \(P\) not significant; maximum LMerr = 410.104; minimum LMlag = 9.56, \(P<0.001\); maximum LMlag = 539.642). The same relationship also holds for the robust methods. The differences between the AIC of the OLS and the AIC of the spatial models were consistently greater for the SAR models. Because these tests consistently pointed toward using the SAR models, and we observe spatial clustering, we continue the rest of our analysis with SAR models.
We find fairly consistent results across this hierarchy. Firstly, we find that smaller cities had larger amounts of error in
the estimates than larger cities and with more variation in the coefficients, most likely due to the size and number of cells in the smaller cities. Secondly, there are no strong gender differences between the night and day (see Figures 2, SI 3 and SI 4). In the larger cities, we find a significant positive indirect effect between male-female differences and third-place density (see Figure 2); this relationship is consistent across the three largest cities and the aggregated model. We also found a significant negative indirect effect between male-female differences and the density of 'Points of Interest' (see Figure 2). The pattern was again similar across, this time, the four largest cities and the aggregated model. We also find a relationship between highway density and all three of the intersection variables; however, they did not share the direction. Both cycling and walking intersection density were significant with positive indirect effects; however, for road intersections, though there was a negative indirect effect, this was not significant. The highway density was only significant in the aggregate model (see Figure 2). We found that diversity metrics generally were not significant in the aggregate model. We also found that the average
significant; however, we found that the 'organised activity' and 'outdoor' third place categories were significant; this was not the case for third places generally (see Figures SI 3 and SI 4).
We find that the strongest effect, which was also the most consistent, was the positive indirect effect of the third-place density. We consider this an interesting finding when coupled with the significant negative indirect effect found for the 'Points of Interest' variable, because the direction of the 'Points of Interest' effect-which does not account for the social aspect of third places-opposes that of the third-place effect. This could suggest that locations that fall under our five categories for third places are not equally used by each gender, whereas the 'Points of Interest' as a whole are used more equally.
For each level of the nested hierarchy, we report the results of the pseudo-r-squared values from the models. The pseudo-r-squared is the squared correlation between the dependent variable and the predictions of the dependent variable (Anselin, 1988). These values are a measure of goodness-of-fit and are used to understand the relationship between the model variables. Our models exhibited relatively high values consistently across cities and across the hierarchy of models highlighting a good relationship between independent and dependent variables (see Figures 2, SI 3, and SI 4).
## 5 Discussion
In this study, we have focused on modelling _urban vibrancy_-a measure of the dynamic activity of human beings in urban environments. For this, we have considered seven of the largest cities in Italy. We asked how urban features might contribute to a vibrant environment and how they might vary across social groups, especially concerning gender. We hypothesised that there would be differences between genders because, firstly, heterogeneity exists generally in how people interact with urban environments (De Palma and Papageorgiou, 1988) and, secondly, behaviours might correlate most closely within groups such as gender due to similarity in socioeconomic characteristics or general behaviours. We used a computational approach to reveal any potential socio-spatial segregation across our study areas, and we used a range of relevant urban features taken from urban vibrancy theory (Sung et al., 2013; Botta and Gutierrez-Roig, 2021; Yu et al., 2022; Chen et al., 2022). To model urban vibrancy, we used data showing the presence of mobile phone users as a proxy-another established methodology (Jia et al., 2019; Botta and Gutierrez-Roig, 2021). We have uncovered a variety of findings. First, we have been able to study urban vibrancy-and potential segregation in urban vibrancy-by using high-frequency Call Detail Record (CDR) data and open-source geographical data. We show that it is possible to do this with high predictive power and goodness-of-fit, and we do this across a model hierarchy that accounts for movement behaviours in order to reflect the reality of life in cities. Second, we have furthered discussions from previous works that focused on the importance of third places in cities: we found significant evidence that an increase in the density of third places in a given area is associated with larger male-female differences in urban vibrancy. Opposing this, we also found that an increase in the density of 'Points of Interest' overall is associated with smaller male-female differences. The evidence that third places are associated with larger differences could suggest that locations that we have defined as third places, i.e., locations that fall under our five-category system (see Section 3), are not equally used by each gender. This evidence does not necessarily mean that increases in the density of third places increase differences; however, it may be that certain types of third places are unequally used across genders. Reasons for this could be based on cultural differences or socioeconomic factors. Another part to consider is the clustered nature of third places in cities: a positive indirect effect indicates increases in male-female differences but also that the variable is positively correlated with the neighbouring values of male-female differences. It is important to consider that third places are places that are likely to cluster geographically with other factors such as economic activity or environmental conditions.

Figure 2: The relationship between density in features and male-female differences for the spatial model aggregating all cities. The plot shows the significance of the direct effect; however, the \(\beta\) coefficients represent the indirect effect (see Section 3.3 for definitions). The plot displays all density variables, with the y-axis showing the variables for each model and the colour representing each variable. Panel (A) shows all daytime data between 08:00 and 20:00; Panel (B) displays weekday (Monday-Thursday) versus weekend (Friday-Sunday) data within the same time period; Panel (C) shows data averaged into four categories: Morning (06:00-12:00), Afternoon (12:00-18:00), Evening (18:00-00:00), and Night (00:00-06:00). To aid comparison, shapes denote categories in Panels (B) and (C). Significance is indicated by closed and open shapes (\(P<0.05\) and \(P>0.05\), respectively) and each shape shows the error bars as a horizontal line. The pseudo-r-squared is reported for each panel.
More evidence would be needed to uncover further details, but a similar methodology could be used with extensions and additional analyses. One such methodological adaptation could be the use of a Geographically Weighted Regression, which would help to assess predictive power across cities whilst exploring potential spatial biases in the data.
Within our analysis, we can identify a number of limitations, and it is important to acknowledge these and discuss them here. Firstly, our CDR data is used as a proxy for urban vibrancy measurement; these data are from Telecom Italia, just one provider. Though this is the largest provider in Italy, these data do not capture the entire population and so may contain unknown biases. A full account of the general population could be gained by using multiple providers and may improve our overall analysis. It is also the case that the data were only derived from phone calls; this clearly misses a breadth of other communication methods and could potentially hide biases in the data due to the myriad of different ways people communicate today. Furthermore, gender information is derived from SIM purchases, but this is likely to not be an exact representation of the gender of users. However, we also note that our validation with the census data shows a good correlation with the mobile phone data, suggesting that these issues may be relatively limited (see Figure 1). A second potential problem is that the data are from differing time periods: we have taken census data from 2011, CDR data from 2015, and _OpenStreetMap_ data from 2022. This undoubtedly introduces some biases in the analysis; however, we expect them to be small and not affect the overall results.
In this study, we have considered the modelling of _urban vibrancy_ with respect to gender differences. We found that the densities of different collections of 'Points of Interest' are simultaneously associated with both decreases and increases in male-female differences. This was the case both when we used a broad-scale use-category ('Points of Interest') and when we used a fine-scale use-category that considers the social context of a place (third places). This adds further evidence for the importance of characterising third places when studying urban environments and urban vibrancy. It also suggests that comparing different collections of 'Points of Interest' could open interesting avenues for further research relating to urban vibrancy. We have shown that this could also provide details on the potential segregation existing in cities today. To conclude, our analysis provides further evidence and support for the use of CDR and crowdsourced data to understand large-scale movement behaviours, and for how we can use these data to understand the social fabric of urban life. In turn, this could inform the design of our future urban environments.
|
2301.06033 | Lagrangian statistics of a shock-driven turbulent dynamo in decaying
turbulence | Small-scale fluctuating magnetic fields of order $n$G are observed in
supernova shocks and galaxy clusters, where its amplification is likely caused
by the Biermann battery mechanism. However, these fields cannot be amplified
further without the turbulent dynamo, which generates magnetic energy through
the stretch-twist-fold (STF) mechanism. Thus, we present here novel
three-dimensional magnetohydrodynamic (MHD) simulations of a laser-driven shock
propagating into a stratified, multiphase medium, to investigate the post-shock
turbulent magnetic field amplification via the turbulent dynamo. The
configuration used here is currently being tested in the shock tunnel at the
National Ignition Facility (NIF). In order to probe the statistical properties
of the post-shock turbulent region, we use $384 \times 512 \times 384$ tracers
to track its evolution through the Lagrangian framework, thus providing a
high-fidelity analysis of the shocked medium. Our simulations indicate that the
growth of the magnetic field, which accompanies the near-Saffman kinetic energy
decay ($E_{\textrm{kin}} \propto t^{-1.15})$ without turbulence driving,
exhibits slightly different characteristics as compared to periodic box
simulations. Seemingly no distinct phases exist in its evolution, because the
shock passage and time to observe the magnetic field amplification during the
turbulence decay are very short ($\sim\!0.3$ of a turbulent turnover time).
Yet, the growth rate is still consistent with those expected for compressive
(curl-free) turbulence driving in subsonic, compressible turbulence.
Phenomenological understanding of the dynamics of the magnetic and velocity
fields are also elucidated via Lagrangian frequency spectra, which are
consistent with the expected inertial range scalings in the Eulerian-Lagrangian
bridge. | Justin Kin Jun Hew, Christoph Federrath | 2023-01-15T07:47:14Z | http://arxiv.org/abs/2301.06033v2 | # Lagrangian statistics of a shock-driven turbulent dynamo in decaying turbulence
###### Abstract
Small-scale fluctuating magnetic fields of order \(n\)G are observed in supernova shocks and galaxy clusters, where their amplification is likely caused by the Biermann battery mechanism. However, these fields cannot be amplified further without the turbulent dynamo, which generates magnetic energy through the stretch-twist-fold (STF) mechanism. Thus, we present here novel three-dimensional magnetohydrodynamic (MHD) simulations of a laser-driven shock propagating into a stratified, multiphase medium, to investigate the post-shock turbulent magnetic field amplification via the turbulent dynamo. The configuration used here is currently being tested in the shock tunnel at the National Ignition Facility (NIF). In order to probe the statistical properties of the post-shock turbulent region, we use \(384\times 512\times 384\) tracers to track its evolution through the Lagrangian framework, thus providing a high-fidelity analysis of the shocked medium. Our simulations indicate that the growth of the magnetic field, which accompanies the near-Saffman kinetic energy decay (\(E_{\text{kin}}\propto t^{-1.15}\)) without turbulence driving, exhibits slightly different characteristics as compared to periodic box simulations. Seemingly no distinct phases exist in its evolution, because the shock passage and time to observe the magnetic field amplification during the turbulence decay are very short (\(\sim 0.3\) of a turbulent turnover time). Yet, the growth rate is still consistent with those expected for compressive (curl-free) turbulence driving in subsonic, compressible turbulence. Phenomenological understanding of the dynamics of the magnetic and velocity fields is also elucidated via Lagrangian frequency spectra, which are consistent with the expected inertial range scalings in the Eulerian-Lagrangian bridge.
keywords: MHD - turbulence - ISM: kinematics and dynamics - ISM: magnetic fields - dynamo - shock waves
## 1 Introduction
Astrophysical gas flows in the interstellar medium (ISM) are often highly stratified and weakly magnetised (Zeldovich et al., 1983; Tobias, 2002), with fields of the order of \(n\)G to \(10^{2}\mu\)G, extending over large coherence length scales of the order of several kiloparsecs (Brandenburg et al., 1996; Brandenburg and Subramanian, 2005). It is in these, often shock-dominated, compressible flows that small-scale magnetohydrodynamic (MHD) turbulent dynamos can operate (Schober et al., 2012; Schleicher et al., 2013; Federrath et al., 2014; Federrath, 2016; Seta and Federrath, 2022), where small seed magnetic fields are amplified into much larger ones in the presence of vorticity and turbulent fluctuations, which excite the field intermittently and sustain it by converting kinetic energy into magnetic energy (Batchelor, 1950; Mac Low and Klessen, 2004; Federrath et al., 2011; Brandenburg, 2018; Achikanath Chirakkara et al., 2021; Seta and Federrath, 2021; Kriel et al., 2022).
The primary effect of turbulence and anisotropy production is the amplification of the turbulent field through the transport terms in the MHD equations, which are governed by two dimensionless numbers called the magnetic Reynolds number \(\text{Rm}_{\ell}\) and the hydrodynamic Reynolds number \(\text{Re}_{\ell}\). These control the action of the magnetic field through the characteristic scales of turbulence, where \(\ell\) is the characteristic length scale. This defines \(\text{Re}_{\ell}=v\ell/\nu\), where \(v\) is the turbulent velocity and \(\nu\) is the kinematic viscosity. The turbulent magnetic resistivity \(\eta\) defines the magnetic Reynolds number as \(\text{Rm}_{\ell}=v\ell/\eta\)(Yokoi, 2013). This further introduces the quantity called the magnetic Prandtl number, which is \(\text{Pm}_{\ell}=\text{Rm}_{\ell}/\text{Re}_{\ell}\). Oftentimes in astrophysical flows, \(\text{Re}_{\ell}\) and \(\text{Rm}_{\ell}\) are very large and \(\text{Pm}_{\ell}>1\), leading to generation of large-scale vorticity, thus permitting the exponential amplification of a turbulent magnetic field, \(B=B_{0}\exp(\Gamma t)\), where \(\Gamma\) is the growth rate, from below the viscous scale (\(k_{\nu}\)) to the resistive scale (\(k_{\eta}\)), such that \(k_{\nu}<k<k_{\eta}\), but only up until the equipartition scale, \(k\sim k_{\text{eq}}\), where the conversion between magnetic and kinetic energy slows down and the turbulent dynamo saturates (Schekochihin et al., 2002).
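As a minimal illustration of these definitions, the dimensionless numbers and the kinematic-stage field amplification can be sketched as follows (all parameter values here are made up for illustration, not taken from any simulation):

```python
import math

def reynolds_numbers(v, ell, nu, eta):
    """Return (Re, Rm, Pm) for turbulent velocity v, length scale ell,
    kinematic viscosity nu, and magnetic resistivity eta."""
    re = v * ell / nu        # hydrodynamic Reynolds number
    rm = v * ell / eta       # magnetic Reynolds number
    return re, rm, rm / re   # Pm = Rm / Re

def amplified_field(b0, gamma, t):
    """Kinematic-stage amplification B(t) = B0 * exp(Gamma * t)."""
    return b0 * math.exp(gamma * t)

# Illustrative values for a Pm > 1 plasma:
re, rm, pm = reynolds_numbers(v=1.0, ell=1.0, nu=1e-3, eta=1e-4)
print(re, rm, pm)                               # Pm = 10 here
print(amplified_field(1e-9, gamma=2.0, t=5.0))  # ~2.2e-5, an exp(10)-fold gain
```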
While substantial work has been done on the small-scale turbulent dynamo (SSD) through periodic box simulations, there are only a number of studies on this process in the context of post-shock turbulence. The latter has been a subject of only a few numerical (Balsara et al., 2004; Vladimirov et al., 2006; Inoue et al., 2009; Drury and Downes, 2012; Downes and Drury, 2014; Donnert et al., 2018; Hu et al., 2022) and experimental studies (Sarma et al., 2002; Meinecke
et al., 2014; Sano et al., 2021). Some of these have been focussed on the amplification by shock compression and pre-shock pressure gradients only or on examining mixed pre- and post-shock turbulent media (Inoue et al., 2009; del Valle et al., 2016; Bohdan et al., 2021), where the corrugated shock front interacts with density inhomogeneities (Giacalone & Jokipii, 2007; Beresnyak et al., 2009), inducing vorticity and turbulence transport enhancement. In most cases considered, two-dimensional (2D) numerical simulations were conducted with strong shock profiles emulating supernova blast and detonation waves, or heliospheric termination shocks, where magnetic flux lines are rapidly compressed and stretched, yielding orders of magnitudes of shock-induced amplification. For shock-driven turbulence, it has been suggested that the small-scale dynamo process likely contributed significantly to these amplifications (Mac Low et al., 2005; Federrath et al., 2014; Federrath, 2016; McKee et al., 2020). However, its impact is likely masked by the contribution from rapid shock compression (Balsara et al., 2004; Kim & Balsara, 2006).
Moreover, we expect that two-dimensional numerical simulations conducted in prior works can significantly differ from their three-dimensional counterparts, since the development of three-dimensional coherent structures is not possible in the former, due to the topological constraints imposed in two-dimensional geometry. These have shown to play a crucial role in the turbulent dynamo process within post-shock turbulence (Inoue et al., 2013; Downes & Drury, 2014; Ji et al., 2016; Hu et al., 2022) since purely 2D flows are unable to excite a dynamo according to Zeldovich (1957)'s anti-dynamo theorem.
Thus, motivated by the lack of studies in this particular area, we here propose to investigate the post-shock turbulent medium through the Lagrangian framework by studying the evolution of tracer trajectories in the moving volume behind a laser-driven shock front. This allows thorough analyses of the dynamical evolution of the turbulent dynamo in relation to its associated time scales, since the tracer trajectories follow the advected (co-moving) fluid parcels via _streamlines_; thus providing a high-fidelity approach to studying the filamentary structures that compress or stretch the magnetic field lines in the flow, while avoiding amplifications caused directly by the shock front, or by stratified shear instabilities (Sano et al., 2012). Such methods of injecting Lagrangian tracers have been applied by Konstandin et al. (2012) to establish the Lagrangian statistics of supersonic ISM turbulence with mixed solenoidal and compressive turbulence driving, and by Homann et al. (2007) and Busse et al. (2010) to the study of the Lagrangian structure functions and frequency spectra scalings in MHD turbulence. Lagrangian statistics for the Taylor-Green forced dynamo was also studied by Homann et al. (2014), where time evolution of the magnetic field was educed through the material frame with mass-averaged quantities, providing insight into the time scales of its evolution through a volume that is unaffected by advection due to the co-moving frame of reference. To our knowledge, there are no other studies applying the Lagrangian framework to quantify small-scale dynamo action, especially for shock-driven turbulence.
The rest of the paper is organised as follows. In Section 2, a theoretical background is given covering the details pertinent to our numerical experiment, including turbulent (small-scale) dynamos, Lagrangian statistics and decaying hydrodynamic and MHD turbulence. Then, in Section 3 we describe our numerical model and setup. Finally, in Section 4 we provide the numerical results of our shock-driven dynamo simulations, and quantify the level of magnetic field amplification with quantitative comparisons to ISM dynamos. Section 5 summarises the results and conclusions of the study.
## 2 Theoretical background
### The turbulent (small-scale) dynamo
#### 2.1.1 Kinematic (exponential) growth phase
In high \(\mathrm{Pm}=\nu/\eta\) plasmas (\(\mathrm{Pm}\gg 1\)) such as the ISM, there is little to no resistive decay (\(\eta\sim 0\)). The small-scale dynamo existing in the inner scales of hydrodynamic turbulence can grow exponentially from interactions with viscous eddies at the dissipation scale, \(\ell_{\nu}\sim k_{\nu}^{-1}\) (Batchelor, 1950; Schekochihin et al., 2002; Kulsrud & Anderson, 1992; Xu & Lazarian, 2016). In the kinematic regime, \(k_{\nu}<k<k_{\eta}\), where the magnetic excitation is strong, the magnetic energy spectrum in Fourier space has the spatial distribution given by the resistive Green's function solution of the Kazantsev equation:
\[M(k,t)=M_{0}\exp\left(\frac{3}{4}\int\Gamma dt\right)k^{3/2}K_{0}\left(\frac{k }{k_{\eta}}\right), \tag{1}\]
where \(K_{0}\) is the Macdonald function, and the magnetic spectrum evolves as \(M\sim k^{3/2}\)(Kazantsev, 1968; Kulsrud & Anderson, 1992; Federrath et al., 2011a). Based on Kazantsev theory, one can also obtain a definition of the magnetic energy, via an integral over the magnetic energy spectrum,
\[E_{\mathrm{mag}}=\frac{1}{2}v_{\mathrm{A}}^{2}=\frac{1}{2}\int_{0}^{k^{\prime}}M(k,t)dk, \tag{2}\]
where \(E_{\mathrm{mag}}\) is the specific magnetic energy, and \(v_{\mathrm{A}}\) is the Alfvén speed. Thus, the magnetic energy depends only on the viscous-scale eddies, \(k_{\nu}\sim\ell_{\nu}^{-1}\), an amplitude term for the initial magnetic energy, \(M_{0}=\epsilon_{0}/k_{\nu}\), and a reference scale \(k^{\prime}\), where \(k_{\nu}<k^{\prime}<k_{\eta}\). Coupling this with the conducting limit of the MHD induction equation (McKee et al., 2020; Beattie et al., 2022), we have
\[\frac{dE_{\mathrm{mag}}}{dt}=2\Gamma E_{\mathrm{mag}} \tag{3}\]
where the growth rate (\(\Gamma\)) is determined only by quantities at the dissipation scales,
\[\Gamma(t)=\frac{\langle(\mathbf{B}\otimes\mathbf{B}):(\nabla\otimes\mathbf{v} )\rangle_{\nu}}{\left\langle B^{2}\right\rangle_{\nu}}. \tag{4}\]
Thus, the magnetic energy \(E_{\mathrm{mag}}\) grows exponentially as \(\exp(2\Gamma t)\) throughout the kinematic regime. Additionally, since the fundamental scales of this regime are governed by folds and random stretching at the diffusive scale, we have \(\ell_{\nu}^{2}/\nu\sim\ell_{\eta}^{2}/\eta\), and hence \(k_{\eta}\sim k_{\nu}\mathrm{Pm}^{1/2}\), as proposed by Schekochihin et al. (2002), and confirmed recently by Kriel et al. (2022) and Brandenburg et al. (2022).
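The shape of the Kazantsev spectrum in Eq. 1 can be sketched numerically. The following is an illustrative stand-alone implementation (not part of any analysis pipeline used here), evaluating the Macdonald function through its integral representation \(K_{0}(x)=\int_{0}^{\infty}e^{-x\cosh t}\,dt\):

```python
import math

def bessel_k0(x, tmax=20.0, n=4000):
    """Macdonald function K_0(x) via its integral representation
    K_0(x) = int_0^inf exp(-x cosh t) dt (simple trapezoidal rule;
    adequate for x of order unity)."""
    h = tmax / n
    s = 0.5 * (math.exp(-x) + math.exp(-x * math.cosh(tmax)))
    for i in range(1, n):
        s += math.exp(-x * math.cosh(i * h))
    return s * h

def kazantsev_spectrum(k, k_eta, m0=1.0):
    """Spatial shape of the Kazantsev spectrum (Eq. 1 at fixed time):
    M(k) proportional to k^{3/2} * K_0(k / k_eta)."""
    return m0 * k ** 1.5 * bessel_k0(k / k_eta)

# The spectrum rises as ~k^{3/2} at low k and is cut off resistively;
# its peak sits close to the resistive scale k_eta:
k_eta = 1.0
ks = [0.1 * j for j in range(1, 40)]
ms = [kazantsev_spectrum(k, k_eta) for k in ks]
k_peak = ks[ms.index(max(ms))]
print(k_peak)
```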
#### 2.1.2 Transition to saturation (non-linear stage)
Now, we direct our attention towards the nonlinear stage of the dynamo, where the back-reaction by the Lorentz force is magnified enough that it is able to dampen the development of coherent structures; thus hindering the continual amplification of the field through the stretch-twist-fold-merge mechanism. Here we approach the peak scale of the magnetic spectrum, \(k_{\mathrm{peak}}=k^{\prime}\exp\left((3/5)\Gamma t\right)\). Xu & Lazarian (2016) argued that, by setting \(E_{\mathrm{mag}}\sim E_{\nu}\), where \(E_{\nu}\) is the turbulent kinetic energy at the diffusive scale, we can account for the field growth near equipartition, since it is the eddies at the stretching scale, \(\ell_{st}=k_{st}^{-1}\), where \(k_{\mathrm{inj}}<k_{st}\ll k_{\nu}\), that now dominate the interactions. Thus, we have
\[E_{\mathrm{mag}}=\frac{1}{2}(\nu\epsilon)^{1/2} \tag{5}\]
where \(\epsilon\) is the kinetic energy dissipation rate in the inertial range, whose value is determined from the injection scales of Kolmogorov turbulence, \(\epsilon=k_{\rm inj}v_{\rm inj}^{3}\); \(k_{\rm inj}=L_{\rm inj}^{-1}\). It can be seen from Eqn. 2 that for eddies \(k^{\prime}<k_{\rm peak}\), the dominant contribution to the magnetic energy always comes from the larger scales that seeded it, and no dependence is placed on the weaker fields whose contributions are negligible in the amplification process. Then, the magnetic energy amplifies until the peak of the power spectrum shifts to that of the viscous-scale eddies, and one can eliminate the dependence on \(k_{\nu}\), since there are then only dependencies on the injection-scale quantities, \(k_{\rm inj}\) and \(v_{\rm inj}\). By such dimensional arguments, one can then write
\[E_{\rm mag}=\frac{1}{2}(\nu\epsilon)^{1/2}\approx\frac{1}{2}v_{\rm inj}^{2}; \tag{6}\]
and finally, in the fully non-linear stage of the dynamo, we have minimal scale separation, such that \(k_{\nu}\sim k_{\rm peak}\). Thus, expanding the Macdonald function \(K_{0}\) in Eqn. 1 in the low-wavenumber limit, where \(K_{0}(k/k_{\eta})\approx\ln(k_{\eta}/k)\), one obtains a magnetic spectrum of the form (Xu & Lazarian, 2016, 2017, 2020):
\[M(k,t)=M_{0}\exp\left(\frac{3}{4}\int\Gamma dt\right)\left(\frac{k}{k_{\nu}}\right)^{3/2}. \tag{7}\]
Substituting this into Eqn. 2, and taking the time derivative \(d\ln(\ldots)/dt\) we have:
\[\frac{d\ln(E_{\rm mag})}{dt}\sim\frac{3}{4}\Gamma, \tag{8}\]
and hence
\[\frac{dE_{\rm mag}}{dt}\sim\frac{3}{4}\Gamma E_{\rm mag}\approx\frac{3}{8} \Gamma v_{\rm inj}^{2}, \tag{9}\]
where \(\Gamma\sim\alpha v_{\rm inj}/L_{\rm inj}\), and \(\alpha\) is of order unity, which simplifies it to a linear differential equation of the form:
\[\frac{dE_{\rm mag}}{dt}=\beta\epsilon. \tag{10}\]
Using the earlier definition for the energy dissipation rate, Xu & Lazarian (2016) found directly that \(\beta=3/38\) by accounting for the reconnection diffusion effect encountered in the nonlinear phase, where only a fraction of the total turbulent kinetic energy on the stretching to viscous scales contributes to the overall magnetic field amplification. The rest is dissipated via fast stochastic reconnection (Lazarian & Vishniac, 1999; Eyink et al., 2011), including natural mechanisms of viscous heating and turbulent diffusion (Kolmogorov, 1941; Kulsrud & Anderson, 1992). Similar scalings, with corresponding linear growth1, up until the suggested \(k_{\eta}\sim k_{\rm inj}{\rm Pm}^{1/2}{\rm Re}^{1/2}=k_{\rm inj}{\rm Rm}^{1/2}\) at saturation2, have also been observed in numerous prior works (Kulsrud & Anderson, 1992; Schekochihin et al., 2002; Cho et al., 2009; Beresnyak et al., 2009; Beresnyak, 2012).
Footnote 1: Alternatively, consider simply that \(v_{\rm str}/\ell_{\rm str}\sim\eta/\ell_{\eta}^{2}\), which gives \(\ell_{\eta}\sim(\ell_{\rm str}\eta/v_{\rm str})^{1/2}\sim(\eta t)^{1/2}\). The selective decay mechanism suppresses high \(k\)-modes, which triggers a magnetic back-reaction when \(B^{2}\sim v_{\rm str}^{2}\). Then, \(dE_{\rm mag}/dt\sim v_{\rm str}B^{2}/\ell_{\rm str}\sim v_{\rm str}^{3}/\ell_{\rm str}\sim\epsilon\), implying therefore that \(E_{\rm mag}\sim\epsilon t\)(Schekochihin et al., 2002; Cho et al., 2009; Beresnyak et al., 2009; Beresnyak, 2012).
Footnote 2: According to this scenario, a quasi-static balance is achieved, where nonlinear interactions arising from the injection (outer) scales of turbulence have dynamical time-scales comparable to folding at the resistive time-scales (i.e. \(\tau_{\rm inj}\sim\tau_{\eta}\)). Hence, \(\tau_{\rm inj}\sim L_{\rm inj}/v_{\rm inj}\sim\tau_{\eta}\sim\ell_{\eta}^{2}/\eta\), which yields the expected \(k_{\eta}\sim{\rm Pm}^{1/2}{\rm Re}^{1/2}k_{\rm inj}\)(Schekochihin et al., 2008; Galishnikova et al., 2022). Note that this idealised relation does not consider the effect of tearing-mediated turbulence concentrated within anisotropic current sheets (e.g., Galishnikova et al. (2022); Beattie et al. (2022)).
Hu et al. (2022) applied this model to analyse the dynamo growth rate in shock-driven turbulence; thus, we will also apply it for comparison with our simulations. It should be noted upfront that the Xu & Lazarian (2016) model applies in the non-linear stage of the dynamo, i.e., when the Lorentz force has become strong, as discussed in this subsection. However, the simulations discussed below have not reached this stage, as we will see, which makes a direct comparison to the Xu & Lazarian (2016) model difficult. Instead, our simulations here are in the exponential growth stage of the dynamo (often referred to as the 'kinematic' phase).
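To make the distinction between the two stages concrete, the following toy model glues the exponential kinematic solution of Eq. 3 onto the linear growth of Eq. 10 at the point where the magnetic energy reaches the viscous-scale kinetic energy. All parameter values are illustrative, not taken from the simulations:

```python
import math

def dynamo_energy(t, e0, gamma, e_nu, beta, eps):
    """Toy two-stage dynamo: exponential kinematic growth,
    E = e0 * exp(2*gamma*t) (Eq. 3), until E reaches the viscous-scale
    kinetic energy e_nu, then linear growth dE/dt = beta*eps (Eq. 10)."""
    t_star = math.log(e_nu / e0) / (2.0 * gamma)  # end of kinematic stage
    if t <= t_star:
        return e0 * math.exp(2.0 * gamma * t)
    return e_nu + beta * eps * (t - t_star)

# Illustrative values; beta = 3/38 as in Xu & Lazarian (2016):
e0, gamma, e_nu, beta, eps = 1e-10, 1.0, 1e-2, 3.0 / 38.0, 1.0
print(dynamo_energy(0.0, e0, gamma, e_nu, beta, eps))   # seed energy
print(dynamo_energy(15.0, e0, gamma, e_nu, beta, eps))  # linear stage
```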
### Lagrangian description of second-order statistics
Similar to the Eulerian description of turbulence, one can describe two-point statistics such as the second-order structure function and the energy spectra through the Lagrangian framework. A unique advantage of this perspective is that it allows the treatment of point-like particle trajectories, which are co-moving in the direction of velocity streamlines, such that each particle has a time-dependent position, \({\bf X}={\bf X}({\bf X}_{0},t_{0})\), based on the Eulerian fixed-in-space velocity field \({\bf V}({\bf X}({\bf X}_{0},t),t)\). Thus, trajectories in this frame of reference are not affected by advection, and therefore, each Lagrangian tracer particle represents a unique fluid/gas element that can be traced throughout the simulation. Through this, one can define the Lagrangian second-order structure function as
\[\mathcal{S}_{2}^{\Phi}(\Delta t)=\left\langle\left|\Phi_{j}(t+\Delta t)-\Phi_{j}(t)\right|^{2}\right\rangle \tag{11}\]
where \(\Phi\) is an arbitrary vector field and \(j=x,y\) denotes the longitudinal and transverse components of \(\Phi\), over which we take increments along each particle trajectory and average the values obtained as an ensemble of realisations. This quantity is spatially invariant in homogeneous turbulence and is also rotationally invariant in isotropic flow (Frisch & Kolmogorov, 1995). In the inertial subrange \(k_{\rm inj}<k<k_{\nu}\), where \(k_{\rm inj}\) is the injection (forcing) scale, the energy spectrum follows a constant-flux cascade. Thus, it can be shown from the constant-flux ansatz that the second-order LSF, with respect to the Lagrangian frequency \(\omega\sim(\epsilon/\nu)^{1/2}\), has the form
\[\mathcal{S}_{2}(\Delta t)\sim(\Delta t)^{p} \tag{12}\]
up to small-scale intermittency corrections (Benzi et al., 1993; Homann et al., 2007; Arneodo et al., 2008; Benzi et al., 2010; Busse et al., 2010; Konstandin et al., 2012; Beresnyak, 2015). The velocity LSF follows a linear scaling (\(p=1\)) based on the Kolmogorov bridge relations, as detailed below.
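A direct numerical sketch of Eq. 11, computing \(\mathcal{S}_{2}\) from a set of tracer time series, can be written as follows (the input here is synthetic, not the simulation tracers):

```python
def lagrangian_sf2(trajectories, lag):
    """Second-order Lagrangian structure function (Eq. 11):
    S2(dt) = < |phi(t + dt) - phi(t)|^2 >, averaging over all tracers
    and all time origins. `trajectories` is a list of per-tracer,
    equally spaced time series of one field component."""
    total, count = 0.0, 0
    for phi in trajectories:
        for i in range(len(phi) - lag):
            total += (phi[i + lag] - phi[i]) ** 2
            count += 1
    return total / count

# Synthetic check: a linear signal phi(t) = a*t has S2(dt) = (a*dt)^2.
a = 2.0
traj = [[a * i for i in range(100)]]
print(lagrangian_sf2(traj, lag=3))  # (2*3)^2 = 36
```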
If we assume Kolmogorov (K41) (Kolmogorov, 1941) or Goldreich-Sridhar (GS95) scaling (Sridhar & Goldreich, 1994; Goldreich & Sridhar, 1995, 1997), both of which give \(E(k)\sim\epsilon^{2/3}k^{-5/3}\) (\(k=k_{\perp}\) for GS95) in Eulerian space, one can easily show that
\[E(\omega)\sim\epsilon\omega^{-2} \tag{13}\]
is the expected scaling obtained for the kinetic energy spectrum (Inoue, 1951; Corrsin, 1963; Tennekes & Lumley, 1972; Tennekes, 1975; Frisch & Kolmogorov, 1995). We further note that in the three-dimensional incompressible MHD simulations of Busse et al. (2010), excellent agreement was found for this scaling law given by Eqn. 13, consistent with prior experimental (Mordant et al., 2004) and numerical results (Yeung et al., 2006). However, for two-dimensional simulations, it was found that \(E(\omega)\sim\omega^{-3/2}\), in accordance with the Iroshnikov-Kraichnan (IK) phenomenology of turbulence, where \(E(k)\sim k^{-3/2}\)(Iroshnikov, 1964; Kraichnan, 1965, 1977; Gogoberidze, 2007) for the wavenumber spectra. Thus, on the basis that
dynamical alignment at large scales (Mason et al., 2006) are dominated by Eulerian sweeping effects, Busse et al. (2010) suggested that the relevant timescale for the Lagrangian frequency spectrum should be the Eulerian correlation time. Therefore, following the Eulerian definition of a time spectra, with the ansatz of frequency-wavenumber self-similarity, i.e. \(\omega E(\omega)\sim kE(k)\), an analogous IK scaling is found which is identical to the Eulerian time-frequency spectra (Tennekes, 1975; Busse et al., 2010).
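For reference, the K41 bridge argument behind Eqn. 13 can be written out explicitly as a short dimensional sketch:

```latex
% Eddies of size \ell have turnover time and kinetic energy
\tau(\ell) \sim \epsilon^{-1/3}\ell^{2/3}, \qquad
v^{2}(\ell) \sim (\epsilon\ell)^{2/3} \sim \epsilon\,\tau(\ell).
% Identifying the Lagrangian frequency \omega \sim \tau^{-1} and
% requiring \omega\,E(\omega) \sim v^{2}(\ell) gives
E(\omega) \sim \frac{v^{2}(\ell)}{\omega}
          \sim \epsilon\,\tau^{2}
          \sim \epsilon\,\omega^{-2},
% which is Eqn. (13).
```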
We note here also that a Burgers spectrum (Burgers, 1995), \(E(k)\sim k^{-2}\), occurs in shock-dominated, highly supersonic flows (Federrath, 2013; Federrath et al., 2021). Since \(v\sim\ell^{1/2}\), and assuming \(t_{\rm ac}\sim t_{\rm cas}\), where \(t_{\rm ac}\) and \(t_{\rm cas}\) are the autocorrelation and cascade timescales, respectively, we have \(t_{\rm cas}\sim\ell/v\sim\ell^{1/2}\), yielding \(v^{2}\sim t_{\rm cas}^{2}\sim\omega^{-2}\). Thus, the corresponding Lagrangian frequency spectrum should therefore scale as3:
Footnote 3: This is a spectrum with no mathematically self-similar second-order structure function (SF2), since \(\mathrm{SF}_{2}(\tilde{t})=2\left(v^{2}-\int_{-\infty}^{\infty}E\left(\omega \right)\exp\left(i\omega\tilde{t}\right)d\omega\right)=2\int_{-\infty}^{ \infty}\left[1-\exp\left(i\omega\tilde{t}\right)\right]E\left(\omega\right)d\omega\) using Wiener-Khinchin theorem, is conditionally convergent only when \(E\left(\omega\right)\sim\omega^{-n}\) with \(n\in(1,3)\).
\[E(\omega)\sim\omega^{-3}. \tag{14}\]
### Decaying MHD turbulence
In shock-driven turbulence without additional external turbulence driving, supersonic turbulence decays very rapidly on time scales of roughly one turnover time (Scalo & Pumphrey, 1982; Stone et al., 1998; Mac Low et al., 1998; Mac Low, 1999; Federrath & Klessen, 2012). Such time scales emphasise the importance of turbulence driving mechanisms (Mac Low & Klessen, 2004; Schleicher et al., 2010; Federrath et al., 2016; Sur, 2019), which continuously supply kinetic energy into the system to allow for amplification of a small-scale seed magnetic field (Schober et al., 2012; Schleicher et al., 2013; Seta & Federrath, 2020, 2021).
Numerical simulations with large-scale mean fields (Mac Low et al., 1998) and even seeded kinetic helicity (\(H^{k}=v\cdot(\nabla\times v)\)) (Brandenburg & Petrosyan, 2012; Brandenburg et al., 2019) have shown that turbulent (or mean in the large-scale dynamo setting) magnetic fields can decay rapidly together with the kinetic energy, such that saturation or strong magnetic fields can never be achieved. The increased alignment of the velocity and magnetic fields associated with this process (Servidio et al., 2008), suggests that even turbulence driven with a very strong shock, followed by a transient period of quiescence, will not be able to completely amplify small-scale magnetic fields.
Here we also expect such phenomena to occur. Thus, the time-dependence of the kinetic energy will need to be quantified in this undriven (decaying) turbulent configuration for an accurate understanding of how magnetic fields can amplify in decaying ISM post-shock media. We note that in subsonic, incompressible turbulence, the kinetic energy follows a power-law decay, \(E_{\rm kin}\sim\left<v^{2}\right>\propto t^{-n}\), where \(n=6/5\) if the Saffman integral is invariant (Saffman, 1967), and \(n=10/7\) if the Loitsyansky integral is conserved (Proudman & Reid, 1954) (see also Davidson, 2000; Krogstad & Davidson, 2010; Davidson, 2010). In supersonic, isothermal turbulence, it has been found that \(0.85<n<1.2\)(Mac Low et al., 1998; Mac Low, 1999), suggesting a decay much closer to that of the Saffman invariant. Further numerical experiments (Biskamp & Muller, 1999, 2000; Muller & Biskamp, 2000; Banerjee & Jedamzik, 2004; Frick & Stepanov, 2010; Berera & Linkmann, 2014; Brandenburg et al., 2015; Brandenburg & Kahniashvili, 2017; Reppin & Banerjee, 2017; Sur, 2019; Bhat et al., 2021) in three-dimensional non-helical4 MHD turbulence also confirm scalings very close to the Saffman invariant, as well as the later Biskamp & Muller (1999) scaling (\(n=1\)) based on 2D anastrophy conservation5.
Figure 1: Schematic of the geometrical configuration used in the present study. The physical system is identical to Dhawalikar et al. (2022), resembling the current experimental test setup at the NIF. The laser-driven shock hits the ablator at \(y\approx 0.3\) mm, and propagates further through the cylindrical tube in the \(y\)-direction, subsequently interacting with the foam material, which are shown as black circles. The top panel shows a slice along the \(z\)-direction though the centre of the tube, while the bottom panel shows a slice along the \(y\)-direction, again at centre of the tube.
Thus, here in our numerical experiment, we test these decay laws, and quantify the decay found in our simulations, suggesting how it may affect the dynamo growth rate over longer timescales.
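One simple way to quantify such a decay law from simulation output is a least-squares fit of the exponent in log-log space. A stand-alone sketch, with synthetic data mimicking the near-Saffman decay quoted in the abstract:

```python
import math

def fit_decay_exponent(times, e_kin):
    """Least-squares slope of log(E_kin) vs log(t): returns n for a
    decay law E_kin ~ t^(-n) (Saffman: n = 6/5; Loitsyansky: n = 10/7)."""
    xs = [math.log(t) for t in times]
    ys = [math.log(e) for e in e_kin]
    m = len(xs)
    mx = sum(xs) / m
    my = sum(ys) / m
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return -num / den

# Synthetic data following the near-Saffman decay reported here:
ts = [1.0 + 0.5 * i for i in range(20)]
es = [t ** (-1.15) for t in ts]
n_fit = fit_decay_exponent(ts, es)
print(n_fit)  # recovers 1.15 for exact power-law input
```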
## 3 Numerical Simulations
### Governing Equations
We use a modified version of the FLASH code (Fryxell et al., 2000), with the HLL3R 3-wave approximate Riemann solver (Bouchut et al., 2010; Waagan et al., 2011), to solve the fully three-dimensional, compressible MHD equations,
\[\frac{\partial\rho}{\partial t}+\nabla\cdot(\rho\mathbf{v})=0, \tag{15}\]
\[\rho\left(\frac{\partial}{\partial t}+\mathbf{v}\cdot\nabla\right)\mathbf{v}=\frac{1}{4\pi}(\mathbf{B}\cdot\nabla)\mathbf{B}-\nabla\left(p_{\mathrm{th}}+\frac{B^{2}}{8\pi}\right)+\nabla\cdot(2\nu\rho\mathcal{S})+\rho\mathbf{F} \tag{16}\]
\[\frac{\partial\mathbf{B}}{\partial t}=\nabla\times(\mathbf{v}\times\mathbf{B})+\eta \nabla^{2}\mathbf{B} \tag{17}\]
\[\nabla\cdot\mathbf{B}=0 \tag{18}\]
where \(\rho\), \(\mathbf{v}\), \(p_{\mathrm{tot}}=p_{\mathrm{th}}+(1/8\pi)|\mathbf{B}|^{2}\), \(\mathbf{B}\), and \(e=\rho e_{\mathrm{int}}+(1/2)\rho|\mathbf{v}|^{2}+(1/8\pi)|\mathbf{B}|^{2}\) denote the gas density, velocity, total pressure (sum of the thermal and magnetic pressures), magnetic field, and energy density (sum of the internal, kinetic and magnetic energy densities), respectively. \(\mathcal{S}_{ij}=(1/2)\left(\partial_{i}v_{j}+\partial_{j}v_{i}\right)-(1/3)\delta_{ij}\nabla\cdot\mathbf{v}\) is the traceless rate-of-strain tensor, i.e., the symmetric part of the velocity gradient tensor, which accounts for physical shear viscosity. Here \(\mathbf{F}\), the turbulence driving term, is set to zero, since we do not drive turbulence. The quantities \(\nu\) and \(\eta\) are the kinematic viscosity (dynamic viscosity divided by density) and the magnetic resistivity, respectively. Here we do not specify these dissipative terms, and instead use the numerical viscosity and resistivity inherent in the Riemann flux functions as a subgrid-scale model for dissipation (Garnier et al., 1999). Thus, we perform implicit large-eddy simulations (ILES). We close the MHD equations with an equation of state (EOS) for an ideal monoatomic gas, i.e., \(p_{\mathrm{th}}=\rho e_{\mathrm{int}}(\gamma-1)\), where \(\gamma=5/3\) is the ratio of specific heats.
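The EOS closure and the pressure decomposition can be expressed compactly. The following helper functions are an illustrative sketch of these relations, not FLASH code:

```python
def thermal_pressure(rho, e_int, gamma=5.0 / 3.0):
    """Ideal-gas EOS closure: p_th = rho * e_int * (gamma - 1)."""
    return rho * e_int * (gamma - 1.0)

def total_pressure(rho, e_int, b2_over_8pi, gamma=5.0 / 3.0):
    """p_tot = p_th + B^2/(8 pi); b2_over_8pi is the magnetic pressure."""
    return thermal_pressure(rho, e_int, gamma) + b2_over_8pi

def plasma_beta(c_s, v_a):
    """beta = 2 c_s^2 / v_A^2, the convention used below for setting
    the initial field strength."""
    return 2.0 * c_s ** 2 / v_a ** 2

# For a monoatomic gas (gamma = 5/3): p_th = (2/3) rho e_int
print(thermal_pressure(rho=1.0, e_int=3.0))  # ~2.0
print(plasma_beta(c_s=1.0, v_a=2.0))         # 0.5
```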
### Initial conditions and flow configuration
Fig. 1 displays the initial configuration used in the present study. The geometry is identical to that used in Dhawalikar et al. (2022), and corresponds to the one currently being tested in the shock tunnel at the National Ignition Facility (NIF). The foam within the cylindrical domain is modelled as a CH-based polymer, and the foam voids with radius \(r=25\) mm are air bubbles contained within the foam, serving as the precursor small-scale density inhomogeneities that generate post-shock turbulence. Although the laser-driven blast wave propagating into the medium may inherently cause changes in the material chemistry, induce radiation via inverse Bremsstrahlung, and produce cooling effects, we do not consider these processes, since the primary purpose of this setup is to study the turbulent dynamics of a post-shock medium generated by a shock running over a pre-structured medium. The thermodynamic properties are not a primary concern for this, as long as a reasonable turbulent density and velocity field results from the interaction, which is the case (Dhawalikar et al., 2022). Neglecting these effects also allows us to make thorough comparisons of our numerical results to other studies of post-shock turbulence, as well as of small-scale dynamo processes in the ISM. In order to study the growth of a turbulent magnetic field, we inject a small-scale turbulent magnetic field of \(B_{\mathrm{turb}}=5.5\times 10^{-5}\) G, and a mean guide field in the \(y\)-direction (streamwise) of the same value, corresponding to an initial plasma \(\beta=2c_s^{2}/v_A^{2}=1\times 10^{16}\). The turbulent field is initialised using Fourier modes, with a power law at large scales, \(2\leq kL/2\pi\leq 20\), where \(L\) is the 3D turbulent box size and \(k\) the wavenumber, following a Kazantsev spectral scaling with a power-law exponent of \(3/2\) (see Sec. 2).
We also test a parabolic power spectrum with no mean field in the streamwise direction, with the magnetic field injected at even larger scales, \(1\leq kL/2\pi\leq 3\), similar to that used in Seta & Federrath (2020, 2022), and find negligible differences in the overall qualitative properties (i.e., the magnetic field amplification and other
Figure 2: Density distribution showing the initial Lagrangian volume chosen in the post-shock medium at \(t=t_{1}=26.1\) ns, consisting of about \(2\times 10^{5}\) tracers. The volume chosen is a cylinder with radius \(0.03\) cm, in accordance with the flow configuration itself. (a) \(z\)-projected density distribution, (b) \(y\)-projected density distribution, centred on the respective mid-plane of the shock tube. Tracer particles are shown as white points (note that each tracer technically corresponds to exactly the size of a grid cell, as we are using the cloud-in-cell particle-mesh interpolation scheme, i.e., while this graphical representation plots them as point particles, they actually occupy/trace the entire cylindrical volume in which they were initialised as a collective).
time-dependent properties remain the same). The turbulent initial magnetic fields were generated with the publicly available TurbGen code (Federrath et al., 2010, 2022).
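The power-law seed-field initialisation described above can be sketched in a few lines of numpy. This is a minimal illustration of generating a random 3D field with a band-limited \(P(k)\propto k^{3/2}\) spectrum via Fourier modes; it is not the TurbGen implementation, and the function name and normalisation are ours.

```python
import numpy as np

def powerlaw_field(n=32, kmin=2, kmax=20, slope=1.5, seed=42):
    """Generate a periodic 3D random field whose power spectrum follows
    P(k) ~ k^slope inside kmin <= |k| <= kmax and is zero outside,
    mimicking the Kazantsev-type initial spectrum described in the text.
    Minimal sketch only, not the TurbGen algorithm."""
    rng = np.random.default_rng(seed)
    k = np.fft.fftfreq(n, d=1.0 / n)  # integer wavenumbers kL/2pi
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kz**2)
    # power-law amplitude inside the injection band, zero elsewhere
    amp = np.where((kmag >= kmin) & (kmag <= kmax), kmag**(slope / 2.0), 0.0)
    # random complex Gaussian phases for each mode
    phases = rng.standard_normal((n, n, n)) + 1j * rng.standard_normal((n, n, n))
    field = np.fft.ifftn(amp * phases).real
    return field / field.std()  # normalise to unit rms

b = powerlaw_field()
```

In practice one such realisation would be generated per field component and rescaled to the target rms of \(5.5\times 10^{-5}\) G.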
### Grid and Lagrangian statistics
The simulation domain is a uniform grid with \(384\times 512\times 384\) cells, with outflow boundary conditions (as in Dhawalikar et al., 2022). For sampling the Lagrangian statistics, we initialise \(384\times 512\times 384\) tracer particles (one in each grid-cell centre). This is comparable to the number of tracers used in prior high-resolution periodic-box simulations (Biferale et al., 2004; Arneodo et al., 2008; Benzi et al., 2010; Homann et al., 2007; Konstandin et al., 2012), thus allowing us to sample the time dynamics reliably.
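The tracers sample grid quantities via the cloud-in-cell particle-mesh scheme (cf. the caption of Fig. 2). The interpolation step can be sketched as follows; this is our own minimal version with periodic boundaries for simplicity, unlike the outflow boundaries of the actual run.

```python
import numpy as np

def cic_interpolate(field, pos):
    """Cloud-in-cell (CIC) interpolation of a 3D grid field to particle
    positions given in grid units: each particle receives a trilinearly
    weighted average of its 8 surrounding cell values. Periodic wrapping
    is assumed here purely for illustration."""
    n = field.shape[0]
    out = np.zeros(len(pos))
    for p, x in enumerate(pos):
        i0 = np.floor(x).astype(int)  # lower cell index
        f = x - i0                    # fractional offset within the cell
        for dx in (0, 1):
            for dy in (0, 1):
                for dz in (0, 1):
                    w = ((1 - f[0]) if dx == 0 else f[0]) * \
                        ((1 - f[1]) if dy == 0 else f[1]) * \
                        ((1 - f[2]) if dz == 0 else f[2])
                    out[p] += w * field[(i0[0] + dx) % n,
                                        (i0[1] + dy) % n,
                                        (i0[2] + dz) % n]
    return out
```

A constant field is reproduced exactly, and a linear ramp is recovered at intermediate positions, which is the defining property of the trilinear CIC kernel.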
In order to investigate the Lagrangian statistics specifically within the moving post-shock turbulent medium, we select a subset of tracer particles in the turbulent region behind the propagating shock front (i.e., where the shock has already passed), similar in size to the turbulence analysis region used in Dhawalikar et al. (2022). The cylindrical region chosen here (see Fig. 2) is wide enough for such analyses, and we are able to sample over \(2\times 10^{5}\) tracers throughout the time evolution. This allows us to examine the growth rate of the magnetic field while staying clear of the domain boundaries, thereby avoiding shock reflection (diffraction) effects or interactions with the ablator or pre-shock medium, which typically result in abrupt vorticity and magnetic field amplifications that are not associated with SSD action. We also ensure that the tracers do not sample the flow properties within the stratified shear instabilities, which only develop much further behind the shock front at the later stages of the time evolution.
## 4 Results and discussion
Table 1 lists the computed mean values of the post-shock variables in the material volume traced throughout the time evolution. Crucially, the turbulent time (large-eddy turnover time) is calculated from the largest length scale in the moving volume, which approximates the integral length scale in our simulations. This quantity is used throughout the time-evolution analyses below.
Figure 4: Same as Fig. 3, but at \(t=t_{3}=60\) ns and with magnetic field lines (shown as blue streamlines) superimposed. The filamentary and tangled nature of the field is clearly visible. The collective of tracer particles is shown as white dots in these projections.
Figure 3: Same as Fig. 2, but at \(t=t_{2}=40.0\) ns. The Lagrangian volume traced by the tracer particles has evolved into a complex structure. However, by the definition of the Lagrangian tracers, the collective of tracer particles still traces the same material as they were initialised in (cf., Fig. 2), allowing us to study the magnetic field amplification and other turbulent properties, for exactly the same material at any given time.
### Time evolution and probability distributions
Fig. 3 displays the later stage of the time evolution of the density distribution with the Lagrangian tracers superimposed. The tracers clearly begin to disperse rapidly from their original positions owing to the highly turbulent nature of the post-shock medium. As the shock front propagates further downstream, it becomes corrugated in shape, similar to that observed in Ji et al. (2016) and Hu et al. (2022), due to interactions with the density inhomogeneities. Such changes in the global curvature of the shock further lead to enhanced vorticity production, particularly in the shock-parallel direction (Kevlahan, 1997). Furthermore, Fig. 4 clearly shows that the topology of the magnetic field lines is very tangled and filamentary in nature. This is an indicator of a turbulent dynamo mechanism (Federrath, 2016).
Fig. 5 shows the time evolution of the \(x\), \(y\) and \(z\)-components of the turbulent velocity dispersion (mass-weighted, as they were computed on the tracer particles) across all tracers in the moving post-shock volume. The velocity dispersion starts off at rather large values within the Lagrangian volume, of order \(10^{5}\,\mathrm{cm\,s^{-1}}\), with the streamwise component \(\sigma_{v_{y}}\) always slightly higher than the other two components, since it corresponds to the shock direction, where the shock profile was first injected. However, the values decay to almost half their initial values over less than half a turbulent turnover time. Such behaviour cannot be explained purely by conversion of kinetic energy to magnetic energy, and is fundamentally indicative of decaying turbulence (Mac Low et al., 1998; Mac Low, 1999), where a fraction of the kinetic energy decays away as the corrugated shock front runs down the domain.
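The dispersion curves of Fig. 5 reduce to a simple operation over the tracer ensemble. Since each tracer follows a fixed mass element, an unweighted standard deviation over tracers is effectively mass-weighted. A minimal sketch, with helper names of our own choosing:

```python
import numpy as np

def velocity_dispersion_components(vel):
    """Per-component velocity dispersion (sigma_vx, sigma_vy, sigma_vz)
    over an ensemble of tracers; vel has shape (N_tracers, 3). The plain
    std over tracers is effectively mass-weighted, because each tracer
    follows a fixed mass element."""
    return vel.std(axis=0)

def velocity_dispersion_3d(vel):
    """3D velocity dispersion, combining the components in quadrature
    (cf. sigma_{v,3D} = sqrt(3) sigma_v for isotropic turbulence)."""
    return np.sqrt((vel.std(axis=0) ** 2).sum())
```

Evaluating these at each snapshot over the selected tracer subset yields the time series plotted in Fig. 5.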
Fig. 6 shows the time evolution of the magnetic field, where we notice substantial correlations with the corresponding velocity fields. The magnetic fields are gradually amplified over a short time scale, while the velocities decay. The streamwise field (\(\sigma_{B_{y}}\)) is always larger than the other components, likely owing to the additional amplification originating from shock compression. All values clearly indicate the anisotropic nature of the turbulent quantities, which crucially leads to the enhanced anisotropic nature of the vorticity. Our simulations further indicate that the magnetic field amplification by the turbulent dynamo effect does not exceed an order of magnitude. This is similar to prior numerical works (Giacalone and Jokipii, 2007; Hu et al., 2022) with only slightly longer time evolution, where the seeded mean turbulent field amplifies by about a factor of 2 in half a turnover time. They, however, primarily focused on the maximum amplifications, whereas we here consider the mass-averaged quantities through the Lagrangian framework, thereby removing compression effects from dynamo action. Moreover, the magnetic field amplification in our system is accompanied by a high degree of turbulent diffusion, so that no distinct phases or regimes can be observed in the averaged turbulent magnetic field evolution.
In order to elucidate the effects of the shock compression and its influence on the magnetic field, we plot the mean density and the density dispersion (Fig. 7). We note that at \(t\approx 0.2t/T_{\rm ed}\), the density values begin to rise in both quantities, and display an evolution similar to the magnetic field components (Fig. 6). Such a result is typical of strongly compressive flows (Sur et al., 2010; Federrath et al., 2011), where the magnetic field amplifies as \(|\mathbf{B}|\sim\langle\rho\rangle^{p}\), where \(p\) is some positive power and \(\langle\rho\rangle\) is the mean density of the region of interest. Thus, to distinguish dynamo action from shock-compression-induced magnetic field amplification, the effect of the compression has to be corrected for, in order to isolate the purely turbulent magnetic field amplification, i.e., dynamo action. A common strategy to account for the effect of compression is to divide the magnetic field by the density to some power (Sur et al., 2010; Federrath et al., 2011). For instance, in a 3D medium in which the magnetic field is compressed in all three spatial directions, \(B\sim\langle\rho\rangle^{2/3}\), because of mass and magnetic flux conservation during compression.
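The density normalisation just described is a one-liner; the sketch below applies it with the isotropic-compression exponent \(p=2/3\) as the default (the function name and interface are ours).

```python
import numpy as np

def compression_corrected_B(B, rho, rho0, p=2.0 / 3.0):
    """Remove the expected compressive amplification B ~ rho^p, isolating
    field growth attributable to turbulent (dynamo) amplification.
    p = 2/3 corresponds to isotropic 3D compression, by mass and
    magnetic-flux conservation (cf. Sur et al. 2010; Federrath et al. 2011).
    B, rho may be scalars or arrays; rho0 is the reference mean density."""
    return B / (rho / rho0) ** p
```

For example, a field amplified by a factor of 4 purely through an eight-fold isotropic compression is mapped back to its uncompressed value, so any residual growth measures dynamo action.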
Further to this, we also find that the turbulent density dispersion (standard deviation of the density) amplifies by a factor of two, a
\begin{table}
\begin{tabular}{l l l} \hline \hline _Post-shock parameters_ & Definition/Symbol & Mean \\ \hline Mean density & \(\rho\) & \(0.13\,\mathrm{g\,cm^{-3}}\) \\ Turbulent Alfvén speed & \(v_{A}=|\mathbf{B}|/\sqrt{4\pi\rho}\) & \(1.76\times 10^{-9}\,\mathrm{cm\,s^{-1}}\) \\ Turbulent plasma beta & \(\beta=2c_{s}^{2}/v_{A}^{2}\) & \(2.29\times 10^{15}\) \\
3D turbulent velocity & \(\sigma_{v,\mathrm{3D}}=\sqrt{3}\sigma_{v}\) & \(11.9\,\mathrm{km\,s^{-1}}\) \\ Sound speed & \(c_{s}=\sqrt{\gamma P/\rho}\) & \(20.0\,\mathrm{km\,s^{-1}}\) \\ Injection length scale & \(L_{\mathrm{inj}}\) & \(0.14\,\mathrm{cm}\) \\ Turbulent turnover time & \(T_{\mathrm{ed}}=L_{\mathrm{inj}}/\sigma_{v}\) & \(217\,\mathrm{ns}\) \\ Alfvén Mach number & \(\mathcal{M}_{A}=\sigma_{v}/v_{A}\) & \(3.89\times 10^{14}\) \\ Mach number & \(\mathcal{M}=\sigma_{v}/c_{s}\) & \(0.31\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Calculated parameters in the post-shock turbulent medium. The large-scale turbulent turnover time, \(T_{\mathrm{ed}}\), is computed with the largest length scale that the Lagrangian volume occupies during the time evolution.
Figure 5: Time evolution of the Cartesian components of the turbulent velocity dispersion computed as an average across all tracers initially marked in Fig. 2. The time is in units of the turbulent turnover time as defined in Tab. 1. We clearly see the decaying nature of the turbulence in the post-shock turbulent medium traced by the tracer particles.
Figure 6: Same as Fig. 5, but for the standard deviation of the magnetic field components.
value very similar to that observed in Dhawalikar et al. (2022), even with mass-averaged quantities. In order to quantify this, we show the probability distribution functions (PDFs) of the logarithmic density contrast \(s=\ln(\rho/\langle\rho\rangle_{m})\) time-averaged on the tracer particles within the Lagrangian volume in Fig. 8, the magnetic field PDFs in Fig. 9, and the Mach number PDFs in Fig. 10. Here we notice that the density PDF displays salient characteristics similar to those found by Dhawalikar et al. (2022), with a log-normal shape at low to intermediate densities and a power-law tail at high densities, despite the fact that we have utilised mass-averaged quantities, for which substantial quantitative differences can exist (see e.g., Konstandin et al., 2012). The magnetic field PDFs in Fig. 9 show that the magnetic fields are spatially intermittent, with non-Gaussian stretched tails. This is consistent with the log-normality condition of the magnetic field PDF in the kinematic SSD based on the white-in-time Fokker-Planck model (Boldyrev and Schekochihin, 2001; Schekochihin and Kulsrud, 2001; Schekochihin et al., 2002c, 2004), i.e., the \(B\)-field components themselves will be non-Gaussian and spatially intermittent. The \(B_{y}\) component is slightly different from the rest, and occupies a slightly larger volume fraction. This is expected, since the magnetic field in the shock direction is always larger than the other components, producing larger fluctuations compared to the \(x\) and \(z\) components. Nonetheless, we note that the spatially intermittent character of the PDFs is an indicator of the presence of the turbulent dynamo (Seta and Federrath, 2021, 2022), which has not yet reached saturation6. The Mach number PDFs (Fig. 10) clearly illustrate a similar pattern to that observed for the magnetic ones, where the occupied volume in the shock direction is always larger, simply because it has larger variations near the shock front. They are, however, Gaussian, as expected for fully-developed turbulent flows (Federrath, 2013; Dhawalikar et al., 2022). Overall, this highlights the role of the shock front in creating not only turbulent Mach number variations, but also turbulent magnetic field amplification, as mentioned earlier, in the post-shock medium.
Footnote 6: At \(v\sim B\) (saturated state), the log-normal magnetic field PDFs become increasingly Gaussian (non-intermittent), resembling then the quasi-normal velocity PDFs in a causal manner. This scenario is traced out nicely in Seta and Federrath (2021, 2022).
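The density PDF of Fig. 8 follows directly from histogramming the logarithmic density contrast over the tracer ensemble; a minimal numpy sketch (helper name ours):

```python
import numpy as np

def log_density_pdf(rho, bins=50):
    """PDF of the logarithmic density contrast s = ln(rho / <rho>),
    computed over the tracer ensemble so that the average is effectively
    mass-weighted. Minimal sketch of the histogramming behind Fig. 8."""
    s = np.log(rho / rho.mean())
    pdf, edges = np.histogram(s, bins=bins, density=True)
    centres = 0.5 * (edges[:-1] + edges[1:])
    return centres, pdf
```

With `density=True`, the returned histogram integrates to unity, so log-normal cores and power-law tails can be read off directly in the \(s\) variable.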
Fig. 12 shows the time evolution of the normalised turbulent cross helicity, \(H^{c}\), which decreases in the initial time evolution up until \(t=0.3t/T_{\rm ed}\). This is associated with the gradual entanglement of the magnetic and velocity field lines, which explains the growth of the magnetic field during this period of the time evolution. Examining all components in Fig. 6, we can observe an intimate connection between the cross helicity and the consequent decay of the magnetic fields at later time intervals. The associated increase of \(H^{c}\) from \(t\approx 0.3t/T_{\rm ed}\) leads to the increased alignment of \(\mathbf{v}\) and \(\mathbf{B}\), which inhibits the generation of the e.m.f. This explains the decay of the magnetic fields at late times.
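A normalised cross helicity can be computed over the tracer ensemble as sketched below. The normalisation convention (by the rms of \(\mathbf{v}\) and \(\mathbf{B}\), so that \(|H^{c}|\leq 1\)) is an assumption on our part, not taken verbatim from the paper.

```python
import numpy as np

def normalised_cross_helicity(v, B):
    """Normalised cross helicity H^c = <v.B> / sqrt(<|v|^2><|B|^2>)
    over tracer samples v, B of shape (N, 3). |H^c| -> 1 means full
    alignment of v and B, which suppresses the e.m.f. and hence dynamo
    growth. Normalisation convention assumed for illustration."""
    vdotB = np.einsum("ij,ij->i", v, B).mean()
    norm = np.sqrt((v**2).sum(axis=1).mean() * (B**2).sum(axis=1).mean())
    return vdotB / norm
```

Aligned fields give \(+1\), anti-aligned fields \(-1\), and orthogonal fields \(0\), which makes the late-time rise of \(H^{c}\) in Fig. 12 directly interpretable as growing \(\mathbf{v}\)–\(\mathbf{B}\) alignment.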
Further to this, in order to examine the contribution of small-scale solenoidal modes in the flow, we show the solenoidal ratio (Kida and Orszag, 1990; Kida and Orszag, 1992; Kritsuk et al., 2007; Federrath et al., 2010; Pan et al., 2016), defined as
\[r_{\rm cs}\equiv\frac{\left\langle|\nabla\times\mathbf{v}|^{2}\right\rangle}{\left\langle|\nabla\cdot\mathbf{v}|^{2}\right\rangle+\left\langle|\nabla\times\mathbf{v}|^{2}\right\rangle}, \tag{19}\]
which measures the contribution of the vorticity (\(\mathbf{\omega}=\nabla\times\mathbf{v}\)) relative to the full velocity field (sum of vorticity and divergence). This ratio is bounded in \([0,1]\), and thus provides a good indicator of the vorticity fraction in the local flow. Fig. 13 displays this ratio, and shows that at the small scales for which this quantity is computed, the solenoidal modes (\(\nabla\times\mathbf{v}\)) are much larger than the contributions from compressive modes (\(\nabla\cdot\mathbf{v}\)). High values are expected in the case of post-shock turbulence (Kritsuk et al., 2007; Pan et al., 2016), since such drivers, while compressive in nature, still tend to induce high fractions of solenoidal modes in the flow (Federrath et al., 2010; Kritsuk et al., 2011; Federrath and Klessen, 2013).
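Eqn. 19 can be evaluated on a gridded velocity field with periodic central differences, as in the following sketch (our own minimal version; the simulation uses its own stencils and boundary treatment):

```python
import numpy as np

def solenoidal_ratio(vx, vy, vz, dx=1.0):
    """Small-scale solenoidal ratio of Eqn. 19:
    <|curl v|^2> / (<|div v|^2> + <|curl v|^2>),
    using second-order periodic central differences. Arrays are indexed
    [i, j, k] along (x, y, z)."""
    def d(f, axis):
        # central difference with periodic wrap-around
        return (np.roll(f, -1, axis) - np.roll(f, 1, axis)) / (2 * dx)
    div = d(vx, 0) + d(vy, 1) + d(vz, 2)
    cx = d(vz, 1) - d(vy, 2)
    cy = d(vx, 2) - d(vz, 0)
    cz = d(vy, 0) - d(vx, 1)
    curl2 = (cx**2 + cy**2 + cz**2).mean()
    div2 = (div**2).mean()
    return curl2 / (div2 + curl2)
```

A shear flow \(v_x=\sin y\) is purely solenoidal and returns 1, while a longitudinal wave \(v_x=\sin x\) is purely compressive and returns 0, bracketing the values plotted in Fig. 13.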
Fig. 14 displays the vorticity PDF, which shows a similar shape as the logarithmic density PDF (cf. Fig. 8), with a power-law tail at higher vorticity levels. We attribute this to the fact that not all regions in space have uniformly-distributed vorticity, and thus large-scale contributions only exist intermittently in space within the post-shock medium. Such structures may also explain the intermittency observed in the magnetic field PDFs (Fig. 9), since intermittent magnetic field variations are strongly linked to vorticity production (Mee and Brandenburg, 2006; Federrath et al., 2011; Seta and Federrath, 2021).
Furthermore, we show that the connection between the vorticity and logarithmic density contrast PDFs (Figs. 14 and 8) lies in the fact that vorticity generation behind a three-dimensional curved shock front obeys an analytical relation involving the density perturbations (Kevlahan, 1997; Kevlahan & Pudritz, 2009):
Figure 11: Time evolution of the Mach number components, \(\mathcal{M}_{x}\), \(\mathcal{M}_{y}\), \(\mathcal{M}_{z}\), averaged across all the tracer trajectories.
Figure 12: Time evolution of the normalised turbulent cross helicity across all tracer trajectories.
Figure 10: Same as Fig. 9, but for the turbulent Mach number. \(\mathcal{M}_{y}\) occupies a larger volume fraction compared to the other Mach number components, since it is in the shock direction. It therefore also displays somewhat more intermittent (non-Gaussian) features, similar to Dhawalikar et al. (2022).
Figure 13: Time evolution of the small-scale solenoidal ratio as defined in Eqn. 19. This value is bounded in \([0,1]\) and therefore measures the relative strength of vorticity compared to the sum of vorticity and divergence (compression).
\[\delta\omega=\frac{\mu^{2}}{1+\mu}\frac{\partial C_{r}}{\partial S}-\frac{\mu}{C_{r}}\left[\left(\frac{\mathrm{D}v}{\mathrm{D}t}\right)_{S}+\frac{C_{r}^{2}}{1+\mu}\frac{1}{\rho}\frac{\partial\rho}{\partial S}\right]+\mu\omega \tag{20}\]
where \(C_{r}\) is the velocity in the shock-normal frame, \(\mu\) is the normalised density jump across the shock, \(\partial/\partial S\) is the tangential component of the directional derivative and \(S\) denotes the shock tangential surface. For the sake of simplicity, we assume that the flow ahead of the shock is initially uniform, which reduces it to a well-known result (Hayes, 1957; Kanwal, 1959), given by
\[\delta\omega\,\mathbf{b}=-\frac{\mu^{2}}{1+\mu}\,\mathbf{n}\times\left(\mathbf{v}_{\mathrm{shock}}\cdot\mathbf{K}+\frac{\partial C_{r}}{\partial S}\right)_{S} \tag{21}\]
where \(\mathbf{b}\) and \(\mathbf{K}\) denote the shock-tangential direction and shock curvature, respectively, \(\mathbf{n}\) is the shock normal, and \(\mathbf{v}_{\mathrm{shock}}\) is the shock velocity. Since \(\mu\sim\exp(s)-1\), we have:
\[\delta\omega\sim\frac{\mu^{2}}{1+\mu}\simeq\frac{A\left[\exp(s)-1\right]^{2} }{1+B\left[\exp(s)-1\right]} \tag{22}\]
if we assume a mostly pseudo-stationary (pseudo-steady) shock (i.e., \(v_{\mathrm{shock}}\), \(\partial C_{r}/\partial S\simeq\mathrm{const}\)) as well as constant shock curvature (\(|\mathbf{K}|\simeq\mathrm{const}\)), which leaves the free parameters \(A\) and \(B\). Taking the PDF of Eqn. 22 in the moving post-shock frame, we find reasonably close agreement between the model and the vorticity PDF (Fig. 14), bearing in mind the aforementioned assumptions. This demonstrates the strong connection between the logarithmic density contrast \(s\) and the vorticity generation behind a shock. While the model PDF we derive here neglects the vorticity contribution from the baroclinic term, which generates vorticity through the misalignment between pressure and density gradients (\(\nabla p_{\mathrm{th}}\times\nabla\rho\)), the fact that it still suffices to predict the overall shape of the long-tailed intermittent distribution suggests that baroclinicity may not play a crucial role in highly subsonic, post-shock turbulence, as has been reported previously (Mee & Brandenburg, 2006; Federrath et al., 2011; Livescu & Ryu, 2016; Federrath, 2016; Tian et al., 2019; Achikanath Chirakkara et al., 2021); such effects are usually magnified in pre-shock, supersonic turbulence (e.g., by cosmic-ray pressure gradients; see Beresnyak et al., 2009; Drury & Downes, 2012; Downes & Drury, 2014). Moreover, the close agreement between the PDFs shows that shock-curvature effects play a predominant role in vorticity generation within post-shock turbulence, and further confirms that we have successfully isolated the turbulence generated behind a shock front by employing the Lagrangian frame of reference.
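The semi-analytic model PDF is obtained by mapping tracer samples of \(s\) through the relation of Eqn. 22 and histogramming the result. A sketch, with illustrative (not fitted) values of the free parameters \(A\) and \(B\):

```python
import numpy as np

def model_vorticity(s, A=1.0, B=0.5):
    """Map the logarithmic density contrast s to the vorticity jump of
    Eqn. 22: delta_omega ~ A (e^s - 1)^2 / (1 + B (e^s - 1)),
    assuming a pseudo-steady shock of constant curvature. A and B are
    free parameters; the defaults here are illustrative only."""
    mu = np.expm1(s)  # mu = e^s - 1, the normalised density jump
    return A * mu**2 / (1.0 + B * mu)
```

Passing the tracer \(s\)-samples through this map and histogramming the output yields the model curve compared against the measured vorticity PDF in Fig. 14.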
### Dynamo amplification
With the analyses above, we have established that dynamo action is present in the post-shock turbulent medium in our simulations. Here we deduce the magnitude of its amplification, and compare it to values obtained for dynamos in the literature (Federrath et al., 2011; Xu & Lazarian, 2016). Firstly, we conduct two additional simulations with the exact same parameters, varying only the seed for the foam void distribution, and subsequently take the average of the values from all three runs. The different seeds were found not to influence the overall dynamics of the system, which gives confidence in the numerical results. Averaging over these additional seeds merely improves the statistical significance of our results and allows for a more accurate determination of the growth rate of the dynamo in the post-shock medium.
We further examine the level of turbulent diffusion by plotting \(E_{\mathrm{kin}}\), as shown in Fig. 15. It can be clearly seen that in less than half a turnover time, the kinetic energy drops by about a factor of 6, as reflected also in the turbulent velocity components. We fit the scaling of \(E_{\mathrm{kin}}\) in our simulations, averaged across the three different seeds, and find that \(E_{\mathrm{kin}}\sim t^{-1.15\pm 0.02}\). This power-law exponent of the decay is very close to that of the Saffman integral invariant, which goes as \(t^{-6/5}\). Interestingly, this value is also very similar to that observed by Mac Low et al. (1998) for their subsonic case, which had a scaling of \(t^{-1.1}\), consistent with the scaling expected in kinetically dominated turbulence. As mentioned earlier, many numerical experiments (Biskamp & Muller, 1999, 2000; Christensson et al., 2001; Banerjee & Jedamzik, 2004; Frick & Stepanov, 2010; Berera & Linkmann, 2014; Brandenburg et al., 2015; Brandenburg & Kahniashvili, 2017; Reppin & Banerjee, 2017; Sur, 2019; Bhat et al., 2021) have also observed scalings between the Saffman range and that of Biskamp & Muller (1999), where the exact decay law should depend on whether \(v\sim B\), \(v\ll B\) or \(v\gg B\). Thus, we find that the system undergoes significant turbulence decay, and the dynamo effect will most likely no longer be sustained after a long time evolution, at least not at the same intensity as at early times when the turbulence is still strong. This is consistent with previous works. It also shows that in such a decaying system, the dynamo growth rate is time dependent, at least when quantified over a significant amount of time, due to the time dependence of the large-scale turbulent turnover time. Such an observation has also been made for helical large-scale \(\alpha^{2}\)-dynamos (Brandenburg et al., 2019).
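The decay-law fit of \(E_{\mathrm{kin}}\sim t^{n}\) is a straight-line fit in log-log space; the following sketch shows the fitting step with synthetic data (values illustrative, not the simulation output):

```python
import numpy as np

def fit_powerlaw_decay(t, E):
    """Least-squares power-law fit E ~ A * t^n via a straight line in
    log-log space; used to recover exponents like n = -1.15 for the
    kinetic-energy decay. Fitting step only, no error estimate."""
    n, logA = np.polyfit(np.log(t), np.log(E), 1)
    return n, np.exp(logA)

# synthetic decaying signal with known exponent -1.2 and amplitude 2
t = np.linspace(1.0, 3.0, 50)
n_fit, A_fit = fit_powerlaw_decay(t, 2.0 * t**-1.2)
```

The quoted uncertainty of \(\pm 0.02\) on the exponent would come from the fit covariance and the spread across the three seeds, which this sketch omits.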
Now, in order to fully capture the dynamo-induced magnetic field amplification, we note that the shock-normal streamwise field always experiences higher amplification than the rest. This is attributed to the compression at the shock front, and is primarily a result of the large-scale systematic stretching of the field along the shock propagation direction. Thus, we neglect this contribution, because we want to isolate the truly turbulent amplification process, and therefore only calculate the density-normalised magnetic energy for the components parallel to the shock front (\(B_{x}\) and \(B_{z}\)).
Fig. 16 shows the magnetic energy as a function of time. As mentioned before, there are seemingly no distinct phases or stages in the evolution of the magnetic energy, because the time available to observe dynamo amplification during the onset of decaying turbulence
Figure 14: PDF of the vorticity, \(\omega=\nabla\times\mathbf{v}\), normalised by its standard deviation. Similar to the log-normal density PDF (Fig. 8), the vorticity PDF also shows a Gaussian plus power-law shape. Thus, we fit a semi-analytical model PDF that is directly related to the logarithmic density contrast (\(s\)), based on the vorticity generation behind a curved shock front (Eqn. 22), assuming negligible baroclinicity, constant shock curvature and near self-similarity of the shock profile.
originating from turbulent (numerical) diffusion is very short, only \(\sim 0.3\) of a turbulent turnover time. We find that in the intermediate range of time scales, at \(t\approx 0.195-0.380t/T_{\rm ed}\), the growth is very close to exponential. We attribute the initial growth of the field to a numerical transient, where the field experiences a sudden growth at early stages of its evolution due to the prior strong shock compression. The later stages are also neglected, considering that many of the tracer trajectories have exited the medium with the propagating shock, and thus may not capture the full temporal dynamics of the magnetic energy.
Thus, we fit the growth rate in this time window, obtaining a best fit of \(2\Gamma=0.216\pm 0.008\). The time-averaged Mach number is \(\mathcal{M}=0.31\) (Fig. 11). Based on measurements of the growth rate in driven turbulence-in-a-box simulations by Federrath et al. (2011a) and Achikanath Chirakkara et al. (2021), purely solenoidal driving would yield a growth rate near unity, while purely compressive driving would yield \(2\Gamma_{\rm comp}=0.16\), close to what we find for the present shock-induced simulations.
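The growth-rate measurement reduces to a linear fit of \(\ln E_{\rm mag}\) against time within the chosen window; a sketch of the fitting step only, without the error analysis behind the quoted \(\pm 0.008\):

```python
import numpy as np

def fit_growth_rate(t, E_mag):
    """Fit E_mag ~ exp(2 Gamma t) in a chosen time window by linear
    regression of ln(E_mag) against t; returns 2*Gamma in the units of t
    (turnover times here). Sketch of the measurement only."""
    two_gamma, _ = np.polyfit(t, np.log(E_mag), 1)
    return two_gamma

# synthetic exponential growth over the fit window used in the text
t = np.linspace(0.195, 0.380, 30)
rate = fit_growth_rate(t, 1e-12 * np.exp(0.216 * t))
```

Applying this to the seed-averaged \(E_{\rm mag}(t)\) within \(t\approx 0.195-0.380\,t/T_{\rm ed}\) yields the quoted \(2\Gamma\).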
For purposes of further comparisons with dynamos where clear, distinct phases can be observed (kinematic, nonlinear, saturated), we also show the prediction of the Xu & Lazarian (2016) non-linear phase model (Eq. 10),
\[E_{\rm mag}=E_{\rm initial}+\frac{3}{38}\epsilon(t-t_{\rm initial}), \tag{23}\]
where \(E_{\rm initial}\) and \(t_{\rm initial}\) correspond to the magnetic energy and time at which the dynamo process begins. Here, we find that the model is able to predict the growth of the magnetic field observed in the averaged data from all three of our numerical simulations with reasonable accuracy, although we must emphasise that it applies only in a non-linear phase, under the assumption of Kazantsev-Kraichnan phenomenology for solenoidally forced (not decaying) turbulence. Thus, in the presence of compressive driving, we do not expect the non-linear growth phase to be well captured by the analytical model.
To further deduce the overall growth rate, we use the semi-empirical estimate provided by Kulsrud (2005) (see also Fraschetti (2013) and Appendix A in this work, where we provide a derivation), which assumes homogeneity and isotropy of the velocity two-point correlator to obtain a relation between the growth rate, \(\Gamma\) (in units of \(T_{\rm ed}^{-1}\)), and the vorticity induced downstream of a shock, \(|\omega|\), as:
\[\Gamma\approx\frac{\pi}{3}|\omega|T_{\rm ed} \tag{24}\]
In Fraschetti (2013), it was assumed that the pre-shock medium has initially zero vorticity, \(|\omega_{0}|=0\). In three-dimensional simulations, we find that this is not the case. Thus, we divide the mean vorticity evolution by \(|\omega_{0}|\) in order to consider only the vorticity driven by the shock. Noting that this is an order-of-magnitude estimate, and that the post-shock vorticity from our simulations is \(|\omega|/|\omega_{0}|\approx 0.5\times 10^{6}\), this yields \(2\Gamma\approx 0.2\pm 0.1\), which is close to our measured growth rates.
Thus, all the above estimates further provide confidence that there is an inherent turbulent dynamo mechanism within the post-shock turbulent flow, and that it corresponds well with the growth rates expected for compressively-driven turbulence as shown earlier. This is also consistent with the observations of Dhawalikar et al. (2022), since their work demonstrated that the driving mode of shock-driven turbulence is primarily compressive, rather than solenoidal.
Figure 16: Time evolution of the specific magnetic energy (\(E_{\rm mag}=\frac{1}{2}v_{A}^{2}\)), averaged across the three different seeds for the foam void distribution. We compare the growth rates in the region where an exponential growth is observed with the rates expected for compressive and solenoidal turbulence driving mechanisms (Federrath et al., 2011a), as well as with the analytical model of Xu & Lazarian (2016).
Figure 17: Plot of the magnetic energy growth rate (\(2\Gamma\)) as a function of Mach number, \(\mathcal{M}\), with the value obtained from simulations in the current work, along with the propagated error. Comparisons are made to the empirical fit from Federrath et al. (2011a) for compressively- and solenoidally-driven turbulence, as well as to corresponding simulation data obtained in their work.
Figure 15: Time evolution of the kinetic energy of the simulation data (thick black line), with the best-fit scaling \(t^{-1.15}\) shown as the blue solid line. The scalings for the Loitsyansky and Saffman invariants are shown for comparison, as the red dotted and green dash-dotted lines, respectively. Thin lines show individual simulations with three different random seeds for the foam, which are used to obtain the averaged line (thick black line), with the 1-sigma band shown as the shaded grey region.
Finally, we show the measured growth rate averaged from our three simulations (Fig. 17) together with those expected for compressively- and solenoidally-driven turbulence (Federrath et al., 2011), further confirming that the shock-driven turbulent dynamo growth rate exhibited in our simulations is very close to that of a compressively-driven turbulent system.
### Second-order statistics of the velocity and magnetic field
Now we consider the second-order statistics in the form of the Lagrangian frequency spectrum (Tennekes and Lumley, 1972; Tennekes, 1975; Busse et al., 2010; Homann et al., 2014; Beresnyak, 2019). We plot both the kinetic and magnetic energy spectra, via the cosine transform of their temporal auto-correlation functions,
\[\Phi(\omega)=\frac{1}{2\pi}\int\mathrm{d}\tau\,\langle Q(t+\tau)\,Q(t)\rangle\cos(\omega\tau), \tag{25}\]
where \(Q=\mathbf{B}\) or \(\mathbf{v}\), and where \(\tau\) is the time lag of the standard two-point correlation function. The Lagrangian frequency spectrum is computed for all tracers, and then averaged to obtain the mean spectra. The velocity and magnetic field spectra are displayed in Fig. 18 and Fig. 19. It can be seen that the velocity spectra show a spectral scaling consistent with that of the Lagrangian bridge for the Kolmogorov scaling, \(E(\omega)\sim\omega^{-2}\), within the 16th-to-84th percentile range. As mentioned earlier, such scalings have been observed in three-dimensional incompressible MHD simulations (Busse et al., 2010), hydrodynamic simulations (Yeung et al., 2006) and experiments (Mordant et al., 2004). Thus, we observe these power-law scalings even in the presence of large-scale compression; the slight deviation likely arises from compressibility effects and the small-scale intermittency commonly observed in Lagrangian statistics even at high Reynolds numbers (Homann et al., 2007; Arneodo et al., 2008; Benzi et al., 2010; Busse et al., 2010; Konstandin et al., 2012). To our knowledge, this is the first discussion and verification of the scaling of the Lagrangian frequency spectrum in the context of post-shock MHD turbulent flows.
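The computation behind Eqn. 25 can be sketched for a single tracer time series as a discrete one-sided cosine transform of the biased autocorrelation. The normalisation convention below is ours and only fixed up to a constant factor; in practice the resulting spectra are averaged over all tracers.

```python
import numpy as np

def lagrangian_spectrum(q, dt):
    """Lagrangian frequency spectrum of Eqn. 25 for one tracer time
    series q(t): cosine transform of the two-point temporal
    autocorrelation, evaluated at the discrete frequencies of the record.
    Minimal sketch with a biased autocorrelation estimator."""
    q = q - q.mean()
    n = len(q)
    # biased autocorrelation for lags 0..n-1
    ac = np.correlate(q, q, mode="full")[n - 1:] / n
    omega = 2 * np.pi * np.fft.rfftfreq(n, d=dt)
    lags = np.arange(n) * dt
    # one-sided discrete cosine transform of the autocorrelation
    spec = np.array([np.sum(ac * np.cos(w * lags)) * dt / np.pi
                     for w in omega])
    return omega, spec
```

For a pure cosine input the spectrum peaks at the input frequency, which is the sanity check one would apply before running this over the full tracer ensemble.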
The magnetic spectrum, however, displays fundamental differences from its Eulerian counterpart. There are seemingly no visible scale separations within it, of the kind one would see in the Eulerian framework, i.e., a typical peak scale and driving scale expected in an Eulerian magnetic spectrum (Schekochihin et al., 2004; Schober et al., 2015; Brandenburg et al., 2019; Seta and Federrath, 2020). In fact, the shape of the magnetic spectra in our simulations resembles those of Homann et al. (2014) (cf., Fig. 9 in their paper), with somewhat similar scaling. Most importantly, it also corresponds well with the findings of Busse et al. (2010), namely that the total spectra of velocity and magnetic field combined (i.e., for the Elsasser field \(\mathbf{z}^{+}=\mathbf{v}+\mathbf{B}\)) should scale roughly as \(\omega^{-2}\). The overall features nevertheless show a clear power-law turbulent cascade, as expected for the magnetic energy spectrum, where energy is distributed from large to small scales as a fundamental property of inertial-range cascading turbulence. However, the intrinsic properties of the Lagrangian magnetic spectrum remain to be fully understood, and should be investigated further, beyond the scope of this paper.
## 5 Conclusions
In this study, we performed numerical experiments of shock-driven MHD turbulence to investigate turbulent-dynamo magnetic field amplification through the Lagrangian framework for the first time. We followed the moving post-shock turbulent shell in order to capture the full temporal dynamics of the post-shock medium while avoiding spurious amplification from Richtmyer-Meshkov-related stratified shear instabilities, and found that the growth rates of the dynamo are comparable to those of turbulence driving in the ISM for subsonic, compressively driven turbulence. The overall setup and evolution are consistent with the hydrodynamic simulations of Dhawalikar et al. (2022), but we here focus on the magnetic field amplification using Lagrangian tracer-particle tracking of the turbulent post-shock medium. We summarise our main findings as follows:
1. The shock-driven turbulent dynamo, in the presence of decaying hydrodynamic turbulence, displays slightly different characteristics than its forced periodic-box counterparts. This is particularly because the shock passage is usually quite short (e.g., Davidovits et al. (2022); Dhawalikar et al. (2022); Hu et al. (2022)), which in our simulation leads to a time evolution of only \(\sim 0.3\) turbulent turnover times. Therefore, we only observe exponential or 'kinematic'-phase growth of the magnetic field, excited from the viscous scale, which does not reach saturation.
Figure 18: Lagrangian frequency spectrum of the velocity fluctuations. The solid black line is the mean spectrum across all tracer trajectories within the analysis box, and the shaded region indicates the 16th to 84th percentile range. Coloured dashed lines are energy spectra of randomly selected individual trajectories. A near \(\omega^{-2}\) scaling is observed at the inner scale, consistent with the K41 Lagrangian frequency scaling.
Figure 19: Same as Fig. 18, but for the turbulent magnetic field spectrum. Here we observe a slightly shallower spectrum than for the velocity field.
The decay in the kinetic energy further complicates the system by making continual amplification impossible over long time evolutions; we expect this to lead to a dynamical saturation pathway of the SSD, where \(E_{\rm mag}\) and \(E_{\rm kin}\) both decay as \(\sim t^{-n}\), ensuring that the turbulence remains Alfvenic (\(\delta B\sim\delta v\)), as shown in some periodic-box simulations (e.g., Park (2017); Sur (2019); Brandenburg et al. (2019)). Turbulent cross-helicity measurements also clearly indicate that the velocity and magnetic fields become more aligned, due to the decrease in turbulent kinetic energy and fluctuations. These effects contribute to the overall inefficiency of the dynamo process (Mac Low et al., 1998; Sur, 2019).
2. It has also been shown that the dynamo kinematic growth rate in this configuration matches that obtained for driven turbulence in the subsonic, compressive-driving regime. This result is consistent with prior works on shock-driven turbulence in periodic boxes. Therefore, if the turbulent magnetic field amplification is completely isolated as uniquely done here through the post-shock Lagrangian framework, the salient features of dynamo action remain the same.
3. The kinetic energy decay rate found in our simulations is very close to the Saffman scaling, as well as to subsonic turbulence simulations in prior works. These all highlight that the dynamo effect cannot be sustained over long time periods without external driving.
4. The Lagrangian frequency spectra of the magnetic and velocity fields display similar scalings, and they are comparable to that found in prior works, as well as that expected from the Kolmogorov theory. This is shown for the first time in the context of shock-driven turbulence.
## Acknowledgements
We thank Siyao Xu and Yue Hu for their valuable comments on the manuscript. We further thank Turlough Downes for helpful discussions. We also thank the anonymous referee for their constructive feedback on the manuscript. We acknowledge the NIF Discovery Science Program for allocating upcoming facility time on the NIF Laser to test aspects of the models and simulations discussed in this paper. J.K.J.H. acknowledges funding via the ANU Chancellor's International Scholarship. C.F. acknowledges funding provided by the Australian Research Council (Future Fellowship FT180100495 and Discovery Projects DP230102280), and the Australia-Germany Joint Research Cooperation Scheme (UA-DAAD). We further acknowledge high-performance computing resources provided by the Leibniz Rechenzentrum and the Gauss Centre for Supercomputing (grants pr32lo, pr48pi and GCS Large-scale project 10391), the Australian National Computational Infrastructure (grant ek9) and the Pawsey Supercomputing Centre (grant pawsey0810) in the framework of the National Computational Merit Allocation Scheme and the ANU Merit Allocation Scheme. The simulation software FLASH was in part developed by the DOE-supported Flash Center for Computational Science at the University of Chicago.
## Data Availability
The simulation data presented in this work are available on reasonable request to the corresponding author.
|
2305.13877 | NarrativeXL: A Large-scale Dataset For Long-Term Memory Models | We propose a new large-scale (nearly a million questions) ultra-long-context
(more than 50,000 words average document length) reading comprehension dataset.
Using GPT 3.5, we summarized each scene in 1,500 hand-curated fiction books
from Project Gutenberg, which resulted in approximately 150 scene-level
summaries per book. After that, we created a number of reading comprehension
questions based on these summaries, including three types of multiple-choice
scene recognition questions, as well as free-form narrative reconstruction
questions. With 990,595 total questions, our dataset is an order of magnitude
larger than the closest alternatives. Crucially, most questions have a known
``retention demand'', indicating how long-term of a memory is needed to answer
them, which should aid long-term memory performance evaluation. We validate our
data in four small-scale experiments: one with human labelers, and three with
existing language models. We show that our questions 1) adequately represent
the source material 2) can be used to diagnose a model's memory capacity 3) are
not trivial for modern language models even when the memory demand does not
exceed those models' context lengths. Lastly, we provide our code which can be
used to further expand the dataset with minimal human labor. | Arseny Moskvichev, Ky-Vinh Mai | 2023-05-23T09:55:32Z | http://arxiv.org/abs/2305.13877v2 | # Narrative XL: A Large-scale Dataset For Long-Term Memory Models
###### Abstract
Despite their tremendous successes, most large language models do not have any long-term memory mechanisms, which restricts their applications. Overcoming this limitation would not only require changes to the typical transformer architectures or training procedures, but also a dataset on which these new models could be trained and evaluated. We argue that existing resources lack a few key properties, and that at present, there are no naturalistic datasets of sufficient scale to train (and not only evaluate) long-term memory language models. We then present our solution that capitalizes on the advances in short-term memory language models to create such a dataset. Using GPT 3.5, we summarized each scene in 1500 hand-curated books from Project Gutenberg, which resulted in \(\sim\)150 scene-level summaries per book. We then created a number of reading comprehension questions based on these summaries, including three types of multiple-choice scene recognition questions, as well as free-form narrative reconstruction questions. Each book is thus associated with \(\sim\)500 reading comprehension questions. Crucially, most questions have a known "retention demand", indicating how long-term of a memory is needed to answer it, which should aid long-term memory performance evaluation. We validate our data in three small-scale experiments: one with human labelers, and two with existing language models. We show that our questions 1) adequately represent the source material 2) can be used to diagnose the model's memory capacity 3) are not trivial for modern language models even when the memory demand does not exceed those models' context lengths. Lastly, we provide our code which can be used to further expand the dataset in an automated manner.
## 1 Introduction
Typical transformer architectures as well as Large Language Models (LLM) based on them usually do not have any long-term memory mechanisms, which limits information retention after training to the length of their context window. While it is possible to mitigate the problem with simple workarounds, for example, by providing the model with a searchable database of its previous inputs / outputs, a more principled approach is highly desirable. Although a number of architectural solutions have been proposed (see subsection 8.1), this progress has been stymied by the lack of corresponding dataset development that could support this endeavor during both the training and evaluation stages. In this work, we capitalize on recent advances in LLMs to create such a dataset. Notably, for tasks that do not require long-term memory, LLMs rival the performance of human labelers [5] so we use this "local" competence to create a long-term memory task.
Language Modeling is Not Enough
In theory, longer memory should allow for better next word prediction performance, hence one might argue that specialized long-term memory datasets are unnecessary, given the abundance of unsupervised data. In practice, Language Modeling alone might not be the best approach to train and test long-term memory transformers.
**First,** Language Modeling performance will likely see diminishing returns when the context window is increased. Many documents in popular unsupervised datasets are simply not long enough to benefit from context larger than ten thousand words. Additionally, for longer documents (e.g. fiction books), it is likely that remembering the last read chapter or two is nearly equivalent, in terms of the next word prediction quality, to remembering all the chapters read so far. It is possible that in some narratives a given character or item might reappear after a long absence, but such cases are likely to happen only a few times per book, making the task extremely sparse and inefficient in training long-term memory models.
**Second,** language modeling does not offer a direct way to interpretably measure long-term memory capacity and performance. For example, we do often see improvement in perplexity when the effective context window is increased (e.g. [3]), but it is still difficult to measure and understand what kind of information is retained. One scenario could be that a longer context window helps a given model better understand lengthy philosophical treatises present in the dataset, which, in turn, allows the model to extrapolate such arguments in consistent and sound ways, resulting in lower perplexity. Alternatively, the model might simply be better at populating bibliography sections of such treatises, being able to copy the cited names from the main text into the bibliography using its long context.
We believe, therefore, that in order for long-term memory models to thrive, there needs to be a specialized dataset that addresses these limitations.
## 3 Existing datasets
Traditionally, long-term memory transformers were tested either on 1) artificial tasks (e.g. [10; 7]) or 2) language modeling (e.g. [3; 9; 1]).
Evaluation on supervised naturalistic datasets is relatively rare. Until recently, creating such datasets in a brute-force manner had been prohibitively expensive, as that would require tens of thousands of hours of human labor. There have been, however, creative workarounds.
In this context, it is most important to discuss [6] since this work is especially close to ours in its goals. In NarrativeQA, the authors employed crowd-source workers to create book and movie script understanding questions based on corresponding web-scraped summaries. While we highly resonate with the importance and motivation of their work, the dataset has a few crucial disadvantages.
1) Since all questions in NarrativeQA are written based on summaries alone, by construction, the dataset cannot test any reading comprehension task that goes beyond knowing the summary. Arguably, by reading a book one gains detailed memories and a thorough understanding far exceeding those one could get by simply reading its summary. It seems highly desirable for any long-term reading comprehension dataset to reflect that.
2) The size of the dataset is limited. There is only a limited number of pre-existing summaries available, which restricts the dataset to 1500 documents. Moreover, of those, only 400 are books, the rest being movie scripts, which are generally much shorter. Overall, the dataset contains 45000 questions, which, while good for evaluation, might not be enough for training a long-term memory mechanism from scratch.
3) The dataset does not offer a natural learning progression. All questions are asked at the end of the book/movie script and there is no natural curriculum for the model (e.g. learning to handle short retention questions first).
The last two issues are inter-related and slightly more subtle:
4) The NarrativeQA dataset selected only books and movie scripts that had corresponding Wikipedia plot summaries (according to the paper, it was the difficulty in finding these summaries that ultimately limited the dataset size [6]). In practice, such summaries are usually present only on
Wikipedia pages that cover highly popular movies and books. Unfortunately, popular books, including their key events, plot twists, and development, are likely to be extensively discussed on various review websites, social networks, and so on. Thus, any LLM trained on unsupervised web-scraped data (as most LLMs are) is likely to have extensive knowledge about these books and movies. Our dataset does not solve this issue (since we still rely on publicly available books), but is less affected by it, as we do not bias our dataset towards popular books. In fact, many of the books in our dataset have no Wikipedia pages, reviews, or summaries we could find online.
5) We have manually filtered 1500 Project Gutenberg books that our dataset is based on. In that process, it became evident that many books, especially highly impactful ones, often include prefaces discussing the contents of the book. They also sometimes include story blurbs and summaries. The books in the NarrativeQA dataset were not filtered/processed beyond making sure that web-scraped summaries matched the books. This further exacerbates the previous issue, indicating that the dataset might be, to an extent, "self-contaminated".
## 4 Methodology
Our general approach is to test long-term reading memory retention through book scene recall and recognition. Since we want to encourage flexible memory representations rather than verbatim text memorization, instead of raw scenes, we use scene summaries.
### Data preparation
Raw books were downloaded from Project Gutenberg, with the boilerplate license information removed using a script. After that, we manually inspected each book to remove 1) books that do not have an overarching narrative, such as short story collections, diaries, memoirs, published letters of prominent figures, and so on, 2) author names, titles, tables of contents, dedications, prefaces, addenda, glossaries, translator's notes and similar non-narrative information, and 3) duplicate books. When a given work has more than one volume, Project Gutenberg often lists them separately as individual works and then as a single unified composition. Keeping both versions of such books would have led to various dataset contamination issues.
Overall, the goal of this stage was to leave only books that contain a single connected narrative, while also removing irrelevant information.
### Summary creation
To obtain scene summaries, books were split into \(\sim\)20000-symbol chunks (with 300-symbol overlap), then each chunk was summarized using GPT-3.5 API (the code (including prompt details) is provided in the supplementary materials).
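The splitting step can be sketched as follows. This is a minimal sketch: the actual prompt and GPT-3.5 API call are in the released code, and only the chunk and overlap sizes below come from the text.

```python
def split_into_chunks(text, chunk_size=20_000, overlap=300):
    """Split a book into fixed-size character chunks with a small overlap,
    so a scene cut at a chunk boundary is still seen whole in the next chunk."""
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break
        start += chunk_size - overlap
    return chunks

book = "".join(str(i % 10) for i in range(50_000))   # stand-in for a book
chunks = split_into_chunks(book)
# 3 chunks; consecutive chunks share a 300-character overlap
```

Each resulting chunk would then be sent to the summarization API call with the scene-summary prompt.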
## 5 Question types
### Read-along questions (multiple-choice)
Most reading comprehension datasets assume that their questions will be asked after the entire document is processed. In contrast, real-life linguistic activities are more "on-line". For example, one does not get to wait until the end of a long dialogue or a book to start understanding it. Moreover, one's understanding often changes as reading/talking proceeds.
To capture this property, we have constructed a large number of "read along" questions that are to be asked not at the end of the book, but rather at specific times as reading progresses. These questions are multiple-choice, in the form of "In what you've read so far, was there a scene where...", after which a number of scene summaries are given, along with a "None of the above" option.
The true answer options are either true scene summaries from the book being read (see subsection 4.2), or "None of the above". Negative answer options are of three types: 1) Lookahead: scene summaries from the same book but from parts that have not been read yet at the time when the question is asked 2) Other book: scene summaries from other books (with character names
substituted to match the true book) 3) Scene distortion: scene summaries that describe a similar setting but different events (generated using GPT-3.5). See Table 1 for illustrations.
Notably, the same question might have different answers depending on when it is asked, which, we hope, will discourage "overfit" solutions where a model simply memorizes all scenes in a given book. Additionally, each question has a clearly defined "memory load": how long ago the target scene was read. This endows the dataset with 1) natural curriculum learning opportunities and 2) a simple way to measure any model's memory capacity by looking at its forgetting curve.
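The construction of one read-along question, together with its retention-load bookkeeping, can be sketched as follows. This is a simplified illustration of the scheme above: decoys here are drawn only from unread "lookahead" scenes, whereas the full dataset also uses other-book and scene-distortion decoys, and the summaries are placeholders.

```python
import random

def make_readalong_question(scene_summaries, current_scene, rng,
                            n_options=5, p_none=0.2):
    """Build one multiple-choice question asked after `current_scene` scenes."""
    if rng.random() < p_none:
        target, answer = None, "None of the above"
    else:
        target = rng.randrange(current_scene)      # an already-read scene
        answer = scene_summaries[target]
    n_decoys = n_options if target is None else n_options - 1
    options = rng.sample(scene_summaries[current_scene:], k=n_decoys)
    if target is not None:
        options.append(answer)
    rng.shuffle(options)
    options.append("None of the above")            # always offered last
    return {
        "question": "In what you've read so far, was there a scene where...",
        "options": options,
        "answer": answer,
        # "memory load": how long ago the target scene was read
        "retention_load": None if target is None else current_scene - target,
    }

summaries = [f"summary of scene {i}" for i in range(150)]
q = make_readalong_question(summaries, current_scene=40, rng=random.Random(0))
```

Grouping a model's accuracy by `retention_load` then directly yields the forgetting curve mentioned above.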
### End-of-book summary correction questions (freeform)
While our multiple-choice questions provide a controlled and interpretable way to measure memory performance, free-form answers might sometimes provide a richer learning signal to the model. We, therefore, added "summary correction" questions to our dataset, which take the following form: "Question: This partial book summary contains a number of errors. Rewrite it to accurately reflect the book you have read. [DISTORTED SUMMARY]", "Answer: [TRUE SUMMARY]", essentially mapping the rightmost column in Table 1 to the middle one. Here, true and distorted summaries are obtained using GPT-3.5 in the same way as in subsection 4.2 and subsection 5.1.
Our distorted summaries are constructed to be plausible and to generally fit the book setting while not being true to the book's events. We believe that this disentanglement of factual and stylistic knowledge will make our task better suited for training or fine-tuning long-term memory models than traditional next-word or masked-word prediction.
We also believe that the task is well suited for testing Reading Comprehension, as it requires 1) flexible knowledge of the overall narrative to recall the scene structurally closest to the distorted one and 2) detailed knowledge of the book events to properly reconstruct the original scene summary.
\begin{table}
\begin{tabular}{|p{0.3\linewidth}|p{0.3\linewidth}|p{0.32\linewidth}|} \hline \multicolumn{3}{|c|}{Summary distortion} \\ \hline Book Snippet & True Summary & False Summary \\ \hline ``Salt-air and dazzling society [...] It was queried that Sir Twickenham should be at the seaside, instead of at Brookfield, wooing; but a man's physical condition should be an excuse for any intermission of attentions. [...]'' & The excerpt describes Adela and Arabella's different experiences during the yachting [...] & The excerpt describes two friends, Adela and Arabella, taking a walk in the countryside. Adela is awestruck by the natural beauty around them and tells Arabella about the great time they are having. Arabella, however, is unimpressed and complains to Adela about the lack of proper civilization out here. The Hon. Mrs. Bayruffle [...] \\ \hline \end{tabular}
\end{table}
Table 1: True and distorted summary example. Crucially, we aimed to keep the setting the same, only changing the described events. This way, we hope to encourage models trained on our data to precisely remember book events rather than style, characters, or setting.
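Assembling one such freeform training pair from the two summary variants can be sketched as follows. This is illustrative only; `distorted` and `true` stand in for the GPT-3.5 outputs described above.

```python
def make_correction_question(distorted, true):
    """Pair a distorted partial summary with its faithful original."""
    prompt = ("Question: This partial book summary contains a number of "
              "errors. Rewrite it to accurately reflect the book you have "
              f"read. {distorted}")
    return {"input": prompt, "target": f"Answer: {true}"}

pair = make_correction_question(
    distorted="Adela and Arabella take a walk in the countryside...",
    true="Adela and Arabella have contrasting experiences on a yachting trip...",
)
```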
### Expanding to other question types
To aid future research, along with the questions we have already generated, we also release the data generation scripts, true and false summaries for all scenes, and Named Entity substitution dictionaries (see subsection 11.1). It is, therefore, easy to construct other tasks based on our data. It is also straightforward to expand our dataset if the application requires more data than what we provide.
## 6 Data Validation and Baselines
The primary goal of our dataset is to aid the development of long-term memory models. Therefore, our goal in data validation is not ensuring that our dataset can not be solved using alternative methods (e.g. retrieval-based), but rather making sure that our questions 1) can not be directly solved by language models without long-term memory 2) are diagnostic of the model's memory capacity 3) accurately represent the material on which they are based (i.e. our questions should be solvable).
### Testing for shortcut solutions
The first concern arises from the potential presence of shortcut solutions similar to those that have been recently plaguing the field (e.g. [12]). "Scene distortion" questions are especially susceptible. When such questions are generated, the "false" options might have subtle systematic differences from the true summary, as the true summary is based on the source material, while the false summaries involve some "creativity" from GPT 3.5, which may have a particular style or inclination to fantasize about specific topics. "Lookahead" and "Other book" question types are symmetric by design (meaning that all answer options are generated in exactly the same way), and hence are not susceptible to such shortcuts.
To evaluate the extent of such biases (if any), we fine-tuned BERT [4] on "scene distortion" questions with no context (i.e. on answer options alone). We used a subset of our data for which these options fit into BERT's context window of 512 tokens. The best of 5 runs achieved an accuracy of 0.524 (with 6 categories, random-guess accuracy is 0.167).
These results indicate that there are indeed some idiosyncrasies that can help to distinguish between distorted and true summaries generated by GPT-3.5. Thankfully, they do not allow true summaries to be unequivocally identified among the available distorted options, leaving ample room for long-term-memory-based improvement. Additionally, this does not affect the effectiveness of scene reconstruction questions (subsection 5.2) for long-term memory training. Nevertheless, it is important to keep this imbalance in mind when interpreting a model's long-term memory performance on multiple-choice scene distortion questions.
### Testing for memory impact
Although at present most LLMs could not fit complete books into their context, some of our questions (the ones asked early in the reading process and having a low "retention load") should be solvable by LLMs without long-term memory mechanisms. We evaluated Claude v1.3 100k 2 and GPT-4 [8] on a small subset of our data in a zero-shot manner. Each model received 60 questions with retention loads of no more than 8 scenes (~4000 words), achieving overall accuracies of 0.53 and 0.783 for Anthropic and GPT-4, respectively. This small experiment validates our data generation procedure, showing that the book context is sufficient to answer the questions we have designed. It also highlights the intuition that having a large enough context window is not equivalent to having perfect memory within the length of that window, indicating that our data can potentially be useful for fine-tuning short-term memory models as well.
Footnote 2: developed by Anthropic ([https://www.anthropic.com/](https://www.anthropic.com/))
### Testing for adequacy
Apart from being balanced, we need our questions to accurately reflect book content. In order to test that, we have conducted a small-scale human study. Using a subset of our data, human participants3 were presented with randomly selected book scenes and two accompanying summaries, one true and one false, both generated by GPT-3.5. The task was to identify the true summary among the two. In total, 5 workers were recruited, being assigned 10 scenes each. Out of 50 total scenes, the workers correctly classified 48. We would like to stress that this small study served as a sanity check aiming to validate our data generation process, not to establish precise human performance benchmarks.
Footnote 3: Participants were recruited from Amazon Mechanical Turk and compensated at $9.99 for a 40-minute study. We required US-based workers with a “master worker” qualification, 99% previous HIT approval rate, and at least 1000 previously completed HITs.
## 7 Costs
Using our pipeline, processing a single book costs \(\sim\)$0.15 to create scene summaries and \(\sim\)$0.15 to create false scene summaries. The total cost of \(\sim\)$0.3 per book is two orders of magnitude less than what could be achieved with crowdsourced human labor (assuming a very fast reading speed of 5 hours per book and a moderate pay of $10/hour). In our case, the initial book filtering (removing non-narrative books) was done manually, but with each book taking less than a minute to skim, this work can be outsourced at a very low cost.
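The cost comparison works out as follows (a quick arithmetic check using only the figures quoted above):

```python
gpt_cost_per_book = 0.15 + 0.15      # scene summaries + false summaries, USD
human_cost_per_book = 5 * 10.0       # 5 hours of reading at $10/hour, USD
ratio = human_cost_per_book / gpt_cost_per_book
# ratio is roughly 167, i.e. about two orders of magnitude cheaper
```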
## 8 Related work
### Long-term memory transformers
There have been a number of notable efforts in developing new architectures and training procedures to introduce long-term memory into transformers. Brute-force approaches, such as directly increasing the context window ([10]), along with works focusing on sparse attention mechanisms (see [11] for a review), often give good performance but do not answer the question of how to transition from "very long working memory" to "long-term memory", as it is still not clear whether these context windows can be practically extended to capture lifetime-scale experiences.
Among methods that pursue alternative memory mechanisms rather than larger context windows, one line of research pursues knowledge-base-like storage of previous interactions. Another approach is to endow transformers with a distributed memory representation that they learn to update. Thus, [7] proposed a practical way to train transformer-like architectures to update a distributed memory state (see also [9]). Lastly, model editing can also be seen as a form of long-term memory: this fruitful line of research focuses on incorporating new information directly into the model's weights using gradient updates [13, 14].
## 9 Limitations
It is likely that many of our questions could be answered using relatively simple Information Retrieval (IR) approaches, e.g. by reversing our data generation process and scanning each scene in a book with a GPT-like model. We would like to stress that this does not undermine the purpose of our study, similarly to how the existence of simple hard-coded solutions to some of the tasks in the Long Range Arena challenge [10] did not negate the impact of that work. We aimed to create a naturalistic dataset that could be used to train and evaluate language models with long-term memory capacity. It is possible that any such dataset can be solved with alternative Information Retrieval methods, since IR can be interpreted as having perfect memory (unrestricted access to the whole document from which the information should be retrieved). Nevertheless, there is a need for non-IR-based long-term memory models, and we believe that our dataset offers exactly what is needed to train and evaluate such models.
Data contamination. It is impossible to control which books are included in any given LM's training set, and being exposed to a given book in advance might aid performance [2]. We do not claim to fully resolve the issue, but do take steps to ameliorate it by removing book titles and author
names, changing the named entities, and basing questions on scene summaries, rather than on raw scenes. With these measures, we hope to make it harder for models to map books they are reading to something they might already know. Additionally, our read-along questions give different answers depending on when they are asked. This makes it necessary for any model to rely on its memory to track the reading progress even if it was already exposed to the book before. In future work, it might be beneficial to paraphrase the books in our dataset to further mitigate the data contamination issue.
## 10 Conclusion
We have proposed a new reading comprehension dataset that can be used to train and evaluate long-term memory LLMs. We have conducted three data validation experiments, demonstrating that our data accurately reflects the source material and is diagnostic of long-term memory performance. Additionally, our method allows the dataset to be expanded further at a very low cost, making it feasible, for example, to label all books in the Gutenberg corpus at a price realistic for many academic and industry organizations.
## 11 Supplementary Materials
The code and data will be available at [https://github.com/r-seny/NarrativeXL](https://github.com/r-seny/NarrativeXL)
### Named entity substitution
Due to data contamination, a model trained on our data might "know" some of the books since its pre-training stage. To thwart such models' ability to rely on this knowledge, we identify and randomize character names in each book (similarly to how it was done in [6]). It is especially important for the "other book" decoy questions (see subsection 5.1), as we want to avoid shortcuts where scene summaries from other books can be identified simply by looking at the named entities mentioned in them.
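A minimal version of this name randomization might look as follows. This is illustrative only: the released dictionaries are built per book, and the simple word-boundary regex below would still need care with possessives, nicknames, and multi-word names.

```python
import re

def substitute_names(text, name_map):
    """Replace character names using a per-book substitution dictionary.
    Word boundaries avoid corrupting words that merely contain a name."""
    pattern = re.compile(r"\b(" + "|".join(map(re.escape, name_map)) + r")\b")
    return pattern.sub(lambda m: name_map[m.group(1)], text)

name_map = {"Adela": "Miriam", "Arabella": "Louise"}
sample = "Adela tells Arabella about the great time they are having."
swapped = substitute_names(sample, name_map)
# -> "Miriam tells Louise about the great time they are having."
```

Applying the same mapping to decoy summaries drawn from other books keeps named entities from betraying which option belongs to the book being read.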
|
2301.01984 | The Evolutionary Computation Methods No One Should Use | The center-bias (or zero-bias) operator has recently been identified as one
of the problems plaguing the benchmarking of evolutionary computation methods.
This operator lets the methods that utilize it easily optimize functions that
have their respective optima in the center of the feasible set. In this paper,
we describe a simple procedure that can be used to identify methods that
incorporate a center-bias operator and use it to investigate 90 evolutionary
computation methods that were published between 1987 and 2022. We show that
more than half (47 out of the 90) of the considered methods have the
center-bias problem. We also show that the center-bias is a relatively new
phenomenon (with the first identified method being from 2012), but its
inclusion has become extremely prevalent in the last few years. Lastly, we
briefly discuss the possible root causes of this issue. | Jakub Kudela | 2023-01-05T09:39:24Z | http://arxiv.org/abs/2301.01984v1 | # The Evolutionary Computation Methods No One Should Use
###### Abstract
The center-bias (or zero-bias) operator has recently been identified as one of the problems plaguing the benchmarking of evolutionary computation methods. This operator lets the methods that utilize it easily optimize functions that have their respective optima in the center of the feasible set. In this paper, we describe a simple procedure that can be used to identify methods that incorporate a center-bias operator and use it to investigate 90 evolutionary computation methods that were published between 1987 and 2022. We show that more than half (47 out of the 90) of the considered methods have the center-bias problem. We also show that the center-bias is a relatively new phenomenon (with the first identified method being from 2012), but its inclusion has become extremely prevalent in the last few years. Lastly, we briefly discuss the possible root causes of this issue.
keywords: Evolutionary Computation, Benchmarking, Metaheuristics, Center-bias, Zero-bias
## 1 Introduction
Imagine the following situation. Encountering a challenging optimization task, you decide to find the most recently developed algorithm for optimization published in some of the most prestigious journals. The analysis of the method performed on several standard benchmarks clearly shows that it is superior to all the other old methods. The paper also contains a link to a repository with the code. So, you give it a try. And it fails. The best results it provides are hardly better (or worse) than the ones you got from a simple implementation of a method that is more than two decades old. Maybe the problem you tried to solve is too challenging? Perhaps a bit of hyperparameter optimization could help the method perform as advertised? Or, maybe the method is not as good as it presented itself.
Through inspiration from natural behaviors, the field of evolutionary computation (EC) has produced over its long history a great number of important metaheuristic algorithms, such as Evolution Strategies, Genetic Algorithms, Particle Swarm Optimization, or Differential Evolution. Such methods found applications in complex systems where the use of exact algorithms was either inadequate or computationally too prohibitive. However, over the past few years we have witnessed an explosion of "novel" methods that are based on natural/evolutionary principles. The bestiary of EC1, which tries to catalog a portion of these nature-based methods, now contains over 250 methods that claim their inspiration in natural processes. And new methods are emerging at an ever-increasing rate. It is also becoming clearer that more creativity is being spent on naming these "novel" methods than on making sure they contain anything new computation-wise. After many of these methods were found to conceal their lack of novelty behind metaphor-rich jargon [1; 2; 3; 4; 5], a call was made from within the EC community [6]. In the letter, the collective of authors and signatories identified four main issues with the high-volume inflow of new methods: useless metaphors, lack of novelty, poor experimental validation and comparison, and publishing these methods in off-topic journals.
Footnote 1: Campelo, F., Aranha, C. Evolutionary computation bestiary. [https://github.com/fcampelo/EC-Bestiary](https://github.com/fcampelo/EC-Bestiary)
In this text, we will focus on the poor experimental validation of some of the EC methods. Most of the reasoning about the viability of metaheuristics is done through benchmarking [7]. If a new method performs well on a universally accepted set of benchmark problems, it is likely to be seen as valid. There have been several benchmark functions/sets proposed over the years, but the most widely recognized ones came from special sessions (competitions) on black-box optimization at two conferences: the IEEE Congress on Evolutionary Computation (CEC), and the Genetic and Evolutionary Computation Conference (GECCO), where the Black-Box Optimization Benchmarking (BBOB) workshop was held.
There is, however, another quite widely used benchmark set that contains some of the most well-known functions such as Griewank, Ackley, Rastrigin, Rosenbrock, and Schwefel. It was recently uncovered [8] that this set contains a serious design flaw, as a large portion of the functions in the set have their respective optimum at the zero vector (or in the center of the feasible set). This would be fine if it were not for the methods that exploit this flaw to appear competitive. These methods incorporate a "check-the-middle" routine or have a center-bias (or zero-bias) operator that draws them towards the center of the feasible set. One would expect that such methods do not get published very often, are easily spotted, or at the very least do not appear in high-profile journals.
In this paper, we describe a simple methodology that we use to uncover whether or not a given evolutionary computation method utilizes a center-bias operator. We then investigate 90 evolutionary computation methods from the mealpy library2 and Mathworks code repositories3 for the inclusion of the center-bias.
Footnote 2: N. V. Thieu, “A collection of the state-of-the-art meta-heuristics algorithms in python: Mealpy,” Available: [https://doi.org/10.5281/zenodo.3711948](https://doi.org/10.5281/zenodo.3711948)
Footnote 3: [https://www.mathworks.com/](https://www.mathworks.com/)
## 2 Methodology
We utilize the same methodology that was used to uncover the center-bias problem in [9] and [8]. The 13 benchmark functions used for our test (and their optimization ranges) are shown in Table 1. One can easily see that all of these functions, apart from F08, have their respective optima either at the zero vector or very close to it. The problem F08 is quite different from the rest, as its optimum is far away from the center.
For the evaluation we set the dimension of the problems to 30 and allow for at most 50,000 function evaluations. We also chose a simple performance measure - the mean error (the difference between the best function value found and the optimal function value) over 20 independent runs. Here, we also treat any value smaller than 1e-08 as identical to 1e-08, as the problem is then essentially solved and additional precision is not needed (we could treat it as a 0 as well, but we will shortly use ratios of these numbers, which would bring unwanted hassle). We refer to the results of these computations as the "unshifted" ones. Afterwards, we introduce a shift operation that "moves" the benchmark function by a predetermined vector \(s\), meaning that the function \(f(x)\) becomes \(f(x+s)\). One expects that a "small" value of \(s\) should not result in a large deviation in the behaviour of the optimization method, as the two problems are very similar. We chose the shift vector as 10% of the range - e.g., for F01, \(s=[20,20,\ldots]\). We use the same computational framework (i.e., dimension 30, at most 50,000 function evaluations, and 20 independent runs) and refer to the results of these computations as the "shifted" ones.
What we are interested in is the "ratio" between the "shifted" and "unshifted" results for the individual benchmark functions, i.e., how many times worse the results on the shifted problem are than on the unshifted one. For the methods that do not incorporate a center-bias, one expects this number to be close to 1 (as the unshifted and shifted problems are similar), while for the methods that include a center-bias, this ratio should be much bigger than 1. Naturally, the value of this ratio will fluctuate depending on the given benchmark function, as well as on the number of independent runs of the algorithms. As a simple indicator of the center-bias, we look at the geometric mean of the ratios over the different benchmarks - if this value is bigger than 1E+01 (meaning that the method performs roughly at least an order of magnitude better on unshifted problems), we take it as a confirmation of the presence of the center-bias operator.
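The detection procedure above can be sketched in a few lines. This is a toy illustration with hypothetical helper names, a much smaller budget (2,000 evaluations, 5 runs) than in the paper, plain random search standing in for an unbiased EC method, and a caricatured "check-the-middle" variant standing in for a center-biased one:

```python
import numpy as np

def sphere(x):
    return float(np.sum(x ** 2))

def shifted(f, s):
    """f(x) -> f(x + s): the optimum is moved away from the center."""
    return lambda x: f(x + s)

def random_search(f, dim, bounds, budget, rng):
    # Unbiased stand-in for an EC method: uniform sampling in the box.
    lo, hi = bounds
    return min(f(rng.uniform(lo, hi, dim)) for _ in range(budget))

def center_biased_search(f, dim, bounds, budget, rng):
    # Caricature of a "check-the-middle" routine: also evaluates the zero vector.
    return min(random_search(f, dim, bounds, budget - 1, rng), f(np.zeros(dim)))

def mean_error(method, f, f_opt, dim, bounds, budget=2000, runs=5, seed=0):
    rng = np.random.default_rng(seed)
    errs = [max(method(f, dim, bounds, budget, rng) - f_opt, 1e-8) for _ in range(runs)]
    return float(np.mean(errs))

dim, bounds = 30, (-100.0, 100.0)
s = np.full(dim, 20.0)  # shift by 10% of the range, as for F01

for method in (random_search, center_biased_search):
    ratio = (mean_error(method, shifted(sphere, s), 0.0, dim, bounds)
             / mean_error(method, sphere, 0.0, dim, bounds))
    print(f"{method.__name__}: ratio {ratio:.2e}",
          "-> center-bias suspected" if ratio > 10 else "-> ok")
```

With these settings, the random search keeps a ratio close to 1, while the center-biased variant hits the 1e-08 floor on the unshifted sphere and produces an enormous ratio.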
A small example of this computation is shown in Table 2, where we investigate five EC methods - Artificial Bee Colony (ABC) [10], Differential Evolution (DE) [11], LSHADE [12], Satin Bowerbird Optimizer (SBO) [13], and Runge Kutta Optimizer (RKO) [14]. The first two methods (ABC and DE) can be thought of as the "standard"
\begin{table}
\begin{tabular}{l l l l l l l} ID & name & type & range & \(f^{*}\) & \(f(0)\) & \(x^{*}\) \\ \hline
F01 & Sphere & U, S & [-100,100] & 0 & 0 & [0,0,...] \\
F02 & Schwefel 2.22 & U, N & [-100,100] & 0 & 0 & [0,0,...] \\
F03 & Schwefel 1.2 & U, N & [-100,100] & 0 & 0 & [0,0,...] \\
F04 & Schwefel 2.21 & U, S & [-100,100] & 0 & 0 & [0,0,...] \\
F05 & Rosenbrock & U, N & [-30,30] & 0 & 2.90E+01 & [1,1,...] \\
F06 & Step & U, S & [-100,100] & 0 & 7.50E+00 & [-0.5,0,...] \\
F07 & Quartic with noise & U, S & [-1.28,1.28] & 0 & 0 & [0,0,...] \\
F08 & Schwefel 2.26 & M, S & [-500,500] & -1.25E+04 & 0 & [420.9,420.9,...] \\
F09 & Rastrigin & M, S & [-5.12,5.12] & 0 & 0 & [0,0,...] \\
F10 & Ackley & M, N & [-32,32] & 0 & 0 & [0,0,...] \\
F11 & Griewank & M, N & [-600,600] & 0 & 0 & [0,0,...] \\
F12 & Penalized1 & M, N & [-50,50] & 0 & 1.67E+00 & [-1,-1,...] \\
F13 & Penalized2 & M, S & [-50,50] & 0 & 3.00E+00 & [1,1,...] \\
\end{tabular}
\end{table}
Table 1: The 13 benchmark functions, dimension 30. U -- unimodal, M -- multimodal, S -- separable, N -- non-separable, \(f^{*}\) -- the optimal function value, \(f(0)\) -- function value at the zero vector, \(x^{*}\) -- optimal solution.
ones, LSHADE is among the state-of-the-art ones (as it served as a basis for many of the best methods in recent CEC competitions), and the last two (SBO and RKO) are the "new" ones. One can quite easily see that for the first three methods (ABC, DE, and LSHADE) the geometric mean of the ratios is roughly 1, meaning that no center-bias was detected. For SBO, the situation is a bit more complicated, as on many benchmark functions the ratio is relatively low (roughly between 1 and 2), but it is very large (almost 5E+04) on F02. This could be a fluke. Fortunately, the nature of the geometric mean will suppress some of the individual flukes - the value for SBO is 3.95E+00 (i.e., \(<\)1E+01), so we do not label it as a method with a center-bias. The same cannot be said about RKO. Here, many of the ratios are extremely big (\(>\)1E+06), and the value of the geometric mean is 7.36E+04. We can confidently say that RKO incorporates a center-bias operator.
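The fluke-damping effect of the geometric mean can be checked numerically. The ratios below are hypothetical, mimicking the SBO case of one large outlier among 13 otherwise modest per-function ratios:

```python
import numpy as np

ratios = [1.5] * 12 + [4.8e4]  # twelve modest ratios and one F02-like fluke
geom = float(np.exp(np.mean(np.log(ratios))))
print(round(geom, 2))  # -> 3.33, well below the 1E+01 threshold
```

A single outlier shifts the geometric mean by only a factor of \((4.8\times 10^{4})^{1/13}\approx 2.3\), whereas it would dominate an arithmetic mean entirely.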
An interesting observation can be made regarding the benchmark function F08. For all five methods, the ratio between the shifted and unshifted results on F08 is very close to 1. Recall that F08 is the only function in the benchmark set that has the optimum quite far away from the center of the feasible set, and its function value at the zero-vector is also quite far away from the optimal value. Although it is arguably not surprising that the methods have a ratio around 1 on this function, it is still valuable to have it confirmed - the function F08 serves as a sanity check in the benchmark set.
## 3 Results and Discussion
In this section we report the results of applying the methodology described in the previous section to 90 selected EC methods. The selected methods, the year of the publication that describes them, and the geometric mean of the ratios are shown (in alphabetical order) in Table 3, with the ones with a confirmed center-bias (i.e., values \(>\)1E+01) highlighted in red. These results are extremely worrying, as more than half (47 out of the 90) of the methods have a confirmed center-bias. And they become even worse when we take a look at the number of methods with center-bias that were proposed recently, as shown in Figure 1.
Figure 1: Number of papers proposing methods with/without center-bias in time.
We find that while the number of newly proposed methods that do not have the center-bias problem increased only slightly over the last three decades, the number of methods that we have identified as having the center-bias problem is growing extremely fast, especially in the last five years. It has gotten so bad that an overwhelming majority of newly proposed methods have the center-bias problem. An important thing to remark is that we only considered the "baseline" (or original) versions of the methods, and not any of the "improved" or "enhanced" variants that are also being published at an ever-increasing rate. If these were considered as well, we suspect that the graph would look even worse.
We can also see that the first method that we have found to incorporate the center-bias was Teaching Learning-based Optimization (TLO) in 2012, followed by Wind Driven Optimization (WDO) in 2013, and Grey Wolf Optimizer (GWO) in 2014. Of these three, TLO and GWO have become extremely influential (gathering thousands of citations) and spawned a large number of variants and modifications. Our failure to quickly identify that they are defective is one of the root causes of the mess we have to deal with now. Although the defect of GWO was uncovered in 2019 [101], GWO is still used in numerical comparisons (even on problems that are susceptible to the center-bias operator). Similar defects have also been found for the Salp Swarm Optimization (SSO), Sooty Tern Optimization Algorithm (STOA), Tunicate Swarm Algorithm (TSA), Harris Hawks Optimization (HHO), Butterfly Optimization Algorithm (BOA), Slime Mould Algorithm (SMA), Gradient-Based Optimizer (GBO), Marine Predators Algorithm (MPA), and Komodo Mlipir Algorithm (KMA), all in 2022 [102; 9; 8].
For the most part, the methods that incorporate a center-bias procedure have been developed by diverse groups of authors (i.e., most authors have only one or two such methods). There is, however, one very notable exception. The group of S. Mirjalili, A. H. Gandomi, and A. A. Heidari is collectively responsible for 20 of the 47 methods that contain center-bias (and S. Mirjalili is also one of the authors of GWO).
\begin{table}
\begin{tabular}{l l l l l l l l}
Abbr. & Method name & Year & geomean & Abbr. & Method name & Year & geomean \\ \hline
ABC [10] & Artificial Bee Colony & 2008 & 1.29E+00 & HC [15] & Hill Climbing & 1993 & 1.13E+00 \\
ACO [16] & Ant Colony Optimization (continuous) & 2008 & 7.40E+01 & HGS [17] & Hunger Games Search & 2021 & 3.66E+06 \\
AEO [18] & Artificial Ecosystem-based Optimization & 2020 & 1.01E+07 & HGSO [19] & Henry Gas Solubility Optimization & 2019 & 8.07E+03 \\
ALO [20] & Ant Lion Optimizer & 2015 & 1.44E+00 & HHO [21] & Harris Hawks Optimization & 2019 & 1.62E+05 \\
AO [22] & Aquila Optimizer & 2021 & 2.26E+05 & HS [23] & Harmony Search & 2001 & 9.75E-01 \\
AOA [24] & Arithmetic Optimization Algorithm & 2021 & 1.01E+00 & IWO [25] & Invasive Weed Optimization & 2006 & 1.88E+00 \\
ArchOA [26] & Archimedes Optimization Algorithm & -- & -- & [27] & -- & -- & 1.19E+01 \\
ASO [28] & Atom Search Optimization & 2019 & 8.71E+01 & KMA [29] & Komodo Mlipir Algorithm & 2022 & 1.84E+05 \\
BA [30] & Bat-inspired Algorithm & 2010 & 1.44E+00 & LCO [31] & Life Choice-based Optimization & 2020 & 8.31E+07 \\
BBO [32] & Biogeography-Based Optimization & 2008 & 6.43E-01 & MA [33] & Memetic Algorithm & 1989 & 1.68E-03 \\
Bees [34] & Bees Algorithm & 2006 & 1.16E+00 & MFO [35] & Moth-Flame Optimization & 2015 & 1.73E-01 \\
BES [36] & Bald Eagle Search & 2020 & 2.62E+08 & MGO [37] & Mountain Gazelle Optimizer & 2022 & 1.28E+01 \\
BFO [38] & Bacterial Foraging Optimization & 2002 & 9.66E-01 & MPA [39] & Marine Predators Algorithm & 2020 & 1.06E+02 \\
BOA [40] & Butterfly Optimization Algorithm & 2019 & 9.57E+05 & MRFO [41] & Manta Ray Foraging Optimization & 2020 & 6.40E+07 \\
BRO [42] & Battle Royale Optimization & 2021 & 2.59E+09 & MSA [43] & Moth Search Algorithm & 2018 & 8.37E+00 \\
BSA [44] & Bird Swarm Algorithm & 2016 & 1.09E+01 & MVO [45] & Multi-Verse Optimizer & 2016 & 1.75E+00 \\
BSO [46] & Brain Storm Optimization & 2011 & 7.85E+00 & NMRA [47] & Naked Mole-Rat Algorithm & 2019 & 9.56E+08 \\
CA [48] & Cultural Algorithm & 2009 & 7.18E-01 & NRO [49] & Nuclear Reaction Optimization & 2019 & 2.32E+06 \\
CEM [50] & Cross-Entropy Method & 2005 & 1.33E+00 & PFA [51] & Pathfinder Algorithm & 2019 & 3.11E+08 \\
CGO [52] & Chaos Game Optimization & 2021 & 2.14E+07 & PSO [53] & Particle Swarm Optimization & 1995 & 9.70E-01 \\
ChOA [54] & Chimp Optimization Algorithm & 2020 & 3.39E+03 & PSS [55] & Pareto-like Sequential Sampling & 2021 & 7.17E+08 \\
COA [56] & Coyote Optimization Algorithm & 2018 & 4.00E+06 & QSA [57] & Queuing Search Algorithm & 2021 & 7.91E-01 \\
CRO [58] & Coral Reefs Optimization & 2014 & 9.69E-01 & RKO [14] & Runge Kutta Optimizer & 2021 & 7.36E+04 \\
CSA [59] & Cuckoo Search Algorithm & 2009 & 1.10E+00 & SA [60] & Simulated Annealing & 1987 & 8.98E-01 \\
CSO [61] & Cat Swarm Optimization & 2006 & 9.58E-01 & SARO [62] & Search And Rescue Optimization & 2019 & 2.27E+00 \\
DE [11] & Differential Evolution & 1997 & 9.66E-01 & SBO [13] & Satin Bowerbird Optimizer & 2017 & 3.95E+00 \\
DandO [63] & Dandelion Optimizer & 2022 & 3.39E+02 & SCA [64] & Sine Cosine Algorithm & 2016 & 1.18E+04 \\
DO [65] & Dragonfly Optimization & 2016 & 6.62E+02 & SFO [66] & SailFish Optimizer & 2019 & 2.57E+07 \\
EFO [67] & Electromagnetic Field Optimization & 2016 & 6.78E-01 & SHO [68] & Spotted Hyena Optimizer & 2017 & 1.31E+00 \\
EHO [69] & Elephant Herding Optimization & 2015 & 3.99E+03 & SLO [70] & Sea Lion Optimization Algorithm & 2019 & 2.83E+00 \\
EO [71] & Equilibrium Optimizer & 2020 & 4.65E+03 & SMA [72] & Slime Mould Algorithm & 2020 & 4.54E+06 \\
EOA [73] & Earthworm Optimization Algorithm & 2018 & 2.58E+05 & SRSR [74] & Swarm Robotics Search And Rescue & 2017 & 2.03E+00 \\
EP [75] & Evolutionary Programming & 1999 & 1.43E+00 & SSA [76] & Sparrow Search Algorithm & 2020 & 2.16E+06 \\
ES [77] & Evolution Strategies & 2002 & 1.14E+00 & SSDO [78] & Social Ski-Driver Optimization & 2020 & 5.40E+08 \\
FA [79] & Fireworks Algorithm & 2010 & 1.34E+00 & SSO [80] & Salp Swarm Optimization & 2017 & 2.28E+01 \\
FBIO [81] & Forensic-Based Investigation Optimization & 2020 & 5.07E+06 & SSpidA [82] & Social Spider Algorithm & 2015 & 1.11E+00 \\
FFA [83] & Firefly Algorithm & 2011 & 1.18E+00 & STOA [84] & Sooty Tern Optimization Algorithm & 2019 & 6.78E+01 \\
FOA [85] & Fruit Fly Optimization Algorithm & 2012 & 4.01E+00 & TLO [86] & Teaching Learning-based Optimization & 2012 & 3.19E+01 \\
FPA [87] & Flower Pollination Algorithm & 2012 & 9.74E-01 & TPO [88] & Tree Physiology Optimization & 2019 & 2.22E+01 \\
GA [89] & Genetic Algorithm & 1994 & 1.02E+00 & TSA [90] & Tunicate Swarm Algorithm & 2020 & 6.25E+06 \\
GBO [91] & Gradient-Based Optimizer & 2020 & 1.71E+00 & TWO [92] & Tug of War Optimization & 2017 & 9.60E-01 \\
GCO [93] & Germinal Center Optimization & 2018 & 1.02E+00 & VCS [94] & Virus Colony Search & 2016 & 2.90E+05 \\
GOA [95] & Grasshopper Optimization Algorithm & 2017 & 3.39E+00 & WDO [96] & Wind Driven Optimization & 2013 & 4.86E+01 \\
GSKA [97] & Gaining Sharing Knowledge-based Algorithm & 2020 & 4.51E-01 & WHO [98] & Wildebeest Herd Optimization & 2019 & 8.63E+02 \\
GWO [99] & Grey Wolf Optimizer & 2014 & 8.89E+05 & WOA [100] & Whale Optimization Algorithm & 2016 & 1.87E \\
\end{tabular}
\end{table}
Table 3: The 90 considered EC methods, year of publication, and geometric mean of the shifted/unshifted ratios (geomean); methods with geomean \(>\)1E+01 have a confirmed center-bias.
Another interesting point to make is that some of the methods that display the worst center-bias properties (i.e., the largest values of the geometric mean of the ratios) are the ones that were supposedly based on "mathematical" processes - Arithmetic Optimization Algorithm (AOA), Gradient-Based Optimizer (GBO), Runge Kutta Optimizer (RKO), and Sine Cosine Algorithm (SCA). The following are the first few sentences from the abstract of the paper describing RKO [14]:
"The optimization field suffers from the metaphor-based "pseudo-novel" or "fancy" optimizers. Most of these cliche methods mimic animals' searching trends and possess a small contribution to the optimization process itself. Most of these cliche methods suffer from the locally efficient performance, biased verification methods on easy problems, and high similarity between their components' interactions. This study attempts to go beyond the traps of metaphors and introduce a novel metaphor-free population-based optimization method based on the mathematical foundations and ideas of the Runge Kutta (RK) method widely well-known in mathematics."
The irony is rich.
## 4 Conclusion
The center-bias problem is right now one of the major issues plaguing the field of evolutionary computation. In this paper, we have described a simple procedure for identifying methods with center-bias and used it to investigate 90 methods that were proposed in the last three decades. We have found that 47 of the 90 methods utilize center-bias. We have also shown that the utilization of center-bias is a relatively new phenomenon, with the first instances appearing in 2012-2014. However, the number of methods that use it grew extremely fast in the last five years.
We should note that there is an additional problem that plagues the field right now, which is the equivalence of some of the methods that is hidden under a metaphor-rich jargon. Some of the methods that we have identified as not having a center-bias, such as Harmony Search (HS), Cuckoo Search Algorithm (CSA), Firefly Algorithm (FA), Moth-Flame Optimization (MFO), and Ant Lion Optimizer (ALO), should also not be used, as they have been found to be either extremely similar (or identical) to other methods [1; 4; 103].
Further utilization, development and improvement of the methods that contain a center-bias is an exercise in futility, as by their very nature they cannot be considered as efficient algorithms. Enough computational and human resources were already wasted in writing, testing, comparing, and reviewing these methods. The field of evolutionary computation needs a spring cleaning. The sooner the better.
## Acknowledgment
This work was supported by IGA BUT: FSI-S-20-6538.
|
2303.11336 | Studying Limits of Explainability by Integrated Gradients for Gene
Expression Models | Understanding the molecular processes that drive cellular life is a
fundamental question in biological research. Ambitious programs have gathered a
number of molecular datasets on large populations. To decipher the complex
cellular interactions, recent work has turned to supervised machine learning
methods. The scientific questions are formulated as classical learning problems
on tabular data or on graphs, e.g. phenotype prediction from gene expression
data. In these works, the input features on which the individual predictions
are predominantly based are often interpreted as indicative of the cause of the
phenotype, such as cancer identification. Here, we propose to explore the
relevance of the biomarkers identified by Integrated Gradients, an
explainability method for feature attribution in machine learning. Through a
motivating example on The Cancer Genome Atlas, we show that ranking features by
importance is not enough to robustly identify biomarkers. As it is difficult to
evaluate whether biomarkers reflect relevant causes without known ground truth,
we simulate gene expression data by proposing a hierarchical model based on
Latent Dirichlet Allocation models. We also highlight good practices for
evaluating explanations for genomics data and propose a direction to derive
more insights from these explanations. | Myriam Bontonou, Anaïs Haget, Maria Boulougouri, Jean-Michel Arbona, Benjamin Audit, Pierre Borgnat | 2023-03-19T19:54:15Z | http://arxiv.org/abs/2303.11336v1 | # Studying Limits of Explainability by Integrated Gradients for Gene Expression Models
###### Abstract
Understanding the molecular processes that drive cellular life is a fundamental question in biological research. Ambitious programs have gathered a number of molecular datasets on large populations. To decipher the complex cellular interactions, recent work has turned to supervised machine learning methods. The scientific questions are formulated as classical learning problems on tabular data or on graphs, e.g. phenotype prediction from gene expression data. In these works, the input features on which the individual predictions are predominantly based are often interpreted as indicative of the cause of the phenotype, such as cancer identification. Here, we propose to explore the relevance of the biomarkers identified by Integrated Gradients, an explainability method for feature attribution in machine learning. Through a motivating example on The Cancer Genome Atlas, we show that ranking features by importance is not enough to robustly identify biomarkers. As it is difficult to evaluate whether biomarkers reflect relevant causes without known ground truth, we simulate gene expression data by proposing a hierarchical model based on Latent Dirichlet Allocation models. We also highlight good practices for evaluating explanations for genomics data and propose a direction to derive more insights from these explanations.
Explainability, Transcriptomic data, Supervised Learning, Feature Attribution, Integrated Gradients
## I Introduction
Understanding the molecular mechanisms that drive cellular metabolism is vital to better diagnose, treat and prevent diseases such as cancers or dementia. Schematically, DNA encodes genes (genome) which are transcribed into RNA (transcriptome) and then translated into proteins (proteome) that catalyse the complex molecular processes. The expression of genes is regulated in part by transcription factors and by epigenetic mechanisms (epigenome). It can also be modified by genetic mutations. A long standing objective is to seek relations between measurements of gene expression and phenotypes such as clinical features.
Ambitious programs have gathered such molecular databases on disease patients and on the general population, e.g. The Cancer Genome Atlas (TCGA) for cancer study ([https://www.cancer.gov/tcga](https://www.cancer.gov/tcga)), ROSMAP [1] about dementia, and the UK Biobank with genetic information on \(\sim 500000\) people ([https://www.ukbiobank.ac.uk](https://www.ukbiobank.ac.uk)). Data from these projects were analysed to seek relations between genetic variation, gene expression and clinical features. This has been done using statistical tests, or relying on clustering methods that group individual samples based on their molecular profiles; see the Review [2] about TCGA. To account for more complex biological relationships, recent works have turned to supervised machine learning methods to probe the same questions.
Hence, the question is formulated as classical learning problems on tabular data (possibly seen as problems on graphs): the prediction of interactions between genes [3] or between multi-omics modalities [4], prognosis prediction from gene expression data [5], phenotype prediction from similar inputs [6, 7], from single nucleotide changes in DNA [8] or from multiple modalities [9]. Once trained, these models achieve reasonable performance, and the individual predictions can be explained by the input features on which they are predominantly based. These features are then often interpreted as biomarkers for the studied phenomenon, e.g. for cancer identification [6, 7]. To what extent are these biomarkers indicative of the molecular pathways responsible for the phenotypes? Here we address this question in the context of explainability for data processing.
To achieve explainability, one should adapt the learning method to the data, to the task complexity and the expected explanations [10]. In the present work, we will rely on additive feature attribution methods [11]. Especially, the Integrated Gradients (IG) method [12] is a widely used approach that satisfies the completeness axiom and is computationally efficient. Explanation reliability can then be quantified by various metrics; e.g. faithfulness, stability and fairness have been developed for tabular data [13]. Is it efficient to identify biomarkers in biological data using IG?
To study this question and explore the limits of explainability in this context, we put forward three contributions:
* Using TCGA gene expression dataset as a relevant example, we show that ranking features by importance is not enough to robustly identify biomarkers. Our point is that such features should be both sufficient and necessary for the predictions.
* To this end, we propose to systematically evaluate two metrics on genomic data: the Prediction Gap on Unimportant features (PGU), estimating the number of important features necessary for a single prediction, and the Prediction Gap on Important features (PGI), estimating the number of important features sufficient for a single prediction.
* Seeking confidence and control over the proposed explanations, we propose to adapt the Latent Dirichlet Allocation (LDA) model [14] to generate data that have similar properties to biological data (e.g., the hierarchical properties of expression pathways). We then evaluate the explanations obtained on this LDA model thanks to the IG attribution method.
The article is organised as follows: Section II introduces the data, the learning methods, the attribution methods and the metrics used. Section III details the study on pan-cancer classification based on transcriptomic data from TCGA. It highlights the importance of systematically estimating several metrics, before reporting important features. Section IV proposes a LDA based generative model, and we then evaluate the quality of the generated explanations. Concluding remarks are in Section V. The code and data are available publicly on [https://github.com/mbonto/XAI_for_genomics](https://github.com/mbonto/XAI_for_genomics).
## II Background
The objective is to solve a classification task over \(C\) classes for gene expression data. A data sample, \(\mathbf{x}\in\mathbb{R}^{F}\), contains the expression of \(F\) genes (features). The dataset is a collection of cells of various classes (cancer types in TCGA). For the present work, we consider classical supervised methods, denoted by \(f(\cdot)\): Logistic Regression (LR) or MultiLayer Perceptron (MLP). Nonetheless, the methodology could be extended to any learning architecture, e.g. neural networks. A softmax function is applied before the output of the model which is \(f(\mathbf{x})\in\mathbb{R}^{C}\). Working in a supervised context, the parameters are updated by gradient descent in order to minimise a loss function on a set of training examples. As the classes in TCGA are unbalanced, learning is evaluated on a test set by the so-called _balanced accuracy_, computed as the average of the recalls obtained on each class.
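A minimal numpy sketch of this evaluation metric (the helper name is ours, not from the paper):

```python
import numpy as np

def balanced_accuracy(y_true, y_pred):
    """Average of the recalls obtained on each class."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    classes = np.unique(y_true)
    recalls = [np.mean(y_pred[y_true == c] == c) for c in classes]
    return float(np.mean(recalls))

# With unbalanced classes, plain accuracy (5/6 here) hides the failure on the rare class.
y_true = [0, 0, 0, 0, 0, 1]
y_pred = [0, 0, 0, 0, 0, 0]
print(balanced_accuracy(y_true, y_pred))  # -> 0.5
```

This matches averaging per-class recalls, which is exactly why the metric is preferred when class sizes range widely, as they do in TCGA.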
Here, explainability is the ability of a method to propose biomarkers for a single prediction, i.e., to estimate which features are important for that prediction through a score \(\phi_{i}\) computed for each feature \(\mathbf{x}_{i}\). We choose the Integrated Gradients method (IG) [12] rather than a perturbation-based method such as SHAP [11] because of its lower computation time. To simplify the notations, \(f(\mathbf{x})\) will denote the output associated with the true class \(c\). Given a baseline \(\mathbf{x}^{\prime}\in\mathbb{R}^{F}\), the IG score is:
\[\phi_{i}(\mathbf{x})=(\mathbf{x}_{i}-\mathbf{x}^{\prime}_{i})\int_{\alpha=0}^{1}\left.\frac{\partial f(\mathbf{x})}{\partial\mathbf{x}_{i}}\right|_{\mathbf{x}=\mathbf{x}^{\prime}+\alpha(\mathbf{x}-\mathbf{x}^{\prime})}d\alpha\,. \tag{1}\]
Integrated gradients satisfy the completeness property as the sum of the attributions is equal to the difference between the output of the network at the input and at the chosen baseline:
\[\sum_{i=1}^{F}\phi_{i}(\mathbf{x})=f(\mathbf{x})-f(\mathbf{x}^{\prime}). \tag{2}\]
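As a numerical illustration of Eqs. (1) and (2), the following numpy sketch approximates the path integral with a midpoint Riemann sum for an assumed softmax linear model (all names are hypothetical; Captum's implementation works similarly for arbitrary networks) and then checks completeness:

```python
import numpy as np

rng = np.random.default_rng(0)
F, C = 5, 3
W, b = rng.normal(size=(C, F)), rng.normal(size=C)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def f(x, c):
    # output of the model for class c
    return softmax(W @ x + b)[c]

def grad_f(x, c):
    # analytic gradient of the softmax output for class c
    p = softmax(W @ x + b)
    return p[c] * (W[c] - p @ W)

def integrated_gradients(x, baseline, c, steps=300):
    # midpoint Riemann approximation of the path integral in Eq. (1)
    alphas = (np.arange(steps) + 0.5) / steps
    grads = np.stack([grad_f(baseline + a * (x - baseline), c) for a in alphas])
    return (x - baseline) * grads.mean(axis=0)

x, x0 = rng.normal(size=F), np.zeros(F)
phi = integrated_gradients(x, x0, c=1)
# Completeness (Eq. 2): the attributions sum to f(x) - f(baseline).
print(phi.sum(), f(x, 1) - f(x0, 1))
```

With a few hundred steps, the two printed values agree to several decimal places, which is the standard sanity check for an IG implementation.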
Note that a LR is by itself interpretable as the weights' amplitudes reveal which features are important. The IG score of a feature used in a LR simply reflects the product of the weight and the feature value.
At a single prediction level, some metrics exist to evaluate the relevance of features [13]. PGI and PGU are two metrics measuring the faithfulness of an explanation. They are computed as the area under the curve of the prediction gap while a varying proportion of input features is set to 0 (the average value of the features). Denoting \(\mathbf{x}\) the original input and \(\tilde{\mathbf{x}}\) the modified input (with some features set to 0), the prediction gap is \(\max(f(\mathbf{x})-f(\tilde{\mathbf{x}}),0)\). It increases as features are removed. For PGI (resp. PGU), the features identified as the most (resp. least) important are removed first (Fig. 1). By construction, the maximum area under the curves is 1. When important features are removed first, the model quickly makes wrong predictions; thus, PGI is expected to be close to 1. When the least important features are removed first, the prediction stays stable until a large number of features is removed. When that occurs, the prediction of the model is close to the baseline prediction (zero input). As, in practice, the observed area is bounded by \(f(\mathbf{x})-f(\mathbf{x}^{\prime})\), the PGU values can be adjusted for interpretation. Here, we divide them by \(1-f(\mathbf{x}^{\prime})\). The prediction gaps are averaged over all correctly classified test samples.
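A toy numpy sketch of the two prediction gaps (a hypothetical linear model in which only three features truly matter; "removing" a feature sets it to the zero baseline):

```python
import numpy as np

rng = np.random.default_rng(0)
F = 20
w = np.zeros(F)
w[:3] = 2.0                        # only the first 3 features truly matter
x = np.abs(rng.normal(size=F)) + 0.5

def f(v):
    # toy "probability" output of a linear model
    return 1.0 / (1.0 + np.exp(-(w @ v)))

scores = np.abs(w * x)             # stand-in attribution scores (exact for a linear model)
order_imp = np.argsort(-scores)    # most important features first
order_unimp = order_imp[::-1]      # least important features first

def prediction_gap(x, order):
    gaps, xt = [], x.copy()
    for i in order:
        xt[i] = 0.0                # "remove" the feature: set it to the baseline value
        gaps.append(max(f(x) - f(xt), 0.0))
    return float(np.mean(gaps))    # area under the prediction-gap curve

pgi = prediction_gap(x, order_imp)    # large: the top features are sufficient
pgu = prediction_gap(x, order_unimp)  # small: the bottom features are unnecessary
print(pgi, pgu)
```

A faithful explanation yields a large PGI and a small PGU, as this toy example shows when the attribution ordering matches the model.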
Fig. 1: Scheme describing the Prediction Gaps on Important features (PGI) and on Unimportant features (PGU).

When the features that cause the class distinction are known, do they stand out among the most important features identified for the model as a whole? Feature Agreement (FA) is a metric that quantifies the number of a priori important features retrieved in the set of features identified as most important. Denoting the set of features characteristic of a class as \(\mathcal{F}\) and the set of features identified (here by IG) as the most important for the method as \(\mathcal{M}\),
\[\text{FA}=\frac{|\mathcal{F}\cap\mathcal{M}|}{|\mathcal{F}|} \tag{3}\]
For instance, in our setting, the important features attributed to a class should be over-expressed genes resulting from over-expressed pathways characteristic of this class.
## III Why is ranking features by importance not enough? A motivating example on Pan-Can TCGA
### _Pan-Can TCGA dataset_
TCGA, a cancer genomics program, generated genomic, epigenomic, transcriptomic, and proteomic datasets spanning 33 cancer types (publicly available at [https://portal.gdc.cancer.gov/](https://portal.gdc.cancer.gov/)). In the present study, we consider a cancer-type classification task on a gene expression dataset called TCGA Pan-Can. It relies on 9680 samples classified into 33 cancer types. Class sizes range from 36 to 1095 samples. For each sample, the expression of 16335 genes is measured. The transcriptomic data have been pre-processed by the Pan-Cancer Atlas initiative to mitigate the bias induced by the diversity of the experimental settings [15]. The gene expression features are \(\log_{2}(count+1)\), where \(count\) is the upper-quartile normalized raw count, so as to reduce the impact of outliers.
### _Learning method and experimental setting_
For classification learning, we consider a LR and a MLP. We also include a diffusion layer (D) beforehand; it diffuses the initial gene expression vector on the gene correlation graph. The relevance of this diffusion will be later motivated in Section IV. Before training, each feature (gene expression) is standardised. The attribution scores are computed by IG using a zero baseline in Eq. (1). The code uses Pytorch [16] and the Captum library [17]. More details are in Appendix A.
### _Interpretation_
The results are presented in Table Ia. The balanced accuracy is slightly higher for the MLP than for the LR. The diffusion does not significantly degrade the performance. For the LR, a PGU of 0.003 can be interpreted as the possibility of removing every gene except the 49 most important ones from the model without affecting its prediction. A PGI of 0.95 implies that the 817 most important genes must be removed from the model to hurt the prediction. Hence, the 49 genes are sufficient (PGU) but not necessary (PGI) to get a correct prediction. For the MLP, even more genes could be removed without hurting the performance. Similar results are obtained for the diffused models. This probably reflects the underlying biological complexity.
We conduct another experiment to interpret the global meaning of these scores. The balanced accuracy scores obtained by iteratively adding the most important features to the LR, with importance computed either on each class independently (dark purple) or on the whole dataset (light purple), are plotted in Fig. 1(a). In the first case, the first point is obtained by keeping the most important gene on average for each of the 33 classes. The curve increases rapidly and carries about the same information as PGI: \(\sim 50-100\) genes are enough to perform correctly. The difference between the light and dark purple curves highlights that the IG score averaged over all samples (light purple curve), as done classically, provides less informative genes than ranking the important genes by class (dark purple curve).
For comparison, we also computed the balanced accuracy obtained while keeping random features (brown; error bars are standard deviations computed over 100 trials) or random features excluding the ones identified as important for the classes (brown dots). The global interpretation of these two curves is that any set of 800 random features, even among the least important ones, is sufficient to get a good performance. The dataset contains highly redundant information, which is not surprising given the diversity of cancer cells and the numerous, complex metabolic pathways involving many bio-actors. These results raise several questions on the interpretation of the explanations: the features proposed as relevant by IG are not individually important. To go further, we propose to simulate a biologically plausible (yet far simpler) dataset with known ground-truth explanations.
## IV The importance of a mesoscopic scale highlighted on a simulated dataset
### _A generative model of transcriptomic data_
For transcriptomic data, various simulation models already exist [18, 19]. Here, we propose to use a generative probabilistic model called LDA [14], in which gene expression is controlled by the activation of metabolic pathways. LDA is best known for generating documents with a fixed number of words associated with various topics, and has already been used in genomics [20]. With LDA, we can generate individual samples (documents) with a fixed number of sequencing reads (words) associated with diverse metabolic pathways (subjects). The expression of genes in the same pathway is highly correlated in the simulated data.

TABLE I: Explainability metrics obtained on the test sets of several datasets, using different supervised learning methods. LR: logistic regression. MLP: multilayer perceptron. D: diffusion on correlation graph. Metrics: PGI: prediction gap on important features; PGU: prediction gap on unimportant features; FA: feature agreement.
Formally, the dataset contains expression levels for \(G\) genes, themselves grouped in \(P\) sets modelling metabolic pathways. An individual sample is described by a set of couples (\(\text{gene},\text{value}\)), where the value is a relative number of drawn reads associated with the gene. The model requires priors \(\boldsymbol{\eta}_{p}\) on the relative proportion of genes expressed in each pathway \(p\) and priors \(\boldsymbol{\alpha}_{c}\) defining the relative proportion of pathways expressed in a sample belonging to the class \(c\).
The relative proportion of reads appearing in a pathway is drawn once and for all as \(\boldsymbol{\beta}_{p}\sim\text{Dirichlet}(\boldsymbol{\eta}_{p})\). To generate a single sample \(s\) belonging to class \(c\), two steps are followed:

1. Draw the proportion of pathways: \(\boldsymbol{\theta}_{s}\sim\text{Dirichlet}(\boldsymbol{\alpha}_{c})\).
2. Draw \(N\) reads. For each read \(i\): (a) a pathway is assigned, \(p_{i}\sim\text{Multinomial}(\boldsymbol{\theta}_{s})\); (b) a read is observed, \(g_{i}\sim\text{Multinomial}(\boldsymbol{\beta}_{p_{i}})\).
To make our simulated task more interpretable, the classes are designed to influence a different set of pathways and the pathways to only influence a sparse set of genes.
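A stdlib-only sketch of this generative process follows (toy sizes, far smaller than SIMU1/SIMU2; note that in the actual simulation \(\boldsymbol{\beta}_{p}\) is drawn once and shared across all samples, whereas here it is drawn inside the single-sample function for brevity):

```python
import random

random.seed(0)

def dirichlet(etas):
    """Sample from Dirichlet(etas) using stdlib gamma draws."""
    g = [random.gammavariate(e, 1.0) for e in etas]
    s = sum(g)
    return [v / s for v in g]

def categorical(probs):
    """Draw one index from a discrete distribution."""
    u, acc = random.random(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if u <= acc:
            return i
    return len(probs) - 1

def sample_lda(alpha_c, eta, n_reads):
    """Generate one sample: gene read-counts from the LDA process.

    alpha_c: pathway priors for the class; eta: gene priors per pathway.
    """
    beta = [dirichlet(eta_p) for eta_p in eta]   # gene mix per pathway
    theta = dirichlet(alpha_c)                   # pathway mix for this sample
    counts = [0] * len(eta[0])
    for _ in range(n_reads):
        p = categorical(theta)                   # step 2(a): assign a pathway
        g = categorical(beta[p])                 # step 2(b): observe a read
        counts[g] += 1
    return counts

# toy sizes (illustrative only)
P, G, N = 4, 20, 1000
eta = [[1.0] * G for _ in range(P)]
alpha_c = [5.0, 1.0, 1.0, 1.0]                   # pathway 0 over-expressed for this class
counts = sample_lda(alpha_c, eta, N)
```

Genes with high probability inside an over-expressed pathway accumulate correlated counts across samples of the same class, which is precisely the structure the FA and D + FA metrics are designed to probe.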
### _Experimental setting: simulation and learning_
Two simulated datasets, noted as SIMU1 and SIMU2, are generated from the model. They contain 9900 examples with 15000 genes generated from 33 classes. A class is defined by 37 pathways which are over-expressed. By default, all the other pathways have an equal probability to be activated. In SIMU1, the classes have non-overlapping over-expressed pathways. In SIMU2, pathways can overlap, which more closely reflects the complexity of real signalling pathways. More details on the data generation process are in Appendix B.
The same setting as in Section III-B is used to evaluate the quality of the feature-based explanations. As the ground truth is known, FA can be computed. Additionally, under the intuition that correlated features should have the same importance, we compute a diffused FA (D + FA) from the IG \(\phi_{i}\) diffused by D as defined in Appendix A.
### _Interpretation_
We discuss here in detail the results of the LR, but similar conclusions can be drawn for all models. Fig. 2 shows the flexibility of the model across the two simulations (and can be compared with real TCGA data).
In Table I(b), the 142 (resp. 178) most important genes must be removed to significantly hurt the prediction for SIMU1 (PGI of 0.9905) (resp. SIMU2, PGI of 0.9881). This is consistent with the previous findings, as the data is by design less redundant than TCGA. FA shows that among the top-370 genes of a single sample in SIMU1 (the number of over-expressed genes per class), 266 genes belong to the over-expressed pathways. In SIMU2, only 43% of the most important genes belong to the over-expressed pathways. The metric D + FA obtains better results. This is not surprising given the design of the model, where genes inside the same pathway are strongly correlated, and it calls for seeking mesoscopic explanations instead of individual feature attributions. Although we did not observe any significant improvement when diffusing the inputs of the model directly (D+LR and D+MLP), it seems promising, from the data perspective, to take the correlation graph into account when processing such data, and to use the diffused feature attribution for explainability.
## V Conclusion
In this work, we proposed good practices for evaluating biomarkers derived from explainability methods on transcriptomic data. We first evaluated the complexity of a real dataset, TCGA, by characterising how the accuracy of a network evolves as an increasing proportion of the genes, sorted in different manners, is set to zero. We evaluated two simple metrics: the PGU, which allowed us to estimate that a specific set of 50 genes is sufficient to correctly classify each sample, and the PGI, which showed that removing this set did not degrade the prediction. We additionally showed that random subsets of 800 genes were good enough to correctly classify the classes, even when removing the 800 most important genes from the possible choices. These results underline the spread of the information and the ambiguity in defining well-behaved explanatory features in gene expression data. Then, we proposed a simulation tool, based on LDA, with granularity fine enough that it allowed us to analyse the pertinence of the genes selected by IG. Interestingly, diffusing the IG score on the correlation matrix led to very strong performance
Fig. 2: Explainability metrics on real (Pan-Can TCGA) and simulated (SIMU1 and SIMU2) data, obtained after learning with logistic regression.
increase in terms of explainability (\(\sim 95-100\)% of correctly selected genes). This promising direction will be investigated in future work.
---

# On the combinatorics of Lotka-Volterra equations

Francesco Caravelli, Yen Ting Lin (arXiv:2308.13653, 2023-08-25, [http://arxiv.org/abs/2308.13653v1](http://arxiv.org/abs/2308.13653v1))
###### Abstract
We study an approach to obtaining the exact formal solution of the 2-species Lotka-Volterra equation based on combinatorics and generating functions. By employing a combination of Carleman linearization and Mori-Zwanzig reduction techniques, we transform the nonlinear equations into a linear system, allowing for the derivation of a formal solution. The Mori-Zwanzig reduction leads to an expansion which we show can be interpreted as a directed and weighted lattice path walk, which we use to obtain a representation of the system dynamics as walks of fixed length. The exact solution is then shown to depend on the generator of weighted walks. We show that the generator can be obtained from the solution of a PDE which in turn is equivalent to a particular Koopman evolution of nonlinear observables.
**Keywords:** Lotka-Volterra equations, Generating functions, Koopman evolution, formal solution, Carleman linearization, Mori-Zwanzig
###### Contents
* 1 Introduction
* 2 Formal solution method
* 2.1 Linearization
* 2.2 The Mori-Zwanzig formalism for linear system
* 3 Simple example: \(\dot{x}=x^{2}\)
* 3.1 Carleman linearization and Mori-Zwanzig reduction
* 3.2 Series summation
* 3.3 Generating function of directed lattice walks
* 3.4 Closed form PDE for the generating function
* 3.5 Connection to the Koopman's representation
* 3.6 Properties of the formal solution
* 4 The Lotka-Volterra equations
* 4.1 Carleman linearization of LV
* 4.2 Mori-Zwanzig reduction
* 4.3 Directed walks on lattice
* 4.4 Example of lattice coefficient calculation
* 4.5 Lotka-Volterra via generating functions
* 4.6 Comments on Lotka-Volterra with N species and hyperlattices
* 5 Conclusions
## 1 Introduction
The two-species Lotka-Volterra equations are the archetypal nonlinear equations, exhibiting both laminar and oscillatory behavior. These are also known as predator-prey equations and are a pair of coupled differential equations used to model the interactions between two species in an ecosystem. Developed independently by Alfred J. Lotka and Vito Volterra in the early 20th century [1, 2, 3, 4], these equations provide a mathematical framework for understanding the dynamics of predator-prey relationships and for modeling the nonlinearity of co-dependent species. This model has experienced renewed interest in recent years because random multi-species models exhibit marginally stable equilibria [5]. The techniques used to study these problems come from disordered systems, which have shed some light on a century-old problem. While the LV equations have been extensively studied, and various analytical and numerical methods have been employed to find solutions and analyze their behavior, we believe that the results and combinatorial structure presented in this paper have never been explored in the past [6, 7].
The Lotka-Volterra equations assume two main populations: the predator and the prey. They describe how the populations of these species change over time based on their interactions and the availability of resources. In the most general form, the
equations are as follows:
\[\frac{dx}{dt} = \alpha x+\beta xy \tag{1}\] \[\frac{dy}{dt} = \gamma y+\delta xy \tag{2}\]
where \(x\) and \(y\) represent the real-valued population densities of the prey (e.g. rabbits) and predator (e.g. wolves) species, respectively. The quantities \(dx/dt\) and \(dy/dt\) represent the rates of change of the population densities over time. The parameters \(\alpha>0\) and \(\gamma<0\) represent the per-capita growth rate of the prey and predator population in the absence of predation, respectively (e.g. rabbits reproduce indefinitely in the absence of wolves, while wolves die off in the absence of rabbits). The nonlinear terms are controlled by the parameters \(\beta\) and \(\delta\): \(\beta<0\) and \(\delta>0\) represent the rate of change of prey and predator population (per unit of prey population, per unit predator population) due to predation. Above, one often assumes the ecologically meaningful parameter regime: \(\alpha>0\), \(\beta<0\), \(\gamma<0\), and \(\delta>0\), where oscillatory dynamics can occur as a result of feedback, instead of unstable growth or complete decay. This paper is predominantly focused on the two species case, but we will later discuss some features of higher dimensional LVs as well in view of the combinatorial analysis of this paper.
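The oscillatory regime can be checked numerically. In the sign convention of Eqs. (1)-(2), the flow conserves the first integral \(V=\delta x+\gamma\ln x-\beta y-\alpha\ln y\) (one verifies directly that \(dV/dt=0\)), which a Runge-Kutta integration should preserve to high accuracy. A minimal sketch, with illustrative parameter values:

```python
import math

# Eqs. (1)-(2) with the ecologically meaningful signs (values illustrative)
alpha, beta, gamma, delta = 1.0, -0.5, -1.0, 0.5

def rhs(x, y):
    return alpha * x + beta * x * y, gamma * y + delta * x * y

def rk4_step(x, y, h):
    k1x, k1y = rhs(x, y)
    k2x, k2y = rhs(x + 0.5 * h * k1x, y + 0.5 * h * k1y)
    k3x, k3y = rhs(x + 0.5 * h * k2x, y + 0.5 * h * k2y)
    k4x, k4y = rhs(x + h * k3x, y + h * k3y)
    return (x + h * (k1x + 2 * k2x + 2 * k3x + k4x) / 6,
            y + h * (k1y + 2 * k2y + 2 * k3y + k4y) / 6)

def conserved(x, y):
    """First integral of the two-species LV flow in this sign convention:
    V = delta*x + gamma*ln(x) - beta*y - alpha*ln(y), with dV/dt = 0."""
    return delta * x + gamma * math.log(x) - beta * y - alpha * math.log(y)

x, y, h = 1.0, 1.0, 0.001
v0 = conserved(x, y)
for _ in range(20000):                 # integrate to t = 20
    x, y = rk4_step(x, y, h)
drift = abs(conserved(x, y) - v0)
```

The small drift of \(V\) confirms that the orbit starting at \((1,1)\) is a closed cycle around the fixed point \((-\gamma/\delta,\,-\alpha/\beta)\).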
The LV equations illustrate the complex interplay between predators and prey in an ecosystem. They demonstrate how fluctuations in one population can influence the dynamics of the other population. These equations have been used to study various ecological systems, such as the interaction between hares and lynx [8] or bacteria and Bacteriophages [9], and provide insights into population dynamics, cycles, and stability [10]. While the LV equations offer a simplified representation of predator-prey dynamics, they provide a valuable tool for understanding the fundamental principles of ecological interactions and have contributed to developing more sophisticated ecological models (e.g., [11; 12]). This is also why understanding the structure underlying the two-species model can provide insights into more complex ecological networks [13].
The present manuscript is the result of an attempt to highlight certain features of coupled and nonlinear ODE that have been neglected. Using the Lotka-Volterra (LV) equations as our model, we study the relationship between quantities that are combinatorial in nature, such as generating functions of weighted walks, the Koopman operator, and how these two are related formal exact solution of the two species LV.
The relationship between Koopman evolution [14; 15; 16], Mori-Zwanzig methods [17; 18; 19], and ordinary differential equations (ODEs) lies at the intersection of dynamical systems theory and statistical mechanics [20]. The Koopman operator is a mathematical tool used in the study of dynamical systems. It describes the evolution of observables (functions) over time without explicitly solving the underlying ODEs. The Koopman operator provides a linear representation of the dynamics, enabling the analysis of complex systems often through spectral methods and linear algebra techniques. It is particularly useful for systems with high-dimensional state spaces. On the other hand, Mori-Zwanzig methods are a set of mathematical techniques used in statistical mechanics to derive effective equations of motion for a system with many
degrees of freedom [21, 22, 23, 24, 25, 26, 27]. The main idea is to systematically eliminate the "fast" degrees of freedom (unresolved variables) to obtain a reduced model that only considers the "slow" (resolved) or relevant variables. This reduced model is often in the form of ODEs and captures the long-term behavior of the system while discarding fast oscillations or fluctuations. The Koopman operator and Mori-Zwanzig methods are related in the context of understanding the dynamics of complex systems [28, 29]. The Koopman operator provides an abstract and linear perspective on the system's evolution [30, 31], while Mori-Zwanzig methods focus on obtaining reduced models for high-dimensional systems in terms of ODEs. The Koopman operator can be used to study the dynamics of observables, and by exploiting its spectral properties, one can identify relevant slow modes that capture the system's long-term behavior. The Koopman operator has been used extensively in fluid dynamics [32], where it has been linked to the Dynamic Mode Decomposition [33, 34]. By constructing a reduced model using Mori-Zwanzig methods based on these slow modes, one can effectively obtain a system of ODEs that approximates the original dynamics.
As we show in this paper for two sets of differential equations, the additional insight that we gain in this manuscript is that the Koopman evolution is indeed also the same operator that generates certain weighted and directed lattice walks, which directly enter into the formal solution of these equations. These lattice walks can be thought to be associated with monomials in the initial conditions.
The paper is organized as follows. We first introduce in section 2 the two key techniques that we use to formally solve the equations: Carleman linearization [35, 36] in Sec. 2.1, followed by a Mori-Zwanzig reduction in Sec. 2.2. We then solve the well-known equation \(\dot{x}=x^{2}\) in Secs. 3.1-3.2-3.3, showing that the solution has an interpretation in terms of a one-dimensional lattice walk and depends on the generator of lattice walks. We derive a closed-form PDE for these generators in Sec. 3.4, with an analysis of the utility of this representation in Sec. 3.6. We then focus on the Lotka-Volterra equations in Sec. 4. We introduce the variables used for the Carleman linearization in Sec. 4.1, and perform the Mori-Zwanzig reduction in Sec. 4.2, introducing the expression for the formal solution of the LV. In Sec. 4.3 we provide the interpretation of the formal solution in terms of weighted lattice walks and introduce the lattice coefficients, showing in Sec. 4.4 that we can in principle use Monte Carlo techniques to obtain the solution. In Sec. 4.5 we show that the solution can be analyzed in terms of the generating function of lattice walks, provide a closed-form quasi-linear PDE for the memory kernel of the solution, which is the generating function of lattice walks, and analyze the formal solution of the PDE in terms of the Lagrange-Charpit method. In Sec. 4.6 we provide a brief analysis of the N-species Lotka-Volterra equations. Conclusions follow.
## 2 Formal solution method
We begin by providing the main ingredients which allow to formally write the solution of the LV equations. We first discuss Carleman linearization, and then discuss the Mori-Zwanzig formalism.
### Linearization
Carleman linearization [35] is a powerful technique used to transform nonlinear differential equations into linear ones [36], facilitating their analysis and solution. This method involves introducing additional variables and expressing the original nonlinear equations as a linear system with respect to these new variables. By doing so, Carleman linearization enables the use of well-established linear techniques to investigate the behavior and properties of the system. In this paper, we utilize Carleman linearization to obtain lattice path expansions, which serve as the foundation for deriving the exact formal solution of the 2-species Lotka-Volterra equation. Consider a system of differential equations of the form
\[\frac{d\vec{x}}{dt}=\vec{f}(\vec{x}). \tag{3}\]
We can linearize this system by considering the set of observables of the form \(r_{\vec{a}}=\prod_{i}x_{i}^{a_{i}}\) with \(a_{i}\in\mathbb{N}\). Then, from eqn. (3), expanding the \(f_{i}(\vec{x})\) (assuming they are analytic), one writes an infinite chain of equations in terms of the \(r_{\vec{a}}\). If \(\vec{f}\) is time-invariant, the resulting equation \(\dot{\vec{r}}=A\vec{r}\) is also time-invariant, and formally the solution of this system is known. However, the chain of equations is infinite-dimensional. We use instead a variant of this technique, the Mori-Zwanzig formalism. The key property of quadratic equations such as Lotka-Volterra, as we show below, is that they are amenable to a combinatorial treatment.
### The Mori-Zwanzig formalism for linear system
Let us consider the simplest example of Mori-Zwanzig coarse-graining [21]. We have a linear ODE system of the form
\[\frac{d}{dt}\vec{x}=A\vec{x} \tag{4}\]
with \(\vec{x}(0)=\vec{x}_{0}\). We wish to express the system's dynamics in a generalized Langevin formalism. We then assume that our observables are a subset of the vector \(\vec{x}\), e.g.
\[\vec{x}(t)=\begin{pmatrix}\vec{y}(t)\\ \vec{z}(t)\end{pmatrix} \tag{5}\]
with \(\vec{x}\in\mathcal{R}^{N}\), while \(\vec{y}\in\mathcal{R}^{N-m}\) and \(\vec{z}\in\mathcal{R}^{m}\). We partition the matrix \(A\) in blocks, so that
\[A=\begin{pmatrix}A_{rr}&A_{ru}\\ A_{ur}&A_{uu}\end{pmatrix} \tag{6}\]
where \(A_{rr}\) is \((N-m)\times(N-m)\), \(A_{ru}\) is \(N\times(m\times m)\), \(A_{ur}\) is \(m\times(N-m)\) and \(A_{uu}\) is of size \((m\times m)\). Then, we can write
\[\frac{dy}{dt} = A_{rr}\vec{y}+A_{ru}\vec{z} \tag{7}\] \[\frac{dz}{dt} = A_{ur}\vec{y}+A_{uu}\vec{z}. \tag{8}\]
In the blocks above, \(r\) stands for resolved variables, while \(u\) stands for unresolved variables. Our observables, thus the resolved variables, are the \(y\) components. We thus write the formal expression for \(z\), which we assume to be unresolved, e.g.
\[z(t)=e^{A_{uu}t}\vec{z}_{0}+\int_{0}^{t}ds\ e^{A_{uu}(t-s)}A_{ur}\vec{y}(s). \tag{9}\]
We now plug this expression into the first set of equations, obtaining
\[\frac{dy}{dt} = A_{rr}\vec{y}+\int_{0}^{t}ds\ A_{ru}e^{A_{uu}(t-s)}A_{ur}\vec{y}( s)+A_{ru}e^{A_{uu}t}\vec{z}_{0} \tag{10}\]
We then identify the noise term \(F(t)=A_{ru}e^{A_{uu}t}\vec{z}_{0}\) and the kernel operator, given by
\[K(t-s)=A_{ru}e^{A_{uu}(t-s)}A_{ur}. \tag{11}\]
from which then we obtain
\[\frac{d\vec{y}}{dt} = A_{rr}\vec{y}+\int_{0}^{t}ds\ K(t-s)\vec{y}(s)+\vec{F}(t), \tag{12}\]
which is of the form of a generalized Langevin equation. Thus, our problem reduces to a generalized Langevin equation that is simpler to analyze than the generic case, since the memory kernel can be written down explicitly.
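Eq. (9), and hence the kernel construction, can be verified numerically in the simplest case of one resolved and one unresolved variable, where all blocks are scalars (the matrix entries below are illustrative):

```python
import math

# 2x2 linear system dx/dt = A x with one resolved (y) and one
# unresolved (z) variable; entries are illustrative
a_rr, a_ru = -0.3, 0.8     # dy/dt = a_rr*y + a_ru*z
a_ur, a_uu = 0.5, -1.0     # dz/dt = a_ur*y + a_uu*z

def rhs(y, z):
    return a_rr * y + a_ru * z, a_ur * y + a_uu * z

def rk4_step(y, z, h):
    k1 = rhs(y, z)
    k2 = rhs(y + 0.5 * h * k1[0], z + 0.5 * h * k1[1])
    k3 = rhs(y + 0.5 * h * k2[0], z + 0.5 * h * k2[1])
    k4 = rhs(y + h * k3[0], z + h * k3[1])
    return (y + h * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6,
            z + h * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6)

# integrate the full system, storing the resolved trajectory y(s)
h, T = 0.001, 2.0
n = int(T / h)
y, z = 1.0, 0.5
z0 = z
ys = [y]
for _ in range(n):
    y, z = rk4_step(y, z, h)
    ys.append(y)

# reconstruct z(T) from Eq. (9):
# z(T) = e^{A_uu T} z0 + int_0^T e^{A_uu (T-s)} A_ur y(s) ds
integral = 0.0
for k in range(n):          # trapezoidal rule on the stored grid
    s0, s1 = k * h, (k + 1) * h
    f0 = math.exp(a_uu * (T - s0)) * a_ur * ys[k]
    f1 = math.exp(a_uu * (T - s1)) * a_ur * ys[k + 1]
    integral += 0.5 * h * (f0 + f1)
z_reconstructed = math.exp(a_uu * T) * z0 + integral
err = abs(z_reconstructed - z)
```

The unresolved variable recovered from the memory integral agrees with the direct integration, which is the content of the substitution leading to Eq. (10).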
## 3 Simple example: \(\dot{x}=x^{2}\)
The techniques introduced in the previous sections can be showcased on a particular ODE, whose solution has many of the key characteristics of the LV equations. In particular, after having applied the Carleman linearization and the Mori-Zwanzig formalism, we will see that the solution of this equation obtained via power series resummation can be shown to be connected to the generator of (weighted) lattice walks in one dimension. We consider the nonlinear ODE
\[\frac{dx}{dt}=x^{2},\ x(0)=x_{0}. \tag{13}\]
The solution is given by \(x(t)=x_{0}(1-x_{0}t)^{-1}.\) To see this, we can write
\[\frac{1}{x^{2}}\frac{d}{dt}x=\frac{d}{dt}(-\frac{1}{x})=1 \tag{14}\]
and thus
\[\frac{1}{x_{0}}-\frac{1}{x(t)}=t \tag{15}\]
which, inverting, gives
\[x(t)=\frac{x_{0}}{1-x_{0}t}. \tag{16}\]
### Carleman linearization and Mori-Zwanzig reduction
We now use the Carleman linearization, we have that the observables are the Taylor powers, given by
\[r_{k}=x^{k}. \tag{17}\]
It is not hard to see that the equation above can be written in the form
\[\frac{d}{dt}\vec{r}=A\vec{r} \tag{18}\]
with \(A_{ij}=i\delta_{i+1,j}.\) We will treat \(r_{1}\) as the resolved observable, and the rest \(r_{k},\)\(k\geq 2\) as the unresolved observables.
We now use eqn. (10). First, note that \(A_{rr}=A_{ur}=0.\) Then, our equation for the observable \(r_{1}\) reads
\[\frac{dr_{1}}{dt}=\frac{dx}{dt}=A_{ru}e^{A_{uu}t}\vec{z}_{0}. \tag{19}\]
and then
\[x(t)=x_{0}+A_{ru}\int^{t}ds\ e^{A_{uu}s}\vec{z}_{0}. \tag{20}\]
In the equation above, \(\vec{z}_{0}=(x_{0}^{2},x_{0}^{3},\cdots).\) The vector \(A_{ru}\) is infinite and looks like \(A_{ru}=(1,0,\cdots,0)\) while \(A_{uu}\) is a square super-diagonal matrix, where the super diagonal is infinite, and given by \((2,3,4,\cdots).\) We must then calculate \(\exp(A_{uu}t).\)
### Series summation
We use the expression
\[e^{A_{uu}t}=\sum_{k=0}^{\infty}\frac{t^{k}}{k!}A_{uu}^{k}, \tag{21}\]
for which we need to calculate an expression for the first row of the matrix, as
\[A_{ru}\sum_{k=0}^{\infty}\frac{t^{k}}{k!}A_{uu}^{k}\vec{z}_{0}= \sum_{j}(e^{A_{uu}t})_{1j}x_{0}^{j+1}. \tag{22}\]
Note that
\[(A_{uu})_{ij}=(i+1)\delta_{i+1,j} \tag{23}\]
Then,
\[(A_{uu}^{r})_{ab} = \sum_{k_{1}\cdots k_{r-1}}(A_{uu})_{ak_{1}}(A_{uu})_{k_{1}k_{2}} \cdots(A_{uu})_{k_{r-1}b} \tag{24}\] \[= (a+1)(a+2)\cdots(a+r)\delta_{a+r,b}\] \[= \frac{(a+r)!}{a!}\delta_{a+r,b}\]
Then,
\[(e^{A_{uu}t})_{ab}=\sum_{r=0}^{\infty}\frac{t^{r}}{r!}(A_{uu}^{r })_{ab}=\sum_{r=0}^{\infty}\frac{(a+r)!t^{r}}{a!r!}\delta_{a+r,b} \tag{25}\]
from which we get then
\[x(t) = x_{0}+\int^{t}ds\sum_{r=0}^{\infty}\frac{(r+1)!s^{r}}{1!r!}\sum _{b=1}^{\infty}\delta_{r+1,b}x_{0}^{b+1} \tag{26}\] \[= x_{0}+\sum_{r=0}^{\infty}\frac{(r+1)!t^{r+1}}{r!(r+1)}x_{0}^{r+2}\] (27) \[= \sum_{r=0}^{\infty}x_{0}^{r+1}t^{r}=x_{0}\sum_{r=0}^{\infty}x_{0} ^{r}t^{r}=\frac{x_{0}}{1-x_{0}t}. \tag{28}\]
which is exactly the solution of \(x^{\prime}=x^{2}\) with \(x(0)=x_{0}\).The solution above was derived already in [37] along similar lines. We show now that there is a completely different approach to obtaining the solution.
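The resummed series (28) can be checked directly against the closed form within its radius of convergence \(|x_{0}t|<1\); a quick numerical sketch (the sample values are arbitrary):

```python
# partial sums of x(t) = sum_{r>=0} x0^{r+1} t^r, compared to x0/(1 - x0 t)
x0, t = 0.3, 1.5            # illustrative values with |x0 * t| < 1
exact = x0 / (1.0 - x0 * t)
partial_sums = [sum(x0 ** (r + 1) * t ** r for r in range(K)) for K in (5, 10, 40)]
errors = [abs(p - exact) for p in partial_sums]
```

The truncation at order \(K\) is exactly what one obtains from a Carleman linearization truncated at the \(K\)-th observable, so the geometric decay of the errors illustrates the rate at which the truncated linear system converges to the nonlinear solution.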
### Generating function of directed lattice walks
Here we wish to show that we can express the solution in terms of the generating function of a walk on a path graph.
We simply define a walk as a weighted sequence of nodes. To see this, we first introduce the path graph, as illustrated in Fig. 1. The path graph is a directed graph, whose nodes are observables \(r_{k}:=x^{k}\) and whose weighted edges are the interactions between the observables, i.e., the weight between nodes \(r_{k}\) and \(r_{k+1}\) is \(k\) (because \(\dot{r_{k}}=kr_{k+1}\)). We further define \(\mathcal{N}_{(i,j)}^{k}\), \(i,j\in\mathbb{N}\), as the product of weights along the (only) path of length \(k\) connecting nodes \(r_{i}\) and \(r_{j}\). Since our graph is directed, we must have \(\mathcal{N}_{(i,j)}^{k}=\delta_{k,j-i}\,\mathcal{M}(i,j)\), where
\[\mathcal{M}(i,j):=\frac{(j-1)!}{(i-1)!}. \tag{29}\]
We now make a connection between the path graph and the key term in Eq. (22), \((A_{uu})_{1,j}^{k}\):
\[(A_{uu}^{k})_{1,j}=\sum_{i_{1}}\ldots\sum_{i_{k-1}}\left(A_{uu}\right)_{1,i_{1}}\left(A_{uu}\right)_{i_{1},i_{2}}\ldots\left(A_{uu}\right)_{i_{k-1},j}. \tag{30}\]
Because of the super-diagonal structure of the matrix \(A_{uu}\) (see Eq. (23)), each of the \(A_{uu}\) in the above product can be interpreted as the weight of a single step moving from a node \(r_{j}\) to the next \(r_{j+1}\). As such, the whole sum can be interpreted as the product of the weights along a path of length \(k\) connecting nodes \(r_{2}\) and \(r_{j+1}\) (note that the indices of unresolved observables begin with 2):
\[\left(A_{uu}\right)_{1,j}^{k}=\delta_{k,j-1}\mathcal{M}(2,j+1)=\delta_{k,j-1} j!. \tag{31}\]
Then, Eq. (22) can be expressed straightforwardly
\[A_{ru}\sum_{k=0}^{\infty}\frac{t^{k}}{k!}A_{uu}^{k}\vec{z}_{0} =\sum_{k=0}^{\infty}\sum_{j=1}^{\infty}\frac{t^{k}}{k!}\left(A_{uu}^{k}\right)_{1j}x_{0}^{j+1}\] \[=\sum_{k=0}^{\infty}\sum_{j=1}^{\infty}\frac{t^{k}}{k!}\delta_{k,j-1}\mathcal{M}(2,j+1)x_{0}^{j+1}=\sum_{k=0}^{\infty}\frac{t^{k}}{k!}\mathcal{M}(2,k+2)x_{0}^{k+2}. \tag{32}\]
We now define a generating function [38] for the weighted walks on the path graph:
\[G(s,x):=\sum_{k=0}^{\infty}\frac{s^{k}x^{k+2}}{k!}\mathcal{M}(2,k+2), \tag{33}\]
Figure 1: The path graph of the \(\dot{x}=x^{2}\) model
which can be calculated analytically; we explicitly computed it in Sec. 3.2. One can see that the Taylor expansion of \(G\) in the parameter \(s\) controls the length of the walk, while the Taylor expansion in \(x\) controls between which nodes of the graph the walk occurs, as the monomial \(s^{k}x^{k+2}\) multiplies the weight \(\mathcal{M}(2,k+2)\) of the walk between nodes \(r_{2}\) and \(r_{k+2}\). Plugging the definition Eq. (29) into Eq. (33), we obtain:
\[G(s,x)=x^{2}\sum_{k=0}^{\infty}\frac{(sx)^{k}}{k!}(k+1)!=x^{2}\sum_{k=0}^{ \infty}(k+1)(sx)^{k}=\frac{x^{2}}{(1-sx)^{2}}. \tag{34}\]
We can then interpret the solution of the differential equation as \(x_{0}\) plus the time integral of the generating function of these weighted walks evaluated at \(x=x_{0}\):

\[x(t) = x_{0}+\int_{0}^{t}G(s,x_{0})\,ds \tag{35}\] \[= x_{0}+\frac{tx_{0}^{2}}{1-tx_{0}}=\frac{x_{0}}{1-tx_{0}} \tag{36}\]
which is the expression we had obtained before, but interpreted now in terms of the generating function.
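This interpretation can be verified by explicitly accumulating the edge weights along the path graph and comparing the partial sum of Eq. (33) to the closed form Eq. (34) (evaluation point is arbitrary, chosen with \(|sx|<1\)):

```python
import math

def walk_weight(start, length):
    """Product of edge weights along the unique directed path of the given
    length starting at node r_start (the edge r_j -> r_{j+1} has weight j)."""
    w = 1
    for j in range(start, start + length):
        w *= j
    return w

def G_partial(s, x, kmax=60):
    """Partial sum of the generating function Eq. (33), walks starting at r_2."""
    return sum(s ** k * x ** (k + 2) / math.factorial(k) * walk_weight(2, k)
               for k in range(kmax))

s, x = 0.7, 0.6             # illustrative point with |s * x| < 1
closed_form = x ** 2 / (1.0 - s * x) ** 2
gap = abs(G_partial(s, x) - closed_form)
```

The weight accumulated over \(k\) steps from \(r_{2}\) is \(2\cdot3\cdots(k+1)=(k+1)!=\mathcal{M}(2,k+2)\), so the enumeration reproduces \(G(s,x)=x^{2}/(1-sx)^{2}\) to machine precision.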
### Closed form PDE for the generating function
We now establish that the generating function satisfies a Partial Differential Equation (PDE), which is what we will use for the Lotka-Volterra equation below. Applying \(\partial_{x}\) and \(\partial_{s}\) to the generating function Eq. (33), we obtain:
\[\partial_{x}G(s,x) =\sum_{k=0}^{\infty}(k+2)\frac{s^{k}x^{k+1}}{k!}\mathcal{M}(2,k+2), \tag{37}\] \[\partial_{s}G(s,x) =\sum_{k=1}^{\infty}\frac{s^{k-1}x^{k+2}}{(k-1)!}\mathcal{M}(2,k+2)=\sum_{k=0}^{\infty}\frac{s^{k}x^{k+3}}{k!}\mathcal{M}(2,k+3)=x^{2}\sum_{k=0}^{\infty}(k+2)\frac{s^{k}x^{k+1}}{k!}\mathcal{M}(2,k+2). \tag{38}\]
In the derivation above, we have used the identity
\[\mathcal{M}(j,k+3)=(k+2)\,\mathcal{M}(j,k+2). \tag{39}\]
Clearly, the above equations imply \(\partial_{s}G=x^{2}\partial_{x}G\); identifying the expansion variable \(s\) with the time \(t\) of Eq. (35), this is the closed-form PDE
\[\partial_{t}\big{(}G(x,t)\big{)}=x^{2}\partial_{x}G(x,t). \tag{40}\]
The solution of the differential equation can be derived using the method of characteristics. We impose the Lagrange-Charpit equality
\[\frac{dt}{-1}=\frac{dx}{x^{2}}=\frac{dG}{0} \tag{41}\]
From the first two, we get
\[-t+\Phi=-1/x. \tag{42}\]
From the last equality, we get \(dG=0\to G=\Gamma\). Imposing the Cauchy surface condition that \(\Gamma\) depends on \(\Phi\), we obtain \(G(x,t)=\Gamma(\Phi)=f(\frac{1-tx}{x})\). We know, however, that \(G(x,0)=x^{2}\), and thus \(f(\cdot)=(\cdot)^{-2}\). Then, \(G(x,t)=x^{2}/(1-xt)^{2}\), which is the function we found earlier by summing the series explicitly. The interpretation of the solution in Eq. (35) can then also be applied to the case of other nonlinearly coupled equations.
### Connection to the Koopman's representation
The formal solution Eq. (35) can be derived from yet another method, which is arguably the most formal and general derivation. Such a derivation is tightly connected to Koopman's representation [14, 15], with which one aims to prescribe the evolutionary equation of the observables. Let us consider a general observable \(g:\mathbb{R}^{1}\rightarrow\mathbb{R}^{1}\). In the Koopman picture, the states do not move and stay at \(x_{0}\), the initial condition, but the observable \(g\) becomes time-dependent, denoted by \(g_{t}:=\mathcal{K}_{t}g\) where \(\mathcal{K}_{t}\) is the finite-time Koopman operator. We write a field \(\psi(x,t)\) satisfying the following equation
\[\partial_{t}\psi(x,t) =\mathcal{L}\psi(x,t), \tag{43a}\] \[\psi(x,0) =g(x), \tag{43b}\]
where \(\mathcal{L}\) is the (backward) generator of the process, \(\mathcal{L}=x^{2}\partial_{x}\). The above PDE and the associated initial condition are often referred to as the Liouville equation [39, 40], albeit this nomenclature is not standard in non-equilibrium statistical physics1. We note that the structure of the PDE is exactly the one derived earlier.
Footnote 1: In statistical physics, the Liouville equation is the forward equation of the dynamics, where the state variables are time-dependent and the observables are static, which is the adjoint of Eq. (43)
The solution of Eq. (43), evaluated at the initial condition,
\[\psi(x_{0},t)=\left(\mathcal{K}_{t}g\right)(x_{0})\equiv g\left(x(t;x_{0})\right) \tag{44}\]
contains all the information of the ODE system, with an arbitrary \(g\). For example, the formal state solution \(x(t)\) can be obtained by setting \(g(x):=x^{2}\), as will be seen below. One can also consider the indicator function \(g(x):=\delta(x-x^{\prime})\) to probe if the solution is at a query point \(x^{\prime}\) at an arbitrary time \(t>0\).
Interestingly, from this Koopman viewpoint, the PDE above is often solved implicitly in terms of the ODE solution. In general, solving the backward PDE (43) is a challenging task. For the \(\dot{x}=x^{2}\) model, we have the ODE solution
\[x(t)=\frac{x_{0}}{1-x_{0}t}. \tag{45}\]
In line with the method of characteristics, we can "pull back" by expressing \(x(t)\) as a function of the initial condition \(x_{0}\):
\[\psi(x,t)=g\left(\frac{x}{1-tx}\right). \tag{46}\]
One can show that the PDE is satisfied:
\[\partial_{t}\psi(x,t) =g^{\prime}\left(\frac{x}{1-tx}\right)\frac{x^{2}}{\left(1-tx\right)^{2}} \tag{47a}\] \[x^{2}\partial_{x}\psi(x,t) =x^{2}g^{\prime}\left(\frac{x}{1-tx}\right)\left[\frac{1}{1-tx}+\frac{xt}{\left(1-tx\right)^{2}}\right]\] (47b) \[= g^{\prime}\left(\frac{x}{1-tx}\right)\frac{x^{2}}{\left(1-tx\right)^{2}}. \tag{47c}\]
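The same verification can be carried out numerically for a generic observable. The sketch below (our own illustration; the choice \(g=\sin\) is arbitrary) checks by central finite differences that \(\psi(x,t)=g(x/(1-tx))\) satisfies the backward PDE:

```python
import math

def psi(x, t, g=math.sin):
    # Koopman-evolved observable, Eq. (46): psi(x, t) = g(x / (1 - t x)).
    return g(x / (1.0 - t * x))

def dt_psi(x, t, h=1e-6):
    # Central finite difference in t.
    return (psi(x, t + h) - psi(x, t - h)) / (2.0 * h)

def dx_psi(x, t, h=1e-6):
    # Central finite difference in x.
    return (psi(x + h, t) - psi(x - h, t)) / (2.0 * h)

x, t = 0.4, 0.8  # t x < 1, inside the domain of the solution
residual = dt_psi(x, t) - x ** 2 * dx_psi(x, t)
print(abs(residual) < 1e-6)  # True
```

The residual vanishes to finite-difference accuracy for any smooth \(g\), reflecting the arbitrariness of the observable in the Koopman picture.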
To connect to the solution in the previous calculation, we note that the formal solution of the ODE can be written as
\[x(t;x_{0})=x_{0}+\int_{0}^{t}x^{2}\left(s\right)\,ds. \tag{48}\]
With this expression, it is clear that the observable of interest should be \(g(x):=x^{2}\), leading to the solution
\[\psi(x,t)=g\left(\frac{x}{1-tx}\right)=\left(\frac{x}{1-tx}\right)^{2}, \tag{49}\]
which is exactly the generating function Eq. (33).
### Properties of the formal solution
We note that this approach allows us to write explicitly the properties of the solution as a function of the initial conditions. For instance, for the example above, we have
\[\partial_{x_{0}}x(t) = 1+\int_{0}^{t}\partial_{x_{0}}G(x_{0},s)\,ds \tag{50}\] \[= 1+\frac{1}{x_{0}^{2}}\int_{0}^{t}\partial_{s}G(x_{0},s)\,ds\] \[= 1+\frac{G(x_{0},t)-G(x_{0},0)}{x_{0}^{2}}\] \[= 1+\frac{G(x_{0},t)-x_{0}^{2}}{x_{0}^{2}}=\frac{G(x_{0},t)}{x_{0}^{2}} \tag{51}\]
which can be promptly checked using the solution. It is also interesting to note that there is another way of deriving this PDE, knowing the expression for the solution. Using Eqn. (50), and from the fundamental theorem of calculus, we can equate
\[x(t)^{2}=x^{\prime}(t)=G(x_{0},t) \tag{52}\]
which we note to be indeed satisfied. We know from the exact solution that \(G(x_{0},t)=x_{0}^{2}/(1-x_{0}t)^{2}=x(t)^{2}\). Now note that from the expression above, we can obtain
\[\partial_{t}G=2xx^{\prime}=2x^{3}=x^{2}\partial_{x}G, \tag{53}\]
which is exactly the PDE we derived using the generating function method.
We wish to show that a similar approach is also possible for the Lotka-Volterra equations, where however the combinatorial structure is slightly more involved, and where the solution is only formal.
## 4 The Lotka-Volterra equations
After having discussed at length the techniques used for the simpler toy model of eqn. (13), let us consider the case of the Lotka-Volterra equation (LV). The solution of the quadratic case shares in fact many similarities with the formal solution we obtain below.
First, we begin by writing the LV equations in the form
\[\frac{dx}{dt} = \alpha x+\beta xy\equiv f(x,y)\] \[\frac{dy}{dt} = \gamma y+\delta xy\equiv g(x,y). \tag{54}\]
In the typical formulation of the LV equations, \(\alpha>0\), \(\beta<0\), \(\gamma<0\), and \(\delta>0\).
### Carleman linearization of LV
As for the case of the toy model, we perform a Carleman linearization, where now the variables are the following monomials
\[r_{(a,b)}=x^{a}y^{b}. \tag{55}\]
We have then
\[\frac{dr_{(a,b)}}{dt} = ax^{a-1}(\beta xy+\alpha x)y^{b}+by^{b-1}(\delta xy+\gamma y)x^{a} \tag{56}\] \[= b\delta x^{a+1}y^{b}+a\beta x^{a}y^{b+1}+x^{a}y^{b}(a\alpha+b\gamma)\] \[= a\beta r_{(a,b+1)}+b\delta r_{(a+1,b)}+(a\alpha+b\gamma)r_{(a,b)}\]
Then, considering the combined index \((a,b)\), the linearized Lotka-Volterra system becomes
\[\frac{dr_{(0,1)}}{dt} = \delta r_{(1,1)}+\gamma r_{(0,1)}\] \[\frac{dr_{(1,0)}}{dt} = \beta r_{(1,1)}+\alpha r_{(1,0)}\] \[\frac{dr_{(1,1)}}{dt} = \beta r_{(1,2)}+\delta r_{(2,1)}+(\alpha+\gamma)r_{(1,1)}\] \[\frac{dr_{(1,2)}}{dt} = \beta r_{(1,3)}+2\delta r_{(2,2)}+(\alpha+2\gamma)r_{(1,2)}\] \[\frac{dr_{(2,1)}}{dt} = 2\beta r_{(2,2)}+\delta r_{(3,1)}+(2\alpha+\gamma)r_{(2,1)} \tag{57}\] \[\vdots\]
It is important to stress that the one above is an _exact_ representation of the Lotka-Volterra system. The linearized equations can be represented as a directed two-dimensional lattice, shown in Fig. 2.
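The exactness of this representation can be probed numerically by truncating the monomial hierarchy at a total degree \(K\) and integrating the resulting finite linear system. The sketch below (our own; the truncation degree, step size and parameters are illustrative) compares the monomials \(r_{(1,0)}\) and \(r_{(0,1)}\) recovered from the truncated Carleman system with a direct Runge-Kutta integration of Eq. (54):

```python
# Truncated Carleman linearization of the LV system, Eq. (57): evolve all
# monomials r_(a,b) = x^a y^b with 1 <= a+b <= K as a finite linear ODE
# system, dropping the couplings to monomials above the cutoff.
al, be, ga, de = 1.0, -0.5, -0.5, 2.0            # alpha, beta, gamma, delta
x0, y0, K, dt, steps = 0.5, 0.5, 12, 1e-3, 200   # final time 0.2

modes = [(a, b) for a in range(K + 1) for b in range(K + 1) if 1 <= a + b <= K]

def carleman_rhs(r):
    # d r_(a,b)/dt = (a al + b ga) r_(a,b) + a be r_(a,b+1) + b de r_(a+1,b);
    # .get(..., 0) silently drops couplings beyond the truncation.
    return {(a, b): (a * al + b * ga) * r[(a, b)]
            + a * be * r.get((a, b + 1), 0.0)
            + b * de * r.get((a + 1, b), 0.0) for (a, b) in modes}

def lv_rhs(v):
    x, y = v
    return (al * x + be * x * y, ga * y + de * x * y)

def rk4(state, rhs, add, scale):
    # Generic fixed-step RK4, parametrized by vector-space operations.
    for _ in range(steps):
        k1 = rhs(state)
        k2 = rhs(add(state, scale(k1, dt / 2)))
        k3 = rhs(add(state, scale(k2, dt / 2)))
        k4 = rhs(add(state, scale(k3, dt)))
        incr = add(add(k1, scale(k2, 2.0)), add(scale(k3, 2.0), k4))
        state = add(state, scale(incr, dt / 6))
    return state

dadd = lambda u, v: {m: u[m] + v[m] for m in u}
dscale = lambda u, c: {m: c * u[m] for m in u}
tadd = lambda u, v: (u[0] + v[0], u[1] + v[1])
tscale = lambda u, c: (c * u[0], c * u[1])

r = rk4({(a, b): x0 ** a * y0 ** b for (a, b) in modes}, carleman_rhs, dadd, dscale)
xy = rk4((x0, y0), lv_rhs, tadd, tscale)
print(abs(r[(1, 0)] - xy[0]) < 1e-3, abs(r[(0, 1)] - xy[1]) < 1e-3)
```

At short times the truncation error is negligible, so the linear system reproduces the nonlinear dynamics to high accuracy.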
### Mori-Zwanzig reduction
Because of the linear relationship between the variables, we can write the equations in the form
\[\frac{d}{dt}\vec{r}=A\vec{r}, \tag{58}\]
where the vector \((\vec{r})_{(a,b)}(t)=x^{a}(t)y^{b}(t)\), and where the matrix \(A\) is an upper triangular matrix that can be expressed as a weighted directed graph like the one shown in Fig. 2. Let us call \(\vec{v}=(r_{(1,0)},r_{(0,1)})\equiv(x,y)\) and \(\vec{z}=(r_{(1,1)},r_{(2,1)},r_{(1,2)},\cdots)\), so that \(\vec{r}^{t}=(\vec{v}^{t},\vec{z}^{t})\). Using equation (10), we have
\[\frac{d\vec{z}}{dt} = A_{uu}\vec{z}\quad\Longrightarrow\quad\vec{z}(t)=e^{A_{uu}t}\vec{z}_{0}, \tag{59}\]
which is decoupled from the variables \(r_{(1,0)}\) and \(r_{(0,1)}\).
The matrix \(A\), whose first two rows and columns correspond to \(r_{(0,1)}\) and \(r_{(1,0)}\), given the ordering \(r_{(0,1)}\), \(r_{(1,0)}\), \(r_{(1,1)}\), \(r_{(2,1)}\), \(r_{(1,2)}\), can be written in the form:
\[A=\left[\begin{array}{cccccc}\gamma&0&\delta&0&0&\cdots\\ 0&\alpha&\beta&0&0&\cdots\\ \hline 0&0&(\alpha+\gamma)&\delta&\beta&\cdots\\ 0&0&0&(2\alpha+\gamma)&0&\cdots\\ 0&0&0&0&(\alpha+2\gamma)&\cdots\\ \vdots&\vdots&\vdots&\vdots&\vdots&\ddots\end{array}\right]\]
From the structure of the matrix above, we see that the resolved variables satisfy
\[\frac{d\vec{v}}{dt} = A_{rr}\vec{v}+A_{ru}e^{A_{uu}t}\vec{z}_{0}\]
The equation above is a linear system of 2 equations with a non-homogeneous forcing. The set of linearly coupled differential equations above can be solved analytically, with a solution of the form
\[\vec{v}(t)=e^{A_{rr}t}\vec{v}_{0}+e^{A_{rr}t}\int_{0}^{t}ds\ e^{-sA_{rr}}A_{ru}e ^{A_{uu}s}\vec{z}_{0}. \tag{60}\]
The simplicity of the solution is only apparent. While \(A_{rr}\) is a two-dimensional diagonal matrix, \(A_{uu}\) is a semi-infinite matrix. While semi-infinite, the elements of the matrix \((e^{A_{uu}s})_{(a,b)(c,d)}\) can be calculated combinatorially, analyzing \(e^{A_{uu}s}=\sum_{k=0}^{\infty}(A_{uu})^{k}\frac{s^{k}}{k!}\). We first discuss the interpretation of this matrix.
### Directed walks on lattice
Let us then consider \((A_{uu})_{(a,b),(c,d)}^{k}\). Since the graph representing \(A\) is directed, we must have \(c\geq a\geq 0\), \(d\geq b\geq 0\) in our notation. In particular, we must have \(|a-c|+|b-d|\leq k\). If \(\alpha=\gamma=0\), then the inequality becomes an equality, and this becomes a problem of counting paths on the lattice from \((a,b)\) to \((c,d)\). Let us now formalize the combinatorics of such a process. We call \(\mathcal{R}=d-b\) and \(\mathcal{D}=c-a\). Then, in order for a directed walk to start from \((a,b)\) and reach \((c,d)\) in \(k\) steps, no matter the order, if we go down \(\mathcal{D}\) times and right \(\mathcal{R}\) times, we will definitely get to \((c,d)\). We can
Figure 2: The matrix \(A\) from the Carleman linearization in transition between the variables \(r_{(a,b)}\) for a portion of the wedge containing the variables \(x=r_{(1,0)}\) and \(y=r_{(0,1)}\). The variable \(r_{(0,0)}\) are constants and thus \(dr_{(0,0)}/dt=0\). The pink variables are not involved in the lattice calculations.
stop at any node along the way, for a total of \(k-\mathcal{R}-\mathcal{D}=\mathcal{S}\) stops, for a walk of \(k\) steps. Walks can then be mapped to permutations of a vector of moves Down, Right, and Stop of the form
\[\left(\underbrace{D\cdots D}_{\mathcal{D}\text{ times}}\ \underbrace{R\cdots R}_{\mathcal{R}\text{ times}}\ \underbrace{S\cdots S}_{\mathcal{S}\text{ times}}\right) \tag{61}\]
Given a configuration, and depending on the edges it goes through, we will then have a numerical coefficient \(C_{\sigma}\) which depends on the permutation2, a coefficient \(\beta^{\mathcal{R}}\delta^{\mathcal{D}}\), and a coefficient \(\rho\) which depends on the \(\alpha\) and \(\gamma\) values, namely the product of the diagonal weights along the path. We note that in the limit \(\alpha,\gamma\to 0\), \(\rho\) becomes \(1\) if \(\mathcal{R}+\mathcal{D}=k\) and \(0\) otherwise, i.e. the walk has to start from \((a,b)\) and reach the final \((c,d)\) point in exactly \(k\) moves. We can then write
Footnote 2: In fact, in this representation, one has to divide by a symmetry factor corresponding to the number permutations corresponding to the same path. If for instance, we have \((DDRRSS)\) the sequence with which we perform the first or second down move is actually independent. Thus, we must divide a path like the one above by \(1/2^{3}\).
\[(A_{uu}^{k})_{(a,b)(c,d)}=\sum_{\sigma\in\mathcal{S}_{k}}C_{(c,d)}^{k}(\sigma)\beta^{d-b}\delta^{c-a}\rho_{k}(\sigma)\theta(c-a)\theta(d-b) \tag{62}\]
where \(\theta\) is the Heaviside function, defined as \(\theta(x)=1\) for \(x\geq 0\) and zero otherwise, and \(\mathcal{S}_{k}\) is the group of permutations of \(k\) elements. We then have
\[(e^{A_{uu}s})_{(a,b)(c,d)}=\sum_{k=0}^{\infty}\sum_{\sigma\in\mathcal{S}_{k}}C_{(c,d)}^{k}(\sigma)\beta^{d-b}\delta^{c-a}\rho_{k}(\sigma)\theta(c-a)\theta(d-b)\frac{s^{k}}{k!}. \tag{63}\]
Let us now write eqn. (60) explicitly. This is a 2-component equation, while \(\vec{z}_{0}=(x_{0}y_{0},x_{0}^{2}y_{0},x_{0}y_{0}^{2},\cdots)\) is the vector of the appropriate powers of initial conditions. First, we note that \(e^{-sA_{rr}}\) is a diagonal matrix with elements \(e^{-s\gamma}\), \(e^{-s\alpha}\), corresponding to \((0,1)\), \((1,0)\). Similarly, \(A_{ru}e^{A_{uu}s}\) selects only the first two rows of \(e^{A_{uu}s}\) with the appropriate weights. Then, we have
\[e^{-sA_{rr}}A_{ru}e^{A_{uu}s}\vec{z}_{0}=\sum_{c,d}\begin{pmatrix}\delta e^{-s \gamma}(e^{A_{uu}s})_{(1,1),(c,d)}(\vec{z}_{0})_{(c,d)}\\ \beta e^{-s\alpha}(e^{A_{uu}s})_{(1,1),(c,d)}(\vec{z}_{0})_{(c,d)}\end{pmatrix} \tag{64}\]
Then, if we define the diagonal matrix \(\text{diag}(\delta e^{-s\gamma},\beta e^{-s\alpha})\) and
\[\eta(s,x_{0},y_{0})=\sum_{c,d}(e^{A_{uu}s})_{(1,1),(c,d)}(\vec{z}_{0})_{(c,d)} \tag{65}\]
we get
\[e^{-sA_{rr}}A_{ru}e^{A_{uu}s}\vec{z}_{0}=\eta(s)\begin{pmatrix}\delta e^{-s\gamma}&0\\ 0&\beta e^{-s\alpha}\end{pmatrix}\vec{1}. \tag{66}\]
and thus the solution can be written as
\[\begin{pmatrix}y(t)\\ x(t)\end{pmatrix}=\begin{pmatrix}e^{\gamma t}y_{0}\\ e^{\alpha t}x_{0}\end{pmatrix}+\int_{0}^{t}ds\ \eta(s,x_{0},y_{0})\begin{pmatrix} \delta e^{\gamma(t-s)}\\ \beta e^{\alpha(t-s)}\end{pmatrix} \tag{67}\]
We can see from the equation above that for \(t=0\) we recover the initial conditions.
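Equation (67) is the variation-of-constants formula with the (a priori unknown) forcing \(\eta(s)=x(s)y(s)\). As a numerical illustration (our own sketch; the parameters are those used later in Fig. 3), one can feed in \(\eta\) from a direct integration of the LV system and check that the formula reproduces the trajectory:

```python
# Check Eq. (67): x(t) = e^{al t} x0 + be * int_0^t e^{al (t-s)} eta(s) ds,
# with eta(s) = x(s) y(s) taken from a direct RK4 integration of the LV system.
import math

al, be, ga, de = 1.0, -0.5, -0.5, 2.0
x0, y0, dt, n = 0.5, 0.5, 1e-3, 1000
T = n * dt

def rhs(x, y):
    return al * x + be * x * y, ga * y + de * x * y

# RK4 trajectory; store eta(s) = x(s) y(s) at every step.
xs, ys = [x0], [y0]
x, y = x0, y0
for _ in range(n):
    k1 = rhs(x, y)
    k2 = rhs(x + dt / 2 * k1[0], y + dt / 2 * k1[1])
    k3 = rhs(x + dt / 2 * k2[0], y + dt / 2 * k2[1])
    k4 = rhs(x + dt * k3[0], y + dt * k3[1])
    x += dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
    y += dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    xs.append(x)
    ys.append(y)

eta = [xs[i] * ys[i] for i in range(n + 1)]

def convolve(coef, rate):
    # Trapezoid rule for coef * int_0^T e^{rate (T - s)} eta(s) ds.
    vals = [math.exp(rate * (T - i * dt)) * eta[i] for i in range(n + 1)]
    return coef * dt * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

x_rec = math.exp(al * T) * x0 + convolve(be, al)
y_rec = math.exp(ga * T) * y0 + convolve(de, ga)
print(abs(x_rec - xs[-1]) < 1e-4, abs(y_rec - ys[-1]) < 1e-4)
```

This confirms Eq. (67) exactly; the whole difficulty of the problem is, of course, hidden in the fact that \(\eta\) is not known in advance.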
From a perturbative standpoint, we see that in the equation above we need to calculate integrals of the form
\[I_{m}(t)=\int_{0}^{t}\eta(s,x_{0},y_{0})e^{-ms}ds \tag{68}\]
for arbitrary \(m\). If we write
\[\eta(s)=\sum_{c,d\geq 1}\sum_{k=0}^{\infty}\frac{s^{k}}{k!}\sum_{\sigma\in\mathcal{S}_{k}}C^{k}_{(c,d)}(\sigma)\beta^{d-1}\delta^{c-1}\rho_{k}(\sigma)(\vec{z}_{0})_{(c,d)}, \tag{69}\]
we see that the expansion can be reduced to expressions of the form
\[I^{k}_{m}(t)=\int_{0}^{t}s^{k}e^{-ms}ds. \tag{70}\]
Furthermore, \(I^{k}_{m}(t)\) can be expressed as
\[I^{k}_{m}(t)=\frac{\Gamma(k+1)-\Gamma(k+1,mt)}{m^{k+1}},\ \text{if}\ m>0, \tag{71}\]
where \(\Gamma(k+1)=k!\) and \(\Gamma(k+1,mt)=\int_{mt}^{\infty}e^{-s}s^{k}\,ds\) is the upper incomplete Gamma function. Note that for small values of \(t\), \(I^{k}_{m}(t\to 0^{+})=t^{k+1}/(1+k)+O(t^{k+2})\), and thus at small values of \(t\) the expansion also corresponds to a time Taylor expansion. Instead, in the limit \(m\to 0\), we have from the properties of the \(\Gamma\) functions that
\[\lim_{m\to 0}I^{k}_{m}(t)=\frac{t^{1+k}}{1+k}. \tag{72}\]
which is consistent with setting \(m=0\) in Eq. (70) and integrating \(s^{k}\) directly.
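Both Eq. (71) and the small-\(t\) behaviour can be verified by straightforward quadrature (our own sketch; the upper incomplete Gamma function is approximated by truncating its integral at a large cutoff):

```python
import math

def quad(f, a, b, n=200000):
    # Trapezoid rule on [a, b].
    h = (b - a) / n
    return h * (sum(f(a + i * h) for i in range(1, n)) + 0.5 * (f(a) + f(b)))

k, m, t = 3, 2.0, 1.5

# Left-hand side of Eq. (70): I_m^k(t) = int_0^t s^k e^{-m s} ds.
lhs = quad(lambda s: s ** k * math.exp(-m * s), 0.0, t)

# Right-hand side of Eq. (71), with Gamma(k+1, m t) = int_{m t}^inf s^k e^{-s} ds
# truncated at m t + 60 (the integrand is negligible beyond that).
upper_inc = quad(lambda s: s ** k * math.exp(-s), m * t, m * t + 60.0)
rhs = (math.factorial(k) - upper_inc) / m ** (k + 1)
print(abs(lhs - rhs) < 1e-6)

# Small-t behaviour: I_m^k(t) ~ t^{k+1}/(k+1).
t_small = 1e-3
approx = t_small ** (k + 1) / (k + 1)
exact = quad(lambda s: s ** k * math.exp(-m * s), 0.0, t_small, n=2000)
print(abs(exact - approx) / approx < 1e-2)
```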
The summation over \(c\) and \(d\) can be swapped with the summation over \(k\) and \(\sigma\), from which we get
\[\Gamma_{k}=\sum_{c=1}^{\infty}\sum_{d=1}^{\infty}\sum_{\sigma\in\mathcal{S}_{k}}\beta^{d-1}\delta^{c-1}x_{0}^{c}y_{0}^{d}\,C^{k}_{(c,d)}(\sigma)\rho_{k}(\sigma) \tag{73}\]
from which we can write
\[y(t) = e^{\gamma t}y_{0}+\delta e^{\gamma t}\Omega_{\gamma}(t) \tag{74}\]
\[x(t)\,=\,e^{\alpha t}x_{0}+\beta e^{\alpha t}\Omega_{\alpha}(t) \tag{75}\]
where
\[\Omega_{m}(t)=\sum_{k=0}^{\infty}\Gamma_{k}I_{m}^{k}(t) \tag{76}\]
which is the final expression for the lattice-path expansion of the solution of the Lotka-Volterra equations. We define the parameters \(\Gamma_{k}\) as the lattice coefficients. However, we wish to stress that Eq. (71) holds only for \(m>0\). Because the conventional parametrization demands \(\gamma<0\), one must rely on Eq. (70) for the evaluation of \(I_{\gamma}^{k}(t)\). It is clear that \(I_{\gamma}^{k}(t)\) grows without bound for large values of \(t\) for any \(k\in\mathbb{Z}_{\geq 0}\). Consequently, this formal method based on lattice coefficients is, unfortunately, of little practical use.
This is now a good moment to stop and focus on the meaning of these expressions. Equation (67), albeit formal, provides an interpretation of the exact solution of the 2-species Lotka-Volterra equations in terms of directed, weighted walks on lattice paths. As a result, the function \(\eta(s,x,y)\) can in principle be estimated via a Monte Carlo approach, with random lattice walks stopping at \((a,b)\) in order to estimate the contributions of order \(x_{0}^{a}y_{0}^{b}\) in the initial conditions. This is the first result of this paper. Yet, admittedly, this is a quite cumbersome approach. Although in the next section we provide examples for the calculation of the lattice coefficients, it is worth mentioning that the purpose of the rest of the paper will be to show that \(\eta\) is the solution of a particular differential equation.
### Example of lattice coefficient calculation
The solutions of eqn. (74)-(75) are, as a matter of fact, written explicitly in terms of coefficients that need to be evaluated at every order \(k\) of the lattice expansion. However, unless an exact expression for \(\Omega_{m}(t)\) can be found, this is only formal for the time being. In fact, the symmetric group \(\mathcal{S}_{k}\) on a set of \(k\) elements has order \(k!\). It is then immediate to see that the complexity of each order increases dramatically. However, we still think that this approach can lead to a way of formalizing this solution.
To show how this expansion works, let us consider the first few orders. We define the boundary set \(\mathcal{B}_{k}\) of the nodes in the lattice that can be reached in \(k\) steps. For \(k=0\), we identify \(\mathcal{S}_{0}=\emptyset\). In this case \(\Gamma_{0}=r_{(1,1)}=x_{0}y_{0}\). At \(k=1\), \(\mathcal{S}_{1}=\{Id\}\), the boundary of nodes that can be reached after one step is composed of \(\mathcal{B}_{1}=\{r_{(1,2)},r_{(2,1)}\}\). It is then easy to see that \(\sigma=Id\), \(\Gamma_{1}=\beta x_{0}y_{0}^{2}+\delta x_{0}^{2}y_{0}\). The first non-trivial case is \(k=2\). The boundary of the nodes that can be reached is \(\mathcal{B}_{2}=\{r_{(1,2)},r_{(2,1)},r_{(2,2)}\}\). The set \(\mathcal{S}_{2}=\{\sigma_{1}=\{1,2\},\sigma_{2}=\{2,1\}\}\).
For the nodes \(r_{(1,2)}\) and \(r_{(2,1)}\), the walk has to make one move (right or down, respectively) and stop once, while for \(r_{(2,2)}\) it has to go right once and down once.
Thus, the transitions are given by
1. for \(r_{(1,2)}=x_{0}y_{0}^{2}\), our moves must be \(\sigma_{1}(R,S)=(R,S)\) or \(\sigma_{2}(R,S)=(S,R)\), thus \(C_{2}(\sigma_{1})=C_{2}(\sigma_{2})=1\), while \(\rho_{2}(\sigma_{1})=(\alpha+\gamma)\), \(\rho_{2}(\sigma_{2})=(\alpha+2\gamma)\), and \(\beta^{c-1}\delta^{d-1}=\beta\);
2. for \(r_{(2,1)}=x_{0}^{2}y_{0}\) we have \(\sigma_{1}(D,S)=\{D,S\}\) and \(\sigma_{2}(D,S)=(S,D)\), thus \(C_{2}(\sigma_{1})=1=C_{2}(\sigma_{2})=1\), while \(\rho_{2}(\sigma_{1})=\alpha+\gamma\) and \(\rho_{2}(\sigma_{2})=2\alpha+\gamma\), with \(\beta^{c-1}\delta^{d-1}=\delta\);
3. for \(r_{(2,2)}\) we must have \(\sigma_{1}(R,D)=(R,D)\) and \(\sigma_{2}(R,D)=(D,R)\), and \(C_{2}(\sigma_{1})=C_{2}(\sigma_{2})=2\), and \(\rho_{2}=1\) and \(\beta^{c-1}\delta^{d-1}=\beta\delta\).
These correspond then to
\[\Gamma_{2} = \beta\big{(}(\alpha+\gamma)+(\alpha+2\gamma)\big{)}x_{0}y_{0}^{2}\] \[+\delta\big{(}(\alpha+\gamma)+(2\alpha+\gamma)\big{)}x_{0}^{2}y_ {0}\] \[+\big{(}(\beta)(2\delta)+(2\beta)(\delta)\big{)}x_{0}^{2}y_{0}^{2}\] \[= \beta(2\alpha+3\gamma)x_{0}y_{0}^{2}+\delta(3\alpha+2\gamma)x_{0 }^{2}y_{0}+4\beta\delta x_{0}^{2}y_{0}^{2}.\]
We then get that the expansion up to order \(k=2\) is given by
\[y(t) = e^{\gamma t}y_{0}+\delta e^{\gamma t}\Big{(}x_{0}y_{0}\frac{1-e ^{-\gamma t}}{\gamma}+\Gamma_{1}I_{\gamma}^{1}(t)+\Gamma_{2}I_{\gamma}^{2}(t) \Big{)}\] \[+O(t^{3})\] \[x(t) = e^{\alpha t}x_{0}+\beta e^{\alpha t}\Big{(}x_{0}y_{0}\frac{1-e^{ -\alpha t}}{\alpha}+\Gamma_{1}I_{\alpha}^{1}(t)+\Gamma_{2}I_{\alpha}^{2}(t) \Big{)}\] \[+O(t^{3})\]
which provides an example of the form of the solution. The expansion above, up to order two, is compared with the numerically obtained solution in Fig. 3. We see that this expansion is problematic at a perturbative level because of its slow convergence; moreover, for \(\beta<0,\gamma<0\), it has an alternating sign, which does not improve the convergence either. Albeit in the following we focus on deriving an exact equation for \(\eta\), the approach described above allows in principle to study the solution of LV equations using Monte Carlo techniques.
### Lotka-Volterra via generating functions
The analysis of the previous section has highlighted the importance of deriving exact equations for \(\eta\). As we saw in the plots above, the perturbative expansion is slowly converging. We thus need a better expression for the solution.
We now know, however, that \(\eta(t,x,y)\) in Eq. (65) is exactly the generating function of walks of length \(k\) from \((1,1)\) to \((c,d)\): the coefficient of \(\frac{t^{k}}{k!}x^{c}y^{d}\) is the weighted number of walks of length \(k\). Similarly to what we introduced earlier in Eq. (29) for the model of Eq. (13), we now have a weighted walk coefficient \(\mathcal{N}^{k}_{(x_{1},y_{1}),(x_{2},y_{2})}\), which is the sum over all possible walks from vertex \((x_{1},y_{1})\) to vertex \((x_{2},y_{2})\) on the lattice. In our case, since we are interested in walks of length \(k\) always starting from node \((1,1)\), we can simply define \(\mathcal{N}^{k}_{(1,1),(a,b)}=N^{k}_{(a,b)}\).
We can then write, similarly to what we did for the simpler ODE of eqn. (13), a recursion relation of the form
\[N^{k}_{(a,b)}=(\alpha a+b\gamma)N^{k-1}_{(a,b)}+a\beta N^{k-1}_{(a,b-1)}+b \delta N^{k-1}_{(a-1,b)}, \tag{78}\]
where \(N_{(a,b)}^{k}\) is the sum of the products of the weights along all paths of length \(k\) connecting \((1,1)\) and \((a,b)\) (the dependence on \((1,1)\) is suppressed for brevity). It is clear that we have the following conditions: (1) \(N_{(1,1)}^{0}=1\), (2) \(N_{(0,1)}^{k}=N_{(1,0)}^{k}=0\) for \(k\in\mathbb{Z}_{\geq 0}\), and (3) \(N_{(a,b)}^{k}=0\) if \(k<a+b-2\), \(a,b\in\mathbb{N}\). Then, the generating function can be written in the form
\[\eta(t,x,y)=\sum_{a,b=1}^{\infty}\sum_{k=0}^{\infty}N_{(a,b)}^{k}\frac{t^{k}}{ k!}x^{a}y^{b}. \tag{79}\]
We multiply the left and right of the recursion relation Eq. (78) by \(t^{k-1}x^{a}y^{b}/(k-1)!\) and sum over \(a\), \(b\), and \(k\) from \(1\) to \(\infty\). Let us analyze this term by term. On the left-hand side, we obtain \(\sum_{a,b=1}^{\infty}\sum_{k=1}^{\infty}t^{k-1}x^{a}y^{b}N_{(a,b)}^{k}/(k-1)!= \partial_{t}\eta(t,x,y)\). On the right hand side, the summation over \(k\) is exactly right to give the \(t\) dependence, and we focus on the \(a,b\) summations.
\[\alpha:\ \sum_{a,b=1}^{\infty}\sum_{k=1}^{\infty}\frac{t^{k-1}}{(k-1)!}ax^{a}y^ {b}N_{(a,b)}^{k-1}=x\partial_{x}\eta(t,x,y)\]
Figure 3: Analytical at order \(k\) versus forward Euler method with \(dt=0.01\) (solid lines), for the parameters \(\alpha=1\), \(\beta=-0.5\), \(\gamma=-0.5\), \(\delta=2\). We see that the expansion captures the short time behavior of the system.
\[\gamma: \sum_{a,b=1}^{\infty}\sum_{k=1}^{\infty}\frac{t^{k-1}}{(k-1)!}bx^{a} y^{b}N_{(a,b)}^{k-1}=y\partial_{y}\eta(t,x,y)\] \[\delta: \sum_{a,b=1}^{\infty}\sum_{k=1}^{\infty}\frac{t^{k-1}}{(k-1)!}bx^ {a}y^{b}N_{(a-1,b)}^{k-1}=xy\partial_{y}\eta(t,x,y)\] \[\beta: \sum_{a,b=1}^{\infty}\sum_{k=1}^{\infty}\frac{t^{k-1}}{(k-1)!}ax ^{a}y^{b}N_{(a,b-1)}^{k-1}=xy\partial_{x}\eta(t,x,y),\]
from which we obtain the PDE
\[\partial_{t}\eta(t,x,y) = \Big{(}(\alpha x+\beta xy)\partial_{x}+(\gamma y+\delta xy)\partial_{y}\Big{)}\eta(t,x,y) \tag{80}\] \[= \Big{(}f(x,y)\partial_{x}+g(x,y)\partial_{y}\Big{)}\eta(t,x,y)\] \[\eta(0,x,y) = xy\] (81) \[\eta(t,0,y) = \eta(t,x,0)=0 \tag{82}\]
where the boundary conditions are derived as follows. Since \(N_{(1,0)}^{k}=N_{(0,1)}^{k}=0\), it follows that \(\eta(t,0,y)=\eta(t,x,0)=0\). Since \(t=0\) must imply that the only surviving term is \(N_{(1,1)}^{0}xy\) with \(N_{(1,1)}^{0}=1\), \(\eta(0,x,y)=xy\) must be supplied as the initial condition.
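The recursion Eq. (78) can be iterated directly to build a truncated version of the generating function Eq. (79), which must reproduce \(x(t)y(t)\) at small times. The sketch below (our own; truncation order and parameters are illustrative) performs this comparison against a Runge-Kutta reference solution:

```python
# Build eta(t, x0, y0) = sum_{a,b,k} N^k_(a,b) t^k/k! x0^a y0^b from the
# recursion Eq. (78), truncated at k <= Kmax, and compare with x(t) y(t).
import math

al, be, ga, de = 1.0, -0.5, -0.5, 2.0
x0, y0, dt, n, Kmax = 0.5, 0.5, 1e-3, 100, 18
t = n * dt  # t = 0.1, well inside the radius of convergence

# N[k][(a, b)]: weighted number of k-step walks from (1, 1) to (a, b).
N = [{(1, 1): 1.0}]
for _ in range(Kmax):
    prev, cur = N[-1], {}
    for (a, b), w in prev.items():
        cur[(a, b)] = cur.get((a, b), 0.0) + (al * a + ga * b) * w   # stay
        cur[(a, b + 1)] = cur.get((a, b + 1), 0.0) + a * be * w      # beta step
        cur[(a + 1, b)] = cur.get((a + 1, b), 0.0) + b * de * w      # delta step
    N.append(cur)

eta = sum(w * t ** k / math.factorial(k) * x0 ** a * y0 ** b
          for k, layer in enumerate(N) for (a, b), w in layer.items())

# Reference: x(t) y(t) from a direct RK4 integration of the LV system.
def rhs(x, y):
    return al * x + be * x * y, ga * y + de * x * y

x, y = x0, y0
for _ in range(n):
    k1 = rhs(x, y)
    k2 = rhs(x + dt / 2 * k1[0], y + dt / 2 * k1[1])
    k3 = rhs(x + dt / 2 * k2[0], y + dt / 2 * k2[1])
    k4 = rhs(x + dt * k3[0], y + dt * k3[1])
    x += dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
    y += dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])

print(abs(eta - x * y) < 1e-6)
```

The truncated generating function agrees with \(x(t)y(t)\) to well below the truncation tolerance at this short time.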
The PDE can also be derived from the formal Koopman's picture, analogous to the technique presented in Sec. 3.5. To get to the same form as the PDE above, we first rewrite the ODE to absorb the linear terms
\[\frac{d}{dt}\left(e^{-\alpha t}x(t)\right) = \beta e^{-\alpha t}x(t)\,y(t), \tag{83a}\] \[\frac{d}{dt}\left(e^{-\gamma t}y(t)\right) = \delta e^{-\gamma t}x(t)\,y(t). \tag{83b}\]
To this end, we can see that both terms involve \(x(t)y(t)\), hinting that the key observable is \(g(x,y):=xy\). As such, the backward PDE (Eq. (43)) is
\[\partial_{t}\eta(x,y,t) = \left[(\alpha x+\beta xy)\,\partial_{x}+(\gamma y+\delta xy)\, \partial_{y}\right]\eta(x,t), \tag{84a}\] \[\eta(x,y,0) = g(x,y)=xy, \tag{84b}\]
and once \(\eta\) is solved (recall that \(\eta(x_{0},y_{0},t)=x(t;x_{0})\,y(t;y_{0})\)), we can integrate Eq. (83):
\[x(t) = e^{\alpha t}x(0)+\beta\int_{0}^{t}e^{\alpha(t-s)}\eta(x_{0},y_{0},s)\,ds, \tag{85a}\] \[y(t) = e^{\gamma t}y(0)+\delta\int_{0}^{t}e^{\gamma(t-s)}\eta(x_{0},y_{0},s)\,ds, \tag{85b}\]
which is exactly the same as the solution obtained from combining Carleman linearization, Mori-Zwanzig formalism, and generating function of the directed weighted graph.
To see that the equation above is correct, let us consider inserting explicitly the formal solution (Eq. (67)) into the ODE. We have
\[\alpha x+\beta xy = x^{\prime}(t) \tag{86}\] \[= \alpha e^{\alpha t}x_{0}+\beta\left[\alpha\int_{0}^{t}\eta(x_{0},y_{0},s)e^{\alpha(t-s)}ds+\eta(x_{0},y_{0},t)\right]\]
We now note that on the right-hand side, we have
\[\alpha x+\beta xy=x^{\prime}(t)=\alpha x(t)+\beta\eta(x_{0},y_{0},t) \tag{87}\]
from which we obtain the identity
\[\eta(x_{0},y_{0},t)\equiv\eta(x(t),y(t))=x(t)y(t) \tag{88}\]
we now take the derivative with respect to time, obtaining
\[\partial_{t}\eta(x,y) = x^{\prime}(t)y(t)+x(t)y^{\prime}(t)\] \[= (\alpha x+\beta xy)y(t)+(\gamma y+\delta xy)x(t)\]
Now note that we can write
\[x(t) = \partial_{y}\eta \tag{89}\] \[y(t) = \partial_{x}\eta \tag{90}\]
From which we obtain
\[\partial_{t}\eta=(\alpha x+\beta xy)\partial_{x}\eta+(\gamma y+\delta xy) \partial_{y}\eta \tag{91}\]
which is the expression obtained using the generating function method.
The PDE above can be solved (formally, and in principle) on a Cauchy surface determined by constants of integration emerging from the Lagrange-Charpit equations, as it is a quasi-linear equation, exactly as in the case of the simpler example described earlier. However, our attempts to solve the equation analytically lead only to a complicated implicit form, which we decided to omit in the present manuscript.
### Comments on Lotka-Volterra with N species and hyperlattices
Such methodology for the generation of exact solutions of Lotka-Volterra equations can be generalized to higher dimensions. We consider then the set of coupled differential
equations of the form
\[\frac{dx_{i}}{dt}=\alpha_{i}x_{i}+x_{i}\sum_{j}^{N}\beta_{ij}x_{j} \tag{92}\]
where the matrix \(\beta\) is zero on the diagonal. Similarly to what we had done in the previous section, we consider the observables \(r_{\vec{a}}=\prod_{i=1}^{N}x_{i}^{a_{i}}\), and focus on their time derivatives
\[\frac{d}{dt}r_{\vec{a}} = \sum_{j=1}^{N}a_{j}x_{j}^{a_{j}-1}\frac{dx_{j}}{dt}\prod_{r\neq j}x_{r}^{a_{r}}\] \[= \sum_{j=1}^{N}a_{j}x_{j}^{a_{j}-1}(\alpha_{j}x_{j}+x_{j}\sum_{k}^{N}\beta_{jk}x_{k})\prod_{r\neq j}x_{r}^{a_{r}}\] \[= (\sum_{j}a_{j}\alpha_{j})r_{\vec{a}}+\sum_{j=1}^{N}a_{j}\sum_{k}^{N}\beta_{jk}r_{\vec{a}+\hat{x}_{k}}\] \[= \tilde{\alpha}_{\vec{a}}r_{\vec{a}}+\sum_{k=1}^{N}\tilde{\beta}_{k}r_{\vec{a}+\hat{x}_{k}}. \tag{94}\]
As a result, we obtain a lattice equation in higher dimensions. Equation (61) has to be generalized, as now we have \(N\) dimensions for the lattice path. It is known that in general this construction can be expressed in a tensorial representation [41].
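The higher-dimensional linear relation can be sanity-checked numerically. The sketch below (our own; the 3-species coefficients and the monomial are arbitrary) compares a central finite-difference time derivative of a monomial along the flow with the right-hand side of Eq. (94):

```python
# Check d/dt prod_i x_i^{a_i} = (sum_j a_j alpha_j) r_a + sum_{j,k} a_j beta_jk r_{a+e_k}
# for a 3-species generalized LV system, using a central finite difference in t.
import math

N = 3
alpha = [0.3, -0.2, 0.1]
beta = [[0.0, -0.4, 0.2],
        [0.5, 0.0, -0.3],
        [-0.1, 0.6, 0.0]]   # zero diagonal, as assumed in Eq. (92)
x = [0.7, 0.5, 0.9]
a = (2, 1, 3)               # exponents of the monomial r_a

def f(v):
    return [alpha[i] * v[i] + v[i] * sum(beta[i][j] * v[j] for j in range(N))
            for i in range(N)]

def flow(v, h):
    # One RK4 step of size h (negative h steps backwards in time).
    k1 = f(v)
    k2 = f([v[i] + h / 2 * k1[i] for i in range(N)])
    k3 = f([v[i] + h / 2 * k2[i] for i in range(N)])
    k4 = f([v[i] + h * k3[i] for i in range(N)])
    return [v[i] + h / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]) for i in range(N)]

def r(v, exps):
    return math.prod(v[i] ** exps[i] for i in range(N))

h = 1e-4
lhs = (r(flow(x, h), a) - r(flow(x, -h), a)) / (2 * h)
rhs = sum(a[j] * alpha[j] for j in range(N)) * r(x, a) + sum(
    a[j] * beta[j][k] * r(x, tuple(a[i] + (1 if i == k else 0) for i in range(N)))
    for j in range(N) for k in range(N))
print(abs(lhs - rhs) < 1e-5)
```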
The key difference between the 2-species case and the higher-dimensional one is that the lattice depends on the initial point, which makes the analysis slightly more complicated. For 3 species, the initial points of the lattice paths are given by \(x=r_{(1,0,0)}\), \(y=r_{(0,1,0)}\) and \(z=r_{(0,0,1)}\), analogously to \(x=r_{(1,0)}\) and \(y=r_{(0,1)}\) in Fig. 2 for the 2-species case. It can be promptly seen that, since \(\dot{x}\) has the monomials \(x\), \(xy\) and \(xz\) on the right-hand side, \(\dot{y}\) the monomials \(y\), \(yx\) and \(yz\), and \(\dot{z}\) the monomials \(z\), \(zx\) and \(zy\), the lattice expansion implies
\[r_{1,0,0} \rightarrow r_{1,0,0},r_{1,1,0},r_{1,0,1} \tag{95}\] \[r_{0,1,0} \rightarrow r_{0,1,0},r_{1,1,0},r_{0,1,1}\] (96) \[r_{0,0,1} \rightarrow r_{0,0,1},r_{1,0,1},r_{0,1,1} \tag{97}\]
From this we see that the lattice walks for the resolved variables begin at three sublattices starting at \(r_{0,1,1}\), \(r_{1,0,1}\) and \(r_{1,1,0}\), which, unlike in the two-dimensional case, do not coincide. For comparison, in the 2-species case we had
\[r_{1,0} \rightarrow r_{1,0},r_{1,1} \tag{98}\] \[r_{0,1} \rightarrow r_{0,1},r_{1,1} \tag{99}\]
which implies that we could write the solution in terms of a single generating function \(\eta\) corresponding to walks from \(r_{1,1}\) to \(r_{a,b}\). However, since the lattice paths for a higher number of species are directed, the sublattices resulting from these paths do
not coincide in this case. Thus, in the multi-species case, one has to resort to multiple generating functions. In the \(3-\)species case, these are \(\eta_{xy}\), \(\eta_{xz}\) and \(\eta_{yz}\), related to walks from \(r_{1,1,0}\), \(r_{1,0,1}\) and \(r_{0,1,1}\) to a generic point \(r_{a,b,c}\) respectively. Although by symmetry these three must be related, we see immediately that the hyperlattices require a separate discussion that goes beyond the purpose and space limitations of this note. We can see however that our methodology can also be applied here, with due attention.
## 5 Conclusions
This paper presents a novel approach to analyzing the Lotka-Volterra equations by introducing a lattice path expansion, preceded by a Carleman linearization and by a Mori-Zwanzig reduction, which we used to derive the formal solution. We first analyzed the simpler model \(\dot{x}=x^{2}\) which, although solved before, as we have shown here admits a solution with an interpretation in terms of a lattice path expansion. As far as we know, the lattice structure employed in this study was not previously known or utilized for the simpler model examined in the paper, nor had it been applied to the Lotka-Volterra equations. Our paper elucidates the interplay between the formal solutions of the ODEs and the generating functions of the lattice walks, and how these can be interpreted in terms of Koopman evolution.
This approach provides a powerful method for understanding the Lotka-Volterra equations. Indeed, it came as an a posteriori surprise that the generators of these lattice path expansions have also, in turn, an interpretation in terms of Koopman evolution operators applied to a particular observable.
Some critical comments are in order. Although we could not provide an exact solution to the two-dimensional Lotka-Volterra equations, what we think to be an important contribution of this paper is the connection between combinatorial methods and the solution of nonlinear ODEs, and in turn the relationship between the generators of walks on graphs and the Koopman evolution of observables. We think of the Carleman and Mori-Zwanzig techniques as a method to obtain formal solutions that in principle can be applied, clearly with harder work, to a large class of ODEs that can be expressed in terms of monomials of the dynamical variables. The formal solution of the LV equations should be seen only as that, as it is written in terms of the generator of lattice walks on the graph. Solving the PDE for such a generator is as hard, if not harder, than solving the ODE directly. In future work, we will focus on the numerical analysis of the PDE for the LV equation, Eq. (91), trying to find connections between the transition to the oscillatory regime in the Lotka-Volterra ODEs and properties of the related PDE.
In future work, we will focus on the numerical integration of eqn. (91) trying to elucidate these connections.
Acknowledgments. The work of FC and YTL was carried out under the auspices of the NNSA of the U.S. DoE at LANL under Contract No. DE-AC52-06NA25396, and in particular with support from LDRD via 20220063DR, 20230338ER, and 20230627ER. |
2310.08987 | Expansions for Hilbert schemes of points on semistable degenerations | The aim of this paper is to extend the expanded degeneration construction of
Li and Wu to obtain good degenerations of Hilbert schemes of points on
semistable families of surfaces, as well as to discuss alternative stability
conditions and parallels to the GIT construction of Gulbrandsen, Halle and
Hulek and logarithmic Hilbert scheme constructions of Maulik and Ranganathan.
We construct a good degeneration of Hilbert schemes of points as a proper
Deligne-Mumford stack and show that it provides a geometrically meaningful
example of a construction arising from the work of Maulik and Ranganathan. | Calla Tschanz | 2023-10-13T10:07:28Z | http://arxiv.org/abs/2310.08987v1 | # Expansions for Hilbert Schemes of Points on Semistable Degenerations
Calla Tschanz
**Abstract.** The aim of this paper is to extend the expanded degeneration construction of Li and Wu to obtain good degenerations of Hilbert schemes of points on semistable families of surfaces, as well as to discuss alternative stability conditions and parallels to the GIT construction of Gulbrandsen, Halle and Hulek and logarithmic Hilbert scheme constructions of Maulik and Ranganathan. We construct a good degeneration of Hilbert schemes of points as a proper Deligne-Mumford stack and show that it provides a geometrically meaningful example of a construction arising from the work of Maulik and Ranganathan.
###### Contents
* 1 Introduction
* 2 Background on tropical perspective
* 3 The expanded construction
* 4 GIT stability
* 5 Stack perspective
* 6 The canonical moduli stack
## 1 Introduction
The study of moduli spaces is a central topic in algebraic geometry; among moduli spaces, Hilbert schemes form an important class of examples. They have been widely studied in geometric representation theory, in enumerative and combinatorial geometry, and in connection with the two main examples of hyperkähler manifolds, namely Hilbert schemes of points on K3 surfaces and generalised Kummer varieties. A prominent direction in this area is to understand the local moduli space of such objects and, in particular, the ways in which a degeneration of smooth Hilbert schemes may be given a modular compactification.
For example, we may consider the geometry of relative Hilbert schemes on a degeneration whose central fibre has normal crossing singularities. We may then ask how the singularities of such a Hilbert scheme may be resolved while preserving certain of its properties, or how it may be expressed as a good moduli space. This then becomes a compactification problem with respect to the boundary given by the singular locus. Historically, an important method used in moduli and compactification problems has been Geometric Invariant Theory (GIT). More recently, the work of Maulik and Ranganathan [13] has explored how methods of tropical and logarithmic geometry can be used to address such questions for Hilbert schemes. This builds upon previous work of Li [14] and Li and Wu [12] on expanded degenerations for Quot schemes and work of Ranganathan [11] on logarithmic Gromov-Witten theory with expansions.
Briefly stated, the aim of this paper is to provide explicit examples of such compactifications and explore the connections between these methods.
### Basic setup
Let \(k\) be an algebraically closed field of characteristic zero. Let \(X\to C\) be a projective family of surfaces over a curve \(C\cong\mathbb{A}^{1}\) such that the total space is smooth and the central fibre \(X_{0}\) has simple normal crossing singularities. At a triple point of the singular fibre, \(X\) is étale locally given by \(\operatorname{Spec}k[x,y,z,t]/(xyz-t)\). In this étale local model, the general fibres are smooth and the central fibre \(X_{0}\) is given by three planes intersecting transversely in \(\mathbb{A}^{3}\). Throughout this work, these will be denoted \(Y_{1},Y_{2}\) and \(Y_{3}\), given in local coordinates by \(x=0,\ y=0\) and \(z=0\) respectively. Let \(X^{\circ}\coloneqq X\setminus X_{0}\), which lies over \(C^{\circ}\coloneqq C\setminus\{0\}\). Given such a family \(X\to C\), we will explore how techniques of expanded degenerations may be used to construct good compactifications of the relative Hilbert scheme of \(m\) points \(\operatorname{Hilb}^{m}(X^{\circ}/C^{\circ})\).
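As a quick sanity check on this local model (an illustrative sketch we added, not part of the paper's argument), one can verify with sympy that the total space \(xyz=t\) is smooth, while the central fibre \(xyz=0\) is singular exactly along the pairwise intersections \(Y_{i}\cap Y_{j}\):

```python
import sympy as sp

x, y, z, t = sp.symbols('x y z t')

# Local model: the family is cut out of A^4 by F = xyz - t.
F = x*y*z - t

# The gradient has dF/dt = -1 identically, so it never vanishes:
# the total space X is smooth.
grad = [sp.diff(F, v) for v in (x, y, z, t)]
assert grad[3] == -1

# Central fibre (t = 0): G = xyz in A^3.
G = x*y*z
gradG = [sp.diff(G, v) for v in (x, y, z)]

# The gradient of G vanishes on a double locus such as Y_1 ∩ Y_2
# (x = y = 0) ...
assert all(g.subs({x: 0, y: 0}) == 0 for g in gradG)

# ... but not at a general point of a single component Y_1 (x = 0),
# so X_0 is singular exactly where at least two of the planes meet.
assert any(g.subs({x: 0}) != 0 for g in gradG)
```

This is why transversality below amounts to keeping subschemes away from the loci where two or three components of \(X_{0}\) meet.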
The aim is to construct a compactification which is flat over \(C\) and in which all limit subschemes can be chosen to satisfy some transversality condition in some modification of \(X_{0}\). In general, transversality will mean that the subschemes should be normal to the codimension 1 strata of the central fibre. This forces any interesting behaviour of the subschemes to occur on the smooth irreducible components of the modifications of \(X_{0}\). In the case which interests us here, namely Hilbert schemes of points, it will just mean that we would like our subschemes to have support in the smooth loci of the fibres. We will refer to this condition throughout this work as the condition that the subschemes are _smoothly supported_. The problem therefore is to construct _expansions_ (birational modifications of the central fibre of \(X\) in a 1-parameter family) in which all limits needed to compactify \(\operatorname{Hilb}^{m}(X^{\circ}/C^{\circ})\) can be chosen to be smoothly supported. This allows us to break down the problem of studying Hilbert schemes of points on \(X_{0}\) into smaller parts, by studying the products of Hilbert schemes of points on the irreducible components of the modifications of \(X\). Moreover, this approach will allow us to construct compactifications as stacks which have good properties as moduli spaces. In particular, for all the compactifications constructed in this way, the data of each family of length \(m\) zero-dimensional subschemes over \(C\) is completely determined by its degenerate fibre, i.e. by its limit in the compactification. As we will mention in the following section, the work of Li and Wu only covers the case where the singular locus of \(X_{0}\) is smooth. We
would like to highlight that understanding how these problems work in general for simple normal crossings is quite powerful, as we can always use semistable reduction to reduce to this case.
As is mentioned in Section 1.3, this type of construction can be applied to construct type III degenerations of Hilbert schemes of points on K3 surfaces. This will be described in future work.
### Previous work in this area
Expanded degenerations were first introduced by Li [11] and then used by Li and Wu [12] to study Quot schemes on degenerations \(X\to C\), in the case where the singular locus of \(X_{0}\) is smooth. They construct a stack of expansions \(\mathfrak{C}\) and a family \(\mathfrak{X}\) over it, which is a stack of modifications of \(X_{0}\) in an expanded family, subject to some equivalence relations; among others a relation induced by a natural torus action on the modified fibres. They then impose a stability condition which cuts out families of subschemes of \(\mathfrak{X}\) which meet the boundary of the special fibre in a transverse way. For each choice of Hilbert polynomial the family thus obtained is a proper Deligne-Mumford stack.
Following on from [12], Gulbrandsen, Halle and Hulek [10] present a GIT version of the above construction in the case of Hilbert schemes of points. They construct an explicit expanded degeneration, i.e. a modified family over a larger base, whose fibres correspond to blow-ups of components of \(X_{0}\) in the family. They present a linearised line bundle on this space for the natural torus action and they are able to show that in this case the Hilbert-Mumford criterion simplifies down to a purely combinatorial criterion. Using this, they impose a GIT stability condition which recovers the transverse zero-dimensional subschemes of Li and Wu and prove that the corresponding stack quotient is isomorphic to that of Li and Wu. A motivation for this work was to construct type II degenerations of Hilbert schemes of points on K3 surfaces. Indeed, type II good degenerations of K3 surfaces present these types of singularities in the special fibre, which is a chain of surfaces intersecting along smooth curves.
There is more recent work of Maulik and Ranganathan [14], building upon earlier ideas of Ranganathan [13] and results of Tevelev [20], in which they use techniques of logarithmic and tropical geometry to construct appropriate expansions of \(X\to C\). This allows them to define moduli stacks of transverse subschemes starting from the case where \(X_{0}\) is any simple normal crossing variety. They show that the stacks thus constructed are proper and Deligne-Mumford. For more details on this, see Section 2.2.
### Main results
Let \(X\to C\) be a semistable degeneration of surfaces. In the following sections, we propose explicit constructions of expanded degenerations and stacks of stable length \(m\) zero-dimensional subschemes on these expanded families, which we show to have good properties.
We start by constructing expanded degenerations as schemes and discuss various GIT stability conditions on the corresponding relative Hilbert schemes of points. Unlike the
situation in [1], however, a single GIT condition is not sufficient to obtain the desired stable locus for this problem. This is explained in Section 5.1. We then define a stack of expansions \(\mathfrak{C}\) and a family \(\mathfrak{X}\) over it, which contains all expansions of \(X\) which can be obtained using a specific sequence of blow-ups. The type of expansion of \(X\) which can appear in \(\mathfrak{X}\) is therefore greatly restricted, which in this case offers significant advantages, as we will see. We then describe how _Li-Wu stability_ (abbreviated LW stability) can be extended to this setting and define an alternative notion of stability, called _smoothly supported weak strict stability_ (abbreviated SWS stability), derived from GIT stability conditions. We may then construct the stacks \(\mathfrak{M}^{m}_{\text{LW}}\) and \(\mathfrak{M}^{m}_{\text{SWS}}\) of LW and SWS stable length \(m\) zero-dimensional subschemes on \(\mathfrak{X}\). Our first main results are the following.
**Theorem 1.3.1**.: _The stacks \(\mathfrak{M}^{m}_{\text{LW}}\) and \(\mathfrak{M}^{m}_{\text{SWS}}\) are Deligne-Mumford and proper._
**Theorem 1.3.2**.: _There is an isomorphism of stacks_
\[\mathfrak{M}^{m}_{\text{LW}}\cong\mathfrak{M}^{m}_{\text{SWS}}.\]
This construction has the benefit of being very straightforward compared to the other possible constructions solving this problem, as we will discuss later. The restrictive choices made in the construction of \(\mathfrak{X}\) mean that LW or SWS stability are already sufficient conditions to make the stacks of stable objects be proper. This is somewhat unexpected; indeed, in general we will need to take an additional stability condition, as can be seen in [14] (see Section 2.2 for the role of Donaldson-Thomas stability in this problem).
**Allowing for different choices of expansions.** In this paper, we discuss only a specific choice of model for the Hilbert scheme of points which we call the canonical moduli stack. In upcoming work, we will investigate how these methods can be extended to describe other choices of models. We will consider an approach which parallels work of Kennedy-Hunt on logarithmic Quot schemes [13], as well as recover certain geometrically meaningful choices of moduli stacks arising from the methods of Maulik and Ranganathan [14]. In particular, we will discuss how tube components and Donaldson-Thomas stability enter the picture in these more general cases (see Section 2.2 for definitions).
**Application to hyperkähler varieties.** We only consider here the property that \(X\) is a degeneration of surfaces with a specific type of singularity in its special fibre. A natural question is to study the more specific case where \(X\) is a type III good degeneration of K3 surfaces and try to construct a family of Hilbert schemes of points on \(X\) which will be minimal in the sense of the minimal model program, meaning a good or dlt minimal degeneration (see [20] and [12] for definitions of the minimality conditions). The singularities arising in such a degeneration \(X\) are of the type described here, i.e. we can restrict ourselves to the local problem where \(X_{0}\) is thought of as given by \(xyz=0\) in \(\mathbb{A}^{3}\). Among other reasons, Hilbert schemes of points on K3 surfaces are interesting to study because they form a class of examples of hyperkähler varieties.
### Organisation
We start, in Section 2, by giving some background on logarithmic and tropical geometry, and an overview of the work of Maulik and Ranganathan from [14] which we will want to refer to in later sections. Then, in Section 3, we set out an expanded construction on schemes and, in Section 4, we discuss how various GIT stability conditions can be defined on this construction. In Section 5, we describe a corresponding stack of expansions and a family over it, building on the expanded degenerations we constructed as schemes. In Section 6, we extend our stability conditions to this setting. We then show that the stacks of stable objects defined have the desired Deligne-Mumford and properness properties.
**Acknowledgements.** I would like to thank Gregory Sankaran for all his support throughout this project. Thank you also to my PhD examiners, Alastair Craw and Dhruv Ranganathan, for their many helpful comments. This work was undertaken while funded by the University of Bath Research Studentship Award. I am also grateful to Patrick Kennedy-Hunt and Thibault Poiret for many interesting conversations.
## 2 Background on tropical perspective
We briefly introduce here the language of tropical and logarithmic geometry in the context of this problem. For more details on the contents of this section, see the article [21], lecture notes [17], as well as the first section of [14].
### Tropicalisation and expansion
**Tropicalisation.** Let \((X,\mathcal{M}_{X})\) be a logarithmic scheme, where the sheaf of monoids \(\mathcal{M}_{X}\) gives the divisorial logarithmic structure with respect to some simple normal crossing divisor \(D\subset X\). Explicitly, for an open subset \(U\subseteq X\), the sheaf \(\mathcal{M}_{X}\) is given by
\[\mathcal{M}_{X}(U)\coloneqq\{f\in\mathcal{O}_{X}(U)\ |\ f|_{U\setminus D}\in \mathcal{O}_{X}^{*}(U\setminus D)\}.\]
Then we can associate a fan \(\Sigma_{X}\) to this in the following way. Recall that the characteristic sheaf \(\overline{\mathcal{M}}_{X}\) for the divisorial logarithmic structure is defined by
\[\overline{\mathcal{M}}_{X}(U)=\{f\in\mathcal{O}_{X}(U)\ |\ f|_{U\setminus D}\in \mathcal{O}_{X}^{*}(U\setminus D),\ f|_{D}=0\},\]
i.e. it records the data of monomials which vanish at the divisor \(D\). The purpose of \(\Sigma_{X}\) will be to keep track of the corresponding degrees of vanishing. We let
\[\Sigma_{X}\coloneqq\operatorname{colim}_{x\in X}(\overline{\mathcal{M}}_{X,x })^{\vee}.\]
This will be contained in \(\mathbb{R}_{\geq 0}^{r}\), where \(r\) is the number of components of \(D\), since \(D\) is a simple normal crossing divisor. We call \(\Sigma_{X}\) the _tropicalisation_ of \(X\).
**Subdivisions of the tropicalisation define expansions of \(X\).** In the following, we will want to study possible birational modifications of the scheme \(X\) around the divisor \(D\). In the tropical language, these are expressed as subdivisions.
**Definition 2.1.1**.: Let \(\Upsilon\) be a fan, let \(|\Upsilon|\) be its support and \(\upsilon\) be a continuous map
\[\upsilon\colon|\Upsilon|\longrightarrow\Sigma_{X}\]
such that the image of every cone in \(\Upsilon\) is contained in a cone of \(\Sigma_{X}\) and that is given by an integral linear map when restricted to each cone in \(\Upsilon\). We say that \(\upsilon\) is a _subdivision_ if it is injective on the support of \(\Upsilon\) and the integral points of the image of each cone \(\tau\in\Upsilon\) are exactly the intersection of the integral points of \(\Sigma_{X}\) with \(\tau\).
A subdivision of the tropicalisation defines a birational modification of \(X\) in the following way. The subdivision
\[\Upsilon\hookrightarrow\Sigma_{X}\hookrightarrow\mathbb{R}_{\geq 0}^{r}\]
has an associated toric variety \(\mathbb{A}_{\Upsilon}\), which comes with a \(\mathbb{G}_{m}^{r}\)-equivariant birational map \(\mathbb{A}_{\Upsilon}\to\mathbb{A}^{r}\). Then we have an induced morphism of quotient stacks
\[[\mathbb{A}_{\Upsilon}/\mathbb{G}_{m}^{r}]\longrightarrow[\mathbb{A}^{r}/ \mathbb{G}_{m}^{r}]\]
and we may define the induced birational modification of \(X\) to be
\[X_{\Upsilon}\coloneqq X\times_{[\mathbb{A}^{r}/\mathbb{G}_{m}^{r}]}[\mathbb{ A}_{\Upsilon}/\mathbb{G}_{m}^{r}].\]
We shall call such a birational modification an _expansion_ of \(X\).
**Visualising the problem.** Here, we describe how to visualise the tropicalisation arising from the divisorial logarithmic structure on \(X\) associated to a simple normal crossing divisor \(D\subset X\). We explain this for the case which interests us here, that is, we assume that \(X\to C\) is locally given by \(\operatorname{Spec}k[x,y,z,t]/(xyz-t)\) and the boundary divisor is \(D\coloneqq X_{0}\).
Given a divisorial logarithmic structure on \(X\), the tropicalisation is a fan or cone complex which for each defining function of the divisor records the degree of vanishing of this function in \(X\). Here, the functions vanishing at \(D\) will be \(x,y\) and \(z\), therefore we may represent \(\Sigma_{X}\) as a fan in \(\mathbb{R}_{\geq 0}^{3}\), in this case the positive orthant and its faces, as can be seen in Figure 1. In this image the three half-lines correspond to the divisors \(Y_{1},\ Y_{2}\) and \(Y_{3}\) in \(X\). The 2-dimensional faces spanned by two such lines correspond to the intersections \(Y_{i}\cap Y_{j}\) and the three dimensional interior of the cone corresponds to the triple intersection point \(Y_{1}\cap Y_{2}\cap Y_{3}\). For convenience, we shall refer to this tropicalisation as \(\operatorname{trop}(X)\) in later sections.
The tropicalisation of \(D\) can be visualised by taking a hyperplane slice through the cone in Figure 1; this yields a triangle with vertices corresponding to \(Y_{1},Y_{2}\) and \(Y_{3}\) in \(X_{0}\), edges between these vertices corresponding to the lines \(Y_{i}\cap Y_{j}\), and 2-dimensional interior corresponding to the point \(Y_{1}\cap Y_{2}\cap Y_{3}\), as pictured in Figure 2. This also corresponds to the _dual complex_ of the fibre \(X_{0}\) (see [10] for a definition). We shall refer to the tropicalisation of \(X_{0}\) as \(\operatorname{trop}(X_{0})\).
Recall that \(C\cong\mathbb{A}^{1}\) and the fan of \(\mathbb{A}^{1}\) is a half-line with a distinguished vertex. Making a choice of point on this line corresponds to choosing a height for the triangle within the cone \(\mathbb{R}^{3}_{\geq 0}\). Geometrically, we can think of changing the height of the triangle as making a finite base change on \(X\).
### Maulik-Ranganathan construction
We will briefly recall some key points of [14]. The aim of their work is to study the moduli space of ideal sheaves of fixed numerical type which meet the boundary divisor transversely. Some key motivations for the study of such an object come from enumerative geometry. For example, a common method used to address problems of curve counting in a given smooth variety is to degenerate this variety to a singular union of simpler irreducible components. The property of transversality is then crucial to ensure that all interesting behaviour of the ideal sheaves on the degenerate object occurs with support in the interior of the simpler irreducible components, which allows us to study it with more ease. One of the main difficulties with this approach is that often, as in this setting, the space of transverse ideal sheaves with respect to \(D\) is non-compact. Constructing the appropriate compactification will yield a space which is flat and proper over \(C\). In [14], Maulik and Ranganathan formulate the Donaldson-Thomas theory of the pair \((X,D)\), starting by constructing compactifications of the space of ideal sheaves in \(X\) transverse to \(D\).
We discuss [14] specifically with respect to the case which interests us here, namely
Figure 1: Tropicalisation of \(X\).
Figure 2: Tropicalisation of \(X_{0}\).
that of a degeneration \(X\to C\) as described above, where we seek to study the moduli space of ideal sheaves with fixed constant Hilbert polynomial \(m\), for some \(m\in\mathbb{N}\) with respect to the boundary divisor \(D=X_{0}\). The key idea is to construct the tropicalisation of \(X\), denoted \(\Sigma_{X}\), and a corresponding tropicalisation map, which is used to understand how to obtain the desired transversality properties in our compactifications.
**Tropicalisation map.** We may construct a tropicalisation map which takes points of \(X^{\circ}\) to \(\Sigma_{X}\), as in Section 1.4 of [14]. We recall the details of this map here. We assume that \(\mathcal{K}\) is a valued field extending \(k\). First, we take a point of \(X^{\circ}(\mathcal{K})\), given by some morphism \(\operatorname{Spec}\mathcal{K}\to X^{\circ}\). By the properness of \(X\), this extends to a morphism \(\operatorname{Spec}R\to X\) for some valuation ring \(R\). Now, let \(P\in X\) denote the image of the closed point by the second morphism. The stalk of the characteristic sheaf at \(P\) is given by \(\mathbb{N}^{r}\), where \(r\) is the number of linearly independent vanishing equations of \(D\) at the point \(P\). For example, in our context, if \(P\in Y_{1}\subset X_{0}\), then \(r=1\) and \(\mathbb{N}\) is generated by the function \(x\); if \(P\in Y_{1}\cap Y_{2}\), then \(r=2\) with \(\mathbb{N}^{2}\) generated by the functions \(x\) and \(y\); etc.
Each element of \(\mathbb{N}^{r}\) corresponds to a function \(f\) on \(X\) in the neighbourhood of \(P\) up to multiplication by a unit and we may then evaluate \(f\) with respect to the valuation map associated to \(\mathcal{K}\). This determines an element of
\[[\mathbb{N}^{r}\to\mathbb{R}_{\geq 0}]\in\operatorname{Hom}(\mathbb{N}^{r},\mathbb{R}_{\geq 0})\hookrightarrow\Sigma_{X}.\]
This gives rise to a morphism
\[\operatorname{trop}\colon X^{\circ}\longrightarrow\Sigma_{X}\]
called the tropicalisation map. Now let the valuation map \(\mathcal{K}\to\mathbb{R}\) be surjective and let \(Z^{\circ}\subset X^{\circ}\) be an open subscheme. We denote by \(\operatorname{trop}(Z^{\circ})\) the image of the map \(\operatorname{trop}\) restricted to \(Z^{\circ}(\mathcal{K})\). Maulik and Ranganathan are then able to show, based on previous work of Tevelev [13] for the toric case, that given such an open subscheme \(Z^{\circ}\subset X^{\circ}\), the subset \(\operatorname{trop}(Z^{\circ})\) gives rise to an expansion \(X^{\prime}\) of \(X\) in which the closure \(Z\) of \(Z^{\circ}\) has the required transversality properties. This gives us a convenient dictionary to move back and forth between the geometric and combinatorial points of view.
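To make the tropicalisation map concrete, here is a small illustrative computation (our own sketch, with hypothetical coordinates for the point): taking \(\mathcal{K}=k((s))\) with the \(s\)-adic valuation, a point of \(X^{\circ}\) limiting into the triple point is sent to the triple of valuations of \(x,y,z\):

```python
import sympy as sp

s = sp.symbols('s')

def val(f):
    # s-adic valuation: the order of vanishing of f at s = 0.
    return min(m[0] for m in sp.Poly(sp.expand(f), s).monoms())

# A K-point of X° over K = k((s)): pick x, y, z to be units times
# powers of s, and set t = xyz so the point lies on the family.
xP, yP, zP = s**2, s**3 * (1 + s), s
tP = xP * yP * zP

# trop sends the point to (val(x), val(y), val(z)), an integral point
# in the interior of the cone Σ_X ⊂ R³_{≥0}.
assert (val(xP), val(yP), val(zP)) == (2, 3, 1)

# Consistency with xyz = t: the point lies over t of valuation 2+3+1.
assert val(tP) == 6
```

Since all three valuations are positive, this point tropicalises into the interior of the cone, i.e. its limit meets the triple point \(Y_{1}\cap Y_{2}\cap Y_{3}\) before any expansion is made.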
The possible tropicalisations of such subschemes, corresponding to expansions on the geometric side, are captured on the combinatorial side by the notion of _1-complexes_ embedding into \(\Sigma_{X}\). See [14] for precise definitions.
**Existence and uniqueness of transverse limits.** Maulik and Ranganathan introduce notions of _dimensional transversality_ and _strong transversality_, which, in the specific case of Hilbert schemes of points, happen to be equivalent to Li-Wu stability (see Section 5.3 for a definition of this stability condition). In general for higher dimensional subschemes this will not be the case, however.
As mentioned above, for an open subscheme \(Z^{\circ}\subset X^{\circ}\), we may consider its image \(\operatorname{trop}(Z^{\circ})\) under the tropicalisation map. Now recall from Section 2.1 that a subdivision of the tropicalisation \(\Sigma_{X}\) defines an expansion of \(X\). The expansion corresponding to the subdivision given by \(\operatorname{trop}(Z^{\circ})\) in \(\Sigma_{X}\) will in general determine a birational modification, not necessarily a blow-up. We note here that while contracting components of a fibre
over a 1-parameter family will in general be flat, this is no longer the case when this operation is made over a larger base. It will therefore be necessary, for each possible \(\operatorname{trop}(Z^{\circ})\), to make a choice of polyhedral subdivision corresponding to an actual blow-up on \(X\). Maulik and Ranganathan prove that, given any such \(Z^{\circ}\), an expansion can be constructed from \(\operatorname{trop}(Z^{\circ})\) which has the required transversality property and good existence and uniqueness properties.
**Construction of the stacks of expansions.** Through these methods, one can obtain existence and uniqueness properties for these flat limits. To build a moduli space of such subschemes, Maulik and Ranganathan start by constructing a moduli space of possible expansions arising from Tevelev's procedure. Let us denote the set of isomorphism classes of 1-complexes which embed into \(\Sigma_{X}\) by \(|T(\Sigma_{X})|\). Some subtleties arise at this point, namely that in general the space constructed will not be representable as a logarithmic algebraic stack. This can be seen through the fact that the category of logarithmic algebraic stacks is equivalent to the category of cone stacks, but \(|T(\Sigma_{X})|\) cannot in general be given a proper cone structure. In order to give it a cone structure, Maulik and Ranganathan study the spaces of maps \(\mathbb{X}_{G}\) from the graphs \(G\) associated to 1-complexes in \(|T(\Sigma_{X})|\) to \(\Sigma_{X}\), and identify maps which have the same image. By taking appropriate subdivisions of the objects \(\mathbb{X}_{G}\) and identifying them in the right way, they obtain a _moduli space of tropical expansions_ \(T\), which has the desired cone structure.
This operation results in non-uniqueness, as we are making a choice of polyhedral subdivision and there is in general no canonical choice.
**Proper Deligne-Mumford stacks.** In order to construct the universal family \(\mathfrak{Y}\subset T\times\Sigma\), some additional choices must be made. Indeed, as mentioned above, \(\operatorname{trop}(Z^{\circ})\) does not in general define a blow-up, so, when fitting the expansions we constructed together into one large family over a larger base, we must modify these expansions to ensure flatness. Here this is resolved by adding distinguished vertices to the relevant complexes. These added vertices will be 2-valent vertices along edges of the 1-complexes parameterised by \(T\) and we call them _tube vertices_. Geometrically, they look like \(\mathbb{P}^{1}\)-bundles over curves in \(X_{0}\) (where we took \(X\) to be a family of surfaces). Again, this operation is not canonical and results in non-uniqueness.
The addition of these tube vertices in the tropicalisation means that there are more potential components in each expansion, which interferes with the previously set up uniqueness results. Indeed, recall that \(\operatorname{trop}(Z^{\circ})\) gave us exactly the right number of vertices in the dual complex in order for each family of subschemes \(Z^{\circ}\subset X^{\circ}\) to have a unique limit representative. Therefore, to reflect this, _Donaldson-Thomas stability_ asks for subschemes to be DT stable if and only if they are tube schemes precisely along the tube components. We say that a 1-dimensional subscheme is a _tube_ if it is the schematic preimage of a zero-dimensional subscheme in \(D\). In the case of Hilbert schemes of points, this condition will translate simply to a 0-dimensional subscheme \(Z\) being DT stable if and only if no tube component contains a point of the support of \(Z\) and every other irreducible component expanded out by our blow-ups contains at least one point of the support of \(Z\).
Maulik and Ranganathan define a subscheme to be _stable_ if it is strongly transverse
and DT stable. For fixed numerical invariants the substack of stable subschemes in the space of expansions forms a Deligne-Mumford, proper, separated stack of finite type over \(C\).
**Comparison with the results of this paper.** The construction we present in this paper has the surprising property that we do not need to label any components as tubes in order for the stack of stable objects we define to be proper. This is an artifact of the specific choices of blow-ups to be included in our expanded degenerations. The work of Maulik and Ranganathan shows us that this is not expected in general. As mentioned in Section 1.3, we will discuss in an upcoming paper how to construct proper stacks of stable objects in cases where different choices of expansions are made and it becomes necessary for us as well to introduce a Donaldson-Thomas stability condition.
## 3 The expanded construction
In this section we construct explicit expanded degenerations \(X[n]\) out of a 1-parameter family \(X\to C\) by expanding the base and making sequences of blow-ups on the expanded family. As we will see, these support a global action by the torus \(G\coloneqq\mathbb{G}_{m}^{n}\). We construct these spaces as schemes here. Later, in Section 5, we give a stack construction building upon these schemes, in which we impose additional equivalence relations which essentially declare any two fibres that look identical to be equivalent. We will touch more upon why this is necessary in Section 5.
**Setup and assumptions.** As before, let \(X\to C\) be a family of surfaces over a curve isomorphic to \(\mathbb{A}^{1}\), where \(X\) is given in étale local coordinates by \(\operatorname{Spec}k[x,y,z,t]/(xyz-t)\). We denote by \(X_{0}\) the special fibre and by \(Y_{1}\), \(Y_{2}\) and \(Y_{3}\) the irreducible components of this special fibre, given locally by \(x=0,y=0\) and \(z=0\) respectively. Figure 3 shows a copy of the special fibre \(X_{0}\) both from the geometric point of view, on the left, and the tropical point of view, on the right.
Figure 3: Geometric and tropical pictures of the special fibre \(X_{0}\).
**Output of expanded construction.** The expanded degeneration \(X[n]\to C[n]\) which we construct in this section has the following properties:
* The morphism \(X[n]\to C[n]\) is projective and \(G\)-equivariant.
* Étale locally, \(X[n]\) is a subvariety of \((X\times_{\mathbb{A}^{1}}\mathbb{A}^{n+1})\times(\mathbb{P}^{1})^{2n}\).
### The blow-ups
In the following, we construct expanded degenerations by enlarging the base \(C\) and making sequences of blow-ups in the family over this larger base. We start by taking a copy of \(\mathbb{A}^{n+1}\), with elements labelled \((t_{1},\dots,t_{n+1})\in\mathbb{A}^{n+1}.\) Throughout this work, we shall refer to the entries \(t_{i}\) as _basis directions_. Now, let \(X\times_{\mathbb{A}^{1}}\mathbb{A}^{n+1}\) be the fibre product given by the map \(X\to C\cong\mathbb{A}^{1}\) and the product
\[(t_{1},\dots,t_{n+1})\longmapsto t_{1}\cdots t_{n+1}.\]
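For \(n=1\) the pulled-back family is, in the local model, \(xyz=t_{1}t_{2}\) in \(\mathbb{A}^{5}\). A quick sympy check (an illustrative sketch we added, not part of the construction) confirms that the singularities the blow-ups below are designed to resolve sit exactly over \(t_{1}=t_{2}=0\):

```python
import sympy as sp

x, y, z, t1, t2 = sp.symbols('x y z t1 t2')

# n = 1: the local model of X ×_{A^1} A^2 is F = 0 in A^5.
F = x*y*z - t1*t2
grad = [sp.diff(F, v) for v in (x, y, z, t1, t2)]

# dF/dt1 = -t2 and dF/dt2 = -t1, so away from t1 = t2 = 0 the
# total space is smooth.
assert sp.diff(F, t1) == -t2 and sp.diff(F, t2) == -t1

# Over t1 = t2 = 0 the fibre is X_0, and at a point of a double
# locus such as Y_1 ∩ Y_2 (x = y = 0, z generic) the equation and
# all partial derivatives vanish: the total space is singular there.
pt = {x: 0, y: 0, t1: 0, t2: 0}
assert F.subs(pt) == 0
assert all(g.subs(pt) == 0 for g in grad)
```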
In this expanded degeneration construction, we will be blowing up schemes along Weil divisors. A consequence of the way these blow-ups are defined is that the blow-up morphisms contract only components of codimension at least 2.
**First blow-up of the \(Y_{1}\) component.** We start by blowing up \(Y_{1}\times_{\mathbb{A}^{1}}V(t_{1})\) inside \(X\times_{\mathbb{A}^{1}}\mathbb{A}^{n+1}\), where \(V(t_{i})\) denotes the locus where \(t_{i}=0\).
_Notation._ We name the space resulting from this blow-up \(X_{(1,0)}\) to signify we have blown up the component \(Y_{1}\) once and the component \(Y_{2}\) zero times.
We can describe this blow-up locally in the following way. The ideal of the blow-up is \(I_{1}=\langle x,t_{1}\rangle\). Globally this will correspond to an ideal sheaf \(\mathcal{I}_{1}\). Then there is a surjective map of graded rings
\[A[x_{0}^{(1)},x_{1}^{(1)}]\longrightarrow S_{1}=\bigoplus_{d\geq 0}I_{1}^{d}\]
which maps
\[x_{0}^{(1)}\longmapsto x\ \ \text{and}\ \ x_{1}^{(1)}\longmapsto t_{1},\]
where \(A\coloneqq k[x,y,z,t_{1},\dots,t_{n+1}]/(xyz-t_{1}\cdots t_{n+1})\). This induces an embedding
\[\operatorname{Proj}(S_{1})\hookrightarrow\operatorname{Proj}A[x_{0}^{(1)},x_{1}^{(1)}]=\mathbb{P}^{1}\times\operatorname{Spec}A\]
and \(\operatorname{Proj}(S_{1})\), i.e. our blow-up, is cut out in \(\mathbb{P}^{1}\times\operatorname{Spec}A\) by the equations
\[x_{0}^{(1)}t_{1} =xx_{1}^{(1)}\] \[x_{0}^{(1)}yz =x_{1}^{(1)}t_{2}\cdots t_{n+1}.\]
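_Example._ On the two standard charts of the projective coordinates, these equations can be solved explicitly. On the chart where \(x_{1}^{(1)}\neq 0\) (set \(x_{1}^{(1)}=1\)), the first equation eliminates \(x=x_{0}^{(1)}t_{1}\) and the remaining relation is

\[x_{0}^{(1)}yz=t_{2}\cdots t_{n+1},\]

which is again of the same local form, now with \(t_{1}\) a free parameter. On the chart where \(x_{0}^{(1)}\neq 0\) (set \(x_{0}^{(1)}=1\)), the first equation eliminates \(t_{1}=xx_{1}^{(1)}\) and the remaining relation is \(yz=x_{1}^{(1)}t_{2}\cdots t_{n+1}\).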
**Proposition 3.1.1**.: \(X_{(1,0)}\) _is isomorphic to \(X\times_{\mathbb{A}^{1}}\mathbb{A}^{n+1}\) away from the locus where \(t_{1}=t_{i}=0\), for any \(i\neq 1\)._
Proof.: Let \(X_{(1,0)}\to\mathbb{A}^{n+1}\) be the natural projection. The fibres above points \((t_{1},\dots,t_{n+1})\) with \(t_{1}\neq 0\) are unchanged by the blow-up, as are the fibres where \(t_{1}=0\) and all other \(t_{i}\) are nonzero, because the total space is still smooth at all points of these fibres. However, when \(t_{1}=0\) and at least one other \(t_{i}\) is zero, singularities of the total space appear in the fibre of \(X\times_{\mathbb{A}^{1}}\mathbb{A}^{n+1}\), and the blow-up introduces a new component around the \(Y_{1}\) component.
_Notation._ We denote by \(\Delta_{1}^{(1)}\) the new component introduced by the blow-up which is described in the proof of Proposition 3.1.1 above.
This can be seen in Figure 4, where the added red vertices in the tropical picture correspond to the two irreducible components of \(\Delta_{1}^{(1)}\) and the edge connecting them corresponds to the intersection of these irreducible components.
**Further blow-ups of the \(Y_{1}\) component.** Let \(b_{(1,0)}\colon X_{(1,0)}\to X\times_{\mathbb{A}^{1}}\mathbb{A}^{n+1}\) be the map defined by the first blow-up given above. We then proceed to blow-up \(b_{(1,0)}^{*}(Y_{1}\times_{\mathbb{A}^{1}}V(t_{2}))\) inside \(X_{(1,0)}\). We name the resulting space \(X_{(2,0)}\) and the composition of both blow-ups is denoted \(b_{(2,0)}\colon X_{(2,0)}\to X\times_{\mathbb{A}^{1}}\mathbb{A}^{n+1}\). We continue to blow up each \(b_{(k-1,0)}^{*}(Y_{1}\times_{\mathbb{A}^{1}}V(t_{k}))\) inside \(X_{(k-1,0)}\) for each \(k\leq n\). The resulting space is denoted \(X_{(n,0)}\). Finally, we denote by
\[\beta_{(k,0)}^{1}\colon X_{(k,0)}\longrightarrow X_{(k-1,0)}\]
the morphisms corresponding to each individual blow-up. We therefore have the equality
\[\beta_{(k,0)}^{1}\circ\dots\circ\beta_{(1,0)}^{1}=b_{(k,0)}.\]
We now fix the following terminology.
**Definition 3.1.2**.: We say that a dimension 2 component in a fibre of \(X_{(k,0)}\to C\times_{\mathbb{A}^{1}}\mathbb{A}^{n+1}\) is a _\(\Delta_{1}\)-component_ if it is contracted by the morphism \(\beta_{(i,0)}^{1}\) for some \(i\leq k\). Moreover if a \(\Delta_{1}\)-component in a fibre is contracted by such a map then we say it is _expanded out_ in this fibre.
Figure 4: Geometric and tropical pictures of a fibre in \(X_{(1,0)}\) where \(t_{1}=t_{i}=0\).
We label by \(\Delta_{1}^{(k)}\) the \(\Delta_{1}\)-component resulting from the \(k\)-th blow-up. The fibre where \(t_{i}=0\) for all \(i\in\{1,\dots,n+1\}\) has exactly \(n\) expanded \(\Delta_{1}\)-components. The equations of the blow-ups in local coordinates are as follows:
\[x_{0}^{(1)}t_{1}=xx_{1}^{(1)},\] \[x_{1}^{(k-1)}x_{0}^{(k)}t_{k}=x_{0}^{(k-1)}x_{1}^{(k)},\qquad \text{ for }\ 2\leq k\leq n, \tag{3.1.1}\] \[x_{0}^{(n)}yz=x_{1}^{(n)}t_{n+1}.\]
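As a quick symbolic sanity check of the equations (3.1.1) — a sketch for \(n=2\), in the chart where all \(x_{1}^{(k)}\neq 0\), not part of the construction itself — one can solve the relations step by step and recover the original equation \(xyz=t_{1}t_{2}t_{3}\):

```python
from sympy import symbols, simplify

# n = 2, chart x1^(1) = x1^(2) = 1 of the iterated blow-up.
# Solve the local equations (3.1.1) step by step and check that
# they recover the original relation x*y*z = t1*t2*t3.
y, z, t1, t2, t3 = symbols('y z t1 t2 t3', nonzero=True)

x0_2 = t3 / (y * z)   # from x0^(2) * y * z = x1^(2) * t3
x0_1 = x0_2 * t2      # from x1^(1) * x0^(2) * t2 = x0^(1) * x1^(2)
x = x0_1 * t1         # from x0^(1) * t1 = x * x1^(1)

assert simplify(x * y * z - t1 * t2 * t3) == 0
```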
_Remark 3.1.3_.: If we restrict \(X_{0}\) to only the components \(Y_{1}\) and \(Y_{2}\), i.e. restrict the original degeneration to \(\operatorname{Spec}k[x,y,z,t]/(xy-t)\), we get back exactly the blow-ups of Gulbrandsen, Halle and Hulek [1].
In fibres of the construction where \(\Delta_{1}^{(k)}\), for some \(k\), is not expanded out, i.e. not contracted by some map \(\beta_{(i,0)}^{1}\), we will want to think of it in the following way.
**Definition 3.1.4**.: When \(t_{k}=0\) and all other \(t_{i}\) are nonzero, we consider \(\Delta_{1}^{(k)}\) and all \(\Delta_{1}^{(j)}\) for \(j\geq k\) as being _equal to_\(Y_{1}\), meaning that the projective coordinates introduced by the \(j\)-th blow-up are proportional to \(1/yz\). This follows from the equality
\[x_{0}^{(j)}yz=x_{1}^{(j)}t_{j+1}\cdots t_{n+1},\]
obtained from the above equations of the blow-ups. Similarly, the components \(\Delta_{1}^{(j)}\) with \(j<k\) are considered to be _equal to_ the union \(Y_{2}\cup Y_{3}\), which follows from the equality
\[x_{0}^{(j)}t_{1}\cdots t_{j}=xx_{1}^{(j)},\]
obtained from the equations of the blow-up. When \(t_{n+1}=0\) and all other \(t_{k}\) are nonzero, then all \(\Delta_{1}^{(k)}\) are _equal to_ the union \(Y_{2}\cup Y_{3}\).
**Blow-ups of the \(Y_{2}\) component.** For the component \(Y_{2}\) we can make similar definitions to the above. We blow up \(b_{(n,0)}^{*}Y_{2}\times_{\mathbb{A}^{1}}V(t_{n+1})\) in \(X_{(n,0)}\) and name the resulting space \(X_{(n,1)}\). Let \(b_{(n,k)}\colon X_{(n,k)}\to X\times_{\mathbb{A}^{1}}\mathbb{A}^{n+1}\) be the composition of the \(n\) blow-ups of \(Y_{1}\) and the first \(k\) blow-ups of \(Y_{2}\) on \(X_{(n,0)}\). Similarly to the above, but with the order of the basis directions reversed, we blow up \(b_{(n,k-1)}^{*}(Y_{2}\times_{\mathbb{A}^{1}}V(t_{n+2-k}))\) in \(X_{(n,k-1)}\) for each \(k\leq n\).
The equations of the blow-ups in local coordinates are as follows, where \((y_{0}^{(k)}:y_{1}^{(k)})\) are the coordinates of the \(\mathbb{P}^{1}\) introduced by the \(k\)-th blow-up:
\[y_{0}^{(1)}t_{n+1}=yy_{1}^{(1)},\] \[y_{1}^{(k-1)}y_{0}^{(k)}t_{n+2-k}=y_{0}^{(k-1)}y_{1}^{(k)}\ \ \text{for }\ 2\leq k\leq n, \tag{3.1.2}\] \[y_{0}^{(n)}xz=y_{1}^{(n)}t_{1}\] \[x_{0}^{(k)}y_{0}^{(n+1-k)}z=x_{1}^{(k)}y_{1}^{(n+1-k)}.\]
_Notation._ The components introduced by these new blow-ups are labelled \(\Delta_{2}^{(k)}\). To simplify notation, we will denote the base \(\mathbb{A}^{n+1}\times_{\mathbb{A}^{1}}C\) by \(C[n]\), the expanded construction \(X_{(n,n)}\) by \(X[n]\) and the natural projection to the original family \(X\) by \(\pi:X[n]\to X\).
We have blow-up morphisms
\[\beta^{1}_{(i,j)}\colon X_{(i,j)} \longrightarrow X_{(i-1,j)},\] \[\beta^{2}_{(i,j)}\colon X_{(i,j)} \longrightarrow X_{(i,j-1)},\]
corresponding to each individual blow-up of a pullback of the \(Y_{1}\)-component and \(Y_{2}\)-component respectively. The composition of all the blow-up morphisms is denoted
\[b\coloneqq\beta^{2}_{(n,n)}\circ\cdots\circ\beta^{2}_{(n,1)}\circ\beta^{1}_{(n,0)}\circ\cdots\circ\beta^{1}_{(1,0)}\colon X[n]\to X\times_{\mathbb{A}^{1}}\mathbb{A}^{n+1}.\]
As the following proposition shows, the spaces \(X_{(i,j)}\) are well-defined, as the order in which we make the blow-ups, i.e. expand out the \(\Delta_{1}\) or the \(\Delta_{2}\)-components first, makes no difference. We can therefore express the space \(X_{(m_{1},m_{2})}\) as the space \(X_{(m_{1},0)}\) on which we perform a sequence of blow-ups of the pullback of \(Y_{2}\) or as the space \(X_{(0,m_{2})}\) on which we perform a sequence of blow-ups of the pullback of \(Y_{1}\), etc.
**Proposition 3.1.5**.: _The following blow-up diagram commutes_
Proof.: We show that the space \(X[1]=X_{(1,1)}\) can be constructed by first blowing up along \(Y_{1}\) and then \(Y_{2}\) or by reversing the order of these operations. Indeed, if we start by blowing up \(Y_{1}\times_{\mathbb{A}^{1}}V(t_{1})\) in \(X\times_{\mathbb{A}^{1}}\mathbb{A}^{n+1}\), we obtain the etale local equations (3.1.1). This gives us the space \(X_{(1,0)}\). Then blowing up \(b^{*}_{(1,0)}Y_{2}\times_{\mathbb{A}^{1}}V(t_{2})\) in \(X_{(1,0)}\) yields the etale local equations (3.1.2) and by definition this gives us the space \(X_{(1,1)}\).
Now, if we start by blowing up \(Y_{2}\times_{\mathbb{A}^{1}}V(t_{2})\) in \(X\times_{\mathbb{A}^{1}}\mathbb{A}^{n+1}\), we obtain etale local equations
\[y_{0}^{(1)}t_{n+1}=yy_{1}^{(1)},\] \[y_{0}^{(1)}xz=y_{1}^{(1)}t_{1}\]
and this yields the space \(X_{(0,1)}\). If we then blow up \(b^{*}_{(0,1)}Y_{1}\times_{\mathbb{A}^{1}}V(t_{1})\) in \(X_{(0,1)}\), we shall obtain the equations
\[x_{0}^{(1)}y_{0}^{(1)}z=x_{1}^{(1)}y_{1}^{(1)}\] \[x_{0}^{(1)}t_{1}=xx_{1}^{(1)},\] \[x_{0}^{(1)}yz=x_{1}^{(1)}t_{2}.\]
But these are exactly the equations (3.1.1) and (3.1.2), so the resulting space is again \(X[1]=X_{(1,1)}\). This argument can be easily generalised to \(X[n]\) for any \(n\).
**Proposition 3.1.6**.: _If we take \(X\to C\) to be the etale local model_
\[\operatorname{Spec}k[x,y,z,t]/(xyz-t)\longrightarrow\operatorname{Spec}k[t],\]
_the corresponding scheme \(X[n]\) obtained after the sequence of blow-ups \(b\) is a subvariety of \((X\times_{\mathbb{A}^{1}}\mathbb{A}^{n+1})\times(\mathbb{P}^{1})^{2n}\) cut out by the local equations (3.1.1) and (3.1.2)._
Proof.: This is immediate from the local description of the blow-ups above.
**Proposition 3.1.7**.: _The family \(X[n]\to C[n]\) thus constructed is projective._
Proof.: The morphism \(X\times_{\mathbb{A}^{1}}\mathbb{A}^{n+1}\to C[n]\) must be projective since \(X\to C\) is projective. Then \(X[n]\to X\times_{\mathbb{A}^{1}}\mathbb{A}^{n+1}\) is just a sequence of blow-ups along Weil divisors, hence projective. This proves projectivity of the morphism \(X[n]\to C[n]\).
_Remark 3.1.8_.: The issue with projectivity in Proposition 1.10 of [10] only arises if the local descriptions of the blow-ups they use to create the family \(X[n]\to C[n]\) do not glue globally to define blow-ups.
We now extend the definition of \(\Delta_{1}\)-components to the schemes \(X[n]\) and fix some additional terminology.
**Definition 3.1.9**.: We say that a dimension 2 component of \(X[n]\to C[n]\) is a \(\Delta_{i}\)-_component_ if it is contracted by the morphism \(\beta^{i}_{(j,k)}\) for some \(j,k\). Moreover, if a \(\Delta_{i}\)-component in a fibre is contracted by such a map, then we say it is _expanded out in this fibre_. We say that a dimension 2 component of \(X[n]\) is a \(\Delta\)-_component_ if it is a \(\Delta_{i}\)-component for some \(i\). If it is expanded out in some fibre we may alternatively refer to it as an _expanded component_. Similarly, we may extend Definition 3.1.4 to say that a \(\Delta\)-component is _equal to_ a component \(W\) of a fibre of \(X[n]\) if the projective coordinates associated to this \(\Delta\)-component are proportional to the non-vanishing coordinates of \(W\).
**Definition 3.1.10**.: We say that a \(\Delta_{i}\)-component is of _pure type_ if it is not equal to any \(\Delta_{j}\)-component for \(j\neq i\). Otherwise we say it is of _mixed type_.
**Description of fibres of \(X[n]\to C[n]\).** In order to understand what these blow-ups look like, we describe the fibres of the scheme \(X[n]\) over \(C[n]\) where certain basis directions vanish.
_Only one basis direction vanishes._ If only one of the \(t_{i}=0\) and all other basis directions are nonzero, then a fibre over such a point in the base is just a copy of the special fibre \(X_{0}\).
_Two basis directions vanish._ Here, we consider fibres where \(t_{i}=t_{j}=0\) for some \(i<j\) and no other \(t_{k}=0\). The blow-ups of pullbacks of the \(Y_{1}\)-component cause exactly one \(\Delta_{1}\)-component to be expanded in such a fibre, and this expanded component is given by \(\Delta_{1}^{(i)}=\ldots=\Delta_{1}^{(j-1)}\). In this case, the singularities of the total space occurring at the intersection of \(Y_{1}\) and \(Y_{2}\) have already been resolved by expanding out this \(\Delta_{1}\)-component. As the blow-ups of pullbacks of the \(Y_{2}\)-component also cause one \(\Delta_{2}\)-component to be expanded in this fibre, given by \(\Delta_{2}^{(n+2-j)}=\ldots=\Delta_{2}^{(n+1-i)}\), we therefore have
\[\Delta_{1}^{(i)}=\ldots=\Delta_{1}^{(j-1)}=\Delta_{2}^{(n+2-j)}=\ldots=\Delta_ {2}^{(n+1-i)}\]
in the \(\pi^{*}((Y_{1}\cap Y_{2})^{\circ})\) locus of the fibre. This can be easily deduced from studying the equations of the blow-ups. In the \(\pi^{*}((Y_{1}\cap Y_{3})^{\circ})\) locus of the fibre, we see a single expanded component of pure type given by \(\Delta_{1}^{(i)}=\ldots=\Delta_{1}^{(j-1)}\). Similarly, in the \(\pi^{*}((Y_{2}\cap Y_{3})^{\circ})\) locus of the fibre, we see a single expanded component of pure type given by \(\Delta_{2}^{(n+2-j)}=\ldots=\Delta_{2}^{(n+1-i)}\). Finally, the component \(\Delta_{1}^{(k)}\) is equal to the union \(Y_{2}\cup Y_{3}\) for \(k<i\) and \(\Delta_{1}^{(l)}\) is equal to the component \(Y_{1}\) if \(l>j-1\). The situation for the \(\Delta_{2}\) components is similar. This can be seen in Figure 5.
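_Example._ Take \(n=2\) and a fibre of \(X[2]\) over a point of \(C[2]\) with \(t_{1}=t_{2}=0\) and \(t_{3}\neq 0\), so that \(i=1\) and \(j=2\). The unique expanded \(\Delta_{1}\)-component is \(\Delta_{1}^{(1)}\) and the unique expanded \(\Delta_{2}\)-component is \(\Delta_{2}^{(2)}\), and along the \(\pi^{*}((Y_{1}\cap Y_{2})^{\circ})\) locus they agree:

\[\Delta_{1}^{(1)}=\Delta_{2}^{(n+1-1)}=\Delta_{2}^{(2)}.\]

The remaining component \(\Delta_{1}^{(2)}\) is equal to \(Y_{1}\), since \(2>j-1=1\).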
Before we continue we fix some terminology which will help us describe the expanded components.
**Definition 3.1.11**.: We refer to an irreducible component of a \(\Delta\)-component as a _bubble_. The notions of two bubbles being _equal_ and a bubble being _expanded out_ in a certain fibre are as in Definitions 3.1.4 and 3.1.9.
_Three basis directions vanish._ When \(t_{i}=t_{j}=t_{k}=0\), where \(i<j<k\), and all other basis directions are non-zero, then in the locus \(\pi^{*}((Y_{1}\cap Y_{2})^{\circ})\), we see exactly two expanded components, which are both of mixed type. Note that, more generally in any
Figure 5: Geometric and tropical picture at \(t_{i}=t_{j}=0\) in \(X[n]\).
Figure 6: Geometric and tropical picture at \(t_{i}=t_{j}=t_{k}=0\) in \(X[n]\).
fibre of \(X[n]\), all expanded components in the \(\pi^{*}((Y_{1}\cap Y_{2})^{\circ})\) locus are of mixed type. This is because, in any fibre of \(X[n]\), we have that \(\Delta_{1}^{(l)}=\Delta_{2}^{(n+1-l)}\) in the \(\pi^{*}((Y_{1}\cap Y_{2})^{\circ})\) locus for all \(l\).
In the example given here, the two bubbles in the \(\pi^{*}((Y_{1}\cap Y_{2})^{\circ})\) locus can be described as follows. The bubble which intersects \(Y_{1}\) non-trivially is given by \(\Delta_{1}^{(i)}=\ldots=\Delta_{1}^{(j-1)}\). By the above, each of these \(\Delta_{1}\)-components is equivalent to a \(\Delta_{2}\)-component in the \(\pi^{*}((Y_{1}\cap Y_{2})^{\circ})\) locus, so this bubble is equivalently given by \(\Delta_{2}^{(n+2-j)}=\ldots=\Delta_{2}^{(n+1-i)}\). The second bubble in this locus, which intersects \(Y_{2}\) non-trivially, is given by
\[\Delta_{1}^{(j)}=\ldots=\Delta_{1}^{(k-1)}=\Delta_{2}^{(n+2-k)}=\ldots= \Delta_{2}^{(n+1-j)}.\]
There is a single bubble expanded out in the \(\pi^{*}(Y_{1}\cap Y_{2}\cap Y_{3})\) locus. This is a \(\mathbb{P}^{1}\times\mathbb{P}^{1}\), given by the meeting of the \(\Delta_{1}^{(i)}=\ldots=\Delta_{1}^{(j-1)}\) and \(\Delta_{2}^{(n+2-k)}=\ldots=\Delta_{2}^{(n+1-j)}\) components. Finally, in the \(\pi^{*}((Y_{1}\cap Y_{3})^{\circ})\) locus we see exactly two bubbles given by the two distinct expanded \(\Delta_{1}\)-components and in the \(\pi^{*}((Y_{2}\cap Y_{3})^{\circ})\) locus we see also two bubbles given by the two distinct expanded \(\Delta_{2}\)-components. This can be seen in Figure 6. The intersection of the two edges in the interior of the triangle in the tropical picture creates a new vertex, corresponding to the new bubble in the \(\pi^{*}(Y_{1}\cap Y_{2}\cap Y_{3})\) locus. The other modified special fibres in \(X[n]\) can be described similarly.
Now, we note that there is a natural inclusion
\[C[n] \hookrightarrow C[n+1] \tag{3.1.3}\] \[(t_{1},\ldots,t_{n+1}) \longmapsto(t_{1},\ldots,t_{n+1},1),\]
which, in turn, induces a natural inclusion
\[X[n] \hookrightarrow X[n+1].\]
Under these inclusions, we may consider the space \(X[n]\) as a locus in a larger space \(X[n+k]\) where all \(t_{i}\neq 0\) for \(i>n+1\).
**The group action.** We may define a group action on \(X[n]\) very similarly to [1]. Let \(G\subset\operatorname{SL}(n+1)\) be the maximal diagonal torus. We have \(\mathbb{G}_{m}^{n}\cong G\subset\mathbb{G}_{m}^{n+1}\), where we can view an element of \(G\) as an \((n+1)\)-tuple \((\sigma_{1},\ldots,\sigma_{n+1})\) such that \(\prod_{i}\sigma_{i}=1\). This acts naturally on \(\mathbb{A}^{n+1}\), which induces an action on \(C[n]\). The isomorphism \(\mathbb{G}_{m}^{n}\cong G\) is given by
\[(\tau_{1},\ldots,\tau_{n})\longmapsto(\tau_{1},\tau_{1}^{-1}\tau_{2},\ldots,\tau_{n-1}^{-1}\tau_{n},\tau_{n}^{-1}).\]
We shall use the notation \((\tau_{1},\ldots,\tau_{n})\) to describe elements of \(G\) throughout this work.
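Note that the image of this map indeed lies in \(G\), since the product of its entries telescopes:

\[\tau_{1}\cdot(\tau_{1}^{-1}\tau_{2})\cdots(\tau_{n-1}^{-1}\tau_{n})\cdot\tau_{n}^{-1}=1.\]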
**Proposition 3.1.12**.: _There is a unique \(G\)-action on \(X[n]\) such that \(X[n]\to X\times_{\mathbb{A}^{1}}\mathbb{A}^{n+1}\) is equivariant with respect to the natural action of \(G\) on \(\mathbb{A}^{n+1}\)._
_This action is the restriction of the action on \((X\times_{\mathbb{A}^{1}}\mathbb{A}^{n+1})\times(\mathbb{P}^{1})^{2n}\), which is trivial on \(X\), acts by_
\[t_{1} \longmapsto\tau_{1}^{-1}t_{1}\] \[t_{k} \longmapsto\tau_{k}^{-1}\tau_{k-1}t_{k},\quad\text{for }1<k\leq n\] \[t_{n+1} \longmapsto\tau_{n}t_{n+1}\]
_on the basis directions, and acts by_
\[(x_{0}^{(k)}:x_{1}^{(k)})\longmapsto(\tau_{k}x_{0}^{(k)}:x_{1}^{(k)})\] \[(y_{0}^{(k)}:y_{1}^{(k)})\longmapsto(y_{0}^{(k)}:\tau_{n+1-k}y_{1}^ {(k)}).\]
_on the \(\Delta\)-components._
Proof.: This follows immediately from [1].
Note that the group action on the \((y_{0}^{(k)}:y_{1}^{(k)})\) coordinates follows immediately from the fact that \(\Delta_{1}^{(k)}=\Delta_{2}^{(n+1-k)}\) in the \(\pi^{*}((Y_{1}\cap Y_{2})^{\circ})\) locus. Given the equations of the blow-ups above, there is no other possible choice of action such that the map \(b\colon X[n]\to X\times_{\mathbb{A}^{1}}\mathbb{A}^{n+1}\) is \(G\)-equivariant (the equations must be invariant under the group action). Note also that the natural inclusions
\[X[n]\hookrightarrow X[n+k],\]
which we described above, are equivariant under the group action.
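One can verify directly that this action preserves the local equations of the blow-ups: each equation of (3.1.1) is rescaled by a common monomial in the \(\tau_{k}\), so its zero locus is fixed. A small symbolic sketch of this check for \(n=2\):

```python
from sympy import symbols, simplify

# n = 2: the torus action of Proposition 3.1.12 rescales each local
# blow-up equation (3.1.1) by one common monomial in tau1, tau2, so
# the zero locus of every equation is preserved.
x, y, z, t1, t2, t3 = symbols('x y z t1 t2 t3', nonzero=True)
x0_1, x1_1, x0_2, x1_2 = symbols('x0_1 x1_1 x0_2 x1_2', nonzero=True)
tau1, tau2 = symbols('tau1 tau2', nonzero=True)

# action on the basis directions and on the projective coordinates
act = {t1: t1 / tau1, t2: tau1 * t2 / tau2, t3: tau2 * t3,
       x0_1: tau1 * x0_1, x0_2: tau2 * x0_2}  # x, y, z, x1_k are fixed

eqs = [(x0_1 * t1, x * x1_1),
       (x1_1 * x0_2 * t2, x0_1 * x1_2),
       (x0_2 * y * z, x1_2 * t3)]

for lhs, rhs in eqs:
    # both sides must pick up the same scalar factor under the action
    assert simplify(lhs.subs(act) / lhs - rhs.subs(act) / rhs) == 0
```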
**Lemma 3.1.13**.: _We have the isomorphism_
\[H^{0}(C[n],\mathcal{O}_{C[n]})^{G}\cong k[t],\]
_where \(H^{0}(C[n],\mathcal{O}_{C[n]})^{G}\) denotes the space of \(G\)-invariant sections of \(H^{0}(C[n],\mathcal{O}_{C[n]})\)._
Proof.: This is immediate from the above description of the group action.
_Remark 3.1.14_.: We abuse notation slightly by referring to the group acting on \(X[n]\) by \(G\), instead of \(G[n]\). It should always be clear from the context what group \(G\) is meant.
### Embedding into product of projective bundles
In this section, we show how \(X[n]\) can be embedded into a fibre product of projective bundles, which locally corresponds to the embedding in \((X\times_{\mathbb{A}^{1}}\mathbb{A}^{n+1})\times(\mathbb{P}^{1})^{2n}\). The \(G\)-action on \(X[n]\) may be expressed as a restriction of a global action on this product of projective bundles. We will then be able to define a \(G\)-linearised ample line bundle \(\mathcal{L}\) on \(X[n]\) by taking the tautological bundle of this fibre product of projective bundles. From this line bundle we will then construct a second line bundle \(\mathcal{M}\) on the relative Hilbert scheme of \(m\) points \(H^{m}_{[n]}\coloneqq\operatorname{Hilb}^{m}(X[n]/C[n])\) with an induced \(G\)-linearisation.
Let \(\operatorname{pr}_{1}\) and \(\operatorname{pr}_{2}\) be the projections of \(X\times_{\mathbb{A}^{1}}\mathbb{A}^{n+1}\) to \(X\) and \(\mathbb{A}^{n+1}\) respectively. Similarly to [1], we define vector bundles
\[\mathcal{F}_{1}^{(k)} =\operatorname{pr}_{1}^{*}\mathcal{O}_{X}(-Y_{1})\oplus \operatorname{pr}_{2}^{*}\mathcal{O}_{\mathbb{A}^{n+1}}(-V(t_{k}))\] \[\mathcal{F}_{2}^{(k)} =\operatorname{pr}_{1}^{*}\mathcal{O}_{X}(-Y_{2})\oplus \operatorname{pr}_{2}^{*}\mathcal{O}_{\mathbb{A}^{n+1}}(-V(t_{n+2-k}))\]
on \(X\times_{\mathbb{A}^{1}}\mathbb{A}^{n+1}\).
**Lemma 3.2.1**.: _There is an embedding_
\[X[n]\hookrightarrow\prod_{i,j}\mathbb{P}(\mathcal{F}_{i}^{(j)}),\]
_where the product of projective bundles \(\prod_{i,j}\mathbb{P}(\mathcal{F}_{i}^{(j)})\) is constructed as a fibre product over \(X\times_{\mathbb{A}^{1}}\mathbb{A}^{n+1}\)._
Proof.: Let \(\mathcal{I}_{1}^{(k)},\mathcal{I}_{2}^{(k)}\) be the ideal sheaves corresponding to each blow-up we perform; for example \(\mathcal{I}_{1}^{(1)}\) is the ideal sheaf of \(Y_{1}\times_{\mathbb{A}^{1}}V(t_{1})\) on \(X\times_{\mathbb{A}^{1}}\mathbb{A}^{n+1}\). Then \(\mathcal{I}_{2}^{(1)}\) is the ideal sheaf of \(b_{(1,0)}^{*}(Y_{2}\times_{\mathbb{A}^{1}}V(t_{n+1}))\) on \(X_{(1,0)}\), and so on for \(\mathcal{I}_{j}^{(k)}\).
As we will explain below, we then have, for each of the vector bundles \(\mathcal{F}_{1}^{(k_{1})}\) and \(\mathcal{F}_{2}^{(k_{2})}\), the embeddings
\[X_{(k_{1},k_{2})}\hookrightarrow\mathbb{P}(b_{(k_{1}-1,k_{2})}^{*}\mathcal{F}_{1}^{(k_{1})}),\] \[X_{(k_{1},k_{2})}\hookrightarrow\mathbb{P}(b_{(k_{1},k_{2}-1)}^{*}\mathcal{F}_{2}^{(k_{2})}),\]
where \(b_{(0,0)}\) is understood to be just the identity map on \(X_{(0,0)}=X\times_{\mathbb{A}^{1}}\mathbb{A}^{n+1}\). Indeed, the scheme \(X_{(k_{1},k_{2})}\) embeds into the projectivisations of the ideals of these blow-ups \(\mathbb{P}(\mathcal{I}_{1}^{(k_{1})})\) and \(\mathbb{P}(\mathcal{I}_{2}^{(k_{2})})\). For a reference on projectivisations of ideals see [1]. There is a surjection
\[b_{(k_{1}-1,k_{2})}^{*}\mathcal{F}_{1}^{(k_{1})}\longrightarrow\mathcal{I}_{1 }^{(k_{1})}\text{ given by }\begin{pmatrix}b_{(k_{1}-1,k_{2})}^{*}x\\ t_{k_{1}}\end{pmatrix},\]
where \(x\) is a defining equation of the locus to be blown up projected forward to \(X\), i.e. it is the defining equation of \(Y_{1}\). Similarly, there is a surjection
\[b_{(k_{1},k_{2}-1)}^{*}\mathcal{F}_{2}^{(k_{2})}\longrightarrow\mathcal{I}_{2 }^{(k_{2})}.\]
From this, we deduce that there are embeddings
\[\mathbb{P}(\mathcal{I}_{1}^{(k_{1})})\hookrightarrow\mathbb{P}(b_{(k_{1}-1,k_{2})}^{*}\mathcal{F}_{1}^{(k_{1})}),\] \[\mathbb{P}(\mathcal{I}_{2}^{(k_{2})})\hookrightarrow\mathbb{P}(b_{(k_{1},k_{2}-1)}^{*}\mathcal{F}_{2}^{(k_{2})}).\]
Hence we have embeddings
Now, similarly to [10], we can embed \(X[n]=X_{(n,n)}\) into \(\prod_{i,j}\mathbb{P}(\mathcal{F}_{i}^{(j)})\), which is to be understood as the fibre product over \(X\times_{\mathbb{A}^{1}}\mathbb{A}^{n+1}\). This can be seen by iteration on \(i,j\) in the following way. The simplest case is \(X_{(1,0)}\hookrightarrow\mathbb{P}(b_{(0,0)}^{*}\mathcal{F}_{1}^{(1)})= \mathbb{P}(\mathcal{F}_{1}^{(1)})\), which is obvious. Then for \(X_{(1,1)}\), we have the following commutative diagram
\[X_{(1,1)}\subset b^{*}_{(1,0)}\mathbb{P}(\mathcal{F}_{2}^{(1)})\]
(recall \(b^{*}_{(1,0)}\mathbb{P}(\mathcal{F}_{2}^{(1)})\) is a projective bundle over \(X_{(1,0)}\) and \(\mathbb{P}(\mathcal{F}_{2}^{(1)})\) is a projective bundle over \(X\times_{\mathbb{A}^{1}}\mathbb{A}^{n+1}\), giving us the horizontal maps). By the universal property of fibre products, there is a unique map \(X_{(1,1)}\to\mathbb{P}(\mathcal{F}_{1}^{(1)})\times\mathbb{P}(\mathcal{F}_{2}^{(1)})\). But by the universal property of the pullback there is also a unique map \(\mathbb{P}(\mathcal{F}_{1}^{(1)})\times\mathbb{P}(\mathcal{F}_{2}^{(1)})\to b^{*}_{(1,0)}\mathbb{P}(\mathcal{F}_{2}^{(1)})\), hence the embedding \(X_{(1,1)}\hookrightarrow b^{*}_{(1,0)}\mathbb{P}(\mathcal{F}_{2}^{(1)})\) factors through \(\mathbb{P}(\mathcal{F}_{1}^{(1)})\times\mathbb{P}(\mathcal{F}_{2}^{(1)})\). Since the composition of the two maps is injective, the first map, i.e. \(X_{(1,1)}\to\mathbb{P}(\mathcal{F}_{1}^{(1)})\times\mathbb{P}(\mathcal{F}_{2}^{(1)})\), must be injective and the image in \(\mathbb{P}(\mathcal{F}_{1}^{(1)})\times\mathbb{P}(\mathcal{F}_{2}^{(1)})\) is closed by properness. We can then iterate this argument until we obtain the embedding \(X_{(n,n)}\hookrightarrow\prod_{i,j}\mathbb{P}(\mathcal{F}_{i}^{(j)})\).
The \(G\)-action is a restriction of the torus action on \(\prod_{i,j}\mathbb{P}(\mathcal{F}_{i}^{(j)})\), described etale locally in Proposition 3.1.12.
**Linearisations.** The following lemma gives a method to construct all the linearised line bundles we will need to vary the GIT stability condition.
**Lemma 3.2.2**.: _There exists a \(G\)-linearised ample line bundle \(\mathcal{L}\) on \(X[n]\) such that locally the lifts to this line bundle of the \(G\)-action on each \(\mathbb{P}^{1}\) corresponding to a \(\Delta_{1}^{(k)}\) and on each \(\mathbb{P}^{1}\) corresponding to a \(\Delta_{2}^{(n+1-k)}\) are given by_
\[(x_{0}^{(k)}:x_{1}^{(k)}) \longmapsto(\tau_{k}^{a_{k}}x_{0}^{(k)}:\tau_{k}^{-b_{k}}x_{1}^{(k)}) \tag{3.2.1}\] \[(y_{0}^{(n+1-k)}:y_{1}^{(n+1-k)}) \longmapsto(\tau_{k}^{-c_{k}}y_{0}^{(n+1-k)}:\tau_{k}^{d_{k}}y_{1}^{(n+1-k)}) \tag{3.2.2}\]
_for any choice of positive integers \(a_{k},b_{k},c_{k},d_{k}\)._
Proof.: Similarly to the proof of Lemma 1.18 in [1], we see that each locally free sheaf \(\mathcal{F}_{i}^{(k)}\) on \(X\times_{\mathbb{A}^{1}}\mathbb{A}^{n+1}\) has a canonical \(G\)-linearisation. There is an induced \(G\)-action on the projective product \(\prod_{i,k}\mathbb{P}(\mathcal{F}_{i}^{(k)})\), which is equivariant under the embedding
\[X[n]\hookrightarrow\prod_{i,k}\mathbb{P}(\mathcal{F}_{i}^{(k)}).\]
The \(G\)-action on each \(\mathbb{P}(\mathcal{F}_{i}^{(k)})\) lifts to a \(G\)-action on the corresponding vector bundle, which gives us a canonical linearisation of the line bundle \(\mathcal{O}_{\mathbb{P}(\mathcal{F}_{i}^{(k)})}(1)\). Locally, the actions on \(\mathcal{O}_{\mathbb{P}(\mathcal{F}_{1}^{(k)})}(1)\) and \(\mathcal{O}_{\mathbb{P}(\mathcal{F}_{2}^{(k)})}(1)\) are given respectively by
\[(x_{0}^{(k)}:\tau_{k}^{-1}x_{1}^{(k)})\quad\text{and}\quad(y_{0}^{(k)}:\tau_{n+1-k}y_{1}^{(k)}).\]
We therefore may define the lifts (3.2.1) and (3.2.2) on the line bundles \(\mathcal{O}_{\mathbb{P}(\mathcal{F}_{1}^{(k)})}(a_{k}+b_{k})\) and \(\mathcal{O}_{\mathbb{P}(\mathcal{F}_{2}^{(n+1-k)})}(c_{k}+d_{k})\) respectively. We then pull back each \(\mathcal{O}_{\mathbb{P}(\mathcal{F}_{1}^{(k)})}(a_{k}+b_{k})\) and
\(\mathcal{O}_{\mathbb{P}(\mathcal{F}_{2}^{(k)})}(c_{n+1-k}+d_{n+1-k})\) to \(\prod_{i,k}\mathbb{P}(\mathcal{F}_{i}^{(k)})\) and form their tensor product to obtain a \(G\)-linearised line bundle, which we denote by \(\mathcal{L}\).
Each such line bundle \(\mathcal{L}\) which can be constructed in this way will induce a \(G\)-linearised line bundle \(\mathcal{M}\) on \(H_{[n]}^{m}\). This, in turn, will yield a GIT stability condition on \(H_{[n]}^{m}\).
## 4 GIT stability
In this section, we set up some results analogous to those of [10] to describe various GIT stability conditions on the scheme \(X[n]\) with respect to the possible choices of \(G\)-linearised line bundles described in the previous section. In particular, we show that these stability conditions do not depend on the scheme structure of the length \(m\) zero-dimensional subschemes, but instead can be reduced to combinatorial criteria on configurations of \(n\) points.
### Hilbert-Mumford criterion
In this section, we shall recall the definition of Hilbert-Mumford invariants and give a numerical criterion for stability and semi-stability in terms of these invariants.
Let \(H\) be a reductive group acting on a scheme \(S\), which is proper over an algebraically closed field \(k\). Let \(L\) be an \(H\)-linearised ample line bundle. Then a _1-parameter subgroup_ of \(H\) (denoted 1-PS for convenience) is defined to be a homomorphism
\[\lambda\colon\mathbb{G}_{m}\to H.\]
Now let \(P\) be any point in \(S\). For \(\tau\in\mathbb{G}_{m}\), we denote by \(P_{0}\) the limit of \(\lambda(\tau)\cdot P\) as \(\tau\) tends to \(0\), if such a limit exists. Then let \(\mu^{L}(\lambda,P)\) be the negative of the weight of the \(\mathbb{G}_{m}\)-action on the fibre \(L(P_{0})\). We call \(\mu^{L}(\lambda,P)\) a _Hilbert-Mumford invariant_.
In our case we will want to think of \(H\) as being our group \(G\), of \(S\) as being the relative Hilbert scheme of points \(H_{[n]}^{m}\) and of \(L\) as being the line bundle \(\mathcal{M}\) on \(H_{[n]}^{m}\), which we define in the next section. A 1-parameter subgroup of \(G\) will be given by a map
\[\lambda\colon\mathbb{G}_{m}\to G,\quad\tau\mapsto(\tau^{s_{1}},\ldots,\tau^{s_ {n}}),\]
where \((s_{1},\ldots,s_{n})\in\mathbb{Z}^{n}\). The following result will allow us to use these invariants to determine stability and semi-stability in our GIT constructions. It is a relative version of the Hilbert-Mumford criterion (see Mumford, Fogarty and Kirwan [11]) proven by Gulbrandsen, Halle and Hulek in [10].
**Theorem 4.1.1**.: _Let \(k\) be an algebraically closed field and \(f\colon S\to B\) a projective morphism of \(k\)-schemes. Assume \(B=\operatorname{Spec}A\) is noetherian and \(B\) is of finite type over \(k\). Let \(H\) be an affine, linearly reductive group over \(k\) acting on \(S\) and \(B\) such that \(f\) is equivariant and let \(L\) be an ample \(H\)-linearised line bundle on \(S\). Suppose \(P\in S\) is a closed point. Then \(P\) is stable (or semistable) if and only if \(\mu^{L}(\lambda,P)>0\) (or \(\geq 0\)) for every non-trivial 1-PS \(\lambda\colon\mathbb{G}_{m}\to H\)._
### Action of 1-parameter subgroup
**Existence of limits under the action of a 1-PS.** Let \(P\) be any point in \(X[n]\) and let \(p_{n}\colon X[n]\to C[n]\) be the projection to the base. As stated in [1], the limit \(P_{0}\) of \(P\) under a 1-PS as defined above exists if and only if its projection onto the base, \(p_{n}(P)\in C[n]\), has a limit. The \(G\)-action on the base is a pullback of the action on \(\mathbb{A}^{n+1}\) and the corresponding action of a 1-PS is
\[t_{1} \longmapsto\tau^{-s_{1}}t_{1},\] \[t_{k} \longmapsto\tau^{s_{k-1}-s_{k}}t_{k},\quad\text{for }1<k\leq n,\] \[t_{n+1} \longmapsto\tau^{s_{n}}t_{n+1}.\]
The projection \(p_{n}(P)\) of the point \(P\) to the base has a limit as \(\tau\) tends to zero if and only if each power of \(\tau\) in the action is nonnegative on the nonzero basis directions \(t_{i}\), i.e. if and only if
\[0\geq s_{1}\geq\ldots\geq s_{n}\geq 0, \tag{4.2.1}\]
where the \(i\)-th inequality from the left is required to hold whenever \(t_{i}\neq 0\). Thus we obtain boundedness conditions on the weights \(s_{i}\) dependent on where \(P\) lies over the base. In particular, when \(t_{i}\neq 0\) for all \(i\), this implies that \(s_{i}=0\) for all \(i\), so the 1-PS are trivial and all points are trivially semistable.
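_Example._ For \(n=2\), suppose \(t_{2}=0\) while \(t_{1},t_{3}\neq 0\). Then the inequalities \(0\geq s_{1}\) and \(s_{2}\geq 0\) are required, while \(s_{1}\geq s_{2}\) is not, so \(p_{n}(P)\) has a limit precisely for the 1-PS with

\[s_{1}\leq 0\leq s_{2}.\]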
**Lifts of the 1-PS action to the line bundle.** Let \(\mathcal{L}\) be a line bundle as described in Lemma 3.2.2. Assume that locally the lifts to \(\mathcal{L}\) of the \(G\)-action on each \(\mathbb{P}^{1}\) corresponding to a \(\Delta^{(k)}_{1}\) and on each \(\mathbb{P}^{1}\) corresponding to a \(\Delta^{(n+1-k)}_{2}\) are given by
\[(x_{0}^{(k)}:x_{1}^{(k)}) \longmapsto(\tau_{k}^{a_{k}}x_{0}^{(k)}:\tau_{k}^{-b_{k}}x_{1}^{(k)})\] \[(y_{0}^{(n+1-k)}:y_{1}^{(n+1-k)}) \longmapsto(\tau_{k}^{-c_{k}}y_{0}^{(n+1-k)}:\tau_{k}^{d_{k}}y_{1}^{(n+1-k)})\]
for some choice of positive integers \(a_{k},b_{k},c_{k},d_{k}\). Then the corresponding lifts of the 1-PS action to \(\mathcal{L}\) are given by
\[(x_{0}^{(k)}:x_{1}^{(k)}) \longmapsto(\tau^{a_{k}s_{k}}x_{0}^{(k)}:\tau^{-b_{k}s_{k}}x_{1}^{(k)}),\] \[(y_{0}^{(n+1-k)}:y_{1}^{(n+1-k)}) \longmapsto(\tau^{-c_{k}s_{k}}y_{0}^{(n+1-k)}:\tau^{d_{k}s_{k}}y_{1}^{(n+1-k)}).\]
We will see in the next section that the Hilbert-Mumford invariants that interest us, which are the invariants relating to 1-PS subgroups of the induced action of \(G\) on \(H^{m}_{[n]}\) with associated line bundle \(\mathcal{M}\), can be calculated by simply adding the invariants of the points (with multiplicity) in \(X[n]\) which make up the support of an element of \(H^{m}_{[n]}\). Indeed, we will see that it is possible for this purpose to think of an element of \(H^{m}_{[n]}\) as just a union of points with multiplicity in \(X[n]\) and forget about its scheme structure.
### Bounded and combinatorial weights
In this section, we explain the relation between what [1] call the bounded and combinatorial weights of the Hilbert-Mumford invariants.
Keeping the notation as consistent as possible with [1], let
\[Z_{[n]}^{m}\subset H_{[n]}^{m}\times_{C[n]}X[n]\]
be the universal family, with first and second projections \(p\) and \(q\). The line bundle
\[\mathcal{M}_{l}\coloneqq\det p_{*}(q^{*}\mathcal{L}^{\otimes l}|_{Z_{[n]}^{m}})\]
is relatively ample when \(l\gg 0\) and is \(G\)-linearised, exactly as in Section 2.2.1 of [1].
**Relationship between bounded and combinatorial weights.** The following lemmas describe how the Hilbert-Mumford invariant can be decomposed into a sum of invariants.
**Lemma 4.3.1**.: _Given a point \([Z]\in H_{[n]}^{m}\) and a 1-PS \(\lambda_{s}\) given by \((s_{1},\ldots,s_{n})\in\mathbb{Z}^{n}\), denote the limit of \(\lambda_{s}(\tau)\cdot Z\) as \(\tau\) tends to zero by \(Z_{0}\). The Hilbert-Mumford invariant can be decomposed into a sum_
\[\mu^{\mathcal{M}_{l}}(Z,\lambda_{s})=\mu^{\mathcal{M}_{1}}_{b}(Z,\lambda_{s}) +l\cdot\mu^{\mathcal{M}_{1}}_{c}(Z,\lambda_{s})\]
_of the bounded weight \(\mu^{\mathcal{M}_{1}}_{b}(Z,\lambda_{s})\), coming from the scheme structure of \(Z_{0}\), and the combinatorial weight \(\mu^{\mathcal{M}_{1}}_{c}(Z,\lambda_{s})\), coming from the weights of the 1-PS action on \(\mathcal{L}\)._
Proof.: The Hilbert-Mumford invariant \(\mu^{\mathcal{M}_{l}}(Z,\lambda_{s})\) is given by the negative of the weight of the \(\mathbb{G}_{m}\)-action on the line bundle \(\mathcal{M}_{l}\) at the point \(Z_{0}\). At the point \(Z_{0}\), the line bundle \(\mathcal{M}_{l}\) is given by \(\det(H^{0}(\mathcal{O}_{Z_{0}}\otimes\mathcal{L}^{\otimes l}))\). We can write \(Z_{0}\) as a union of length \(n_{P}\) zero-dimensional subschemes \(\bigcup_{P}Z_{0,P}\) supported at points \(P\). Let \(\mathcal{L}^{\otimes l}(P)\) denote the fibre of \(\mathcal{L}^{\otimes l}\) at \(P\). Following [1], there is an isomorphism
\[H^{0}(\mathcal{O}_{Z_{0}}\otimes\mathcal{L}^{\otimes l})\cong\bigoplus_{P} \bigl{(}H^{0}(\mathcal{O}_{Z_{0},P})\otimes\mathcal{L}^{\otimes l}(P)\bigr{)}.\]
Then, by taking determinants, as in [1], we get
\[\bigwedge^{m}H^{0}(\mathcal{O}_{Z_{0}}\otimes\mathcal{L}^{\otimes l})\cong \Bigl{(}\bigwedge^{m}H^{0}(\mathcal{O}_{Z_{0}})\Bigr{)}\otimes\Bigl{(} \bigotimes_{P}\mathcal{L}^{\otimes ln_{P}}(P)\Bigr{)},\]
which allows us to write the invariant \(\mu^{\mathcal{M}_{l}}(Z,\lambda_{s})\) as a sum of its _bounded weight_ \(\mu^{\mathcal{M}_{l}}_{b}(Z,\lambda_{s})\), coming from the factor \(\bigwedge^{m}H^{0}(\mathcal{O}_{Z_{0}})\), and its _combinatorial weight_ \(\mu^{\mathcal{M}_{l}}_{c}(Z,\lambda_{s})\), coming from the factor \(\bigotimes_{P}\mathcal{L}^{\otimes ln_{P}}(P)\). It is clear also that
\[\mu^{\mathcal{M}_{l}}_{b}(Z,\lambda_{s})=\mu^{\mathcal{M}_{1}}_{b}(Z,\lambda_ {s}),\]
since the bounded weight does not depend on the value of \(l\), and
\[\mu^{\mathcal{M}_{l}}_{c}(Z,\lambda_{s})=l\cdot\mu^{\mathcal{M}_{1}}_{c}(Z, \lambda_{s}).\]
Hence, we have
\[\mu^{\mathcal{M}_{l}}(Z,\lambda_{s})=\mu^{\mathcal{M}_{1}}_{b}(Z,\lambda_{s}) +l\cdot\mu^{\mathcal{M}_{1}}_{c}(Z,\lambda_{s}).\]
Note that, whereas the combinatorial weight depends on the choice of linearised line bundle, the bounded weight does not. Similarly to [10], we can show that the bounded weight, as its name suggests, can be given an upper bound.
_Terminology._ In the proofs of the next results, we will say that a point of the support of a subscheme is on a certain _side_ of a \(\Delta\)-component to describe whether it lies on the \((0:1)\) or \((1:0)\) side of the corresponding \(\mathbb{P}^{1}\).
The following result is based on Lemma 2.3 of [10], with some slight modifications to suit our setting.
**Lemma 4.3.2**.: _Let \(\mu_{b}^{\mathcal{M}_{1}}(Z,\lambda_{s})\) be the bounded weight of \([Z]\in H_{[n]}^{m}\) and \(s\in\mathbb{Z}^{n}\) such that the limit of \(\lambda_{s}(\tau)\cdot Z\) as \(\tau\) goes to zero exists. Then_
\[\mu_{b}^{\mathcal{M}_{1}}(Z,\lambda_{s})=\sum_{i=1}^{n}b_{i}s_{i},\]
_where \(|b_{i}|\leq 2m^{2}\) for every \(i\)._
Proof.: Let \(Z_{0}\) be the limit point of \(Z\) with respect to some \(s\in\mathbb{Z}^{n}\), where \(Z_{0}\) is supported at points \(Q_{i}\in X[n]\). Since \(Z_{0}\) is a limit point of the action, each \(Q_{i}\) must be a \(\mathbb{G}_{m}\)-fixed point, and since \(Z_{0,Q_{i}}\) is a finite local scheme, we can work with the local coordinates we set up earlier.
Following our previous notation, let \(n_{Q_{i}}\) denote the multiplicity of the scheme \(Z_{0}\) at the point \(Q_{i}\). The coordinate ring of \(Z_{0,Q_{i}}\) is then generated by \(n_{Q_{i}}\) monomials in the variables \(x,y,z,t_{1},\ldots,t_{n+1}\) and \(x_{0}^{(k)}/x_{1}^{(k)}\) or \(x_{1}^{(k)}/x_{0}^{(k)}\) depending on which side of \(\Delta_{1}^{(k)}\) the point \(Q_{i}\) lies, and \(y_{0}^{(k)}/y_{1}^{(k)}\) or \(y_{1}^{(k)}/y_{0}^{(k)}\) depending on which side of \(\Delta_{2}^{(k)}\) the point \(Q_{i}\) lies. Note that the coordinate ring of \(Z_{0,Q_{i}}\) will only contain monomials in the variable \(x_{0}^{(k)}/x_{1}^{(k)}\) or \(x_{1}^{(k)}/x_{0}^{(k)}\) if \(Q_{i}\in\Delta_{1}^{(k)}\), and similarly for the variable \(y_{0}^{(k)}/y_{1}^{(k)}\) or \(y_{1}^{(k)}/y_{0}^{(k)}\). Moreover, if \(Q_{i}\in(\Delta_{1}^{(k)})^{\circ}\cup(\Delta_{2}^{(n+1-k)})^{\circ}\), then this means that \(s_{k}=0\) as \(Z_{0}\) is the limit of the 1-PS action. So the weight of the \(\mathbb{G}_{m}\)-action on a \(\Delta\)-component will be nontrivial only if \(Q_{i}\) lies on the boundary of this component.
The weight \(b_{k}\) restricted to the point \(Q_{i}\) is given by adding the multiplicity of \(x_{0}^{(k)}/x_{1}^{(k)}\) times \(s_{k}\) or that of \(x_{1}^{(k)}/x_{0}^{(k)}\) times \(-s_{k}\) (depending on which side of \(\Delta_{1}^{(k)}\) the point \(Q_{i}\) lies) in each monomial, plus the multiplicity of \(y_{0}^{(n+1-k)}/y_{1}^{(n+1-k)}\) times \(-s_{k}\) or that of \(y_{1}^{(n+1-k)}/y_{0}^{(n+1-k)}\) times \(s_{k}\) (depending on which side of \(\Delta_{2}^{(n+1-k)}\) the point \(Q_{i}\) lies) in each monomial. Each monomial has degree at most \(n_{Q_{i}}\). The parts of \(b_{k}\) coming from the actions on \(\Delta_{1}^{(k)}\) and \(\Delta_{2}^{(n+1-k)}\) therefore both have absolute value at most \(m^{2}\), so \(|b_{k}|\leq 2m^{2}\).
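As a sanity check of this count in the smallest nontrivial case (illustrative only, not part of the proof): take \(m=2\) and suppose \(Z_{0,Q_{i}}\) is the length-\(2\) scheme with coordinate ring \(k[u]/(u^{2})\), where \(u=x_{0}^{(k)}/x_{1}^{(k)}\) is the local coordinate at a boundary fixed point on the \((0:1)\) side of \(\Delta_{1}^{(k)}\). The coordinate ring is spanned by the monomials \(1\) and \(u\), so this point contributes

\[(0+1)s_{k}=s_{k},\]

i.e. a contribution of \(1\) to \(b_{k}\), comfortably within the bound \(|b_{k}|\leq 2m^{2}=8\).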
Let us discuss now how the bounded weight affects the overall stability condition. The following lemma is immediate from [10], but we recall their proof here for convenience.
**Lemma 4.3.3**.: _Let \(Z\) be a length \(m\) zero-dimensional subscheme in a fibre of \(X[n]\). Assume that, for all \(s\in\mathbb{Z}^{n}\) such that the limit \(\lambda_{s}(\tau)\cdot Z\) as \(\tau\) tends to zero exists, the combinatorial weight can be written as_
\[\mu_{c}^{\mathcal{M}_{1}}(Z,\lambda_{s})=\sum_{i=1}^{n}c_{i}s_{i},\]
_where \(c_{i}s_{i}\geq 0\) with equality if and only if \(s_{i}=0\). Then \(Z\) is stable with respect to the \(G\)-linearised line bundle \(\mathcal{M}_{l}\) on \(H^{m}_{[n]}\) for some large enough \(l\)._
Proof.: As we have shown that the bounded weight can be expressed as
\[\mu_{b}^{\mathcal{M}_{1}}(Z,\lambda_{s})=\sum_{i=1}^{n}b_{i}s_{i},\]
where \(|b_{i}|\leq 2m^{2}\), and recalling that the Hilbert-Mumford invariant can be expressed as
\[\mu^{\mathcal{M}_{l}}(Z,\lambda_{s})=\mu_{b}^{\mathcal{M}_{1}}(Z,\lambda_{s} )+l\cdot\mu_{c}^{\mathcal{M}_{1}}(Z,\lambda_{s}),\]
it is just a matter of choosing a big enough value of \(l\) to make the combinatorial weight overpower the bounded weight. This allows us effectively to treat the bounded weight as negligible and ignore it in our computations.
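In fact, an explicit threshold can be sketched (this bound is ours, not taken from [10]). Since the \(b_{i}\) and \(c_{i}\) are integers, \(c_{i}s_{i}>0\) for \(s_{i}\neq 0\) forces \(c_{i}s_{i}\geq|s_{i}|\), so any \(l>2m^{2}\) suffices:

\[\mu^{\mathcal{M}_{l}}(Z,\lambda_{s})=\sum_{i=1}^{n}(b_{i}+lc_{i})s_{i}\geq\sum_{i=1}^{n}(l-2m^{2})|s_{i}|>0\]

for every nontrivial 1-PS \(\lambda_{s}\) such that the limit exists.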
_Remark 4.3.4_.: The assumption of Lemma 4.3.3 does not hold in general for all possible \(G\)-linearised line bundles on \(H^{m}_{[n]}\).
**A criterion for positive combinatorial weights.** Let \(Z\) be a length \(m\) zero-dimensional subscheme in a fibre of \(X[n]\). With the following lemmas, we shall establish that if there is at least one point of the support of \(Z\) in the union \((\Delta_{1}^{(k)})^{\circ}\cup(\Delta_{2}^{(n+1-k)})^{\circ}\) for every \(k\) (where these \(\Delta\)-components are not necessarily expanded out), then there exists a GIT stability condition which makes \(Z\) stable. We start by showing that for such a subscheme \(Z\) there exists a \(G\)-linearised line bundle \(\mathcal{M}\) on \(H^{m}_{[n]}\) such that the corresponding combinatorial weight is strictly positive. We will then use Lemma 4.3.3 to show that \(Z\) is stable in the corresponding GIT stability.
_Remark 4.3.5_.: Note, here, that such a \(Z\) will not necessarily be smoothly supported, nor will every point of the support of \(Z\) necessarily be contained in a \(\Delta\)-component.
**Lemma 4.3.6**.: _Let \(Z\) be in a fibre of \(X[n]\) as above. If there is at least one point of the support of \(Z\) in the union \((\Delta_{1}^{(k)})^{\circ}\cup(\Delta_{2}^{(n+1-k)})^{\circ}\) for every \(k\), then there exists a \(G\)-linearised line bundle on \(H^{m}_{[n]}\) with respect to which the combinatorial weight of \(Z\) is strictly positive for every nontrivial 1-PS \(\lambda_{s}\) such that the limit of \(\lambda_{s}(\tau)\cdot Z\) as \(\tau\) tends to zero exists._
Proof.: We will construct a \(G\)-linearised line bundle \(\mathcal{L}\) on \(X[n]\) as in Lemma 3.2.2, by specifying lifts of the \(G\)-action on each \(\mathbb{P}(\mathcal{F}_{1}^{(k)})\) and \(\mathbb{P}(\mathcal{F}_{2}^{(n+1-k)})\) to line bundles \(\mathcal{O}_{\mathbb{P}(\mathcal{F}_{1}^{(k)})}(a_{k}+b_{k})\) and \(\mathcal{O}_{\mathbb{P}(\mathcal{F}_{2}^{(n+1-k)})}(c_{k}+d_{k})\) for some chosen values \(a_{k},b_{k},c_{k},d_{k}\in\mathbb{Z}_{\geq 0}\).
Let \(k\in\{1,\ldots,n\}\). If there is some point of the support of \(Z\), denoted \(P\), in \((\Delta_{1}^{(k)})^{\circ}\subseteq\pi^{*}(Y_{1}\cap Y_{3})\), and if \(m^{\prime}\) points of the support lie on the \((1:0)\) side of \(\Delta_{1}^{(k)}\), then we will want the lift of the \(G\)-action on \(\mathbb{P}(\mathcal{F}_{1}^{(k)})\) to \(\mathcal{O}_{\mathbb{P}(\mathcal{F}_{1}^{(k)})}(a_{k}+b_{k})\) to be locally given by
\[(x_{0}^{(k)};x_{1}^{(k)})\longmapsto(\tau_{k}^{m(m-m^{\prime})}x_{0}^{(k)}; \tau_{k}^{-m(m^{\prime}+1)}x_{1}^{(k)})\in\mathbb{A}^{2}.\]
This lift is therefore defined on \(\mathcal{O}_{\mathbb{P}(\mathcal{F}_{1}^{(k)})}(m^{2}+m)\), i.e. we have chosen \(a_{k}=m(m-m^{\prime})\) and \(b_{k}=m(m^{\prime}+1)\). We will then choose \(c_{k}=0\) and \(d_{k}=1\), so that the action on \(\mathbb{P}(\mathcal{F}_{2}^{(n+1-k)})\) lifts to \(\mathcal{O}_{\mathbb{P}(\mathcal{F}_{2}^{(n+1-k)})}(1)\) and it is locally given by
\[(y_{0}^{(n+1-k)};y_{1}^{(n+1-k)})\longmapsto(y_{0}^{(n+1-k)};\tau_{k}y_{1}^{( n+1-k)})\in\mathbb{A}^{2}.\]
If there is no point of the support of \(Z\) in \(\Delta_{1}^{(k)}\), we set the lift of the \(G\)-action on \(\mathbb{P}(\mathcal{F}_{1}^{(k)})\) to \(\mathcal{O}_{\mathbb{P}(\mathcal{F}_{1}^{(k)})}(1)\) to be locally given by
\[(x_{0}^{(k)};x_{1}^{(k)})\longmapsto(\tau_{k}x_{0}^{(k)};x_{1}^{(k)})\in \mathbb{A}^{2},\]
i.e. we have chosen \(a_{k}=1\) and \(b_{k}=0\). In this case there must be at least one point of the support in \((\Delta_{2}^{(n+1-k)})^{\circ}\). Let \(m^{\prime\prime}\) be the number of points of the support on the \((1:0)\) side of \(\Delta_{2}^{(n+1-k)}\). We then set \(c_{k}=m(m-m^{\prime\prime})\) and \(d_{k}=m(m^{\prime\prime}+1)\), i.e. we have a lift of the \(G\)-action on \(\mathbb{P}(\mathcal{F}_{2}^{(n+1-k)})\) to \(\mathcal{O}_{\mathbb{P}(\mathcal{F}_{2}^{(n+1-k)})}(m^{2}+m)\), locally given by
\[(y_{0}^{(n+1-k)};y_{1}^{(n+1-k)})\longmapsto(\tau_{k}^{-m(m-m^{\prime\prime})}y_{0}^{(n+1-k)};\tau_{k}^{m(m^{\prime\prime}+1)}y_{1}^{(n+1-k)})\in\mathbb{A}^{2}.\]
Repeating this process over all \(k\in\{1,\ldots,n\}\) will give us a description of \(\mathcal{L}\) and we may form the \(G\)-linearised line bundle \(\mathcal{M}\) from this line bundle in the way described at the start of this section. For more details on why this yields a positive combinatorial weight, see the proof of the following lemma. Note that this is not the only GIT stability condition for which \(Z\) is stable.
**Lemma 4.3.7**.: _Let \(Z\) be as in the statement of Lemma 4.3.6 and let \(\mathcal{M}\) be a \(G\)-linearised line bundle constructed as in the proof of Lemma 4.3.6. Then, for any \(s\in\mathbb{Z}^{n}\), the combinatorial weight can be written_
\[\mu_{c}^{\mathcal{M}}(Z,\lambda_{s})=\sum_{i=1}^{n}c_{i}s_{i},\]
_where \(c_{i}s_{i}\geq 0\) with equality if and only if \(s_{i}=0\)._
Proof.: It is clear that the combinatorial weight may be written as a sum
\[\mu_{c}^{\mathcal{M}}(Z,\lambda_{s})=\sum_{i=1}^{n}c_{i}s_{i}.\]
Now, let us take any \(k\in\{1,\ldots,n\}\). First, let us assume that there is at least one point of the support in \((\Delta_{1}^{(k)})^{\circ}\subseteq\pi^{*}(Y_{1}\cap Y_{3})\) and denote by \(m^{\prime}\) the number of points of the support on the \((1:0)\) side of \(\Delta_{1}^{(k)}\). Then, if \(s_{k}>0\),
\[c_{k}s_{k}\geq(-m^{\prime}m(m-m^{\prime})+(m-m^{\prime})m(m^{\prime}+1))s_{k}-(m^{\prime})s_{k}=(m(m-m^{\prime})-m^{\prime})s_{k}\geq 0.\]
Here, \(m(m-m^{\prime})\) corresponds to the weight coming from \(\mathbb{P}(\mathcal{F}_{1}^{(k)})\) and \(m^{\prime}\) corresponds to the weight coming from \(\mathbb{P}(\mathcal{F}_{2}^{(n+1-k)})\). The value \(m^{\prime}\) arises from the fact that there are at most \(m^{\prime}\) points of the support on the \((0:1)\) side of \(\Delta_{2}^{(n+1-k)}\). And since \(m^{\prime}\) can be at most \(m-1\), we have that \(m(m-m^{\prime})-m^{\prime}\geq m-m^{\prime}>0\).
Now, if \(s_{k}<0\), then
\[c_{k}s_{k}\geq(-(m^{\prime}+1)m(m-m^{\prime})+(m-m^{\prime}-1)m(m^{\prime}+1)) s_{k}+0=-(m^{\prime}+1)ms_{k}\geq 0,\]
where again the two terms correspond to the weights coming from \(\mathbb{P}(\mathcal{F}_{1}^{(k)})\) and \(\mathbb{P}(\mathcal{F}_{2}^{(n+1-k)})\). As \((m^{\prime}+1)m>0\), this gives the desired answer.
Finally, if there is no point of the support in \((\Delta_{1}^{(k)})^{\circ}\subseteq\pi^{*}(Y_{1}\cap Y_{3})\), we can make a very similar argument, as the weight coming from \(\mathbb{P}(\mathcal{F}_{1}^{(k)})\) is overpowered by the weight coming from \(\mathbb{P}(\mathcal{F}_{2}^{(n+1-k)})\) in the line bundle \(\mathcal{M}\) we set up.
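As a quick numerical check of the second estimate above (illustrative only): take \(m=2\) and \(m^{\prime}=1\), so the construction of Lemma 4.3.6 uses \(a_{k}=m(m-m^{\prime})=2\) and \(b_{k}=m(m^{\prime}+1)=4\). For \(s_{k}<0\) the estimate reads

\[c_{k}s_{k}\geq(-(m^{\prime}+1)m(m-m^{\prime})+(m-m^{\prime}-1)m(m^{\prime}+1))s_{k}=(-4+0)s_{k}=-4s_{k}>0,\]

in agreement with the general value \(-(m^{\prime}+1)ms_{k}=-4s_{k}\).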
### Semistable locus and GIT quotient
**Lemma 4.4.1**.: _Let \(Z\) be as in the statement of Lemma 4.3.6. Then there exists a GIT stability condition on \(H_{[n]}^{m}\) which makes \(Z\) stable._
Proof.: This follows from Lemmas 4.3.3 and 4.3.7. Indeed, by Lemma 4.3.3, if the combinatorial weight can be written in the form
\[\mu_{c}^{\mathcal{M}}(Z,\lambda_{s})=\sum_{i=1}^{n}c_{i}s_{i},\]
where \(c_{i}s_{i}\geq 0\) with equality if and only if \(s_{i}=0\), then we may choose a high enough tensor power \(l\) of \(\mathcal{M}\) such that \(Z\) is stable if and only if the combinatorial weight is strictly positive. But this condition is satisfied by Lemma 4.3.7.
**Lemma 4.4.2**.: _Let \(Z\) be a length \(m\) zero-dimensional subscheme in a fibre of \(X[n]\), such that no point of the support is contained in the union \((\Delta_{1}^{(k)})^{\circ}\cup(\Delta_{2}^{(n+1-k)})^{\circ}\) for some \(k\) (these components may be expanded or not in the fibre). Then there exists no GIT stability condition on \(H_{[n]}^{m}\) with respect to the group \(G\) which makes \(Z\) stable._
Proof.: Let us choose an arbitrary \(G\)-linearised line bundle \(\mathcal{M}\), not necessarily constructed as above, with respect to which \(Z\) has Hilbert-Mumford invariant
\[\mu^{\mathcal{M}}(Z,\lambda_{s})=\sum_{i=1}^{n}a_{i}s_{i},\]
for some \(s\in\mathbb{Z}^{n}\) such that the limit of \(\lambda_{s}(\tau)\cdot Z\) as \(\tau\) tends to zero exists. Either \(a_{k}=0\), in which case \(Z\) cannot be stable (it will at best be semistable) with respect to the stability condition given by the chosen linearisation, or \(a_{k}\neq 0\).
If \(\Delta_{1}^{(k)}\) and \(\Delta_{2}^{(n+1-k)}\) are expanded out in the fibre, then \(s_{k}\) is not bounded above or below by \(0\) or by any weights acting nontrivially outside of these components. Moreover, as no point of the support of \(Z\) is contained in \((\Delta_{1}^{(k)})^{\circ}\cup(\Delta_{2}^{(n+1-k)})^{\circ}\), we know that the \(\mathbb{G}_{m}\)-factor corresponding to the weight \(s_{k}\) acts trivially on all points of the support. The integer \(a_{k}\) is therefore independent of the value of \(s_{k}\); different values of \(s_{k}\) will not change \(a_{k}\). If \(a_{k}>0\), we may choose \(s_{k}\) to be negative with large enough absolute value to destabilise \(Z\). Similarly, if \(a_{k}<0\), we may choose \(s_{k}\) to be positive and large enough to destabilise \(Z\).
Finally, if \(\Delta_{1}^{(k)}\) and \(\Delta_{2}^{(n+1-k)}\) are not expanded out in the fibre, either \(t_{l}\neq 0\) for \(l\geq k\) or \(t_{l}\neq 0\) for \(l\leq k\). If \(t_{l}\neq 0\) for \(l\geq k\), then \(\Delta_{1}^{(k)}=Y_{1}\) and \(\Delta_{2}^{(n+1-k)}=Y_{1}\cup Y_{3}\). All points of the support of \(Z\) must therefore be on the \((1:0)\) side of \(\Delta_{1}^{(k)}\) and on the \((0:1)\) side of \(\Delta_{2}^{(n+1-k)}\), which implies that \(a_{k}<0\). But by the condition (4.2.1), we have \(s_{k}\geq 0\), and we can therefore choose \(s_{k}\) large enough to destabilise \(Z\). A very similar argument can be made if instead \(t_{l}\neq 0\) for \(l\leq k\).
**Theorem 4.4.3**.: _Let \(Z\) be a length \(m\) zero-dimensional subscheme in a fibre of \(X[n]\). Then there exists a GIT stability condition on \(H^{m}_{[n]}\) which makes \(Z\) stable if and only if there is at least one point of the support of \(Z\) in \((\Delta_{1}^{(k)})^{\circ}\cup(\Delta_{2}^{(n+1-k)})^{\circ}\) for every \(k\)._
Proof.: This follows directly from Lemmas 4.4.1 and 4.4.2.
We can now describe the GIT quotients resulting from these constructions. Let
\[A[n]\coloneqq H^{0}(C[n],\mathcal{O}_{C[n]}).\]
We then recall from Lemma 3.1.13 the isomorphism
\[H^{0}(C[n],\mathcal{O}_{C[n]})^{G}\cong k[t].\]
For all choices of linearised line bundle described above, the GIT quotient of the base therefore behaves as follows:
\[C[n]/\!/G=\operatorname{Spec}A[n]/\!/G=\operatorname{Spec}(A[n]^{G})\cong \mathbb{A}^{1}.\]
Now let us denote by \(H^{m,s}_{[n],\mathcal{M}}\) the locus of GIT stable subschemes in \(H^{m}_{[n]}\) with respect to the stability condition determined by one of the choices of \(G\)-linearised line bundle \(\mathcal{M}\) as constructed in Section 4.3 and let
\[I^{m}_{[n],\mathcal{M}}\coloneqq H^{m,s}_{[n],\mathcal{M}}/\!/G\]
denote the corresponding GIT quotient.
**Theorem 4.4.4**.: _The GIT quotients \(I^{m}_{[n],\mathcal{M}}\) thus constructed are projective over_
\[\operatorname{Spec}(A[n]^{G})\cong\mathbb{A}^{1}.\]
Proof.: This result follows directly from the relative Hilbert-Mumford criterion of [10].
## 5 Stack perspective
In this section, we generalise the scheme construction of Section 3 and define the analogous stack of expansions and its family \(\mathfrak{X}\to\mathfrak{C}\). As mentioned before, we impose additional equivalences in the stack, which have the effect of setting any two fibres with the same expanded components to be equivalent. We examine the loci of GIT stable points again on this stack and discuss their relation with the stability conditions of Li and Wu and of Maulik and Ranganathan ([11], [12]). Finally, we construct a proper Deligne-Mumford stack which we will show to be isomorphic to a choice of underlying algebraic stack obtained through the Maulik-Ranganathan construction. We use the word underlying here because what is constructed in [12] is a logarithmic algebraic stack and we impose no logarithmic structure on our space.
### Stacks and stability conditions
Before we describe the expanded degenerations as stacks, we comment on the role of stacks in this problem and on the stability conditions defined in Section 5.3.
**Base change.** Our aim is to construct degenerations of Hilbert schemes of points as good moduli spaces. In the proofs of Propositions 6.1.2 and 6.1.4, we will use the valuative criterion to prove universal closure and separatedness. We will see that this argument holds only up to base change, which is why it is necessary for us to work with stacks instead of schemes.
**Non-separated GIT quotient stacks.** Although the GIT quotients \(I^{m}_{[n],\mathcal{M}}\) are projective and thus proper over \(\mathbb{A}^{1}\), their corresponding stack quotients are not necessarily proper. Indeed, the GIT quotient does not see the orbits of the group action themselves but the closures of these orbits. For example in Figure 7, the red pair of points and the blue pair of points are in the same orbit closure, so the GIT quotient considers them as equivalent, while the corresponding stack quotient regards them as belonging to separate orbits. This means that, in the stack, allowing for both pairs will break separatedness.
In the following sections, when studying quotient stacks, we will therefore want to consider the sublocus of the GIT stable locus containing only length \(m\) zero-dimensional subschemes which are smoothly supported in a given fibre of \(X[n]\). Building a compactification in which limits are represented by smoothly supported subschemes will also be useful for future applications, as it allows us to break down the problem of a Hilbert scheme of \(m\) points on a singular surface into products of Hilbert schemes of fewer than \(m\) points on smooth components.

Figure 7: Non-separatedness in GIT stable locus.
**Patching together GIT stability conditions.** No single GIT quotient \(I^{m}_{[n],\mathcal{M}}\) contains all desired limits as smoothly supported subschemes. Therefore in the stack construction, the stability condition we define will draw on these local quotients, but globally will not correspond to one single GIT stability condition. We now define a notion that we will use in the following sections.
**Definition 5.1.1**.: We say that a fibre in some expanded degeneration \(X[n]\) has _base codimension_\(k\) if exactly \(k\) basis directions vanish at this fibre. This is independent of the value \(n\).
**Making the expanded degenerations large enough.** Finally, if we construct a unique GIT quotient in which not all limit subschemes are smoothly supported then the limits given by orbit closures containing only subschemes with singular support will not lie in a fibre of the expected base codimension. This gives an intuition that the degeneration we have chosen is too small. That being said, it can be useful to think about this GIT quotient if what we are trying to do is simply to resolve singularities in a way that preserves some good properties of the space, e.g. in the context of constructing minimal models for type III degenerations of Hilbert schemes of points on K3 surfaces.
### Expanded construction for stacks
In this section we construct a stack of expansions \(\mathfrak{C}\) and family over it \(\mathfrak{X}\), keeping our notation as close as possible to that of [15].
**The stack \(\mathfrak{C}\).** In the following we define the stack \(\mathfrak{C}\) identically to the stack of expanded degenerations defined by Li and Wu. For convenience, we recall the details of this construction here.
Let us consider \(\mathbb{A}^{n+1}\) with the natural action of the torus \(\mathbb{G}^{n}_{m}\) defined above. We then impose some additional relations given by a collection of isomorphisms which we describe in the following. As before, we label elements of the base as \((t_{1},\ldots,t_{n+1})\). We start by defining the set
\[[n+1]\coloneqq\{1,\ldots,n+1\}.\]
Let \(I\subseteq[n+1]\) and \(I^{\circ}=[n+1]-I\) be the complement of \(I\). For \(|I|=r+1\), let
\[\operatorname{ind}_{I}\colon[r+1]\longrightarrow I\subset[n+1]\]
and
\[\operatorname{ind}_{I^{\circ}}\colon[n-r]\longrightarrow I^{\circ}\subset[n+1]\]
be the order preserving isomorphisms. Let
\[\mathbb{A}^{n+1}_{I}=\{(t)\in\mathbb{A}^{n+1}\;|\;t_{i}=0,\;i\in I\}\subset\mathbb{A}^{n+1}\]
and
\[\mathbb{A}_{U(I)}^{n+1}=\{(t)\in\mathbb{A}^{n+1}\;|\;t_{i}\neq 0,\;i\in I^{\circ}\}\subset\mathbb{A}^{n+1}.\]
Then we have the isomorphism
\[\widetilde{\tau}_{I}\colon(\mathbb{A}^{r+1}\times\mathbb{G}_{m}^{n-r}) \longrightarrow\mathbb{A}_{U(I)}^{n+1}\]
given by
\[(a_{1},\dots,a_{r+1},\sigma_{1},\dots,\sigma_{n-r})\longmapsto(t_{1},\dots,t_ {n+1}),\]
where
\[t_{k}=a_{l},\ \text{if}\ \operatorname{ind}_{I}(l)=k,\] \[t_{k}=\sigma_{l},\ \text{if}\ \operatorname{ind}_{I^{\circ}}(l)=k.\]
Then, given \(I,I^{\prime}\subset[n+1]\) such that \(|I|=|I^{\prime}|\), we define an isomorphism
\[\widetilde{\tau}_{I,I^{\prime}}=\widetilde{\tau}_{I}\circ\widetilde{\tau}_{I ^{\prime}}^{-1}\colon\mathbb{A}_{U(I^{\prime})}^{n+1}\longrightarrow\mathbb{A }_{U(I)}^{n+1}.\]
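To make these identifications concrete, consider \(n=2\), so the base is \(\mathbb{A}^{3}\), and take \(I=\{1,3\}\), \(I^{\prime}=\{1,2\}\) (a small worked example, unwinding the definitions above). Then \(\mathbb{A}^{3}_{U(I^{\prime})}=\{t_{3}\neq 0\}\) and \(\mathbb{A}^{3}_{U(I)}=\{t_{2}\neq 0\}\), and composing \(\widetilde{\tau}_{I}\) with \(\widetilde{\tau}_{I^{\prime}}^{-1}\) gives

\[\widetilde{\tau}_{I,I^{\prime}}\colon\{t_{3}\neq 0\}\longrightarrow\{t_{2}\neq 0\},\qquad(t_{1},t_{2},t_{3})\longmapsto(t_{1},t_{3},t_{2}),\]

so this equivalence simply exchanges the roles of the second and third basis directions.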
Recall from Section 3.1 that we had the natural inclusions (3.1.3)
\[C[n]\hookrightarrow C[n+1],\]
which are called _standard embeddings_ in [15].
Finally, we define \(\mathfrak{U}^{n}\) to be the quotient \([\mathbb{A}^{n+1}/{\sim}]\) by the equivalences generated by the \(\mathbb{G}_{m}^{n}\)-action and the equivalences \(\widetilde{\tau}_{I,I^{\prime}}\) for pairs \(I,I^{\prime}\) with \(|I|=|I^{\prime}|\). We can define open immersions
\[\mathfrak{U}^{n}\longrightarrow\mathfrak{U}^{n+1},\]
induced by the standard embeddings. Let \(\mathfrak{U}\coloneqq\underset{\rightarrow}{\lim}\,\mathfrak{U}^{n}\) be the direct limit over \(n\) and let \(\mathfrak{C}\coloneqq C\times_{\mathbb{A}^{1}}\mathfrak{U}\).
**The stack \(\mathfrak{X}\).** Let \(X[n]\to C[n]\) be as in Section 3 and recall that \(\pi\colon X[n]\to X\) is the projection to the original family. Let
\[\overline{\tau}_{I}\colon C[m]\hookrightarrow C[n]\]
be the standard embedding. Then the induced family \((\overline{\tau}_{I}^{*}X[n],\overline{\tau}_{I}^{*}\pi)\) is isomorphic to \((X[m],\pi)\) over \(C[m]\). The equivalences on \(\mathfrak{U}^{n}\) lift to \(C\)-isomorphisms of fibres.
We define \(\mathfrak{X}^{n}\) to be the quotient \([X[n]/{\sim}]\) by the equivalences generated by the \(\mathbb{G}_{m}^{n}\)-action and equivalences lifted from \(\mathfrak{U}^{n}\). There are natural immersions of stacks
\[\mathfrak{X}^{n}\longrightarrow\mathfrak{X}^{n+1},\]
induced by the immersions \(\mathfrak{U}^{n}\rightarrow\mathfrak{U}^{n+1}\). Finally, we define \(\mathfrak{X}=\underset{\rightarrow}{\lim}\,\mathfrak{X}^{n}\) to be the direct limit over \(n\). It is an Artin stack.
### Stability conditions
**Restricting to the smoothly supported locus.** We start by examining some stability conditions on the scheme \(H^{m}_{[n]}\). We have defined several \(G\)-linearised line bundles \(\mathcal{M}\) on this space. As before, let us denote by \(H^{m,ss}_{[n],\mathcal{M}}\) and \(H^{m,s}_{[n],\mathcal{M}}\) the corresponding GIT semistable and stable loci respectively. As discussed in Section 5.1, considering the GIT stable locus does not give us a separated quotient stack, among other reasons because it contains some subschemes which are not smoothly supported.
**Proposition 5.3.1**.: _The restrictions of the loci \(H^{m,ss}_{[n],\mathcal{M}}\) and \(H^{m,s}_{[n],\mathcal{M}}\) to the loci of smoothly supported subschemes are \(G\)-invariant open subschemes._
Proof.: Recall that the relative Hilbert scheme of \(m\) points on \(X[n]\to C[n]\), which we denoted \(H^{m}_{[n]}\), is the scheme which represents the functor
\[h\colon\operatorname{\underline{k}-\operatorname{Sch}^{op}}\longrightarrow \operatorname{\underline{Sets}},\]
where \(\operatorname{\underline{k}-\operatorname{Sch}^{op}}\) is the category of \(k\)-schemes. This functor associates to any \(k\)-scheme \(B\) the set of flat families over \(B\) of subschemes of fibres of \(X[n]\) over \(C[n]\). Restricting \(X[n]\) to the smooth locus of its fibres yields a family of open subschemes \(X[n]^{\operatorname{sm}}\) over \(C[n]\), and we can similarly define a Hilbert functor \(h_{\operatorname{sm}}\) on this family. There is a morphism from the corresponding Hilbert scheme \(H^{m}_{[n],\operatorname{sm}}\coloneqq\operatorname{Hilb}(X[n]^{\operatorname{sm}}/C[n])\) to \(H^{m}_{[n]}\), which is clearly a monomorphism, and it is étale since deformations of smoothly supported subschemes are smoothly supported. Note also that the complement of \(H^{m}_{[n],\operatorname{sm}}\) in \(H^{m}_{[n]}\) is closed by the valuative criterion, since the limit of any subscheme with part of its support in the singular locus of a fibre must also have part of its support in the singular locus of a fibre.
We remark that since the smooth locus of the fibres of \(X[n]\) is \(G\)-invariant, restricting the functor to this locus preserves the \(G\)-invariance. The restrictions of the semistable and stable loci to the loci of smoothly supported subschemes therefore yield \(G\)-invariant open subschemes.
_Notation._ We denote by \(H^{m,s}_{[n],\mathcal{M},\operatorname{sm}}\) and \(H^{m,ss}_{[n],\mathcal{M},\operatorname{sm}}\) the loci of GIT stable and semistable subschemes which are smoothly supported.
We have the following inclusions:
\[H^{m,s}_{[n],\mathcal{M},\operatorname{sm}}\subset H^{m,ss}_{[n],\mathcal{M}, \operatorname{sm}}\subset H^{m,ss}_{[n],\mathcal{M}}.\]
**Li-Wu stability.** We recall here the notion of stability used in [15], in order to compare it with the GIT stability and construct an appropriate stability condition for this case.
**Definition 5.3.2**.: Let \(X[n]_{0}\) be a fibre of \(X[n]\) over a closed point and let \(D\) denote the singular locus of \(X[n]_{0}\). A subscheme \(Z\) in \(X[n]_{0}\) is said to be _admissible_ if the morphism
\[\mathcal{I}_{Z}\otimes\mathcal{O}_{D}\to\mathcal{O}_{D}\]
is injective, where \(\mathcal{I}_{Z}\) is the ideal sheaf of \(Z\), i.e. when \(Z\) is normal to \(D\). A family \(Z\) in \(X[n]\) is _admissible_ if it is admissible in every fibre over a closed point. A subscheme \(Z\) in \(\mathfrak{X}\) is _Li-Wu stable_ (LW stable) if and only if it is admissible and has finite automorphism group.
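To see how admissibility can fail, here is a minimal local computation (illustrative, working in a simplified smooth chart of one component): let the fibre have local ring \(k[x,y]\) near the singular locus \(D=V(x)\), and let \(Z\) be the reduced point at the origin, so \(\mathcal{I}_{Z}=(x,y)\). Then

\[\mathcal{I}_{Z}\otimes\mathcal{O}_{D}\cong\mathcal{I}_{Z}/x\mathcal{I}_{Z},\]

in which the class of \(x\) is nonzero (as \(x\notin(x^{2},xy)\)), while its image in \(\mathcal{O}_{D}\cong k[y]\) is zero. The map \(\mathcal{I}_{Z}\otimes\mathcal{O}_{D}\to\mathcal{O}_{D}\) is therefore not injective, so a subscheme whose support meets the singular locus is not admissible.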
For a length \(m\) zero-dimensional scheme \(Z\) in \(\mathfrak{X}\), the admissibility condition means that no point of the support of \(Z\) lies in the singular locus of the given fibre. The finite automorphism condition means that the stabiliser of \(Z\) with respect to the torus action we defined on the blow-ups must be finite. Denote the LW stable locus by \(H^{m}_{[n],\mathrm{LW}}\). Note that we have an inclusion
\[H^{m,s}_{[n],\mathcal{M},\mathrm{sm}}\subset H^{m}_{[n],\mathrm{LW}} \tag{5.3.1}\]
for all \(G\)-linearised line bundles \(\mathcal{M}\) on \(H^{m}_{[n]}\): GIT stable points must have finite stabilisers, and smoothly supported subschemes are automatically admissible. The inclusion is strict, since LW stability is clearly a weaker condition than GIT stability with smooth support, and it no longer holds if the GIT stable locus is replaced by the semistable locus.
**Modified GIT stability.** As stated above, we only want to allow length \(m\) zero-dimensional subschemes to be stable if their support lies in the smooth locus of a fibre. However, restricting the GIT stability condition to this locus makes the space of stable subschemes no longer universally closed. Indeed, there is no single GIT condition which can represent all desired length \(m\) zero-dimensional subschemes as smoothly supported subschemes. We must therefore define a modified GIT stability condition which patches together several GIT stability conditions in order to obtain the desired stable locus.
**Definition 5.3.3**.: Let \([Z]\in H^{m}_{[n]}\). We say that \(Z\) is _weakly strictly stable_ (WS stable) if there exists a \(G\)-linearised ample line bundle on \(H^{m}_{[n]}\) with respect to which \(Z\) is stable. We denote the WS stable locus in \(H^{m}_{[n]}\) by \(H^{m}_{[n],\mathrm{WS}}\). We shall denote by \(H^{m}_{[n],\mathrm{SWS}}\) the locus of WS stable smoothly supported subschemes.
We may write \(H^{m}_{[n],\mathrm{SWS}}\) as the union
\[H^{m}_{[n],\mathrm{SWS}}\coloneqq\bigcup_{\mathcal{M}}H^{m,s}_{[n],\mathcal{M },\mathrm{sm}}\]
over all choices of \(G\)-linearised line bundle \(\mathcal{M}\). It is then clear that we have an inclusion \(H^{m}_{[n],\mathrm{SWS}}\subset H^{m}_{[n],\mathrm{LW}}\) by the inclusion (5.3.1). We will now want to compare these stability conditions on the stack \(\mathfrak{X}\), so we will need to extend our definition of WS stability to this stack.
Given a \(C\)-scheme \(S\), an object of \(\mathfrak{X}(S)\) is a pullback family \(\xi^{*}X[n]\) for a morphism
\[\xi\colon S\to C[n].\]
Now we describe WS stability on the stack \(\mathfrak{X}\).
**Definition 5.3.4**.: A pair \((Z,\mathcal{X})\), where \(Z\) is a family of length \(m\) zero-dimensional subschemes in \(\mathcal{X}\in\mathfrak{X}(S)\), is said to be _WS stable_ if and only if \(\mathcal{X}\coloneqq\xi^{*}X[n]\) for some morphism \(\xi\colon S\to C[n]\) and there exists some \(G\)-linearised ample line bundle on \(H^{m}_{[n]}\) with respect to which \(Z\) is GIT stable. We will say that \(Z\) is SWS stable if it is smoothly supported and WS stable.
_Remark 5.3.5_.: Note that we are slightly abusing notation in the above definition, by asking for \(Z\) to be GIT stable in \(H^{m}_{[n]}\), when \(Z\) is defined in \(\mathcal{X}\), and it is in fact \(\xi_{*}Z\) which must be GIT stable in \(H^{m}_{[n]}\). This is a harmless simplification as it will always be clear from context what we mean. We continue to use it throughout the work for convenience, especially where the map \(\xi\) has not been specified.
**Stacks of stable objects.** Let us denote by \(\mathfrak{M}^{m}_{\mathrm{SWS}}\) and \(\mathfrak{M}^{m}_{\mathrm{LW}}\) the stacks of SWS and LW stable length \(m\) zero-dimensional subschemes in \(\mathfrak{X}\) respectively. Let \(S\) be a \(C\)-scheme. An object of \(\mathfrak{M}^{m}_{\mathrm{SWS}}(S)\) is defined to be a pair \((Z,\mathcal{X})\), where \(\mathcal{X}\in\mathfrak{X}(S)\) and \(Z\) is an \(S\)-flat SWS stable family in \(\mathcal{X}\). Similarly, an object of \(\mathfrak{M}^{m}_{\mathrm{LW}}(S)\) is a pair \((Z,\mathcal{X})\), where \(\mathcal{X}\in\mathfrak{X}(S)\) and \(Z\) is an \(S\)-flat LW stable family in \(\mathcal{X}\).
_Remark 5.3.6_.: Note that it does not make sense in general to speak of Maulik-Ranganathan stability (MR stability) without defining an appropriate notion of tube components on our stacks as in Section 2.2. In this specific setting, however, we will see that there is no need to specify tube components as the stacks \(\mathfrak{M}^{m}_{\mathrm{SWS}}\) and \(\mathfrak{M}^{m}_{\mathrm{LW}}\) are already proper. The LW stability which we extended to our situation will therefore be equivalent to MR stability on \(\mathfrak{X}\). In this setting we may therefore use both terminologies interchangeably.
As is briefly discussed in Section 1.3, we can also make constructions, equivalent to some of the constructions of Maulik and Ranganathan, which require choices of representatives of limit subschemes and labelling of components as tube components. As the construction we make here requires the minimal amount of choice (the only choice was in choosing to blow up \(Y_{1}\) and \(Y_{2}\) but not \(Y_{3}\) at the very start) we shall refer to it as the canonical construction.
## 6 The canonical moduli stack
In this section we show that the stacks \(\mathfrak{M}^{m}_{\mathrm{SWS}}\) and \(\mathfrak{M}^{m}_{\mathrm{LW}}\) are proper and Deligne-Mumford and that they are in fact isomorphic.
### Properness and Deligne-Mumford property
In this section, we show that the stacks \(\mathfrak{M}^{m}_{\mathrm{SWS}}\) and \(\mathfrak{M}^{m}_{\mathrm{LW}}\) are universally closed, separated and have finite automorphisms. Before we give these proofs, we make the following definition.
**Definition 6.1.1**.: Let \(S\coloneqq\operatorname{Spec}R\to C\), where \(R\) is some discrete valuation ring, and let \(\eta\) denote the generic point of \(S\). Now, let \((Z,\mathcal{X})\) be a pair where \(\mathcal{X}\in\mathfrak{X}(S)\) and \(Z\) is an \(S\)-flat family of length \(m\) zero-dimensional subschemes in \(\mathcal{X}\). We say that a pair \((Z^{\prime}_{\eta^{\prime}_{0}},\mathcal{X}^{\prime}_{\eta^{\prime}_{0}})\) is an _extension_ of \((Z_{\eta},\mathcal{X}_{\eta})\) if there exists a finite base change \(S^{\prime}\to S\), with generic and closed points \(\eta^{\prime}\) and \(\eta^{\prime}_{0}\) respectively, such that \((Z^{\prime}_{\eta^{\prime}_{0}},\mathcal{X}^{\prime}_{\eta^{\prime}_{0}})\) is the restriction to \(\eta^{\prime}_{0}\) of some \(S^{\prime}\)-flat family \((Z^{\prime},\mathcal{X}^{\prime})\) with \(\mathcal{X}^{\prime}\in\mathfrak{X}(S^{\prime})\), \(Z_{\eta}\times_{\eta}\eta^{\prime}\cong Z^{\prime}_{\eta^{\prime}}\) and \(\mathcal{X}_{\eta}\times_{\eta}\eta^{\prime}\cong\mathcal{X}^{\prime}_{\eta^{\prime}}\).
**Proposition 6.1.2**.: _The stack \(\mathfrak{M}^{m}_{\mathrm{SWS}}\) is universally closed._
Proof.: Let \(S\coloneqq\operatorname{Spec}R\to C\), where \(R\) is some discrete valuation ring with uniformising parameter \(w\) and quotient field \(k\). We denote by \(\eta\) and \(\eta_{0}\) the generic and closed points of \(S\) respectively. Let \((Z,\mathcal{X})\) be an \(S\)-flat family of length \(m\) zero-dimensional subschemes such that \(\mathcal{X}\coloneqq\xi^{*}X[r]\in\mathfrak{X}(S)\) for some morphism \(\xi\colon S\to C[r]\) and \((Z_{\eta},\mathcal{X}_{\eta})\in\mathfrak{M}^{m}_{\mathrm{SWS}}(\eta)\). Additionally, we assume that all basis directions are invertible at \(\xi(\eta)\), i.e. \(\mathcal{X}_{\eta}\) has base codimension zero. As mentioned at the end of this proof, the other case is treated in
Proposition 6.1.8. We show that there exists a finite base change \(S^{\prime}\coloneqq\operatorname{Spec}R^{\prime}\to S\), for some discrete valuation ring \(R^{\prime}\), and a pair \((Z^{\prime},\mathcal{X}^{\prime})\in\mathfrak{M}^{m}_{\text{SWS}}(S^{\prime})\) satisfying the following condition. Denoting by \(\eta^{\prime}\) and \(\eta^{\prime}_{0}\) the generic and closed points of \(S^{\prime}\) respectively, we require an equivalence \(\mathcal{X}^{\prime}_{\eta^{\prime}}\cong\mathcal{X}_{\eta}\times_{\eta}\eta^ {\prime}\) which induces an equivalence \(Z^{\prime}_{\eta^{\prime}}\cong Z_{\eta}\times_{\eta}\eta^{\prime}\).
Let \(\mathfrak{X}(S)\) be defined by the equation \(xyz=cw^{h}\), where \(w\) is the uniformising parameter of \(R\) as above and \(c\) is a unit in \(R\). The subscheme \(Z\) is a union of irreducible components \(Z_{i}\) whose defining equations we will want to express in terms of the uniformising parameter. We therefore start by taking an appropriate finite base change \(S^{\prime}\coloneqq\operatorname{Spec}R^{\prime}\to S\), given by \(w^{h}\mapsto u^{k}\), where \(u\) is the uniformising parameter of \(R^{\prime}\), chosen such that each \(Z_{i}\) can be written locally in terms of its \(x,y\) and \(z\) coordinates as
\[\{(c_{i,1}u^{e_{i,1}},c_{i,2}u^{e_{i,2}},c_{i,3}u^{e_{i,3}})\}, \tag{6.1.1}\]
for some \(e_{i,j}\in\mathbb{Z}\) and some units \(c_{i,j}\) in \(R^{\prime}\). Note that \(\mathfrak{X}(S^{\prime})\) is defined by the equation \(xyz=cu^{k}\) and we therefore have the equality
\[c_{i,1}c_{i,2}c_{i,3}u^{e_{i,1}+e_{i,2}+e_{i,3}}=cu^{k}\]
for all \(i\).
_Tropical perspective._ The choice of uniformising parameter \(w\) corresponds to a choice of height of the triangle \(\operatorname{trop}(X_{0})\) within \(\operatorname{trop}(X)\). We may then examine the rays in \(\operatorname{trop}(X)\) corresponding to the image \(\operatorname{trop}(Z_{\eta})\) of \(Z\) under the tropicalisation map (see Section 2.2 for definitions). If the vertices given by \(\operatorname{trop}(Z_{\eta})\cap\operatorname{trop}(X_{0})\) do not already lie on integral points of the cone, then we must change the height of the triangle within \(\operatorname{trop}(X)\) until the intersection vertices \(\operatorname{trop}(Z_{\eta})\cap\operatorname{trop}(X_{0})\) become integral. This dictates exactly what base change \(S^{\prime}\to S\) to make as the edge length of the new triangle is given by \(e_{i,1}+e_{i,2}+e_{i,3}\). The integral intersection vertices on this triangle corresponding to each \(Z_{i}\) will be given by \((e_{i,1},e_{i,2},e_{i,3})\).
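As a minimal illustration of this base change (the numbers here are hypothetical, chosen only for the example): suppose \(\mathfrak{X}(S)\) is cut out by \(xyz=cw\) and the tropicalisation of a component \(Z_{i}\) meets \(\operatorname{trop}(X_{0})\) at the non-integral point \((\tfrac{1}{2},\tfrac{1}{2},0)\). The base change \(u^{2}=w\) rescales the triangle to edge length \(2\), where the same ray meets an edge at an integral vertex:

\[xyz=cw\;\xrightarrow{\;u^{2}=w\;}\;xyz=cu^{2},\qquad Z_{i}=\{(c_{i,1}u,\,c_{i,2}u,\,c_{i,3})\},\]

with \((e_{i,1},e_{i,2},e_{i,3})=(1,1,0)\) and edge length \(e_{i,1}+e_{i,2}+e_{i,3}=2=k\), as required.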
We now form an ordered list \((d_{1}u^{e_{1}},\dots,d_{2m}u^{e_{2m}})\), where we arrange all values \(c_{i,1}u^{e_{i,1}}\) and \((c_{i,2})^{-1}cu^{k-e_{i,2}}\) from smallest to largest power of \(u\). If two terms have the same power of \(u\), we may place them in any order. We shall now inductively construct an element \((t_{1},\dots,t_{n+1})\) of \(\mathbb{A}^{n+1}\) determining a morphism \(\xi\colon S^{\prime}\to C[n]\) such that the pullback \(\xi^{*}X[n]\) defines the family \(\mathcal{X}^{\prime}\). We start by setting
\[(t_{1},t_{2})=(d_{1}u^{e_{1}},(d_{1}u^{e_{1}})^{-1}cu^{k}).\]
If \(e_{1}=e_{2}\), then we do not include \(d_{2}u^{e_{2}}\) and move on to \(e_{3}\). If \(e_{1}\neq e_{2}\), however, we set
\[(t_{1},t_{2},t_{3})=(d_{1}u^{e_{1}},(d_{1}u^{e_{1}})^{-1}d_{2}u^{e_{2}},(d_{2}u ^{e_{2}})^{-1}cu^{k}).\]
We continue to iterate this process in the following way. Assume we have \((t_{1},\dots,t_{j})\), where \(t_{j}=(d_{l}u^{e_{l}})^{-1}cu^{k}\). Then, if \(e_{l+1}\neq e_{l}\), we write
\[(t_{1},\dots,t_{j},t_{j+1})=(d_{1}u^{e_{1}},\dots,(d_{l}u^{e_{l}})^{-1}d_{l+1} u^{e_{l+1}},(d_{l+1}u^{e_{l+1}})^{-1}cu^{k}),\]
and if \(e_{l+1}=e_{l}\), then we move on to \(l+2\) without including \(d_{l+1}u^{e_{l+1}}\) in the expression. We iterate this until we find
\[(t_{1},\dots,t_{n+1})=(f_{1}u^{g_{1}},\dots,f_{n+1}u^{g_{n+1}}) \tag{6.1.2}\]
which has exactly one entry for each different power of \(u\) in the list \((d_{1}u^{e_{1}},\dots,d_{2m}u^{e_{2m}})\).
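At the level of exponents, the inductive construction above is a simple combinatorial procedure: each new distinct power of \(u\) in the ordered list contributes its increment over the previous one, and the final entry carries the remaining power, so the exponents of \((t_{1},\dots,t_{n+1})\) always sum to \(k\). The following sketch (a hypothetical helper, not part of the text, suppressing the unit coefficients \(d_{i}\)) computes the exponent vector \((g_{1},\dots,g_{n+1})\) of (6.1.2):

```python
def expansion_exponents(exponents, k):
    """Exponent vector (g_1, ..., g_{n+1}) of the tuple (t_1, ..., t_{n+1}).

    `exponents` lists the powers e_1 <= ... <= e_{2m} appearing in the
    ordered list (d_1 u^{e_1}, ..., d_{2m} u^{e_{2m}}); k is the total
    power in xyz = c u^k.  Repeated powers are skipped, each distinct
    power contributes its increment over the previous one, and the last
    entry closes the product, so the g_i always sum to k.
    """
    g = []
    prev = 0
    for e in sorted(set(exponents)):
        g.append(e - prev)  # power of (d_l u^{e_l})^{-1} d_{l+1} u^{e_{l+1}}
        prev = e
    g.append(k - prev)      # final entry (d_last u^{e_last})^{-1} c u^k
    return g
```

For instance, the powers \((1,1,3)\) with \(k=5\) give the exponent vector \((1,2,2)\): the repeated power \(1\) contributes a single entry, as in the construction above, and the final entry closes the product up to \(u^{5}\).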
We now denote by \(\pi_{n}\colon C[n]\to\mathbb{A}^{n+1}\) the natural projection. The morphism \(\xi\colon S^{\prime}\to C[n]\) is defined by the condition that
\[\pi_{n}\circ\xi=(f_{1}u^{g_{1}},\dots,f_{n+1}u^{g_{n+1}}). \tag{6.1.3}\]
We may then define \(\mathcal{X}^{\prime}\coloneqq\xi^{*}X[n]\) and let \(Z^{\prime}\coloneqq Z\times_{\mathcal{X}}\mathcal{X}^{\prime}\). We show now that this satisfies all the necessary conditions.
Since \(\mathcal{X}\in\mathfrak{X}(S)\) is a pullback \(\mathcal{X}=\xi^{*}X[r]\) for some \(r\), where \(\xi\colon S\to C[r]\) is given by a similar expression to (6.1.3), we have that \(\mathcal{X}^{\prime}_{\eta^{\prime}}\cong\mathcal{X}_{\eta}\times_{\eta}\eta^ {\prime}\). Indeed, over the generic point, the uniformising parameter is invertible and any two expressions \((t_{1},\dots,t_{l})\) and \((t^{\prime}_{1},\dots,t^{\prime}_{l^{\prime}})\) are equivalent in \(\mathfrak{C}\) up to the equivalences of this stack if they have the same product \(t_{1}\cdots t_{l}=t^{\prime}_{1}\cdots t^{\prime}_{l^{\prime}}\). But in our case this product was chosen to be identical up to the base change factor.
Moreover, the expression (6.1.3) is chosen precisely to give an \(S^{\prime}\)-flat extension of \(Z_{\eta}\times_{\eta}\eta^{\prime}\) where all points of the support of this extension lie in the smooth locus of the fibre \(\mathcal{X}^{\prime}_{\eta^{\prime}_{0}}\). Finally, the expression (6.1.3) ensures that we have expanded out the \(\Delta\)-components in the fibre \(\mathcal{X}^{\prime}_{\eta^{\prime}_{0}}\) in such a way that every expanded \(\Delta\)-component in this fibre contains some point of the support of \(Z^{\prime}\). By Theorem 4.4.3, such a configuration will be stable with respect to some GIT stability condition on \(H^{m}_{[n]}\).
The above discussion shows that if \((Z_{\eta},\mathcal{X}_{\eta})\) is pulled back from a fibre above a point \((t_{1},\dots,t_{n+1})\) in some \(C[n]\) whose entries are all invertible, then \((Z_{\eta},\mathcal{X}_{\eta})\) has an SWS stable extension. See Proposition 6.1.8 for a proof that there exists an extension if \(\mathcal{X}_{\eta}\) is a modified special fibre, i.e. if some of the entries of \((t_{1},\dots,t_{n+1})\) are not invertible.
**Corollary 6.1.3**.: _The stack \(\mathfrak{M}^{m}_{\mathrm{LW}}\) is universally closed._
Proof.: As every SWS stable subscheme must be LW stable, the existence of limits in \(\mathfrak{M}^{m}_{\mathrm{LW}}\) follows from the existence of limits in \(\mathfrak{M}^{m}_{\mathrm{SWS}}\).
**Proposition 6.1.4**.: _The stacks \(\mathfrak{M}^{m}_{\mathrm{SWS}}\) and \(\mathfrak{M}^{m}_{\mathrm{LW}}\) are separated._
Proof.: Let \(S\coloneqq\operatorname{Spec}R\to C\), where \(R\) is a discrete valuation ring with uniformising parameter \(u\). Let \(\eta\) denote the generic point of \(S\) and \(\eta_{0}\) its closed point. Now, assume that there are two pairs \((Z,\mathcal{X})\) and \((Z^{\prime},\mathcal{X}^{\prime})\) in \(\mathfrak{M}^{m}_{\mathrm{SWS}}(S)\) such that \((Z_{\eta},\mathcal{X}_{\eta})\cong(Z^{\prime}_{\eta},\mathcal{X}^{\prime}_{ \eta})\). We will show that it must follow that \((Z_{\eta_{0}},\mathcal{X}_{\eta_{0}})\cong(Z^{\prime}_{\eta_{0}},\mathcal{X}^{ \prime}_{\eta_{0}})\). Similarly to the proof of Proposition 6.1.2, we assume that \(\mathcal{X}_{\eta}\) has base codimension zero. The other case is treated in Proposition 6.1.8.
We may assume that \(S\) is chosen so that the \(i\)-th irreducible component of \(Z\) is given in terms of its local coordinates \(x,y\) and \(z\) by
\[\{(c_{i,1}u^{e_{i,1}},c_{i,2}u^{e_{i,2}},c_{i,3}u^{e_{i,3}})\}, \tag{6.1.4}\]
and the \(i\)-th irreducible component of \(Z^{\prime}\) is given in terms of its local coordinates \(x\), \(y\) and \(z\) by
\[\{(d_{i,1}u^{f_{i,1}},d_{i,2}u^{f_{i,2}},d_{i,3}u^{f_{i,3}})\}. \tag{6.1.5}\]
Since the equivalences of the stack fix \(x\), \(y\) and \(z\) and we know that \((Z_{\eta},\mathcal{X}_{\eta})\cong(Z^{\prime}_{\eta},\mathcal{X}^{\prime}_{ \eta})\), it follows that \(Z\) and \(Z^{\prime}\) have the same number of irreducible components. Moreover, if these components are labelled in a compatible way, then \(c_{i,j}=d_{i,j}\) and \(e_{i,j}=f_{i,j}\) for all \(i\) and \(j\). By flatness, each component \(Z_{i}\) and \(Z^{\prime}_{i}\) must satisfy the equations
\[x =c_{i,1}u^{e_{i,1}}, \tag{6.1.6}\] \[y =c_{i,2}u^{e_{i,2}},\] (6.1.7) \[z =c_{i,3}u^{e_{i,3}}, \tag{6.1.8}\]
also above the closed point. If more than one element of the set \(\{e_{i,1},e_{i,2},e_{i,3}\}\) is nonzero, then \(Z_{i}\) and \(Z^{\prime}_{i}\) are either not smoothly supported or supported in a component blown up along the vanishing loci of both sides of the above equations. The stability condition forces \(Z_{i}\) and \(Z^{\prime}_{i}\) to be smoothly supported, so the latter must be true. Moreover, since in our construction we have chosen to do our blow-ups along the vanishing of \(x\) and the vanishing of \(y\), this implies that \(Z_{i}\) and \(Z^{\prime}_{i}\) must be supported in a component blown up along the ideals \(\langle x,cu^{e_{i,1}}\rangle\) and \(\langle y,c^{\prime}u^{e_{i,2}}\rangle\) over the closed point \(\eta_{0}\), for some units \(c\) and \(c^{\prime}\) in \(R\).
Note that different values of \(c\) and \(c^{\prime}\) will cause the relevant points of the support of \(Z_{i}\) and \(Z^{\prime}_{i}\) to take on different values in the interior of the \(\mathbb{P}^{1}\) introduced by each blow-up. Since the \(\mathbb{G}_{m}\)-action imposed on the \(\mathbb{P}^{1}\) identifies all points within the interior of a \(\mathbb{P}^{1}\), this choice makes no difference.
Notice also that blowing up along \(\langle x,cu^{e_{i,1}}\rangle\) and blowing up along \(\langle yz,(cu^{e_{i,1}})^{-1}du^{k}\rangle\), where \(\mathfrak{X}(S)\) is defined by the equation \(xyz=du^{k}\), are the same. This allows us to obtain the equation (6.1.8).
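The equality of these two blow-up centres is a one-line computation on the defining equation; the following sketch (our own rewriting, not taken verbatim from the text) records it:

\[xyz=du^{k}\;\Longrightarrow\;\Bigl(x=cu^{e_{i,1}}\iff yz=\frac{du^{k}}{x}=(cu^{e_{i,1}})^{-1}du^{k}\Bigr),\]

so \(\langle x,cu^{e_{i,1}}\rangle\) and \(\langle yz,(cu^{e_{i,1}})^{-1}du^{k}\rangle\) cut out the same centre; combining the equations for \(x\) and \(y\) in this way is how the equation (6.1.8) for \(z\) is obtained.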
We have established that both \(Z_{i}\) and \(Z^{\prime}_{i}\) must be supported in the blown-up components described above for all \(i\) such that more than one element of the set \(\{e_{i,1},e_{i,2},e_{i,3}\}\) is nonzero. We also know that by the stability conditions the pairs \((Z_{\eta_{0}},\mathcal{X}_{\eta_{0}})\) and \((Z^{\prime}_{\eta_{0}},\mathcal{X}^{\prime}_{\eta_{0}})\) cannot have an expanded component containing no point of the support. Let \(\pi_{n}\colon C[n]\to\mathbb{A}^{n+1}\) denote the natural projection, as above. It follows that the morphism
\[\pi_{n}\circ\xi=(h_{1}u^{g_{1}},\dots,h_{n+1}u^{g_{n+1}})\colon S\to C[n]\to \mathbb{A}^{n+1} \tag{6.1.9}\]
defining the family \(\mathcal{X}=\xi^{*}X[n]\) is uniquely determined, up to the choices of units \(h_{i}\) in \(R\) and up to the standard embeddings. If the family \(\mathcal{X}^{\prime}\) is defined by a morphism as in (6.1.9) but with different nonzero \(g_{i}\) values, then either \(Z^{\prime}_{\eta_{0}}\) is not smoothly supported in \(\mathcal{X}^{\prime}_{\eta_{0}}\) or \(\mathcal{X}^{\prime}_{\eta_{0}}\) has an expanded component containing no point of the support of \(Z^{\prime}_{\eta_{0}}\). This shows uniqueness of limits.
**Existence and uniqueness of limits for special objects.** We need to establish some definitions before we prove the following auxiliary result on existence and uniqueness of
limits for special elements, i.e. when the fibre \(\mathcal{X}_{\eta}\) over the generic point of \(S\) is a modified special fibre itself.
Let \(S\coloneqq\operatorname{Spec}R\) for some discrete valuation ring \(R\), let \(\eta\) be its generic point and take \((Z_{\eta},\mathcal{X}_{\eta})\in\mathfrak{M}_{\mathrm{SWS}}^{m}(\eta)\) (or \(\mathfrak{M}_{\mathrm{LW}}^{m}(\eta)\)). Here \(\mathcal{X}_{\eta}\) is not necessarily pulled back from a point in \(C[n]\) with only invertible basis directions (i.e. \(\mathcal{X}_{\eta}\) may be a modified special fibre). We can consider the image \(\operatorname{trop}(Z_{\eta})\) under the tropicalisation map given in Section 2.2 as a collection of rays in \(\operatorname{trop}(X)\).
Again, here we are abusing notation slightly: the object we are considering is actually the image of \(\xi_{*}(Z_{\eta})\) under the tropicalisation map, where \(\xi\colon S\to C[n]\). Similarly, we will write \(\operatorname{trop}(\mathcal{X}_{\eta})\) to mean the tropicalisation of the pushforward along \(\xi\).
In order to construct an extension \((Z^{\prime}_{\eta^{\prime}_{0}},\mathcal{X}^{\prime}_{\eta^{\prime}_{0}})\) of \((Z_{\eta},\mathcal{X}_{\eta})\) such that \(Z^{\prime}_{\eta^{\prime}_{0}}\) is smoothly supported in \(\mathcal{X}^{\prime}_{\eta^{\prime}_{0}}\), each ray making up \(\operatorname{trop}(Z_{\eta})\) (or, equivalently, the corresponding vertex in \(\operatorname{trop}(X_{0})\)) must correspond to a nonempty bubble in \(\mathcal{X}^{\prime}_{\eta^{\prime}_{0}}\). This effectively determines all nonempty bubbles which must exist in \((Z^{\prime}_{\eta^{\prime}_{0}},\mathcal{X}^{\prime}_{\eta^{\prime}_{0}})\), but in order for these rays to appear as part of a polyhedral subdivision of \(\operatorname{trop}(X)\), we might need to add more rays (or vertices in \(\operatorname{trop}(X_{0})\)) corresponding to the empty bubbles in the pair \((Z^{\prime}_{\eta^{\prime}_{0}},\mathcal{X}^{\prime}_{\eta^{\prime}_{0}})\).
**Definition 6.1.5**.: In the notation of the above paragraph, we will call \((Z^{\prime}_{\eta^{\prime}_{0}},\mathcal{X}^{\prime}_{\eta^{\prime}_{0}})\) an _associated pair_ for a collection of rays in \(\operatorname{trop}(X)\) (or vertices in \(\operatorname{trop}(X_{0})\)) if these rays (or vertices) correspond exactly to the non-empty bubbles in \((Z^{\prime}_{\eta^{\prime}_{0}},\mathcal{X}^{\prime}_{\eta^{\prime}_{0}})\) in the manner described above.
For \(I\subset[n]\), we denote by \(X[n]_{I}\) the fibres where \(t_{i}\) vanishes for all \(i\in I\). Now we define the necessary condition for compatibility of limits in the stacks of stable objects.
**Definition 6.1.6**.: Let \((Z_{\eta},\mathcal{X}_{\eta})\in\mathfrak{M}_{\mathrm{SWS}}^{m}(\eta)\) (or \(\mathfrak{M}_{\mathrm{LW}}^{m}(\eta)\)) be any pair over the generic point of some \(S\coloneqq\operatorname{Spec}R\), for some discrete valuation ring \(R\) as before. Moreover, let \(\mathcal{X}_{\eta}\) be the generic fibre of \(\mathcal{X}\coloneqq\xi^{*}X[n]_{I}\) for some nonempty set \(I\), i.e. \(\mathcal{X}_{\eta}\) is pulled back from some modified special fibre. If, for any associated pair \((Z^{\prime}_{\eta^{\prime}_{0}},\mathcal{X}^{\prime}_{\eta^{\prime}_{0}})\) of \(\operatorname{trop}(Z_{\eta})\) in \(\mathfrak{M}_{\mathrm{SWS}}^{m}\) (or \(\mathfrak{M}_{\mathrm{LW}}^{m}\)), the tropicalisation \(\operatorname{trop}(\mathcal{X}^{\prime}_{\eta^{\prime}_{0}})\) is a subdivision of \(\operatorname{trop}(\mathcal{X}_{\eta})\), then we say that \(\mathfrak{M}_{\mathrm{SWS}}^{m}\) (or \(\mathfrak{M}_{\mathrm{LW}}^{m}\)) is _tropically compatible_.
**Lemma 6.1.7**.: _Let \((Z_{\eta},\mathcal{X}_{\eta})\in\mathfrak{M}_{\mathrm{SWS}}^{m}(\eta)\) (or \(\mathfrak{M}_{\mathrm{LW}}^{m}(\eta)\)) be as above. Then \(\operatorname{trop}(Z_{\eta})\) has a unique associated pair which is SWS (or LW) stable._
Proof.: This is clear from the construction of \(X[n]\). Given any configuration of vertices in \(\operatorname{trop}(X_{0})\), we have allowed, by our restrictive choice of blow-ups in the construction of \(\mathfrak{X}\), exactly one way of adding edges to the triangle \(\operatorname{trop}(X_{0})\) such that each of these vertices lands on the intersection of at least two edges and such that the corresponding extension of \((Z_{\eta},\mathcal{X}_{\eta})\) is stable.
Let \(S\coloneqq\operatorname{Spec}R\to C\), where \(R\) is a discrete valuation ring and \(\eta\) is the generic point of \(S\). Let \((Z_{\eta},\mathcal{X}_{\eta})\in\mathfrak{M}_{\mathrm{SWS}}^{m}(\eta)\) (or \(\mathfrak{M}_{\mathrm{LW}}^{m}(\eta)\)). We have shown in the proofs of Propositions 6.1.4 and 6.1.2 that if \(\mathcal{X}_{\eta}\) is pulled back from a fibre in some \(X[n]\) over a point \((t_{1},\dots,t_{n+1})\) whose entries are all nonzero, then \((Z_{\eta},\mathcal{X}_{\eta})\) has a stable extension in \(\mathfrak{M}_{\mathrm{SWS}}^{m}\) (or \(\mathfrak{M}_{\mathrm{LW}}^{m}\)). We now prove the following statement, to complete the proofs that \(\mathfrak{M}_{\mathrm{SWS}}^{m}\) and \(\mathfrak{M}_{\mathrm{LW}}^{m}\) are universally closed and separated.
**Proposition 6.1.8**.: _In the notation of the above paragraph, let \((Z_{\eta},{\cal X}_{\eta})\in\mathfrak{M}^{m}_{\rm SWS}(\eta)\) (or \(\mathfrak{M}^{m}_{\rm LW}(\eta)\)) and assume that \({\cal X}_{\eta}\) is the pullback of \(X[n]_{I}\) over the generic point along some morphism \(\xi\colon S\to C[n]_{I}\) and some nonempty set \(I\). The fibre \({\cal X}_{\eta}\) is therefore a modified special fibre. Then there exists an SWS (or LW) stable extension of \((Z_{\eta},{\cal X}_{\eta})\)._
Proof.: We split the proof into the following two cases. The first case is where a point \(P\) of the support of \(Z_{\eta}\) has one or more of its local coordinates \(x,y\) or \(z\) tending to zero. The second case is where a point \(P\) of the support of \(Z_{\eta}\) has fixed \(x,y\) and \(z\) values but one or more of its \((x_{0}^{(i)}:x_{1}^{(i)})\) or \((y_{0}^{(i)}:y_{1}^{(i)})\) coordinates tends towards \((1:0)\) or \((0:1)\).
We start by proving existence and uniqueness of limits in the first case using the valuative criterion. Let \(V\) denote the irreducible component of \({\cal X}_{\eta}\) in the interior of which \(P\) lies. Notice that since \(P\) tends towards a stratum of \({\cal X}\) of codimension at least one, in order for its limit to be smoothly supported in an extension of \((Z_{\eta},{\cal X}_{\eta})\) it will be necessary to expand out at least one \(\Delta\)-component in this extension. There exists a smoothing from the interior of \(V\) in the fibre over the generic point to the interior of this expanded \(\Delta\)-component in such an extension of \((Z_{\eta},{\cal X}_{\eta})\) if and only if this \(\Delta\)-component is equal to \(V\) in the fibre over the generic point. Moreover, if there is no such \(\Delta\)-component equal to \(V\), then none of the \(x,y\) or \(z\) coordinates can tend towards zero (because both sides of the defining equation must tend towards zero).
By Lemma 6.1.7, we know that there exists a unique SWS (or LW) stable associated pair \((Z^{\prime}_{\eta_{0}^{\prime}},{\cal X}^{\prime}_{\eta_{0}^{\prime}})\) for the image \({\rm trop}(Z_{\eta})\) of \(Z_{\eta}\) in \({\rm trop}(X)\). Precisely, this means that there exists some base change \(S^{\prime}\to S\) and some SWS (or LW) stable pair \((Z^{\prime}_{\eta_{0}^{\prime}},{\cal X}^{\prime}_{\eta_{0}^{\prime}})\) over the closed point \(\eta_{0}^{\prime}\) of \(S^{\prime}\) such that the non-empty bubbles of \({\cal X}^{\prime}_{\eta_{0}^{\prime}}\) correspond exactly to the rays in \({\rm trop}(Z_{\eta})\).
We must now show that the equivalences \({\cal X}^{\prime}_{\eta^{\prime}}\cong{\cal X}_{\eta}\times_{\eta}\eta^{\prime}\) and \(Z^{\prime}_{\eta^{\prime}}\cong Z_{\eta}\times_{\eta}\eta^{\prime}\) hold. But this follows from the fact that \(\mathfrak{M}^{m}_{\rm SWS}\) and \(\mathfrak{M}^{m}_{\rm LW}\) are tropically compatible by construction. Indeed, any modified special fibre in \({\mathfrak{X}}\) can be obtained from any modified fibre of lower base codimension by a sequence of blow-ups. Every vertex in \({\rm trop}({\cal X}_{\eta})\) is therefore a vertex in \({\rm trop}({\cal X}^{\prime}_{\eta_{0}^{\prime}})\), i.e. \({\cal X}^{\prime}_{\eta_{0}^{\prime}}\) can be seen as a blow-up of \({\cal X}_{\eta}\). This tells us that \((Z^{\prime}_{\eta_{0}^{\prime}},{\cal X}^{\prime}_{\eta_{0}^{\prime}})\) can be seen as the restriction to the closed point \(\eta_{0}^{\prime}\) of an \(S^{\prime}\)-flat family \((Z^{\prime},{\cal X}^{\prime})\in\mathfrak{M}^{m}_{\rm SWS}(S^{\prime})\) (or \(\mathfrak{M}^{m}_{\rm LW}(S^{\prime})\)) such that \({\cal X}^{\prime}_{\eta^{\prime}}\cong{\cal X}_{\eta}\times_{\eta}\eta^{\prime}\) and \(Z^{\prime}_{\eta^{\prime}}\cong Z_{\eta}\times_{\eta}\eta^{\prime}\).
Now, let us discuss the second case. Denote again by \(V\) the irreducible component of \({\cal X}_{\eta}\) in the interior of which \(P\) lies. Firstly, let us assume that any other point of the support of \(Z_{\eta}\) lying in the interior of \(V\) shares the same equations as \(P\) up to multiplication by a constant. In particular, for all these points the same coordinates \((x_{0}^{(i)}:x_{1}^{(i)})\) or \((y_{0}^{(i)}:y_{1}^{(i)})\) will be fixed and the same will vary (recall that in this case we are assuming already that all \(x,y\) and \(z\) coordinates are fixed). By flatness, it therefore follows that all points of the support of \(Z_{\eta}\) which lie in the interior of \(V\) will tend to the interior of the same irreducible component in any extension of \((Z_{\eta},{\cal X}_{\eta})\). Any extension in which an additional \(\Delta\)-component is expanded out, i.e. in which an additional basis direction is set to zero, cannot be stable, since it would necessarily have an empty expanded component which would destabilise the pair.
Notice that any \((x_{0}^{(i)}:x_{1}^{(i)})\) or \((y_{0}^{(i)}:y_{1}^{(i)})\) which are fixed for one of these points contained in the interior of \(V\subset{\cal X}_{\eta}\) must be fixed for all of them. We may therefore
choose a representative \((Z^{\prime}_{\eta},\mathcal{X}^{\prime}_{\eta})\) in the same equivalence class as \((Z_{\eta},\mathcal{X}_{\eta})\) such that the coordinates \(x_{0}^{(i)}/x_{1}^{(i)}\), \(x_{1}^{(i)}/x_{0}^{(i)}\), \(y_{0}^{(i)}/y_{1}^{(i)}\) or \(y_{1}^{(i)}/y_{0}^{(i)}\) which are allowed to vary in \(Z\) are invertible only outside of \(V\subset\mathcal{X}^{\prime}_{\eta}\). Then the limit of \((Z^{\prime}_{\eta},\mathcal{X}^{\prime}_{\eta})\) is \((Z^{\prime}_{\eta},\mathcal{X}^{\prime}_{\eta})\) itself. And by the above this equivalence class gives us the only stable limit for such a family.
Now, if two points \(P_{0}\) and \(P_{1}\) of the support of \(Z_{\eta}\) which lie in \(V\subset\mathcal{X}_{\eta}\) have different defining equations up to multiplication by a constant, it means that there are some coordinates \((x_{0}^{(i)}:x_{1}^{(i)})\) or \((y_{0}^{(i)}:y_{1}^{(i)})\) which are fixed for \(P_{0}\) but not \(P_{1}\) and vice versa. This means that, as one or the other of these coordinates tends toward \((1:0)\) or \((0:1)\), the points \(P_{0}\) and \(P_{1}\) must tend towards different irreducible components. Similarly to the first case, by Lemma 6.1.7, we know that there exists a unique associated pair \((Z^{\prime}_{\eta^{\prime}_{0}},\mathcal{X}^{\prime}_{\eta^{\prime}_{0}})\) for \(\operatorname{trop}(Z_{\eta})\) and by the tropical compatibility condition shown above, we know that every vertex in \(\operatorname{trop}(\mathcal{X}_{\eta})\) is a vertex in \(\operatorname{trop}(\mathcal{X}^{\prime}_{\eta^{\prime}_{0}})\). As before this shows that \((Z^{\prime}_{\eta^{\prime}_{0}},\mathcal{X}^{\prime}_{\eta^{\prime}_{0}})\) is the unique SWS (or LW) stable extension of \((Z_{\eta},\mathcal{X}_{\eta})\).
If \(V\subset\mathcal{X}_{\eta}\) contains more than two points of the support which have different equations up to multiplication by a constant, we can just repeat the steps of the previous paragraph until we find a stable extension. It will be unique as it will be the unique pair associated to \(\operatorname{trop}(Z_{\eta})\).
**Deligne-Mumford property.** Finally, we show that both stacks of stable objects constructed have finite automorphisms.
**Proposition 6.1.9**.: _The stacks \(\mathfrak{M}^{m}_{\mathrm{LW}}\) and \(\mathfrak{M}^{m}_{\mathrm{SWS}}\) have finite automorphisms._
Proof.: On the stack \(\mathfrak{M}^{m}_{\mathrm{LW}}\) this is immediate from the definition of LW stability. Since the SWS stable locus is a subset of the LW stable locus, it follows that \(\mathfrak{M}^{m}_{\mathrm{SWS}}\) must also have finite automorphisms. Alternatively, one can recall that a GIT stable point must have finite stabiliser with respect to the relevant \(G\)-action.
Note that any equivalence on \(\mathfrak{X}\) lifted from an isomorphism \(\widetilde{\tau_{I,I^{\prime}}}\) does not fix any object unless \(\widetilde{\tau_{I,I^{\prime}}}\) is the identity map. This is clear from the fact that \(\widetilde{\tau_{I,I^{\prime}}}\) acts on a tuple in \(\mathbb{A}^{n+1}\) by changing the position of its zero entries while preserving the relative order of its nonzero entries. The only way to fix a tuple is to leave its zero entries in their original position, but any map \(\widetilde{\tau_{I,I^{\prime}}}\) which does this is just the identity map.
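As a concrete illustration (the tuple here is hypothetical, chosen only for the example): for \(n+1=4\), such a map can act as

\[\widetilde{\tau_{I,I^{\prime}}}\colon(t_{1},0,t_{3},0)\longmapsto(0,t_{1},0,t_{3}),\]

moving the zero entries from positions \(\{2,4\}\) to \(\{1,3\}\) while the nonzero entries \(t_{1},t_{3}\) keep their relative order; the only such map fixing \((t_{1},0,t_{3},0)\) is the one keeping the zeros in positions \(\{2,4\}\), i.e. the identity.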
**Corollary 6.1.10**.: _The stacks \(\mathfrak{M}^{m}_{\mathrm{LW}}\) and \(\mathfrak{M}^{m}_{\mathrm{SWS}}\) are Deligne-Mumford and proper._
Proof.: This follows directly from the results of this section.
### An isomorphism of stacks
We shall now show that the stacks \(\mathfrak{M}^{m}_{\mathrm{SWS}}\) and \(\mathfrak{M}^{m}_{\mathrm{LW}}\) are isomorphic. The following lemma is a standard result, quoted from [10].
**Lemma 6.2.1**.: _Let \(\mathfrak{W}\) and \(\mathfrak{Y}\) be Deligne-Mumford stacks of finite type over an algebraically closed field \(k\), and_
\[f\colon\mathfrak{W}\to\mathfrak{Y}\]
_be a representable etale morphism of finite type. Let \(|\mathfrak{W}(k)|\) denote the set of equivalence classes of objects in \(\mathfrak{W}(k)\) and similarly for \(|\mathfrak{Y}(k)|\)._
_Assume that \(|f|\colon|\mathfrak{W}(k)|\to|\mathfrak{Y}(k)|\) is bijective and that for every \(x\in\mathfrak{W}(k)\), \(f\) induces an isomorphism \(\operatorname{Aut}_{\mathfrak{W}}(x)\to\operatorname{Aut}_{\mathfrak{Y}}(f(x)).\) Then \(f\) is an isomorphism of stacks._
We may construct such a map \(f\colon\mathfrak{M}_{\mathrm{SWS}}^{m}\to\mathfrak{M}_{\mathrm{LW}}^{m}\), which we will show to have the required properties, in the following way. First recall from above that we have an inclusion \(H_{[n],\mathrm{SWS}}^{m}\subset H_{[n],\mathrm{LW}}^{m}\). Therefore the natural morphism \(H_{[n],\mathrm{LW}}^{m}\to\mathfrak{M}_{\mathrm{LW}}^{m}\) restricts to give a morphism \(H_{[n],\mathrm{SWS}}^{m}\to\mathfrak{M}_{\mathrm{LW}}^{m}\). This morphism is equivariant under the group action so must factor through the morphism
\[f\colon\mathfrak{M}_{\mathrm{SWS}}^{m}\longrightarrow\mathfrak{M}_{\mathrm{ LW}}^{m}.\]
**Lemma 6.2.2**.: _The function \(|f|\colon|\mathfrak{M}_{\mathrm{SWS}}^{m}(k)|\to|\mathfrak{M}_{\mathrm{LW}}^{m }(k)|\) induced by \(f\) is a bijection._
Proof.: As we have an inclusion of the SWS stable locus into the LW stable locus, we know that this map must be injective. It remains to show that it is surjective. Let us take any point in \(|\mathfrak{M}_{\mathrm{LW}}^{m}(k)|\). This is given by the equivalence class of a pair \((Z_{k},\mathcal{X}_{k})\), where \(Z_{k}\) is a length \(m\) zero-dimensional subscheme in a fibre \(\mathcal{X}_{k}\) over the point \(\operatorname{Spec}k\). If the pair \((Z_{k},\mathcal{X}_{k})\) is already SWS stable, then there is nothing left to prove. Otherwise, \((Z_{k},\mathcal{X}_{k})\) is LW stable but not SWS stable. This implies that there is at least one point of the support in each expanded \(\Delta\)-component, but at least one \(\Delta\)-component which is not expanded out contains no point of the support. Let us say this \(\Delta\)-component is equal to \(Y_{i}\). But by the equivalences of the stack \(\mathfrak{X}\), such a fibre is equivalent to a fibre where \(Y_{i}\) is not equal to any \(\Delta\)-component. It is therefore equivalent to a fibre in which every \(\Delta\)-component contains at least one point of the support, which gives us an SWS stable fibre-subscheme pair.
We will need also the following result from Alper and Kresch [1].
**Lemma 6.2.3**.: _Let \(\mathfrak{W}\) be a Deligne-Mumford stack with finite inertia, let \(\mathfrak{Y}\) be an algebraic stack with separated diagonal and let \(f\colon\mathfrak{W}\to\mathfrak{Y}\) be a morphism. Then the largest open substack \(\mathfrak{U}\) of \(\mathfrak{W}\) on which the restriction of \(f\) is a representable morphism enjoys the following characterisation: the geometric points of \(\mathfrak{U}\) are precisely those at which \(f\) induces an injective homomorphism of stabiliser group schemes._
Now we are in a position to prove the following theorem:
**Theorem 6.2.4**.: _The map \(f\colon\mathfrak{M}_{\mathrm{SWS}}^{m}\to\mathfrak{M}_{\mathrm{LW}}^{m}\) is an isomorphism of stacks._
Proof.: This can be seen by applying Lemma 6.2.1 to the map \(f\). In order to do this we must show that this morphism is representable, with the help of Lemma 6.2.3. It follows directly from the fact that \(\mathfrak{M}_{\mathrm{SWS}}^{m}\) is a separated Deligne-Mumford stack that it has finite inertia. By Lemma 6.2.2, the first condition of Lemma 6.2.1 is satisfied. And the map \(f\) defined above must also induce a bijective homomorphism of stabilisers since the only elements which can stabilise a family \((Z,\mathcal{X})\) in \(\mathfrak{M}_{\mathrm{SWS}}^{m}\) or \(\mathfrak{M}_{\mathrm{LW}}^{m}\) are elements of \(\mathbb{G}_{m}^{n}\) (the other equivalences on \(\mathfrak{X}\) do not stabilise any families as explained in Proposition 6.1.9) and, by construction, if a family \((Z,\mathcal{X})\) in \(\mathfrak{M}_{\mathrm{SWS}}^{m}\) has stabiliser \(\operatorname{Stab}_{(Z,\mathcal{X})}\subset\mathbb{G}_{m}^{n}\) then \(f((Z,\mathcal{X}))\) must have the same stabiliser in \(\mathbb{G}_{m}^{n}\). Lemma 6.2.1 therefore holds and \(f\) is an isomorphism of stacks. |
2303.01125 | Distilling Multi-Level X-vector Knowledge for Small-footprint Speaker
Verification | Even though deep speaker models have demonstrated impressive accuracy in
speaker verification tasks, this often comes at the expense of increased model
size and computation time, presenting challenges for deployment in
resource-constrained environments. Our research focuses on addressing this
limitation through the development of small footprint deep speaker embedding
extraction using knowledge distillation. While previous work in this domain has
concentrated on speaker embedding extraction at the utterance level, our
approach involves amalgamating embeddings from different levels of the x-vector
model (teacher network) to train a compact student network. The results
highlight the significance of frame-level information, with the student models
exhibiting a remarkable size reduction of 85%-91% compared to their teacher
counterparts, depending on the size of the teacher embeddings. Notably, by
concatenating teacher embeddings, we achieve student networks that maintain
comparable performance to the teacher while enjoying a substantial 75%
reduction in model size. These findings and insights extend to other x-vector
variants, underscoring the broad applicability of our approach. | Xuechen Liu, Md Sahidullah, Tomi Kinnunen | 2023-03-02T10:09:11Z | http://arxiv.org/abs/2303.01125v3 | # Distilling Multi-Level X-vector Knowledge for Small-footprint Speaker Verification
###### Abstract
Deep speaker models yield low error rates in speaker verification. Nonetheless, the high performance often comes at the cost of increased model size and computation time, making these models challenging to run under resource-constrained conditions. We focus on small-footprint deep speaker embedding extraction, leveraging knowledge distillation. While prior work on this topic has addressed speaker embedding extraction at the utterance level, we propose to combine embeddings from various levels of the x-vector model (teacher network) to train small-footprint student networks. Results indicate the usefulness of frame-level information, with the student models being 85%-91% smaller than their teacher, depending on the size of the teacher embeddings. Concatenation of teacher embeddings results in student networks that reach performance comparable to the teacher while achieving a 75% relative size reduction. The findings further extend to other x-vector variants.
Xuechen Liu\({}^{1,2}\), Md Sahidullah\({}^{2}\), Tomi Kinnunen\({}^{1}\)\({}^{1}\)School of Computing, University of Eastern Finland, Joensuu, Finland
\({}^{2}\)Universite de Lorraine, CNRS, Inria, LORIA, F-54000, Nancy, France [email protected], [email protected], [email protected]
Footnote †: This work was partially supported by Inria Nancy Grand Est.
## 1 Introduction
_Automatic speaker verification_ (ASV) [1] aims at recognizing persons using their voice. Advancing upon classic statistical models such as _i-vectors_ [2], _deep neural networks_ (DNNs) have emerged as the dominant choice of speaker embedding extractor, improving ASV performance substantially [1]. Nowadays, they are the most common choice for learning speaker embedding representations.
Nevertheless, with the proliferation of speech processing algorithms on embedded devices with constrained computational resources, such as smart assistants [3, 4], DNNs face various challenges in practical deployment, including run-time efficiency, power consumption, and memory usage due to their large number of parameters [5]. This is a major issue when the embedded device has limited memory space or needs to operate with weak or no online access, ruling out cloud-based services. Therefore, lightweight models are required, which often come with a trade-off in recognition accuracy. Reducing the performance gap between small and large DNN models is an important, yet challenging task in speech processing [6, 7, 8, 9].
Solutions to this problem can be broadly divided into two categories. The first one, **model down-scaling**, includes training more compact networks [10, 11] or quantizing the model [12]. For instance, [13] proposed parameter binarization for ASV models and [14] reduced the size of self-supervised speech models via parameter sharing. While achieving promising performance in particular tasks, these approaches may demand extensive engineering effort and subtle learning schemes such as model quantization and parameter tuning. The second category, **knowledge distillation (KD)** - also known as _teacher-student_ (TS) learning [15, 16, 17, 5] - transfers the knowledge from a _teacher_ network (usually a pre-trained DNN) to a new, _student_ network. Compared to the first category, KD allows task-specific design of the student network and of the transferred knowledge [9, 18], and thus may require less engineering effort and hyperparameter tuning, especially in cases where pre-trained models are available (e.g. [19]1, [20]2). Nevertheless, producing effective knowledge for student learning is challenging [16], especially when the student network is expected to be significantly smaller than the teacher. While demonstrating reasonably good performance relative to the teacher models on both in-house and public benchmarks, earlier works on KD do not produce very small-footprint models for devices with severe hardware limits.
Footnote 1: [https://huggingface.co/speechbrain/spkrec-ecapa-voxceleb](https://huggingface.co/speechbrain/spkrec-ecapa-voxceleb)
Footnote 2: [https://github.com/yuyq96/D-TDNN](https://github.com/yuyq96/D-TDNN)
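The teacher-student transfer described above can be illustrated with a minimal, framework-agnostic sketch. This is not the paper's exact objective; it is a generic embedding-level distillation loss (cosine distance between student and teacher embeddings), with all dimensions illustrative:

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def distillation_loss(student_emb, teacher_emb):
    # One common embedding-level KD objective: push the student's
    # embedding towards the teacher's by minimising cosine distance.
    return 1.0 - cosine_similarity(student_emb, teacher_emb)

rng = np.random.default_rng(0)
teacher = rng.standard_normal(512)   # e.g. a 512-dim teacher x-vector
student = rng.standard_normal(512)   # student embedding of the same utterance

print(distillation_loss(student, teacher))   # large at the start of training
print(distillation_loss(teacher, teacher))   # ~0 when they coincide
```

In a real system this loss would be minimised over the student's parameters by backpropagation; the sketch only shows the quantity being optimised.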
In this work, we focus on developing on-device-level small-footprint ASV models via KD. When DNNs were initially used for speaker embedding extraction, a question was how the embedding should be extracted [21]. Models such as the x-vector extractor with a _time-delayed neural network_ (TDNN) use utterance-level (referred to as 'segment-level' in [22]) speaker embeddings, which contain more abstract, higher-level information. Meanwhile, the frame-level information at earlier layers can still be useful, and its successful application as bottleneck features has been studied [23, 24]. The earlier frame-level layers were shown in [25] to hold informative speaker information, which is important for very small-footprint student models intended for on-device usage.
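The frame-level/utterance-level distinction can be made concrete with the statistics-pooling step used in x-vector systems, sketched here in NumPy (frame count and feature dimension are illustrative):

```python
import numpy as np

def statistics_pooling(frame_features):
    # frame_features: (T, D) matrix of frame-level activations.
    # x-vector systems aggregate the T frames into a single utterance-level
    # vector by concatenating the per-dimension mean and standard deviation.
    mean = frame_features.mean(axis=0)
    std = frame_features.std(axis=0)
    return np.concatenate([mean, std])   # shape (2 * D,)

rng = np.random.default_rng(0)
frames = rng.standard_normal((300, 512))   # 300 frames, 512-dim features
utt_vec = statistics_pooling(frames)
print(utt_vec.shape)   # (1024,)
```

Layers before this pooling step operate per frame; layers after it operate per utterance, which is where the conventional embedding is taken.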
While the use of multiple layers of speaker information [26, 27] and KD [17, 18] has been addressed in prior work, we are unaware of prior studies where they are combined for small-footprint ASV. Our study therefore seeks to address the feasibility of this approach. We build on a simple KD learning framework based on learning target distributions, as described in [18, 28], with a designated similarity function. We report the performance in both separate and fused manners on a student network, depending on the dimension of the speaker embeddings. By reducing the model size remarkably from the teacher while maintaining comparable performance, we study the usefulness of various levels of information of the teacher network for the training of a small-footprint network via KD. Additionally, we investigate the effect of embedding-level fusion and extend the proposed methods to more advanced x
2301.01748 | Cost-Sensitive Stacking: an Empirical Evaluation | Many real-world classification problems are cost-sensitive in nature, such
that the misclassification costs vary between data instances. Cost-sensitive
learning adapts classification algorithms to account for differences in
misclassification costs. Stacking is an ensemble method that uses predictions
from several classifiers as the training data for another classifier, which in
turn makes the final classification decision.
While a large body of empirical work exists where stacking is applied in
various domains, very few of these works take the misclassification costs into
account. In fact, there is no consensus in the literature as to what
cost-sensitive stacking is. In this paper we perform extensive experiments with
the aim of establishing what the appropriate setup for a cost-sensitive
stacking ensemble is. Our experiments, conducted on twelve datasets from a
number of application domains, using real, instance-dependent misclassification
costs, show that for best performance, both levels of stacking require
cost-sensitive classification decision. | Natalie Lawrance, Marie-Anne Guerry, George Petrides | 2023-01-04T18:28:07Z | http://arxiv.org/abs/2301.01748v1 | # Cost-Sensitive Stacking: an Empirical Evaluation
###### Abstract
Many real-world classification problems are cost-sensitive in nature, such that the misclassification costs vary between data instances. Cost-sensitive learning adapts classification algorithms to account for differences in misclassification costs. Stacking is an ensemble method that uses predictions from several classifiers as the training data for another classifier, which in turn makes the final classification decision.
While a large body of empirical work exists where stacking is applied in various domains, very few of these works take the misclassification costs into account. In fact, there is no consensus in the literature as to what cost-sensitive stacking is. In this paper we perform extensive experiments with the aim of establishing what the appropriate setup for a cost-sensitive stacking ensemble is. Our experiments, conducted on twelve datasets from a number of application domains, using real, instance-dependent misclassification costs, show that for best performance, both levels of stacking require cost-sensitive classification decision.
Keywords: Cost-sensitive learning, classification, ensemble learning, stacked generalization, stacking, blending
## 1 Introduction
Cost-sensitive learning is relevant in many real-world classification problems, where different misclassification errors incur different costs. A prominent example is the field of medicine, where misdiagnosing an ill patient for a healthy one (a false negative) entails delayed treatment and potentially life-threatening consequences, while an error in the opposite direction (a false positive) would incur unnecessary medical examination costs and stress for the patient. Cost-sensitive classifiers can account for the differences in costs not only between different classes, but also between data instances, making instance-dependent cost-sensitive classification decisions.
Many cost-sensitive classifiers employ ensemble methods, which combine predictions from several classifiers to obtain better generalisation performance. Superiority of ensembles over individual classifiers is very well known and has been extensively studied ([8, 37]). Most cost-sensitive classification ensembles are homogeneous in nature, meaning their components are instantiated using the same learning algorithm.
_Stacked generalization_ or _stacking_ [31] is a well known and widely applied heterogeneous ensemble, where the predictions of classifiers produced by different learning algorithms (the _base-learners_) are used as training inputs to another learning algorithm (the _meta-learner_) to produce a _meta classifier_, which makes the final classification decision. In the literature, the _base-_ and _meta-_ levels of stacking are also referred to as _level-0_ and _level-1_.
Homogeneous cost-sensitive ensembles such as cost-sensitive boosting and bagging are widely studied and have been shown very successful [25]. Examples of cost-sensitive stacking, on the other hand, are scarce and unsystematic, representing for the most part applications to single domains, where the classifiers are trained on synthetic, class-dependent costs and are evaluated with cost-insensitive performance metrics. For a discussion on the importance of real costs for a proper evaluation see the work by [25]. In fact, there is currently no consensus as to how a cost-sensitive stacking ensemble is to be composed and at what stage (level-0 or level-1) cost-sensitive decision-making should be used. This can be clearly seen in Table 1, which gives an overview of existing cost-sensitive stacking literature. Stacking is typically made cost-sensitive simply through the application of a cost-sensitive classifier either at level-0 (CS-CiS), level-1 (CiS-CS) or at both levels of
the ensemble (CS-CS), resulting in a total of three possible stacking setups. To the best of our knowledge, no comparison of all three setups on multiple domains with appropriate evaluation exists in the literature. Previous related work used arbitrary artificial costs in model training and evaluated cost-sensitive models using performance metrics that are either cost-invariant or that focus on the performance of only the positive class.
In this work we aim to fill this gap by providing a thorough comparison of various cost-sensitive stacking ensembles on multiple domains using real, instance-dependent costs and performance metrics appropriate for cost-sensitive problems.
### Our contributions
* The main contribution of this work is a rigorous empirical comparison of different setups of cost-sensitive stacking ensembles over multiple domains. We evaluate using appropriate performance metrics and attempt to establish best practice.
* Secondly, we introduce a novel cost-sensitive classifier combination method, inspired by MEC-voting and stacking, which we call _MEC-weighted-stacking_.
* Finally, we present a list of publicly available datasets with clearly defined instance-dependent misclassification costs. The costs are based either on the literature, or are defined by us based on both the literature and expert knowledge of the data providers. We also define instance-dependent costs for a well known 'credit-g' dataset from the UCI Machine learning repository, for which only class-dependent costs were available to date.
### Outline
The remainder of the paper is structured as follows. Section 2 presents an overview of the relevant literature. MEC-weighted stacking is introduced in Section 3. Our hypotheses to be tested, the experimental setup and the datasets used in the study are discussed in Section 4. Section 5 details the results of our extensive experiments, while the main outcomes and limitations are discussed in Section 6. Section 7 concludes the paper.
## 2 Related work
While stacking has been widely used in machine learning applications (the interested reader is invited to peruse the survey on stacking literature by [27]), few works are dedicated to the study of cost-sensitive stacking.
We identified in the literature three different cost-sensitive stacking setups: CiS-CS, CS-CiS or CS-CS, where the ensemble was made cost-sensitive simply through the application of a cost-sensitive classifier either at level-0, level-1 or at both levels of the ensemble. In most cases, the method used to make the classification cost-sensitive is the direct cost-sensitive decision as introduced by [35], also called DMECC [25].
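Direct cost-sensitive decision-making of this kind can be sketched as follows; a hypothetical NumPy illustration of the minimum-expected-cost rule, assuming calibrated posterior estimates `p_pos` and per-instance cost vectors (all values invented for illustration):

```python
import numpy as np

def min_expected_cost_decision(p_pos, c_fp, c_fn):
    # For each instance, predict the class with the lower expected cost:
    #   expected cost of predicting positive: (1 - p) * C_FP
    #   expected cost of predicting negative:      p  * C_FN
    return (p_pos * c_fn > (1.0 - p_pos) * c_fp).astype(int)

p = np.array([0.10, 0.10, 0.60])      # posterior P(y=1 | x)
c_fp = np.array([5.0, 5.0, 50.0])     # instance-dependent FP costs
c_fn = np.array([100.0, 10.0, 50.0])  # instance-dependent FN costs
print(min_expected_cost_decision(p, c_fp, c_fn))  # [1 0 1]
```

Note how the first two instances share the same posterior but receive different decisions, purely because their misclassification costs differ; this is what makes the decision instance-dependent.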
One of the first papers to discuss stacking in a cost-sensitive context was [6]. The authors propose a cost-insensitive level-0 and cost-sensitive level-1 stacking setup (_CiS-CS setup_), which was compared to a number of different classifier combination schemes on 16 classification problems. The misclassification costs they used were artificially generated by randomly and uniformly sampling costs on the interval \([1,10]\). Several other studies followed adopting the same CiS-CS stacking setup; however, none of the studies explicitly reasoned or justified this choice.
Several more papers demonstrated similar examples of multiple-domain studies of CiS-CS stacking with arbitrary costs ([7, 33, 34]). These mainly differ in the type and the number of algorithms that are employed in the ensemble. We note that all of them used cost-insensitive metrics for classifier evaluation.
[19] considers a stacking setup, where level-0 classifiers were cost-sensitive while level-1 was cost-insensitive (_CS-CiS setup_). The misclassification costs were assumed to be equal to the inverse of the class priors. This approach is very commonly adopted in the absence of information about real misclassification costs. It is, however, not appropriate, see [25] for a discussion. The resulting stacking classifier was compared to known ensemble methods using classification accuracy, a metric that by design assumes equal misclassification costs.
Most examples of stacking use different learning algorithms in level-0, however in his original work Wolpert suggested that this must not be the case and the technique can also be applied when a single algorithm is considered. [5] propose a cost-sensitive variant of _bag-stacking_, a method originally proposed by [29], using bagged cost-sensitive decision trees in level-0 and using cost-sensitive logistic regression in level-1, thus implicitly proposing a _CS-CS_ stacking setup. To the best
of our knowledge, this study is the only example where real instance-dependent costs were used in model training. Models were evaluated using a cost-sensitive metric called the savings score, proposed in [2].
The only study to date that considers all three different cost-sensitive stacking setups is one by [13] on the application domain of software defect prediction. The misclassification costs were selected based on the literature; however, the authors emphasised that they treated costs as one of the hyperparameters of the classifier, which, we must note, is incorrect, as was previously discussed in [15]. The experiments are run on 15 datasets using the same class-dependent cost matrix on all. Balanced error-based metrics were used for evaluation together with cost-based evaluation metrics.
Identifying real misclassification costs is a complex task, which for many applications may prove too difficult to define and compute. Most studies resort to artificially generated misclassification costs (see [25] for a discussion on why this is inappropriate) and error-based evaluation metrics are typically employed to assess generalisation performance of cost-sensitive stacking. Examples of metrics used include the AUC, the arithmetic or geometric mean of class-specific accuracies, the F-measure, and the Matthew's correlation coefficient (MCC). All of these metrics assume equal misclassification costs, and the F-measure does not incorporate the performance on the negative class, so using these metrics is not compatible with cost-sensitive learning [17].
One of the challenges of stacking is the choice of the learning algorithms for the ensemble. Earlier studies proposed to use linear regression to combine level-0 inputs [30]; however, Wolpert does not impose any particular restrictions on which algorithm to use in level-1, and he believed that his famous 'No Free Lunch Theorem' [32] applies to the meta-learner as well. For an overview of which learning algorithms were used in cost-sensitive stacking ensembles to date we refer our reader to the summary Table 1.
## 3 MEC-weighted stacked generalization
In the typical supervised classification framework, a _learning algorithm_\(A\) is presented with a set \(\mathcal{S}\) of data instances \((\mathbf{x}_{i},y_{i})\), each describing some object \(i\). We call \(\mathbf{x}_{i}\) a feature vector, and \(y_{i}\) the class label of that object, drawn from a finite, discrete set of classes \(\{1,\ldots,K\}\). In this paper we will consider the binary classification problem, where \(y_{i}\in\{0,1\}\). The learning algorithm \(A\), given \(\mathcal{S}\) as input, after a process called training, produces a classifier \(C\), whose task is to predict the correct class label \(\hat{y}_{C}(\mathbf{x}_{j})\in\{0,1\}\) for a previously unseen feature vector \(\mathbf{x}_{j}\).
Training any number \(L\) of learning algorithms on the same set of data instances \(\mathcal{S}\), we obtain a set of classifiers \(\mathcal{C}=\{C_{1}\ldots C_{L}\}\), and for each feature vector \(\mathbf{x}_{i}\) the corresponding set of predictions \(\hat{\mathcal{P}}(\mathbf{x}_{i})=\{\hat{y}_{C_{1}}(\mathbf{x}_{i}),\ldots, \hat{y}_{C_{L}}(\mathbf{x}_{i})\}\). \(C\) is called _an ensemble of classifiers_ if the predictions from \(\hat{\mathcal{P}}(\mathbf{x}_{i})\) are combined, in some way, into a single prediction of the class label for the data instance \(\mathbf{x}_{i}\).
Stacking differs from other classifier ensembles in that the predictions from the set \(\hat{\mathcal{P}}(\mathbf{x}_{i})\) are combined with the original class label \(y_{i}\) to form the set \(\mathcal{S}_{meta}=\{(\hat{y}_{C_{1}}(\mathbf{x}_{i}),\ldots,\hat{y}_{C_{L}} (\mathbf{x}_{i})),y_{i}\}\) of meta level data instances subsequently used in another round of algorithm training to produce a new classifier, which is used to obtain the final predictions.
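The construction of \(\mathcal{S}_{meta}\) above amounts to stacking the base predictions column-wise and pairing each row with the original label; a minimal NumPy sketch with invented predictions:

```python
import numpy as np

def build_meta_set(base_predictions, y):
    # base_predictions: list of L arrays, each holding one base
    # classifier's predicted labels for the same N instances.
    # Returns the meta-level training matrix whose i-th row is
    # (y_hat_C1(x_i), ..., y_hat_CL(x_i)), paired with the true label y_i.
    X_meta = np.column_stack(base_predictions)
    return X_meta, np.asarray(y)

preds_c1 = [0, 1, 1, 0]   # predictions of base classifier C1
preds_c2 = [0, 1, 0, 0]   # predictions of base classifier C2
preds_c3 = [1, 1, 1, 0]   # predictions of base classifier C3
X_meta, y_meta = build_meta_set([preds_c1, preds_c2, preds_c3], [0, 1, 1, 0])
print(X_meta.shape)   # (4, 3): N=4 meta instances, one column per base learner
```

The meta-learner is then trained on `(X_meta, y_meta)` exactly as any classifier would be trained on ordinary features; in practice the base predictions are produced out-of-fold to avoid leakage.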
The novel method we propose in this paper is inspired by the cost-sensitive weights for model votes paradigm described in [25], and consequently called _MEC-weighted stacking_. To each classifier \(C\), we can assign a weight \(w_{C}\) based on that classifier's cost-performance on the validation set: \(w_{C}=f(\epsilon)\), where \(\epsilon\) is the sum of the misclassification costs of all data
\begin{table}
\begin{tabular}{l l l l l l l} \hline \hline
**Publication** & **Stacking** & **Level-0** & **Level-1** & **Real** & **Costs** & **CS** \\ & **setup** & **algorithm** & **algorithm** & **costs** & **type** & **evaluation** \\ \hline
[6] & CiS-CS & DT, KNN, NB & LR & & c & ✓ \\
[19] & CS-CiS & DT, KNN, NB & MT & & c & \\
[5] & CS-CS & DT & LR & ✓ & i & ✓ \\
[34] & CiS-CS & DT, KNN, NB & LR & & c & \\
[7] & CiS-CS & ExT, GBDT, LDA, LR, RF & LR & & c & \\
[33] & CiS-CS & DT, KNN, RF, SVM & DT, KNN, NB, SVM & & c & \\
[13] & CiS-CS, CS-CiS, CS-CS & DT, NB, KNN, SVM & LR, ExT & & c & ✓ \\ _this paper_ & CiS-CS, CS-CiS, CS-CS & DT, KNN, LR, SVM & AdaB, DT, KNN, LR, RF, SVM & ✓ & i & ✓ \\ \hline \multicolumn{7}{l}{_Costs type:_ c: class-dependent, i: instance-dependent} \\ \multicolumn{7}{l}{_Algorithms:_ AdaB: AdaBoost, DT: decision tree, ExT: extremely randomised trees, GBDT: gradient boosted trees, KNN: k-nearest neighbour,} \\ \multicolumn{7}{l}{LDA: linear discriminant analysis, LR: logistic regression, MT: meta decision trees, NB: naive Bayes, RF: random forest, SVM: support vector machines} \\ \hline \hline \end{tabular}
\end{table}
Table 1: Summary of cost-sensitive stacking literature
instances incorrectly classified by \(C\) on a validation set and \(f(\epsilon)\) is a transformation function, which for example can take one of the following forms: \(f(\epsilon)=\ln((1-\epsilon)/\epsilon)\), \(f(\epsilon)=1-\epsilon\), \(f(\epsilon)=\exp((1-\epsilon)/\epsilon)\), and \(f(\epsilon)=((1-\epsilon)/\epsilon)^{2}\).
The general stacking procedure is thus modified with the additional step of collecting the MEC-weights for each of the predictions from the set \(\hat{\mathcal{P}}(\mathbf{x}_{i})\), yielding the weighted meta-level set \(\mathcal{S}_{MEC}=\{(w_{C_{1}}\hat{y}_{C_{1}}(\mathbf{x}_{i}),\ldots,w_{C_{L}}\hat{y}_{C_{L}}(\mathbf{x}_{i})),y_{i}\}\), which is used in meta classifier training instead of \(\mathcal{S}_{meta}\).
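A minimal sketch of MEC weighting, assuming \(\epsilon\) has been normalised to \((0,1)\) (as the logarithmic and odds-based transformations require); the helper names and example numbers are our own:

```python
import numpy as np

# The four weight transformations listed above, applied to a classifier's
# normalised total misclassification cost eps on the validation set.
TRANSFORMS = {
    "log_odds": lambda eps: np.log((1 - eps) / eps),
    "linear":   lambda eps: 1 - eps,
    "exp_odds": lambda eps: np.exp((1 - eps) / eps),
    "sq_odds":  lambda eps: ((1 - eps) / eps) ** 2,
}

def mec_weighted_meta_features(base_predictions, eps_per_clf, transform="linear"):
    # Scale each base classifier's prediction column by its MEC weight
    # w_C = f(eps_C) before meta-learner training.
    f = TRANSFORMS[transform]
    weights = np.array([f(e) for e in eps_per_clf])
    return np.column_stack(base_predictions) * weights

preds = [[0, 1, 1], [1, 1, 0]]   # two base classifiers, three instances
eps = [0.2, 0.4]                 # normalised validation cost of each classifier
print(mec_weighted_meta_features(preds, eps))
```

With the linear transformation the weights are \(0.8\) and \(0.6\): the cheaper-to-err classifier's votes count for more in the meta-level features.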
## 4 Experimental setup
### Data
In this study we use a collection of 10 publicly available datasets and 2 private datasets, for which misclassification costs have either already been defined or will be defined here. This collection of datasets represents a number of application domains: _credit scoring_, _customer churn prediction_, _direct marketing_, _credit card fraud detection_, and _HR analytics_.
### Misclassification costs
Table 2 presents the references both to the datasets and to relevant publications where the instance-dependent misclassification costs for a given domain were introduced. Most of the datasets are large, the number of instances ranging between 1000 and almost 600000. The number of input features ranges from 15 to 479. All of these datasets demonstrate a large degree of class imbalance, where the percentage of positives reaches at most 30%, and in dataset _fraud_ulb_kgl_ less than 1%.
In this work we propose instance-dependent costs for two of these datasets, for which no such costs were previously defined.
The German credit dataset is well known and is referred to as _credit_de_uci_ in Table 2. Only class-dependent costs were available for this data set, where the prediction task is to identify customers that will default on their loan. We define instance-dependent costs using the conceptual framework proposed by [2]. For any data instance \(i\), the cost of a false negative \(C_{FN}^{i}\) is defined as loss given default and constitutes 75% of the credit line, while the cost of a false positive \(C_{FP}^{i}\) is the loss of the potential profit from rejecting a good customer, plus the sum of the average expected loss and the average expected profit estimated on the training sample. We define profits as simply the interests earned on the credit line in the current year. The profits are calculated using historic interest rates for the year 2000 in Germany, which we apply randomly and uniformly to the whole sample.
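Under the cost framework above, the instance-dependent costs for the German credit data can be sketched as follows. All amounts and rates below are illustrative, and the average expected loss and average expected profit are crudely approximated by sample means of the toy values rather than by the probability-weighted estimates of [2]:

```python
import numpy as np

def credit_costs(credit_line, interest_rate, lgd=0.75):
    # C_FN: loss given default, taken as 75% of the credit line.
    c_fn = lgd * credit_line
    # Profit of a good customer: one year of interest on the credit line.
    profit = interest_rate * credit_line
    # C_FP: forgone profit plus the sample averages of expected loss
    # and expected profit (here naive means over the toy sample).
    avg_term = np.mean(c_fn) + np.mean(profit)
    c_fp = profit + avg_term
    return c_fn, c_fp

credit_line = np.array([1000.0, 5000.0])     # two hypothetical credit lines
interest_rate = np.array([0.08, 0.08])       # hypothetical year-2000 rates
c_fn, c_fp = credit_costs(credit_line, interest_rate)
print(c_fn)   # [ 750. 3750.]
```

The key point is that both cost vectors vary per instance with the credit line, unlike a class-dependent cost matrix.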
The bankruptcy dataset was provided by the credit risk department of a European utilities provider interested
\begin{table}
\begin{tabular}{l l l l l l l} \hline \hline & **Application** & **Dataset alias** & **\# instances** & **\# Attr** & **\% positives** & **Instance-dependent** \\ & **domain** & & & & & **costs source** \\ \hline
1 & Bankruptcy & bankruptcy (private) & 404999 & 221 & 3.31 & this publication \\
2 & Churn & churn\_kgl (Kaggle*) & 7043 & 21 & 26.54 & [25] \\
3 & Churn & churn\_AB [3] & 9410 & 45 & 4.83 & [3] \\
4 & Credit risk & credit\_kgl (Kaggle*) & 112915 & 15 & 11.70 & [2] \\
5 & Credit risk & credit\_de\_uci [12] & 1000 & 20 & 30.00 & this publication \\
6 & Credit risk & credit\_kdd09 [28] & 38938 & 39 & 19.89 & [2] \\
7 & Credit risk & credit\_ro\_vub [24] & 18918 & 24 & 16.95 & [24] \\
8 & Direct marketing & dm\_pt\_uci [12, 22] & 45211 & 17 & 11.27 & [4] \\
9 & Direct marketing & dm\_kdd98 [12] & 95412 (train) & 479 & 5.08 & [25] \\ & & 96367 (test) & & 5.06 & \\
10 & Fraud detection & fraud\_ulb\_kgl [21] & 284807 & 31 & 0.17 & [25] \\
11 & Fraud detection & fraud\_ieee\_kgl (Kaggle*) & 590540 & 432 & 3.50 & [25] \\
12 & HR analytics & absenteeism\_be (private) & 36853 (train) & 71 & 14.50 & [20] \\ & & 35884 (test) & & 10.76 & \\ \hline \multicolumn{6}{l}{* Kaggle: [https://www.kaggle.com/](https://www.kaggle.com/)} \\ \hline \hline \end{tabular}
\end{table}
Table 2: Characteristics of the datasets used in our experiments
in predicting the risk of corporate bankruptcy for new customers. With minor modifications, it readily transfers to the same credit risk model described above. Here the credit line is equivalent to 90 days of utilities usage by the customer, which, in case of default, the provider loses in full, so \(C_{FN}^{i}\) equals the credit amount. The profit margins were provided to us and are calculated per customer based on the assumption of a 12-month contract. The \(C_{FP}^{i}\) then equals the annual profit margins for the potentially good customer plus the expected average loss and expected average profit calculated on the given sample.
#### 4.2.1 Data preprocessing
We take care to employ the same preprocessing steps for each of the datasets in the sample, as recommended by the works that first published them.
In addition to that, we apply the following preprocessing steps to all datasets. All numeric variables are rescaled using the quantile statistics, which are robust to outliers. Missing values of numeric variables are imputed with sample median, and of categorical variables are encoded as a separate category. All categorical variables are transformed using weight-of-evidence coding [1].
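A minimal weight-of-evidence encoder, using one common sign convention (log of the positive-to-negative distribution ratio) with additive smoothing to avoid division by zero; the original reference [1] may differ in details:

```python
import math

def woe_encoding(categories, labels, smoothing=0.5):
    # Weight-of-evidence: for each category c,
    #   WoE(c) = ln( P(c | y=1) / P(c | y=0) ),
    # estimated with additive smoothing on the counts.
    pos_total = sum(labels) + smoothing
    neg_total = len(labels) - sum(labels) + smoothing
    woe = {}
    for c in set(categories):
        pos = sum(1 for cat, y in zip(categories, labels) if cat == c and y == 1)
        neg = sum(1 for cat, y in zip(categories, labels) if cat == c and y == 0)
        woe[c] = math.log(((pos + smoothing) / pos_total) /
                          ((neg + smoothing) / neg_total))
    return [woe[c] for c in categories]

cats = ["a", "a", "b", "b", "b"]
labels = [1, 1, 0, 0, 1]
encoded = woe_encoding(cats, labels)
print(encoded[0] > 0 > encoded[2])   # "a" is all-positive, "b" mostly negative
```

Categories associated with the positive class map to positive values and vice versa, so the encoding carries a monotone relationship with the target into the numeric feature space.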
#### 4.2.2 Data partitioning
The classifier performance estimates are obtained by means of repeated stratified k-fold cross-validation. The \(5\times 2\) cross-validation suggested by [10] is used to train and evaluate stacking ensembles. This resampling is repeated 5 times using different random seeds, and the results are averaged across folds and across iterations. Large datasets with more than 100000 observations, to keep training times manageable, were split into five disjoint subsets, uniformly at random.
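The resampling scheme can be sketched as follows: five repetitions of stratified 2-fold cross-validation, each with a different seed (a simplified stand-in for the procedure of [10], with per-class shuffling keeping the class ratio roughly equal in both folds):

```python
import numpy as np

def five_by_two_cv(y, n_repeats=5):
    # 5x2 CV: each repetition splits every class roughly in half,
    # then uses each half once as training and once as test set.
    y = np.asarray(y)
    for seed in range(n_repeats):
        rng = np.random.default_rng(seed)
        fold_a, fold_b = [], []
        for cls in np.unique(y):
            idx = rng.permutation(np.flatnonzero(y == cls))
            half = len(idx) // 2
            fold_a.extend(idx[:half])
            fold_b.extend(idx[half:])
        yield np.array(fold_a), np.array(fold_b)
        yield np.array(fold_b), np.array(fold_a)

y = np.array([0] * 8 + [1] * 4)   # a small imbalanced toy label vector
splits = list(five_by_two_cv(y))
print(len(splits))   # 10 train/test pairs in total
```

Performance estimates are then averaged over the 10 resulting train/test pairs, as described above.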
We note that two datasets in our sample are provided with a separate test set, used to evaluate model performance. In this case, for fairness of comparison, we perform the split into folds on each of the training and test datasets using the same seed, we then proceed using the training partition of the training set and the test partition of the test set. The training partition of the test set remains unused in evaluation. When training and test data sets contain the same observations at different time periods (e.g. in bankruptcy prediction) we ensure that training and test datasets are disjoint and do not contain overlapping data instances.
### Learning algorithms
The choice of the algorithms for the base- and meta-level of stacking remains one of the challenges of stacked generalization. To the best of our knowledge, no study exists that demonstrates the necessity to use a specific algorithm combination in either base- or meta-level of stacking. The main requirement for the base classifiers of any ensemble is that they are sufficiently accurate (meaning they predict better than a random guess) and sufficiently diverse (meaning their errors are uncorrelated) [11]. In a heterogeneous ensemble, where the decisions of different learning algorithms are combined, the number of base-learners need not be large [26]. All algorithms below have previously been described and discussed in detail in a number of machine learning textbooks, for example [16], so we refrain from repeating these descriptions here.
#### 4.3.1 Base-learners
The base learners in our experiments are four well-known classification algorithms: CART Decision Tree (DT), K-Nearest Neighbors (KNN), Support Vector Machines (SVM) and Logistic Regression (LR). Unlike [7] and [33] before us, we choose not to use ensembles such as Random Forest or Extremely Randomised Trees in the base level of stacking. The reasons for this are two-fold. Firstly, ensembles in general, and stacking in particular, are typically built on weak base-learners, which these very powerful models, being themselves ensembles, certainly are not. Secondly, these methods are based on decision trees, so their errors will be correlated with those of DT. In our choice we also considered the recommendations of [8], one of the largest empirical studies known to date, comparing algorithm performance on 121 datasets. Their results on binary problems (55 UCI datasets) demonstrate that Random Forest, SVM, Bagging and Decision Trees have the highest probability of obtaining more than 95% of the maximum accuracy, while classifiers of the Naive Bayes (NB) family are not competitive in comparison. We therefore do not include NB in our experiments, unlike some previous studies in cost-sensitive stacking.
#### 4.3.2 Meta-learners
The choice of the meta-learner constitutes a challenge as well; it was called 'the black art' by the original author of stacked generalization [31]. To keep the scale of our experiments manageable and to allow for statistical comparison between stacking and base classifiers, we use the same four algorithms that were used in level-0 of stacking. In addition, we also use two homogeneous ensemble methods that, according to [8], perform well on most problems, namely AdaBoost (Ada) and Random Forest (RF).
#### 4.3.3 Cost-sensitive learners
While many variants of cost-sensitive learning algorithms exist that can incorporate the misclassification costs during classifier training [25], in this study we are not interested in comparing cost-sensitive learning algorithms, but in ways of combining cost-sensitive and cost-insensitive learners in a single ensemble. For our purposes it is important that the two classifiers we compare differ in all but one thing, namely the composition of the ensemble. We therefore choose to turn known cost-insensitive classifiers cost-sensitive by applying a cost-sensitive threshold adjustment method called DMECC [25]. In this method, each data instance is classified according to its individual cost-sensitive decision threshold, which is based on the ratio of misclassification costs of that particular data instance. The threshold is calculated as follows: \(T^{i}_{cs}=\frac{C^{i}_{FP}-C^{i}_{TN}}{C^{i}_{FP}-C^{i}_{TN}+C^{i}_{FN}-C^{i}_{TP}}\), where \(C^{i}_{TN}\) and \(C^{i}_{TP}\) refer to the costs of correct classification, and \(C^{i}_{FN}\) and \(C^{i}_{FP}\) refer to the misclassification costs of the positive and negative data instances respectively. A given record is classified as positive when its estimated probability of being positive exceeds its individual cost-sensitive threshold \(T^{i}_{cs}\)[35].
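A minimal sketch of DMECC-style decision-making with the threshold above (the function name is ours; all cost arguments may be given per instance):

```python
import numpy as np

def dmecc_predict(p_pos, C_FP, C_FN, C_TN=0.0, C_TP=0.0):
    """Classify each instance as positive when its calibrated probability
    of being positive exceeds its own cost-based threshold
        T_i = (C_FP_i - C_TN_i) / (C_FP_i - C_TN_i + C_FN_i - C_TP_i)."""
    C_FP, C_FN = np.asarray(C_FP, float), np.asarray(C_FN, float)
    T = (C_FP - C_TN) / (C_FP - C_TN + C_FN - C_TP)
    return (np.asarray(p_pos) > T).astype(int)
```

For example, two instances with the same score 0.3 can receive different labels: a customer whose false-negative cost far exceeds the false-positive cost gets a much lower threshold.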
Since some learning algorithms (such as DT or SVM) are known not to produce reliable probability estimates, we applied isotonic calibration [36] to all base-learners.
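A sketch of such calibration with scikit-learn (the wrapper name is ours; the paper does not specify its exact configuration):

```python
from sklearn.calibration import CalibratedClassifierCV
from sklearn.svm import SVC

def calibrated(base_clf, cv=5):
    """Wrap a score-based learner so that it outputs isotonic-calibrated
    probabilities, which the cost-based DMECC threshold requires."""
    return CalibratedClassifierCV(base_clf, method="isotonic", cv=cv)

calibrated_svm = calibrated(SVC())
```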
#### 4.3.4 Cost-sensitive stacking
To the best of our knowledge no definition exists of what constitutes cost-sensitive stacking. Based on the insights from the literature earlier discussed in Section 2, we see three main possibilities of introducing cost-sensitivity into the ensemble structure.
1. Level-0 classifiers are cost-sensitive, level-1 classifiers are cost-insensitive.
2. Level-0 classifiers are cost-insensitive, level-1 classifiers are cost-sensitive.
3. Both level-0 and level-1 classifiers are cost-sensitive.
We consider 4 functional forms for the MEC-weights as introduced in Section 3, which resulted in a total of 15 stacking setups to be compared. The complete list of ensemble compositions is presented in Table 3.
**MEC-weighted stacking** renders the level-1 classifier cost-sensitive by applying MEC-weights to its training data. We consider it an alternative way of obtaining an ensemble where both training levels are cost-sensitive, which is the third stacking setup stated above.
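The third setup (cost-sensitive decisions at both levels) could be sketched as follows, assuming calibrated base learners exposing `predict_proba` and zero costs for correct classifications (all names are ours; this is an illustration, not the authors' implementation):

```python
import numpy as np
from sklearn.model_selection import cross_val_predict

def dmecc(p, C_FP, C_FN):
    """Cost-based threshold, simplified under C_TN = C_TP = 0."""
    C_FP, C_FN = np.asarray(C_FP, float), np.asarray(C_FN, float)
    return (p > C_FP / (C_FP + C_FN)).astype(int)

def fit_type3_stacking(bases, meta, X, y, C_FP, C_FN, cv=5):
    """Level-0: out-of-fold probabilities of each base learner, turned
    into cost-sensitive decisions; level-1: meta-learner trained on
    those decisions, itself thresholded with DMECC at prediction time."""
    Z = np.column_stack([
        dmecc(cross_val_predict(b, X, y, cv=cv, method="predict_proba")[:, 1],
              C_FP, C_FN)
        for b in bases])
    meta.fit(Z, y)
    fitted = [b.fit(X, y) for b in bases]  # refit bases on all data

    def predict(Xn, C_FP_n, C_FN_n):
        Zn = np.column_stack([dmecc(b.predict_proba(Xn)[:, 1], C_FP_n, C_FN_n)
                              for b in fitted])
        return dmecc(meta.predict_proba(Zn)[:, 1], C_FP_n, C_FN_n)
    return predict
```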
#### 4.3.5 Software used
All of our experiments were performed using the Python programming language (version 3.8). Cost-insensitive algorithm implementations came from the _scikit-learn_ (version 1.1.1) Python library [23], while the cost-sensitive implementations are our own.
### Evaluation
#### 4.4.1 Evaluation metrics
Contrary to previous studies in cost-sensitive stacking, we would like to emphasise the importance of using appropriate evaluation metrics for cost-sensitive classifiers. Most authors use traditional evaluation metrics such as ROC_AUC, Precision or the F1 score. ROC_AUC is known to be cost-invariant, since it aggregates classifier performance over all possible class-dependent thresholds, thus implicitly averaging performance over multiple class-dependent costs, which is not appropriate. Other error-based metrics typically assume equal class-dependent costs, which, again, is not appropriate when instance-dependent costs are known at estimation time. Cost-sensitive learning aims to adapt the classification decision of a learning algorithm to the differences between the misclassification costs assigned to each of the classes. It is therefore important that the evaluation metrics used to assess the performance of cost-sensitive classifiers are also adapted to account for the difference in misclassification costs. The typical evaluation metric used in the cost-sensitive literature is the total misclassification cost [14], which simply adds up the errors weighted with their individual misclassification costs, as defined on the test set. Another option is to normalise the total misclassification cost over some budget constraint, which will depend on the application domain. A more general way to do this is to use the savings score proposed in [2], where the total misclassification cost is normalised with the cost of either misclassifying all positives as negatives, or misclassifying all negatives as positives, whichever is smaller. The resulting score is bounded above by 1, with negative values indicating a classifier that does worse than the cheaper trivial policy, facilitating comparison across different datasets when necessary.
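A sketch of the savings score as described, assuming zero costs for correct classifications (the function name is ours):

```python
import numpy as np

def savings_score(y_true, y_pred, C_FP, C_FN):
    """Savings of [2]: 1 minus the classifier's total misclassification
    cost normalised by the cost of the cheaper trivial policy (label
    everyone negative, or everyone positive). 1 is a perfect classifier;
    0 matches the best trivial policy; negative values do worse."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    C_FP, C_FN = np.asarray(C_FP, float), np.asarray(C_FN, float)
    cost = np.sum(np.where(y_true == 1, (1 - y_pred) * C_FN, y_pred * C_FP))
    cost_all_neg = np.sum(C_FN[y_true == 1])  # predict 0 for everyone
    cost_all_pos = np.sum(C_FP[y_true == 0])  # predict 1 for everyone
    return 1.0 - cost / min(cost_all_neg, cost_all_pos)
```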
Since the majority of comparisons in our study are performed based on average ranks, no commensurability of the evaluation metrics is required, so the models are ranked according to their total misclassification costs, which allows for a more precise outcome.
#### 4.4.2 Multiple classifier comparison
In order to compare multiple classifiers on multiple datasets, we use the standard approach of combining the Friedman omnibus test with the post-hoc Nemenyi test [18]. The Friedman test is conducted under the null hypothesis that all algorithms in comparison are equivalent in performance. If this null hypothesis is rejected, the post-hoc test can be performed to identify pairs of classifiers whose performance is significantly different, which is measured using the critical difference statistic and can be visualised using critical difference diagrams [9]. Non-parametric tests such as the Friedman test are preferred in cases where the number of datasets in comparison is less than 30, the number necessary to satisfy the normality assumptions of parametric statistical tests such as ANOVA [9]. The post-hoc test is known to be of low power, sometimes failing to reject the null hypothesis even when the Friedman test did. In such cases, we additionally apply the Wilcoxon signed-ranks test, as appropriate, which is used for pairwise comparisons of classifiers on multiple datasets. This test ranks the differences in performance of a given pair of classifiers, under the null hypothesis that the median difference in ranks is zero. It therefore allows establishing whether the observed differences in performance between two classifiers are significant. It is considered more powerful than its parametric equivalent, the paired t-test, when the assumptions of the latter cannot be guaranteed. It is also considered more powerful than the Sign test, which counts the number of wins, losses and ties [9].
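The testing pipeline could be sketched with SciPy as follows (the function names are ours; `costs` is assumed to be a datasets-by-classifiers matrix of total misclassification costs):

```python
import numpy as np
from scipy.stats import friedmanchisquare, rankdata, wilcoxon

def friedman_avg_ranks(costs):
    """`costs`: (n_datasets, k_classifiers) array. Returns per-classifier
    average ranks (lower is better) and the Friedman p-value under the
    null hypothesis of equal performance."""
    ranks = np.apply_along_axis(rankdata, 1, costs)  # rank within each dataset
    _, p = friedmanchisquare(*costs.T)
    return ranks.mean(axis=0), p

def paired_wilcoxon(costs_a, costs_b):
    """Wilcoxon signed-rank test on the paired per-dataset costs of two
    classifiers; null: the median paired difference is zero."""
    return wilcoxon(costs_a, costs_b)
```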
\begin{table}
\begin{tabular}{l l l l l} \hline \hline & **Stacking setup alias** & **Level-0 algorithm type** & **Level-1 input weights** \(f(\varepsilon)\) & **Level-1 algorithm type** \\ \hline
1 & type-1 & CS & 1 & CiS \\
2 & type-1\_exp & CS & \(\exp((1-\varepsilon)/\varepsilon)\) & CiS \\
3 & type-1\_ln & CS & \(\ln((1-\varepsilon)/\varepsilon)\) & CiS \\
4 & type-1\_sq & CS & \(((1-\varepsilon)/\varepsilon)^{2}\) & CiS \\
5 & type-1\_acc & CS & \(1-\varepsilon\) & CiS \\
6 & type-2 & CiS & 1 & CS \\
7 & type-2\_exp & CiS & \(\exp((1-\varepsilon)/\varepsilon)\) & CS \\
8 & type-2\_ln & CiS & \(\ln((1-\varepsilon)/\varepsilon)\) & CS \\
9 & type-2\_sq & CiS & \(((1-\varepsilon)/\varepsilon)^{2}\) & CS \\
10 & type-2\_acc & CiS & \(1-\varepsilon\) & CS \\
11 & type-3 & CS & 1 & CS \\
12 & type-3\_exp & CS & \(\exp((1-\varepsilon)/\varepsilon)\) & CS \\
13 & type-3\_ln & CS & \(\ln((1-\varepsilon)/\varepsilon)\) & CS \\
14 & type-3\_sq & CS & \(((1-\varepsilon)/\varepsilon)^{2}\) & CS \\
15 & type-3\_acc & CS & \(1-\varepsilon\) & CS \\ \hline \hline \end{tabular}
\end{table}
Table 3: The complete list of stacking setups compared in our study.
## 5 Experimental results
The purpose of our experiments is twofold. Firstly, we would like to compare the performance of the different cost-sensitive stacking setups in order to determine which of them results in the lowest cost-loss and can be recommended to practitioners. Secondly, we aim to empirically evaluate MEC-weighted stacking, which is a new cost-sensitive stacking method we earlier described in Section 3.
Despite our best efforts, not all classifiers trained successfully on all 12 datasets. In particular, we were unable to collect results for the MEC-weighted stacking where the weights were defined by the logarithmic function on the credit scoring problem _credit_ro_sub_, and results for MEC-weighted stacking with exponential weights were missing on the fraud detection dataset _fraud_ulb_kgl_. The full results for all 15 stacking setups are thus available on 10 datasets, instead of 12. Unweighted stacking results, however, are available on all 12 datasets, which we briefly discuss for completeness.
### Finding the best cost-sensitive stacking setup
#### 5.1.1 Overall comparison
We begin with an overall comparison, where all classifiers are evaluated and ranked on each of the 10 datasets, and for each of them an average rank is computed across all datasets. Figure 1 presents the average ranks for all stacking classifiers, where the vertical axis shows the stacking setup and the horizontal axis shows the corresponding level-1 algorithm. The comparison consists of a total of 90 classifiers (6 algorithms and 15 stacking setups). For brevity, we adopt the aliases for each of the stacking setups earlier presented in Table 3.
We notice immediately that the ranking demonstrates clusters with stacking ensembles of type-3 ranking the best, while type-1 ensembles rank the worst. We note that models built with KNN and SVM algorithms tend to rank lower than decision tree based models or logistic regression. However, the general picture of type-3 stacking ranking the best and type-1 ranking the worst remains unchanged for KNN and SVM, although the differences in ranks between the three groups are smaller than for other algorithms.
Whether these differences in ranks are statistically significant will be discussed in the following subsection, where we demonstrate the outcomes of statistical tests that compare the performance of various stacking classifiers across multiple domains.
#### 5.1.2 Comparing unweighted stacking setups on 12 datasets
We begin by testing the null hypothesis that the three unweighted stacking setups show no difference in performance. The comparison is performed for each of the six classification algorithms used as level-1 learners. The null hypothesis of the
Figure 1: Comparing all classifiers by average rank across 10 datasets. Lower numbers correspond to better rank.
Friedman test was rejected for all 6 comparisons, and the test statistics are presented in row 1 of Table 4.
We proceed with the post-hoc Nemenyi test to evaluate the alternative hypothesis that the performance of the three stacking setups is not equal. Figure 2 presents the results of the post-hoc tests at the 0.05 significance level. We find that type-3 stacking ranks best and is significantly different from both type-2 and type-1 for all algorithms except SVM, where the difference is only significant for the comparison between type-3 and type-1, and no conclusions can be made regarding the differences between ensembles of type-3 and type-2. Similarly, no conclusions can be made regarding the differences in rank between type-2 and type-1 stacking ensembles.
Since the outcome of the post-hoc tests is ambiguous in the case of SVM, we also perform the Wilcoxon signed-rank test under the null hypothesis that the median of the paired differences is zero. For the comparison between type-3 and type-2 unweighted stacking, the null is rejected at the 0.05 level.
We conclude from these tests that type-3 stacking performs significantly better than the other two stacking setups.
#### 5.1.3 Comparing all cost-sensitive stacking setups on 10 datasets
We proceed to compare all 15 stacking classifiers on 10 datasets. The outcome of the Friedman rank sum test can be found in row 2 of Table 4. The null hypothesis of the Friedman test is rejected for every meta-learner at the 1% significance level, so we conclude that the performance of all 15 models is not equal and proceed with the post-hoc test. Figure 3 shows the outcome of the Nemenyi test at 0.05 significance level.
These are for the most part consistent with what we observed in Figure 1, where the classifiers tend to cluster by stacking setup, type-3 being the leader, type-2 the second-best and type-1 ranking worst. Similar to what we observed above with unweighted stacking, we can reject the null that type-3 stacking and its MEC-weighted variants are equal in performance to type-1 stacking and variants. This holds for all algorithms except KNN and SVM. For stacking ensembles with KNN in level-1 _type-3_ and _type-3_acc_ classifiers are not significantly different from _type-1_exp_ and _type-1_sq_, while for SVM no significant differences were detected between _type-3_exp_ and _type-3_sq_ and other type-1 ensembles.
Since the Nemenyi post-hoc test is not powerful enough to establish whether the differences between the three stacking setups are statistically significant, additional testing is required. From the outcomes of the post-hoc test we observed that type-3 stacking generally tends to rank highest, and is therefore of most interest to us. We therefore perform the Wilcoxon signed-rank test for all pairwise comparisons of stacking algorithms of type-3 vs type-1 and of type-3 vs type-2, under the null hypothesis that the median of the rank differences between the two groups is equal to zero. The complete tables with the obtained test statistics and p-values can be found in the Appendix. We find that the null could be confidently rejected for all comparisons between type-3 and type-1 stacking ensembles; we refer the reader to Table 7
\begin{table}
\begin{tabular}{l l l l l l l l} \hline \hline & Test statistic (\(\chi^{2}_{(k-1)}\)) & Ada & DT & KNN & LR & RF & SVM \\ \hline Unweighted (\(k=3,n=12\)) & 6.5 & 32.69** & 32.59** & 32.79** & 32.69** & 32.59** & 32.59** \\ All (\(k=15,n=10\)) & 3.94 & 132.44** & 132.09** & 132.35** & 132.29** & 132.09** & 132.09** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Friedman test statistics for the comparison of stacking setups; ** denotes rejection of the null hypothesis at the 0.01 significance level.
Figure 2: Comparing the three unweighted stacking setups on 12 datasets using the Nemenyi post-hoc test at 0.05 significance level.
in the Appendix for details.
As for the comparison of stacking type-3 with type-2, the only algorithm where the null could not be rejected was SVM. We found that the differences between all type-3 MEC-weighted stacking variants and type-2 stacking ensembles were not significant. However, type-3 unweighted stacking was significantly different from all type-2 stacking variants, see Table A.2 for details.
We can therefore recommend type-3 stacking, where both levels of stacking are cost-sensitive, as the winner.
### Evaluating MEC-weighted stacking
Our next research question is whether, within the same setup, MEC-weighted stacking offers any improvement over unweighted stacking. To determine whether there is a statistically significant difference in performance between the MEC-weighted stacking models and their unweighted counterparts, we perform pairwise comparisons using the Wilcoxon signed-rank test. The test statistics and corresponding p-values from the 72 comparisons are reported in Table 5. Values that are significant at the 5% level are highlighted with boldface text, and weakly significant values at the 10% level are highlighted with italics. As previously, the results are reported per learning algorithm used as the level-1 classifier.
We find almost no significant differences in performance between unweighted and weighted stacking for setups of type-1 and type-2, with rare exceptions. Surprisingly, only one comparison of stacking type-1, where the level-1 classifier is cost-insensitive, shows a significant difference in performance, namely the stacking setup _type-1_sq_ learned with Logistic Regression in level-1. Referring back to the average rankings reported in Figure 1, it happens to be the best performing Logistic Regression stacking of type-1, so in this instance MEC-weighted stacking is significantly better than its counterpart with equally weighted meta-inputs. It is, however, an exception, and we must conclude that introducing cost-sensitivity through MEC-weights into level-1 of stacking has no positive impact on type-1 stacking performance.
Similar conclusions can be drawn for stacking of type-2, where base classifiers are cost-insensitive but the meta-classifier is made cost-sensitive using DMECC. Here we observe only two statistically significant test outcomes, both of which have lower average ranks than the unweighted stacking of type-2.
For the setup of type-3, where both level-0 and level-1 of stacking are made cost-sensitive using DMECC, the null could not be rejected for AdaBoost, Logistic Regression and KNN. Most of the MEC-weighted ensembles built with Decision Tree, Random Forest and SVM were significantly different from unweighted stacking of type-3. Looking at the differences in the average ranks, however, we note that unweighted stacking of type-3 ranks noticeably better than any of the MEC-weighted models.
We must therefore conclude that MEC-weighted stacking does not offer a statistically significant improvement over conventional stacking.
Figure 3: Comparing all stacking setups on 10 datasets using Nemenyi post-hoc test at 0.05 significance level.
### Comparing cost-sensitive stacking with single cost-sensitive models
Finally, one might ask whether the effort involved in training the level-1 classifier is worth it. To answer this, we compare the best stacking classifier with the corresponding single classifier. Having previously determined that the best stacking setup is the one where DMECC is applied in both levels and no MEC-weights are applied, we omit other classifiers from this analysis. We average classifier performance across cross-validation folds using the savings metric (see Section 4.4) for commensurability. We also rank the resulting selection of classifiers by savings and average the ranks across 12 datasets. The results are reported in Table 6, where the winning classifier (per algorithm) is marked with boldface font and the best performer per dataset is marked with italics.
We note that cost-sensitive stacking always achieves positive savings, meaning its total misclassification costs are lower than the predetermined budget. Stacking has higher average savings on all algorithms except KNN and Random Forest. In terms of average ranks, stacking wins for all level-1 algorithms except KNN, which, we note, is one of the worst ranking algorithms in our study.
## 6 Discussion
**Outcome 1: using cost-sensitive models in both levels of stacking is recommended.** The results presented in this paper have demonstrated that there is a statistically significant difference in performance between the three different stacking setups considered in our experiments, namely _CiS-CS_, _CS-CiS_, and _CS-CS_. Contrary to the majority of cost-sensitive stacking papers, which assumed that one level of cost-sensitive decision-making is sufficient, our experiments demonstrate that stacking models where the DMECC was applied in both levels of stacking achieved the highest ranking.
While these conclusions hold for this particular post-training method, cost-sensitivity can be introduced either before or during training of the learning algorithm. Further experiments are required to investigate how different cost-sensitive methods affect our conclusions. Now that we have established how cost-sensitive stacking should be built, future work can focus on combining various kinds of cost-sensitive algorithms, including pre-, during- and post-training cost-sensitive methods [25]. Another interesting avenue for future research would be investigating homogeneous cost-sensitive stacking, an example of which was proposed in [5] using cost-sensitive decision trees as base classifiers and cost-sensitive logistic
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Level-1 & \multicolumn{4}{c}{Unweighted stacking type-1 vs} \\ algorithm & type-1.acc & type-1.exp & type-1.ln & type-1.sq \\ \hline Ada & 18.0 (0.3) & 18.0 (0.3) & 18.0 (0.3) & 18.0 (0.3) \\ DT & 23.0 (0.63) & 23.0 (0.63) & 23.0 (0.63) & 23.0 (0.63) \\ KNN & 27.5 (1.0) & 21.0 (0.56) & 17.0 (0.32) & 26.5 (0.92) \\ LR & 27.0 (0.96) & 14.0 (0.15) & 22.0 (0.56) & _10.5 (0.08)_ \\ RF & 23.0 (0.63) & 23.0 (0.63) & 23.0 (0.63) & 23.0 (0.63) \\ SVM & 18.0 (0.3) & 27.0 (0.96) & 22.0 (0.56) & 27.0 (0.96) \\ \hline \hline \multicolumn{5}{c}{Unweighted stacking type-2 vs} \\ & type-2.acc & type-2.exp & type-2.ln & type-2.sq \\ \hline Ada & 22.0 (0.56) & 23.0 (0.63) & 23.0 (0.63) & 23.0 (0.63) \\ DT & 27.5 (1.0) & 27.5 (1.0) & 26.5 (0.92) & 18.5 (0.35) \\ KNN & 25.0 (0.8) & 21.0 (0.5) & 25.0 (0.8) & 21.0 (0.5) \\ LR & _10.5 (0.08)_ & 20.5 (0.47) & 24.5 (0.76) & 16.5 (0.26) \\ RF & 27.5 (1.0) & 22.5 (0.61) & 17.5 (0.3) & 26.5 (0.92) \\ SVM & 23.5 (0.68) & 19.5 (0.41) & 18.5 (0.36) & _10.5 (0.08)_ \\ \hline \hline \multicolumn{5}{c}{Unweighted stacking type-3 vs} \\ & type-3.acc & type-3.exp & type-3.ln & type-3.sq \\ \hline Ada & 13.0 (0.16) & 19.0 (0.43) & 19.0 (0.43) & 19.0 (0.43) \\ DT & **2.0 (0.01)** & 11.0 (0.11) & **0.0 (0.0)** & _10.0 (0.08)_ \\ KNN & 24.0 (0.77) & 17.0 (0.32) & 24.0 (0.77) & 16.0 (0.28) \\ LR & 13.0 (0.16) & 19.0 (0.43) & 14.0 (0.19) & 24.0 (0.77) \\ RF & **4.0 (0.01)** & **8.0 (0.05)** & **3.0 (0.01)** & _9.0 (0.06)_ \\ SVM & _10.0 (0.08)_ & **0.0 (0.0)** & _9.0 (0.06)_ & **7.0 (0.04)** \\ \hline \hline \end{tabular}
\end{table}
Table 5: Pairwise comparison of unweighted and MEC-weighted stacking using the Wilcoxon signed-rank test. Statistically significant values are marked with boldface (significance 0.05) and italics (significance 0.1).
regression as level-1 classifier.
**Outcome 2: cost-insensitive classifiers do not perform well when costs are known, even in stacking.** As was previously shown in [20], cost-insensitive classifiers, having no way to account for differences in misclassification costs, typically perform worse than cost-sensitive models when evaluated using cost-based performance metrics. In our study, we observed yet another confirmation of this in the context of heterogeneous ensembles, where base-learners were cost-sensitive and meta-learners were cost-insensitive. It is, however, surprising that applying MEC-weights has no positive impact on the performance of these cost-insensitive meta-learners. We conclude that the transfer of cost information via the cost-sensitive decisions of the base-classifiers and via MEC-weights was not sufficient to influence the final decision of the meta-learner. Even though the application of MEC-weights to the meta-inputs makes the meta-level cost-sensitive, the performance of this method is inferior to unweighted stacking models. We hypothesise that the outcome may be different if the meta-learner used misclassification costs internally, but this question is left for future research.
**Limitations.** Our current work is not without limitations, which we address below. The choice of the algorithms to be used in stacking is likely to impact its performance. In order to keep our experiments manageable, we limited ourselves to algorithms used previously in the cost-sensitive stacking literature. No parameter tuning was performed, to preserve the same base-classifier composition across domains. Those familiar with SVM classifiers could remark that not tuning this algorithm is a mistake. We are aware of this limitation, which possibly resulted in the poor comparative performance of SVM-based ensembles; however, the purpose of this work was to ensure that the ensembles compared differ in only one thing, namely the inclusion of cost-sensitive decision-making into different levels of the ensemble. We are interested in the relative performance of stacking setups, not in the optimal performance of every learning algorithm on every domain. In order to perform statistical tests, we had to ensure that the classifiers were the same in every ensemble for every dataset, while parameter tuning would have resulted in different parameter settings on different datasets, which would have prevented us from performing statistical comparison. In future work we may experiment with homogeneous stacking, where the diversity of the ensemble will be created by hyperparameter tuning of the base classifiers.
## 7 Conclusions
Stacking is a well established state-of-the-art ensemble method, that has been widely applied to many application domains. In this work we provide insights into ways to make stacking cost-sensitive. We compare 90 stacking models built with 15 different compositions of the stacking ensemble using 6 well known classification algorithms. We evaluate on 12 real-world cost-sensitive problems with clearly defined, non-synthetic, instance-dependent misclassification costs. In contrast
to the absolute majority of cost-sensitive literature, our experimental results demonstrate that for the best results, not one, but two layers of cost-sensitive decision-making are required.

\begin{table}
\begin{tabular}{l|c c|c c|c c|c c|c c|c c} \hline \hline & \multicolumn{2}{c}{Ada} & \multicolumn{2}{c}{DT} & \multicolumn{2}{c}{KNN} & \multicolumn{2}{c}{LR} & \multicolumn{2}{c}{RF} & \multicolumn{2}{c}{SVM} \\ Dataset & Single & Stacking & Single & Stacking & Single & Stacking & Single & Stacking & Single & Stacking & Single & Stacking \\ \hline absenteism\_be\_1 & 0.188 & **0.225** & 0.219 & **0.242** & 0.199 & **0.224** & 0.188 & **0.23** & 0.172 & _0.243_ & 0.188 & **0.223** \\ bankruptcy & 0.03 & **0.123** & 0.112 & **0.123** & **0.105** & 0.058 & -0.043 & _0.126_ & **0.31** & 0.123 & -0.024 & **0.024** \\ churn\_AB & _0.171_ & 0.081 & 0.052 & 0.087 & **0.07** & 0.019 & 0.062 & **0.082** & **0.1** & 0.086 & 0.033 & **0.04** \\ churn\_kgl & _0.311_ & 0.295 & 0.0 & **0.297** & **0.242** & 0.031 & **0.302** & 0.295 & 0.273 & **0.296** & **0.225** & 0.066 \\ credit\_de\_uci & **0.399** & 0.391 & 0.293 & **0.387** & 0.337 & **0.351** & **0.396** & 0.388 & _0.424_ & 0.386 & **0.405** & 0.354 \\ credit\_kdd09 & _0.318_ & 0.303 & 0.277 & **0.302** & **0.287** & 0.276 & **0.312** & 0.303 & **0.313** & 0.302 & **0.289** & 0.279 \\ credit\_kgl & _0.511_ & 0.411 & 0.156 & **0.411** & **0.408** & 0.202 & -0.053 & **0.411** & **0.499** & 0.411 & **0.378** & 0.175 \\ credit\_ro\_vub & 1.793 & **1.796** & 1.787 & **1.795** & **1.786** & 1.785 & 1.727 & **1.793** & 1.762 & _1.797_ & 1.773 & **1.786** \\ dm\_kdd9\_train & 0.108 & **0.143** & 0.033 & _0.147_ & 0.035 & **0.061** & **0.122** & 0.119 & 0.059 & _0.147_ & 0.036 & **0.044** \\ dm\_pt\_uci & _0.568_ & 0.558 & 0.537 & **0.557** & **0.551** & 0.511 & **0.562** & 0.558 & 0.556 & **0.557** & **0.551** & 0.529 \\ fraud\_ieee\_kgl & 0.109 & **0.494** & 0.444 & **0.505** & **0.477** & 0.438 & 0.372 & **0.495** & _0.584_ & 0.506 & 0.407 & **0.444** \\ fraud\_ulb\_kgl & -0.062 & **0.714** & 0.625 & **0.701** & 0.679 & **0.706** & **0.75** & 0.72 & _0.762_ & 0.728 & -0.124 & **0.688** \\ \hline
**Avg Savings** & 0.37 & **0.461** & 0.378 & **0.463** & **0.431** & 0.389 & 0.391 & **0.46** & **0.484** & 0.465 & 0.345 & **0.388** \\
**Avg Rank** & 5.08 & **4.17** & 9.42 & **4.33** & **8.17** & 9.17 & 6.75 & **4.08** & 4.58 & **3.83** & 9.25 & **9.08** \\ \hline \hline \end{tabular}
\end{table}
Table 6: Comparing single classifiers with type-3 unweighted stacking. Savings score is reported for each classifier, higher is better. Best model per dataset is marked with italics. The winning classifier is marked with boldface letters.
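The savings score reported in Table 6 can be sketched as follows (a minimal formulation assuming per-instance false-positive and false-negative misclassification costs, with the cheaper of the two trivial all-negative/all-positive policies as the cost baseline; function names are illustrative, not the paper's code):

```python
import numpy as np

def total_cost(y_true, y_pred, fp_cost, fn_cost):
    # sum instance-dependent misclassification costs; correct predictions cost nothing
    fp = (y_pred == 1) & (y_true == 0)
    fn = (y_pred == 0) & (y_true == 1)
    return np.sum(fp * fp_cost) + np.sum(fn * fn_cost)

def savings_score(y_true, y_pred, fp_cost, fn_cost):
    # baseline: the cheaper of the two trivial policies
    # (predict everything negative vs. everything positive)
    cost_all_neg = total_cost(y_true, np.zeros_like(y_true), fp_cost, fn_cost)
    cost_all_pos = total_cost(y_true, np.ones_like(y_true), fp_cost, fn_cost)
    base = min(cost_all_neg, cost_all_pos)
    # 1.0 means zero cost; negative means worse than the trivial baseline
    return 1.0 - total_cost(y_true, y_pred, fp_cost, fn_cost) / base
```

Under this formulation a perfect classifier scores 1.0, and a classifier can score below 0 when its cost exceeds the trivial baseline's, which is consistent with the negative entries in Table 6.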
We also found that applying MEC weights to the training inputs of the level-1 classifier in stacking did not significantly change the performance of stacking models in which the level-1 algorithm classified using the default decision threshold. Moreover, MEC-weighted stacking models where both levels were cost-sensitive performed worse than unweighted stacking of the same type, indicating that two levels of cost-sensitivity are sufficient for good performance.
Another contribution of our work is the consolidation of all publicly available datasets with record-dependent costs in one place. In addition, we derive instance-dependent costs for the well-known _credit-g_ dataset from the UCI repository.
|
2306.05589 | Intrusion Detection Systems for Flying Ad-hoc Networks | Unmanned Aerial Vehicles (UAVs) are becoming more dependent on mission
success than ever. Due to their increase in demand, addressing security
vulnerabilities to both UAVs and the Flying Ad-hoc Networks (FANET) they form
is more important than ever. As the network traffic is communicated through
open airwaves, this network of UAVs relies on monitoring applications known as
Intrusion Detection Systems (IDS) to detect and mitigate attacks. This paper
will survey current IDS systems that include machine learning techniques when
combating various vulnerabilities and attacks from bad actors. This paper will
be concluded with research challenges and future research directions in finding
an effective IDS system that can handle cyber-attacks while meeting performance
requirements. | Jordan Quinn, Safdar Hussain Bouk | 2023-06-08T23:24:16Z | http://arxiv.org/abs/2306.05589v1 | # Intrusion Detection Systems for Flying Ad-hoc Networks
###### Abstract
**Unmanned Aerial Vehicles (UAVs) are becoming more dependent on mission success than ever. Due to their increase in demand, addressing security vulnerabilities to both UAVs and the Flying Ad-hoc Networks (FANET) they form is more important than ever. As the network traffic is communicated through open airwaves, this network of UAVs relies on monitoring applications known as Intrusion Detection Systems (IDS) to detect and mitigate attacks. This paper will survey current IDS systems that include machine learning techniques when combating various vulnerabilities and attacks from bad actors. This paper will be concluded with research challenges and future research directions in finding an effective IDS system that can handle cyber-attacks while meeting performance requirements.**
**flying ad-hoc networks, unmanned aerial vehicles, machine learning, intrusion detection.**
## I Introduction
With their growing popularity, Unmanned Aerial Vehicles (UAVs) have branched outside the military realm and into the hands of everyday civilians. Their usage has exploded across many industries worldwide, completing diverse missions ranging from disaster management, search and rescue, weather monitoring, and agricultural monitoring to healthcare delivery [1-5].
With continuous technological improvements in UAVs, much progress has been made in creating wireless networks to accommodate their airborne missions. The latest form of such a network is the Flying Ad-hoc NETwork (FANET). FANET is a decentralized wireless network composed of Unmanned Aerial Vehicles (UAVs), each representing a node that communicates with the others while in flight [6],[7]. This also includes a Ground Control Station (GCS), which communicates with the other UAVs, starting with the closest drone in its proximity, as illustrated in Figure 1 [8]. While this wireless network protocol offers UAVs scalability, low latency, and resilience benefits, it is exposed to vulnerabilities that other networks share [9]. These vulnerabilities make it necessary to explore automated solutions that effectively protect the FANET without compromising the UAVs' performance.
This paper offers the following contributions below:
* A Survey of recent IDS using machine learning (ML) techniques on UAVs.
* Discuss research challenges and limitations.
* Suggested future research directions based on the research and lessons learned.
The remainder of this paper is arranged as follows: Section II defines FANET and UAV components. This will also mention security models. Section III addresses IDS and machine learning. Section IV is the survey of ML UAV IDS. Section V consists of research challenges. Section VI is the conclusion of this paper.
## II Fanet & UAV Components
The FANET's primary purpose is to provide fast, dependable, and effective communication links between the UAVs without regard to any prior infrastructure placement. What makes FANETs attractive is the fact that they are inexpensive, fast to set up, scalable when adding numerous UAVs, and have high fault tolerance [10]. Although FANETs are commonly compared to the architecture of Mobile Ad-hoc NETworks (MANETs) and Vehicle Ad-hoc NETworks (VANETs), they have unique characteristics such as mobility, memory, and power when communicating with each other [11, 12, 13]. These characteristics affect not only the performance of the FANET but also the effectiveness of cyber security controls.
UAVs today can be categorized by numerous criteria such as purpose, shape, weight, and communication capabilities [14],[15]. Nevertheless, they all share similar components while operating in flight. For example, a popular multi-rotor drone such as DJI's Phantom 4 contains features common to most drones, such as sensors (GPS receivers, obstacle avoidance, etc.), a motherboard, CPU, memory, a battery power source, a flight controller, and motors [16],[13]. With these components come several challenges that must be addressed:
* **Power limitations**: Given UAVs components, battery power restricts the UAV capabilities. Also, Cyber-attacks can maliciously drain the power consumption of the UAV, causing it to disconnect from the FANET altogether [13].
* **Memory and CPU**: Robust cryptography methods are required within the UAVs to ensure data confidentiality. However, options are minimal due to limited battery power, hardware, and processing speeds. Without energy-efficient cryptographic algorithms, successful authentication attacks would exhaust UAV resources [13].
* **Mobility**: UAV movement constantly fluctuates, impacting the network topology and performance. For instance, UAV node speeds ranging from 30 to 460 km per hour [18] make high mobility a significant challenge [19]. Due to these challenges, UAVs are subject to interference-type attacks such as signal jamming.
* **Wireless Network**: A communications medium must be established to transmit navigation and sensory data. FANETs may use wireless protocols such as IEEE 802.11, IEEE 802.15.4, Bluetooth, satellite, or cellular mobile technologies, each with its own challenges in signal strength, bandwidth, network management, and security [20].
FANET communication systems are also vulnerable to the same cyber-attacks as many wireless networks, such as jamming, spoofing, and intrusion of their network [32][21]. However, the stakes have never been higher due to the mobility of drones and the safety concerns should they be compromised while in flight.
From a cybersecurity perspective, one must follow the most basic security model, such as the CIA triad, to search for vulnerabilities within any system. The CIA triad is an acronym for Confidentiality, Integrity, and Availability. Confidentiality ensures that data or information is viewable by authorized nodes. Integrity refers to communicating messages that are not altered or deleted by anyone or anything. Availability describes a node always being online and ready to receive instructions from other nodes [21]. With this security model in mind, security measures like an IDS system can be implemented to protect or detect malicious threats on the FANET network.
## III IDS & Machine Learning
An IDS is a system composed of hardware or software that monitors network traffic for unauthorized behavior and reports it [22]. Types of unauthorized behavior include but are not limited to malware, UAV spoofing, routing attacks, and data forgery [23]. When deploying the IDS onto the network, it can be either host-based or network-based. For example, a host-based IDS can be inserted directly inside each UAV, while a network-based IDS can be inserted inside a network system on the ground. Given how decentralized FANETs are, and their component challenges, host-based IDS deployment may be the most effective implementation for detection. When it comes to IDS detection methods, there are several types of techniques:
1. Signature-based detection relies on comparing previous attack signatures or patterns to suspicious activity on the network to match the same signature. One major drawback is that it cannot detect newer attacks or zero-day exploits because there are no known signatures to compare it to.
2. Anomaly-based detection relies on a predefined model of normal network behavior instead of relying on patterns or signatures. Should network traffic not fit the model, it predicts the behavior as an anomaly. This method is excellent when detecting newer attacks but has limitations, such as seeing encrypted packets and higher false positives [22].
3. Hybrid detection is a combination of both signature and anomaly-based detection methods. This reduces the number of false positives while combining both their strengths [23].
4. Machine learning-based detection uses Machine learning (ML) algorithms to identify malicious network traffic. The detection methods can be signature, anomaly, or hybrid-specific. The main difference with ML is that the algorithms can reconfigure the IDS and improve the detection accuracy of newer or missed threats over time [24].
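To illustrate the anomaly-based approach above, a toy detector can model "normal" traffic statistics and flag large deviations. The features and threshold below are hypothetical; a real FANET IDS would use far richer models:

```python
import numpy as np

class ZScoreAnomalyIDS:
    """Toy anomaly-based detector: fit mean/std of 'normal' traffic features,
    then flag any sample whose per-feature z-score exceeds a threshold."""

    def __init__(self, threshold=3.0):
        self.threshold = threshold

    def fit(self, normal_traffic):
        # normal_traffic: (n_samples, n_features) of benign traffic only
        self.mu = normal_traffic.mean(axis=0)
        self.sigma = normal_traffic.std(axis=0) + 1e-9  # guard against zero std
        return self

    def predict(self, traffic):
        # 1 = anomaly (possible intrusion), 0 = looks like normal traffic
        z = np.abs((traffic - self.mu) / self.sigma)
        return (z.max(axis=1) > self.threshold).astype(int)
```

This captures the core trade-off mentioned above: anything far from the learned "normal" model is flagged, so novel attacks are detectable, but unusual benign traffic produces false positives.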
There have been attempts to create an IDS solution for FANETs without machine learning, but they lacked overall effectiveness. For example, a proposed threat estimation model based on the belief approach was used to reduce IDS false positives. However, this method was based on known behaviors, limiting its effectiveness against unknown anomalies [25, 26]. Given the limited resources of UAVs and the mobility demands on FANETs, the survey findings in Section IV will explore machine learning-based IDS.
Machine learning is considered a subfield of artificial intelligence (AI), enabling computer systems to learn from experiences using algorithms and models over time. The algorithms are given sample data to build a model to make decisions or predictions outside the confines of its original programming [27].
Federated learning is a machine learning technique where the user's local data is never sent to centralized servers, ensuring data privacy [28]. Each client uses their data to train a piece of a model sent from a server on the ground. The client then encrypts its results and uploads data back to the ground server. The server then collects and decrypts all pieces of the model from all clients and pieces them together, forming an updated and improved global model to distribute back to the clients [29]. This process between the server and client is repeated until an acceptable accuracy is achieved before the final model is available for use by all clients.
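The server-client loop just described is the Federated Averaging (FedAvg) pattern; a minimal sketch follows, with a plain logistic-regression model standing in for each client's local IDS model (all names are illustrative, and a real deployment would also encrypt the exchanged updates):

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    # one client's local training: logistic-regression gradient descent
    # on its own data, starting from the broadcast global weights
    w = weights.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def fed_avg(global_w, client_data, n_rounds=10):
    # server loop: broadcast global weights, collect client updates,
    # and average them (weighted by client dataset size) into a new model
    for _ in range(n_rounds):
        updates, sizes = [], []
        for X, y in client_data:
            updates.append(local_update(global_w, X, y))
            sizes.append(len(y))
        global_w = np.average(np.stack(updates), axis=0,
                              weights=np.asarray(sizes, dtype=float))
    return global_w
```

Each client's raw data never leaves the device; only model weights travel between the UAVs and the ground server, which is what makes the scheme attractive for privacy-sensitive FANET traffic.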
While Federated Learning seems like an ideal IDS solution for FANETs, it has not yet been applied to FANETs specifically. Nevertheless, Federated Learning holds excellent promise as an effective IDS: studies have shown that it effectively detects malicious traffic in other kinds of ad-hoc networks [22],[30].
## IV Survey of Machine Learning Approaches to UAV Security
Table 1 summarizes the UAV IDS approaches in building an automated system that can handle known and unknown attacks with the help of machine learning methods.
O. Bouhamed et al. [33] proposed IDS and intrusion detection prevention system (IDPS) using the method Deep
Reinforcement Learning (specifically Deep Q-learning) to allow autonomous detection of suspicious attacks on the UAV network, such as signal jamming and spoofing. Abu Al-Haija et al. [34] proposes an autonomous IDS that detects malicious threats against UAVs using deep convolutional neural networks (UAV-IDS-ConvNet). This was done using a two-class classifier on the UAV-IDS-2020 dataset to enhance detection by the deep-learning model.

Figure 1: FANET Structure Illustration
Kyung Ho Park [35] proposed an IDS for UAVs leveraging unsupervised learning. He pointed out that supervised learning models cannot identify attacks not included in their training data. His model therefore does not require heavy data labeling, yet provides an effective IDS for detecting jamming and spoofing attacks on UAVs. Liang Xiao [36] focuses on physical-layer security methods to defend UAVs against jamming, eavesdropping, and spoofing attacks; reinforcement learning is proposed to achieve optimal power allocation against these attacks.
Gaoyang Liu et al. [37] were concerned with how easily bad actors can imitate satellite signals and proposed a GPS spoofing detection system that uses the machine learning algorithms XGBoost and K-NN to establish the true position of the UAV and detect whether its positional data has been compromised. Menaka Arthur et al. [38] proposes a lightweight IDS using an unsupervised feature-learning algorithm, Self-Taught Learning (STL), which builds a training dataset from unlabeled data collected from the UAV's sensors. A multiclass SVM is also used to ensure high detection rates of the IDS.
Jason Whelan et al. [39] proposed one-class classifiers, such as the One-Class Support Vector Machine (OC-SVM), to train their "novelty-based" IDS, which learns only typical sensor values from previous flight logs. Roshaan Mehmood et al. [40] proposed finding the most accurate machine learning algorithm among SVM, KNN, and Random Forest by training each on the CIC-IDS2018 dataset. While using the Cooja simulator to simulate the UAV environment, the Random Forest classifier had the best accuracy, ranging from 95%-96%. Rabie Ramadan et al. [41] proposed an IDS for FANET using deep learning and big data analytics. The framework used two Recurrent Neural Network (RNN) modules, one at ground level and another inside the FANET.
## V Research Challenges
With machine learning playing a more significant role in UAV IDS, effectiveness and efficiency seem obtainable. However, by adding ML into UAV IDS, some challenges must be addressed:
* Data collection and quality: training algorithms that detect network intrusions require massive volumes of data, which are in short supply in this domain. Worse yet, the few available datasets may be incomplete or biased.
* Model selection: Many algorithms are available to develop a UAV IDS. However, selecting the "best" one depends on many factors, such as the selected data, desired levels of accuracy, and the problem that needs to be solved.
* Transferability: ML models are not universally transferable across UAV IDS deployments, due to differing hardware and software requirements.
* Data security: data can be manipulated or injected so that the IDS becomes ineffective in identifying intrusions on UAVs or the FANETs themselves. Ways to verify data integrity are a must in this case.
Addressing these challenges will require further research to ensure an efficient, effective, and secure UAV IDS system.
## VI Conclusion
This paper provided a short overview of the usage of machine learning in UAV IDS. Also, discussions relating to FANET, UAVs, and ML were made. Lastly, research challenges and future research were addressed in creating an effective UAV IDS system.
|
2303.07540 | Tensor-based Multimodal Learning for Prediction of Pulmonary Arterial
Wedge Pressure from Cardiac MRI | Heart failure is a serious and life-threatening condition that can lead to
elevated pressure in the left ventricle. Pulmonary Arterial Wedge Pressure
(PAWP) is an important surrogate marker indicating high pressure in the left
ventricle. PAWP is determined by Right Heart Catheterization (RHC) but it is an
invasive procedure. A non-invasive method is useful in quickly identifying
high-risk patients from a large population. In this work, we develop a tensor
learning-based pipeline for identifying PAWP from multimodal cardiac Magnetic
Resonance Imaging (MRI). This pipeline extracts spatial and temporal features
from high-dimensional scans. For quality control, we incorporate an epistemic
uncertainty-based binning strategy to identify poor-quality training samples.
To improve the performance, we learn complementary information by integrating
features from multimodal data: cardiac MRI with short-axis and four-chamber
views, and Electronic Health Records. The experimental analysis on a large
cohort of $1346$ subjects who underwent the RHC procedure for PAWP estimation
indicates that the proposed pipeline has a diagnostic value and can produce
promising performance with significant improvement over the baseline in
clinical practice (i.e., $\Delta$AUC $=0.10$, $\Delta$Accuracy $=0.06$, and
$\Delta$MCC $=0.39$). The decision curve analysis further confirms the clinical
utility of our method. | Prasun C. Tripathi, Mohammod N. I. Suvon, Lawrence Schobs, Shuo Zhou, Samer Alabed, Andrew J. Swift, Haiping Lu | 2023-03-14T00:05:08Z | http://arxiv.org/abs/2303.07540v2 | Tensor-based Multimodal Learning for Prediction of Pulmonary Arterial Wedge Pressure from Cardiac MRI
###### Abstract
Heart failure is a serious and life-threatening condition that can lead to elevated pressure in the left ventricle. Pulmonary Arterial Wedge Pressure (PAWP) is an important surrogate marker indicating high pressure in the left ventricle. PAWP is determined by Right Heart Catheterization (RHC) but it is an invasive procedure. A non-invasive method is useful in quickly identifying high-risk patients from a large population. In this work, we develop a tensor learning-based pipeline for identifying PAWP from multimodal cardiac Magnetic Resonance Imaging (MRI). This pipeline extracts spatial and temporal features from high-dimensional scans. For quality control, we incorporate an epistemic uncertainty-based binning strategy to identify poor-quality training samples. To improve the performance, we learn complementary information by integrating features from multimodal data: cardiac MRI with short-axis and four-chamber views, and Electronic Health Records. The experimental analysis on a large cohort of 1346 subjects who underwent the RHC procedure for PAWP estimation indicates that the proposed pipeline has a diagnostic value and can produce promising performance with significant improvement over the baseline in clinical practice (i.e., \(\Delta\text{AUC}=0.10\), \(\Delta\text{Accuracy}=0.06\), and \(\Delta\text{MCC}=0.39\)). The decision curve analysis further confirms the clinical utility of our method.
Keywords:Cardiac MRI Multimodal Learning Pulmonary Arterial Wedge Pressure.
## 1 Introduction
Heart failure is usually characterized by the inability of the heart to supply enough oxygen and blood to other organs of the body [5]. It is a major cause of
mortality and hospitalization [16]. Elevated Pulmonary Arterial Wedge Pressure (PAWP) is indicative of raised left ventricular filling pressure and reduced contractility of the heart. In the absence of mitral valve or pulmonary vasculature disease, PAWP correlates with the severity of heart failure and risk of hospitalization [2]. While PAWP can be measured by invasive and expensive Right Heart Catheterization (RHC), simpler and non-invasive techniques could aid in better monitoring of heart failure patients.
Cardiac Magnetic Resonance Imaging (MRI) is an effective tool for identifying various heart conditions and its ability to detect disease and predict outcome has been further improved by machine learning techniques [4]. For instance, Swift et al. [20] introduced a machine-learning pipeline for identifying Pulmonary Artery Hypertension (PAH). Goh et al. [7] performed right ventricular remodeling for predicting treatment failure in heart patients using cardiac MRI. Recently, Uthoff et al. [21] developed geodesically smoothed tensor features for predicting mortality in PAH.
Cardiac MRI scans contain high-dimensional spatial and temporal features generated throughout the cardiac cycle. The small number of samples compared to the high-dimensional features poses a challenge for machine learning classifiers. To address this issue, Multilinear Principal Component Analysis (MPCA) [13] utilizes a tensor-based approach to reduce feature dimensions while preserving the information for each mode, i.e. spatial and temporal information in cardiac MRI. Hence, the MPCA method is well-suited for analyzing cardiac MRI scans. The application of the MPCA method to predict PAWP might further increase the diagnostic yield of cardiac MRI in heart failure patients and help to establish cardiac MRI as a non-invasive alternative to RHC. Existing MPCA-based pipelines for cardiac MRI [20, 21, 3] rely on manually labeled landmarks that are used for aligning heart regions in cardiac MRI. The manual labeling of landmarks is a cumbersome task for physicians and impractical for analyzing large cohorts. Moreover, even small deviations in the landmark placement may significantly impact the classification performance of automatic pipelines [18]. To tackle this challenge, we leverage automated landmarks with uncertainty quantification [17] in our pipeline.
In recent years, multimodal learning obtained remarkable performance for solving various healthcare problems [1]. The utilization of EHR (Electronic Health Record) features enhanced the diagnostic power in several studies [19]. This motivates us to extract complementary information from multimodal data from short-axis, four-chamber, and EHR features. Specifically, we aim to utilize EHR features identified in the baseline work by Garg et al. [6] for PAWP prediction. These features include left arterial volume and left ventricular mass.
Our **main contributions** are summarized as follows: 1) **Methodology:** We developed a fully automatic pipeline for PAWP prediction using cardiac MRI and EHR data, which includes automatic landmark detection with uncertainty quantification, an uncertainty-based binning strategy for training sample selection, tensor feature learning, and multimodal feature integration. 2) **Effectiveness:** Extensive experiments on the cardiac MRI scans of 1346 patients with various
heart diseases validated our pipeline with a significant improvement (\(\Delta\text{AUC}=0.1027\), \(\Delta\text{Accuracy}=0.0628\), and \(\Delta\text{MCC}=0.3917\)) over the current clinical baseline. 3) **Clinical utility**: Decision curve analysis indicates the diagnostic value of our pipeline, which can be used in screening high-risk patients from a large population.
## 2 Methods
As shown in Fig. 1, the proposed pipeline for PAWP prediction comprises three components: preprocessing, tensor feature learning, and performance analysis.
**Cardiac MRI Preprocessing:** The preprocessing of cardiac MRI contains (1) normalization of scans, (2) automatic landmark detection, (3) inter-subject registration, and (4) in-plane downsampling. We standardize cardiac MRI intensity levels using Z-score normalization [9] to eliminate inter-subject variations. Furthermore, we detect automated landmarks, which is explained in the next paragraph. We perform affine registration to align the heart regions of different subjects to a target image space. We then carry out in-plane scaling of scans by max-pooling at 2, 4, 8, and 16 times and obtain down-sampled resolutions of \(128\times 128\), \(64\times 64\), \(32\times 32\), and \(16\times 16\), respectively.
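The normalization and max-pooling steps can be sketched in a minimal 2D form (applied per slice and time phase in practice; the epsilon guard is an implementation detail added here):

```python
import numpy as np

def z_score_normalize(scan):
    # standardize intensities to zero mean, unit variance
    # to remove inter-subject intensity variation
    return (scan - scan.mean()) / (scan.std() + 1e-9)

def max_pool_2d(img, factor):
    # in-plane downsampling by max-pooling,
    # e.g. 128x128 -> 64x64 for factor=2
    h, w = img.shape
    img = img[: h - h % factor, : w - w % factor]  # crop to a multiple of factor
    return img.reshape(h // factor, factor, w // factor, factor).max(axis=(1, 3))
```

Applying `max_pool_2d` repeatedly with factor 2 yields the 128, 64, 32, and 16 pixel resolutions used in the experiments.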
**Landmark Detection and Uncertainty-based Sample Binning:** We utilize supervised learning to automate landmark detection using an ensemble of Convolutional Neural Networks (CNNs) for each modality (short-axis and four-chamber). We use the U-Net-like architecture and utilize the same training regime implemented in [17]. We employ _Ensemble Maximum Heatmap Activation (E-MHA)_ strategy [17] which incorporates an ensemble of 5 models for each modality. We utilize three landmarks for each modality, with the short-axis modality using the inferior hinge point, superior hinge point, and inferolateral inflection point of the right ventricular apex, and the four-chamber modality using the left ventricular apex and mitral and tricuspid annulus. E-MHA produces
an associated uncertainty estimate for each landmark prediction, representing the model's epistemic uncertainty as a continuous scalar value.

Figure 1: The schematic overview of the PAWP prediction pipeline including preprocessing, tensor feature learning, and performance analysis. The blocks in gray color are explained in more detail in Section 2.
A minor error in landmark prediction can result in incorrect image registration [18]. To address this issue, we hypothesize that incorrectly preprocessed samples resulting from inaccurate landmarks can introduce ambiguity during model training. For quality control, it is crucial to identify and effectively handle such samples. In this study, we leverage predicted landmarks and epistemic uncertainties to tackle this problem using uncertainty-based binning. To this end, we partition the training scans based on the uncertainty values of the landmarks. The predicted landmarks are divided into \(K\) quantiles, i.e., \(Q=\{q_{1},q_{2},...,q_{K}\}\), based on the epistemic uncertainty values. We then iteratively filter out training samples starting from the highest uncertain quantile. A sample is discarded if the uncertainty of any of its landmarks lies in quantile \(q_{k}\) where \(k=\{1,2,...,K\}\). The samples are discarded iteratively until there is no improvement in the validation performance, as measured by the area under the curve (AUC), for two subsequent iterations.
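A single filtering pass of this binning strategy might look like the sketch below; the paper iterates, dropping the most-uncertain bin until the validation AUC stops improving for two iterations, whereas this sketch shows one pass with hypothetical values:

```python
import numpy as np

def filter_by_uncertainty(sample_ids, uncertainties, n_bins=50, drop_top=1):
    """Drop every training sample that has any landmark whose epistemic
    uncertainty falls in one of the `drop_top` most-uncertain quantile bins.
    `uncertainties` has shape (n_samples, n_landmarks)."""
    # quantile bin edges computed over all landmark uncertainties
    edges = np.quantile(uncertainties.ravel(), np.linspace(0, 1, n_bins + 1))
    cutoff = edges[n_bins - drop_top]  # lower edge of the dropped bins
    keep = (uncertainties < cutoff).all(axis=1)
    return [s for s, k in zip(sample_ids, keep) if k]
```

A sample is discarded as soon as one of its landmarks is too uncertain, matching the quality-control rule described above.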
**Tensor Feature Learning:** To extract features from processed cardiac scans, we employ tensor feature learning, i.e. Multilinear Principal Component Analysis (MPCA) [13], which learns multilinear bases from cardiac MRI stacks to obtain low-dimensional features for prediction. Suppose we have \(M\) scans as third-order tensors in the form of \(\{\mathcal{X}_{1},\mathcal{X}_{2},..,\mathcal{X}_{M}\in\mathbb{R}^{I_{1}\times I _{2}\times I_{3}}\}\). The low-dimensional tensor features \(\{\mathcal{Y}_{1},\mathcal{Y}_{2},..,\mathcal{Y}_{M}\in\mathbb{R}^{P_{1} \times P_{2}\times P_{3}}\}\) are extracted by learning three (\(N=3\)) projection matrices \(\{U^{(n)}\in\mathbb{R}^{I_{n}\times P_{n}},n=1,2,3\}\) as follows:
\[\mathcal{Y}_{m}=\mathcal{X}_{m}\times_{1}U^{(1)^{T}}\times_{2}U^{(2)^{T}} \times_{3}U^{(3)^{T}},m=1,2,...,M, \tag{1}\]
where \(P_{n}<I_{n}\), and \(\times_{n}\) denotes a mode-wise product. Therefore, the feature dimensions are reduced from \(I_{1}\times I_{2}\times I_{3}\) to \(P_{1}\times P_{2}\times P_{3}\). We optimize the projection matrices \(\{U^{(n)}\}\) by maximizing total scatter \(\psi_{\mathcal{Y}}=\sum_{m=1}^{M}||\mathcal{Y}_{m}-\bar{\mathcal{Y}}||_{F}^{2}\), where \(\bar{\mathcal{Y}}=\frac{1}{M}\sum_{m=1}^{M}\mathcal{Y}_{m}\) is the mean tensor feature and \(||.||_{F}\) is the Frobenius norm [12]. We solve this problem using an iterative projection method. In MPCA, \(\{P_{1},P_{2},P_{3}\}\) can be determined by the explained variance ratio, which is a hyperparameter. Furthermore, we apply Fisher discriminant analysis to select the most significant features based on their Fisher score [10]. We select the top \(k\)-ranked features and employ Support Vector Machine (SVM) for classification.
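The mode-wise products of Eq. (1) can be sketched directly in NumPy (illustrative only; the paper uses the MPCA implementation from the PyKale library):

```python
import numpy as np

def mode_n_product(X, U, n):
    """Mode-n product X x_n U^T for a projection matrix U of shape
    (I_n, P_n), reducing mode n from I_n to P_n dimensions."""
    Xn = np.moveaxis(X, n, 0)                  # bring mode n to the front
    Yn = np.tensordot(U.T, Xn, axes=(1, 0))    # contract over I_n
    return np.moveaxis(Yn, 0, n)               # restore the mode order

def mpca_project(X, Us):
    # apply all three projection matrices of Eq. (1) in sequence
    Y = X
    for n, U in enumerate(Us):
        Y = mode_n_product(Y, U, n)
    return Y
```

For a scan tensor of shape \((I_1, I_2, I_3)\) and matrices of shapes \((I_n, P_n)\), the output has the reduced shape \((P_1, P_2, P_3)\).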
**Multimodal Feature Integration:** To enhance performance, we perform multimodal feature integration using features extracted from the short-axis, four-chamber, and EHR. We adopt two strategies for feature integration, namely the early and late fusion of features [8]. In early fusion, the features are fused at the input level without doing any transformation. We concatenate features from the short-axis and four-chamber to perform this fusion. We then apply MPCA [13] on the concatenated tensor, enabling the selection of multimodal features. In late fusion, the integration of features is performed at the common latent space that allows the fusion of features that have different dimensionalities. In this way,
we can perform a late fusion of EHR features with short-axis and four-chamber features. However, we can not perform an early fusion of EHR features with short-axis and four-chamber features.
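The two integration strategies can be sketched as follows, with `extract_features` standing in for the MPCA and Fisher-selection step (shapes and names are illustrative):

```python
import numpy as np

def early_fusion(sa_tensor, fc_tensor, extract_features):
    # fuse at the input level: concatenate the raw modality tensors,
    # then learn multimodal features from the joint tensor
    joint = np.concatenate([sa_tensor, fc_tensor], axis=0)
    return extract_features(joint)

def late_fusion(feature_vectors):
    # fuse in a common latent space: per-modality feature vectors may
    # have different dimensionalities (e.g. short-axis, four-chamber, EHR)
    return np.concatenate(feature_vectors)
```

Early fusion requires the modalities to share a tensor layout, which is why EHR features (plain vectors) can only enter via late fusion.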
**Performance Evaluation:** In this paper, we use three primary metrics: Area Under Curve (AUC), accuracy, and Matthew's Correlation Coefficient (MCC), to evaluate the performance of the proposed pipeline. Decision Curve Analysis (DCA) is also conducted to demonstrate the clinical utility of our methodology.
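MCC, the least common of the three metrics, is computed from the binary confusion matrix; a standard formulation (not code from the paper) is:

```python
import numpy as np

def mcc(y_true, y_pred):
    # Matthews Correlation Coefficient for binary labels in {0, 1}
    tp = np.sum((y_pred == 1) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    denom = np.sqrt(float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    # convention: return 0 when any marginal is empty
    return 0.0 if denom == 0 else (tp * tn - fp * fn) / denom
```

MCC ranges from -1 to 1 and, unlike accuracy, stays informative under the class imbalance present here (940 normal vs. 406 elevated PAWP).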
## 3 Experimental Results and Analysis
**Study Population**: Patients with suspected pulmonary hypertension were identified after institutional review board approval and ethics committee review. A total of 1346 patients who underwent Right Heart Catheterization (RHC) and cardiac MRI scans within 24 hours were included. Of these patients, 940 had normal PAWP (\(\leq\) 15 mmHg), while 406 had elevated PAWP (\(>\) 15 mmHg). Table 1 summarizes baseline patient characteristics. RHC was performed using a balloon-tipped 7.5 French thermodilution catheter.
**Cardiac MRI data:** MRI scans were obtained using a 1.5 Tesla whole-body GE HDx MRI scanner (GE Healthcare, Milwaukee, USA) equipped with 8-channel cardiac coils and retrospective electrocardiogram gating. Two cardiac MRI protocols, short-axis and four-chamber, were employed, following standard clinical protocols to acquire cardiac-gated multi-slice steady-state sequences with a slice thickness of 8 mm, a field of view of \(48\times 43.2\), a matrix size of \(512\times 512\), a bandwidth of 125 kHz, and TR/TE of 3.7/1.6 ms. The proposed method works on volumetric slices of cardiac MRI containing 20 temporal phases.
**Experimental Design:** We conducted experiments on short-axis and four-chamber scans across four scales. To determine the optimal parameters, we performed 10-fold cross-validation on the training set. From MPCA, we selected the top 210 features. We employed both early and late fusion for short-axis and four-chamber scans, while EHR features were fused using only the late fusion strategy. We divided the data into a training set of 1081 cases and a testing set of 265 cases. To simulate a real testing scenario, we designed the
\begin{table}
\begin{tabular}{l|l|l|l} \hline \hline & Low PAWP(\(\leq 15\)) & High PAWP(\(>15\)) & \(p\)-value \\ \hline Number of patients & 940 & 406 & - \\ \hline Age (in years) & \(64.8\pm 14.2\) & \(70.5\pm 10.6\) & \(<0.01\) \\ \hline Body Surface Area (BSA) & \(1.88\pm 0.28\) & \(1.93\pm 0.24\) & \(<0.01\) \\ \hline Heart Rate (bpm) & \(73.9\pm 15.5\) & \(67.6\pm 15.9\) & \(<0.01\) \\ \hline Left Ventricle Mass (LVM) & \(92.3\pm 25\) & \(106\pm 33.1\) & \(<0.01\) \\ \hline Left Atrial Volume (\(ml^{2}\)) & \(72.2\pm 33.7\) & \(132.2\pm 56.7\) & \(<0.01\) \\ \hline PAWP (mmHg) & \(10.3\pm 3.1\) & \(21.7\pm 4.96\) & \(<0.01\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Baseline characteristics of included patients. \(p\) values were obtained using \(t\)-test [23].
experiments such that patients diagnosed in the early years were part of the training set, while patients diagnosed in recent years were part of the testing set. We also partitioned the test set into 5 parts based on the diagnosis time to perform different runs of the methods and report their standard deviations in the comparison results. For SVM, we selected the optimal hyper-parameters from \(\{0.001,0.01,0.1,1\}\) using a grid-search technique. The code for the experiments has been implemented in Python (version 3.9). We leveraged the cardiac MRI preprocessing pipeline and MPCA from the Python library PyKale [11]; the SVM implementation is taken from scikit-learn [14].
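The hyper-parameter selection step can be sketched as below. This is an illustrative reconstruction, not the study's code: the feature matrix and labels are random placeholders, not the cardiac MRI data.

```python
# Hypothetical sketch of the grid search described above: an SVM whose
# regularization parameter C is selected from {0.001, 0.01, 0.1, 1} by
# cross-validated grid search (scikit-learn). X and y are placeholders.
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))        # placeholder feature matrix
y = rng.integers(0, 2, size=100)      # placeholder binary PAWP labels

grid = GridSearchCV(SVC(), param_grid={"C": [0.001, 0.01, 0.1, 1]}, cv=5)
grid.fit(X, y)
print(grid.best_params_["C"])
```

`best_params_` then holds the selected value of C, which is reused when refitting on the full training set.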
**Uncertainty-Based Sample Binning:** To improve the quality of training data, we used quantile binning to remove training samples with uncertain landmarks. The landmarks were divided into 50 bins, and then removed one bin at a
\begin{table}
\begin{tabular}{l|c|c|c|c} \hline Modality & Resolution & AUC & Accuracy & MCC \\ \hline Unimodal (EHR) [6] & - & \(0.7300\pm 0.04\) & \(0.7400\pm 0.03\) & \(0.1182\pm 0.03\) \\ \hline Unimodal (SA) & \(64\times 64\) & \(0.7391\pm 0.05\) & \(0.7312\pm 0.07\) & \(0.3604\pm 0.02\) \\ & \(128\times 128\) & \(0.7495\pm 0.05\) & \(0.7321\pm 0.04\) & \(0.3277\pm 0.01\) \\ \hline Unimodal (FC) & \(64\times 64\) & \(0.8034\pm 0.02\) & \(0.7509\pm 0.04\) & \(0.4240\pm 0.02\) \\ & \(128\times 128\) & \(0.8100\pm 0.04\) & \(0.7925\pm 0.05\) & \(0.4666\pm 0.02\) \\ \hline Bi-modal (SA and FC): & \(64\times 64\) & \(0.7998\pm 0.01\) & \(0.7698\pm 0.03\) & \(0.4185\pm 0.03\) \\ Early fusion & \(128\times 128\) & \(0.7470\pm 0.02\) & \(0.7283\pm 0.02\) & \(0.3512\pm 0.02\) \\ \hline Bi-modal (SA and FC): & \(64\times 64\) & \(0.8028\pm 0.04\) & \(0.7509\pm 0.03\) & \(0.3644\pm 0.01\) \\ Late fusion & \(128\times 128\) & \(0.8122\pm 0.03\) & \(0.7547\pm 0.03\) & \(0.3594\pm 0.02\) \\ \hline Bi-modal (SA and EHR): & \(64\times 64\) & \(0.7564\pm 0.04\) & \(0.7585\pm 0.02\) & \(0.3825\pm 0.02\) \\ Late fusion & \(128\times 128\) & \(0.7629\pm 0.03\) & \(0.7434\pm 0.03\) & \(0.3666\pm 0.03\) \\ \hline Bi-modal (FC and EHR): & \(64\times 64\) & \(0.8061\pm 0.03\) & \(0.7709\pm 0.02\) & \(0.4435\pm 0.02\) \\ Late fusion & \(128\times 128\) & \(0.8135\pm 0.02\) & \(0.7925\pm 0.02\) & \(0.4999\pm 0.03\) \\ \hline Tri-modal (FC, SA, and EHR) & \(64\times 64\) & \(0.8146\pm 0.04\) & \(0.7774\pm 0.03\) & \(0.4460\pm 0.02\) \\ Hybrid fusion & \(128\times 128\) & \(\mathbf{0.8327\pm 0.06}\) & \(\mathbf{0.8038\pm 0.05}\) & \(\mathbf{0.5099\pm 0.04}\) \\ \hline \end{tabular}
\end{table}
Table 2: Performance comparison using three metrics (with **best** in bold and second best underlined). FC: Four-Chamber features; SA: Short-Axis features; EHR: Electronic Health Record features. The standard deviations of methods were obtained by dividing the test set into 5 parts based on the diagnosis time.
Figure 2: Performance comparison of removing a different number of bins of training data on 10-fold cross-validation.
time in descending order of their uncertainties. Figure 2 depicts the results of binning using 10-fold cross-validation on the training set, where performance improves consistently over the four scales when the number of removed bins is \(\leq 5\). Based on these results, we removed 5 bins (129 out of 1081 samples) from the training set and used the remaining 952 training samples for the following experiments.
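The binning procedure can be sketched as below. This is a minimal illustration under stated assumptions: the uncertainty scores are random placeholders, and `pandas.qcut` stands in for whatever quantile routine the authors actually used.

```python
# Sketch of uncertainty-based sample binning: split samples into 50
# equal-frequency bins by landmark uncertainty, then drop the 5 most
# uncertain bins from the training set. `uncertainty` is a placeholder.
import numpy as np
import pandas as pd

n_bins, bins_to_remove = 50, 5
uncertainty = np.random.default_rng(0).random(1000)      # placeholder scores

bin_ids = pd.qcut(uncertainty, q=n_bins, labels=False)   # 0 = least uncertain
keep = bin_ids < (n_bins - bins_to_remove)               # drop the top-5 bins
print(keep.sum())                                        # samples retained
```

With 1000 distinct scores and 50 bins, each bin holds 20 samples, so dropping 5 bins removes exactly 100 samples here.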
**Unimodal Study:** We compared the performance of single modalities, including short-axis, four-chamber, and EHR features. As a baseline, we used previously reported EHR features [6], which include left ventricle mass and left atrial volume. Table 2 presents the performance of the unimodal models. The four-chamber modality model produced the best performance, with improvements over the baseline (\(\Delta\text{AUC}=0.0800\), \(\Delta\text{Accuracy}=0.0527\), and \(\Delta\text{MCC}=0.3484\)). This experiment indicates that tensor-based features have diagnostic value.
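For reference, the three metrics reported throughout Table 2 can be computed with scikit-learn as below (toy labels and scores, not the study data; MCC is `matthews_corrcoef`):

```python
# AUC is computed from the continuous scores; Accuracy and MCC from the
# thresholded decisions. Labels/scores below are a toy example.
import numpy as np
from sklearn.metrics import accuracy_score, matthews_corrcoef, roc_auc_score

y_true = np.array([0, 0, 1, 1, 0, 1])
y_prob = np.array([0.2, 0.4, 0.8, 0.6, 0.3, 0.9])   # predicted probabilities
y_pred = (y_prob > 0.5).astype(int)                 # thresholded decisions

auc = roc_auc_score(y_true, y_prob)
acc = accuracy_score(y_true, y_pred)
mcc = matthews_corrcoef(y_true, y_pred)
print(auc, acc, mcc)
```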
**Bi-modal Study:** In this experiment, we compared the performance of bi-modal models. As shown in Table 2, the bi-modal (four-chamber and EHR) model performs best among the bi-modal models (AUC = 0.8135, Accuracy = 0.7925, and MCC = 0.4999). Next, we investigated the effect of fusing EHR features with the short-axis and four-chamber modalities in Fig. 3. It can be observed that fusing in EHR features enhances the diagnostic power of the cardiac MRI modalities at all scales. The bi-modal (four-chamber and EHR) model achieved an improvement of \(\Delta\text{AUC}=0.0035\) and \(\Delta\text{MCC}=0.0333\) over the unimodal (four-chamber) model.
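A hedged sketch of the two fusion strategies compared here; the exact fusion operators are assumptions (early fusion taken as feature concatenation before a single classifier, late fusion as averaging per-modality predicted probabilities), and the data are placeholders.

```python
# Early vs. late fusion on two placeholder modalities (sa, fc).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=200)
sa = rng.normal(size=(200, 210))   # placeholder short-axis MPCA features
fc = rng.normal(size=(200, 210))   # placeholder four-chamber MPCA features

# Early fusion: one model on the concatenated feature vector.
early = SVC(probability=True).fit(np.hstack([sa, fc]), y)

# Late fusion: one model per modality, decisions averaged afterwards.
p_sa = SVC(probability=True).fit(sa, y).predict_proba(sa)[:, 1]
p_fc = SVC(probability=True).fit(fc, y).predict_proba(fc)[:, 1]
late_pred = ((p_sa + p_fc) / 2 > 0.5).astype(int)
```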
Figure 3: The effect of combining EHR features on short-axis and four-chamber. SA: Short-axis; FC: Four-chamber.
**Effectiveness of Tri-modal:** In this experiment, we fused EHR features with the bi-modal models to create two tri-modal models: tri-modal late (EHR with a late fusion of short-axis and four-chamber) and tri-modal hybrid (EHR with an early fusion of short-axis and four-chamber). As shown in Fig. 4, EHR features enhance the performance of the bi-modal models, and the tri-modal hybrid outperforms all other models. It achieved the best overall performance (AUC = 0.8327, Accuracy = 0.8038, and MCC = 0.5099; see Table 2), with significant improvements over the baseline method (\(\Delta\)AUC = 0.1027, \(\Delta\)Accuracy = 0.0628, and \(\Delta\)MCC = 0.3917).
**Decision Curve Analysis:** We performed Decision Curve Analysis (DCA) [22, 15] to show the potential clinical utility of the proposed method. As shown in Fig. 5, the Tri-modal model outperformed the baseline method for most possible benefit/harm preferences, where benefit indicates a positive net benefit (i.e. correct
Figure 4: The effect of combining EHR features on the bi-modals including early and late fusion of four-chamber and short-axis. Early fusion: early fusion of short-axis and four-chamber; late fusion: late fusion of short-axis and four-chamber.
Figure 5: Evaluating clinical utility of our method using Decision Curve Analysis (DCA) [22]. “Treat All” means treating all patients, regardless of their actual disease status, while “Treat None” means treating no patients at all. Our predictive model’s net benefit is compared with the net benefit of treating everyone or no one to determine its overall utility.
diagnosis) and harm indicates a negative net benefit (i.e., incorrect diagnosis). The tri-modal model (the best model) obtained a higher net benefit between decision threshold probabilities of 0.30 and 0.70, which implies that our method has diagnostic value and can be used to screen high-risk patients from a large population.
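The net-benefit curve behind a DCA can be sketched as below, using the standard definition \(\text{NB}(p_t)=\text{TP}/n-(\text{FP}/n)\cdot p_t/(1-p_t)\); the labels and scores are placeholders, not the study data.

```python
# Net benefit of a classifier across decision thresholds, plus the
# "Treat All" reference curve used in decision curve analysis.
import numpy as np

def net_benefit(y_true, y_prob, threshold):
    pred = y_prob >= threshold
    tp = np.sum(pred & (y_true == 1))
    fp = np.sum(pred & (y_true == 0))
    n = len(y_true)
    return tp / n - fp / n * threshold / (1 - threshold)

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=500)
prob = np.clip(0.3 * y + 0.7 * rng.random(500), 0, 1)  # weakly informative scores

thresholds = np.linspace(0.05, 0.95, 19)
curve = [net_benefit(y, prob, t) for t in thresholds]
treat_all = [net_benefit(y, np.ones(500), t) for t in thresholds]  # "Treat All"
```

A model is clinically useful at a threshold when its curve lies above both the "Treat All" and "Treat None" (net benefit 0) references.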
## 4 Conclusions
This paper proposed a tensor learning-based pipeline for PAWP classification. We demonstrated that: 1) tensor-based features have a diagnostic value for PAWP, 2) the integration of EHR features improved the performance of unimodal and bi-modal methods, 3) the pipeline can be used to screen a large population, as shown using decision curve analysis. However, the current study is limited to single institutional data. In the future, we would like to explore the applicability of the method for multi-institutional data using domain adaptation techniques.
## Acknowledgment
The study was supported by the Wellcome Trust grants 215799/Z/19/Z and 205188/Z/16/Z.
|
2310.07478 | Multimodal Graph Learning for Generative Tasks | Multimodal learning combines multiple data modalities, broadening the types
and complexity of data our models can utilize: for example, from plain text to
image-caption pairs. Most multimodal learning algorithms focus on modeling
simple one-to-one pairs of data from two modalities, such as image-caption
pairs, or audio-text pairs. However, in most real-world settings, entities of
different modalities interact with each other in more complex and multifaceted
ways, going beyond one-to-one mappings. We propose to represent these complex
relationships as graphs, allowing us to capture data with any number of
modalities, and with complex relationships between modalities that can flexibly
vary from one sample to another. Toward this goal, we propose Multimodal Graph
Learning (MMGL), a general and systematic framework for capturing information
from multiple multimodal neighbors with relational structures among them. In
particular, we focus on MMGL for generative tasks, building upon pretrained
Language Models (LMs), aiming to augment their text generation with multimodal
neighbor contexts. We study three research questions raised by MMGL: (1) how
can we infuse multiple neighbor information into the pretrained LMs, while
avoiding scalability issues? (2) how can we infuse the graph structure
information among multimodal neighbors into the LMs? and (3) how can we
finetune the pretrained LMs to learn from the neighbor context in a
parameter-efficient manner? We conduct extensive experiments to answer these
three questions on MMGL and analyze the empirical results to pave the way for
future MMGL research. | Minji Yoon, Jing Yu Koh, Bryan Hooi, Ruslan Salakhutdinov | 2023-10-11T13:25:03Z | http://arxiv.org/abs/2310.07478v2 | # Multimodal Graph Learning for Generative Tasks
###### Abstract
Multimodal learning combines multiple data modalities, broadening the types and complexity of data our models can utilize: for example, from plain text to image-caption pairs. Most multimodal learning algorithms focus on modeling simple one-to-one pairs of data from two modalities, such as image-caption pairs, or audio-text pairs. However, in most real-world settings, entities of different modalities interact with each other in more complex and multifaceted ways, going beyond one-to-one mappings. We propose to represent these complex relationships as graphs, allowing us to capture data with any number of modalities, and with complex relationships between modalities that can flexibly vary from one sample to another. Toward this goal, we propose Multimodal Graph Learning (MMGL), a general and systematic framework for capturing information from multiple multimodal neighbors with relational structures among them. In particular, we focus on MMGL for _generative_ tasks, building upon pretrained Language Models (LMs), aiming to augment their text generation with multimodal neighbor contexts. We study three research questions raised by MMGL: (1) how can we infuse multiple neighbor information into the pretrained LMs, while avoiding scalability issues? (2) how can we infuse the graph structure information among multimodal neighbors into the LMs? and (3) how can we finetune the pretrained LMs to learn from the neighbor context in a parameter-efficient manner? We conduct extensive experiments to answer these three questions on MMGL and analyze the empirical results to pave the way for future MMGL research.
## 1 Introduction
There are diverse data modalities in real-world applications, from commonly observed texts, images, and videos to time series data or domain-specific modalities like protein sequences. These various modalities are not collected individually but together with multifaceted relations among them. Wikipedia [2] is one of the most popular sources of multimodal web content, providing multimodal data such as texts, images, and captions. TimeBuilder [29], recently released by Meta, builds personal timelines using each user's multimodal data, including their photos, maps, shopping, and music history. In addition to these examples, important industrial and medical decisions are also made by considering diverse multimodal data such as images, tables, or audio [13; 26]. These multimodal data have complicated \(many\)-to-\(many\) relations among their multimodal entities -- which can be represented as graphs -- providing open research space on how to understand them holistically.
With the rise of multimodal datasets, various ground-breaking research has been done in multimodal learning. Previously, multimodal learning focused on novel architectures, extending transformers [9; 19; 30] or graph neural networks [12; 25], and training them from scratch using large-scaled multimodal datasets. Fueled by the strong generative power of pretrained Language Models (LMs), recent multimodal approaches [1; 17; 16] are built upon pretrained LMs and focus on the generation
of multimodal content. For instance, [16] generates images/text grounded on given text/images using pretrained image encoders and LMs. However, all existing models assume that a pair of modalities with a clear \(1\)-to-\(1\) mapping is provided as input (e.g., image-caption pairs in Figure 1(a)). As a result, they cannot be directly applied on multimodal datasets with more general \(many\)-to-\(many\) mappings among modalities (e.g., multimodal Wikipedia webpage in Figure 1(b)).
Here, we expand the scope of multimodal learning beyond \(1\)-to-\(1\) mappings into multimodal graph learning (MMGL) while preserving generative abilities by integrating them into pretrained LMs. We introduce a systematic framework on how MMGL processes multimodal neighbor information with graph structures among them and generate free-form texts using pretrained LMs (Figure 2). Our MMGL framework extracts _neighbor encodings_, combines them with _graph structure information_, and optimizes the model using _parameter-efficient fine-tuning_. Accordingly, we define three design spaces to study three research questions for MMGL as follows:
* **Research Question 1.** How can we provide multiple multimodal neighbor information to LMs while avoiding scalability issues?
* **Research Question 2.** How can we infuse graph structure information among multimodal neighbors into LMs?
* **Research Question 3.** How can we finetune pretrained LMs to learn through multimodal neighbor information in parameter-efficient ways?
In conventional multimodal learning with the \(1\)-to-\(1\) mapping assumption, typically only one neighbor is provided (e.g., an image for a text caption) [16; 17; 1]. On the contrary, MMGL requires processing several neighbors of various data sizes (e.g., image resolutions and text sequences of various lengths), which leads to scalability issues. For _Research Question 1_, we study three neighbor encoding models: (1) _Self-Attention with Text + Embeddings_ (SA-Text+Embeddings) precomputes image embeddings using frozen encoders, then concatenates them to the input text sequences along with any raw text from neighbors (originally proposed in [31]), (2) _Self-Attention with Embeddings_ (SA-Embeddings) precomputes embeddings for both text and image modalities using frozen encoders and concatenates them to the input text, and (3) _Cross-Attention with Embeddings_ (CA-Embeddings) feeds precomputed text or image embeddings into cross-attention layers of the LMs.
In _Research Question 2_, we study how to infuse graph structure information among multimodal neighbors into LMs (e.g., section hierarchy and image orders in Figure 1(b)). We compare the sequential position encoding with two graph position encodings widely used in graph transformers [24; 34]: _Laplacian eigenvector position encoding_ (LPE) [6] and _graph neural networks encoding_ (GNN) [15] that runs GNNs on precomputed neighbor embeddings using graphs structures before feeding them into LMs.
_Research Question 3_ seeks to improve the cost and memory efficiency compared to full fine-tuning of LMs. In this work, we explore three parameter-efficient fine-tuning (PEFT) methods [10]: _Prefix tuning_[18], _LoRA_[11], and _Flamingo tuning_[1]. Which PEFT methods to use depends on the
Figure 1: **Multimodal datasets extracted from Wikipedia**: (a) Most multimodal models target multimodal datasets with clear \(1\)-to-\(1\) mappings between modalities. (b) Multimodal Graph Learning (MMGL) handles multimodal datasets with complicated relations among multiple multimodal neighbors.
neighbor encoding model: when neighbor information is concatenated into the input sequences (_SA-Text+Embeddings_ or _SA-Embeddings_ neighbor encodings), we can apply _Prefix tuning_ or _LoRA_ for fine-tuning. When neighbor information is fed into cross-attention layers (_CA-Embeddings_ neighbor encoding), we apply _Flamingo tuning_ that finetunes only cross-attention layers with gating modules for stable finetuning [1].
Based on our MMGL framework, we run extensive experiments on the recently released multimodal dataset, WikiWeb2M [2]. WikiWeb2M unifies each Wikipedia webpage content to include all text, images, and their structures in a single example. This makes it useful for studying multimodal content understanding with many-to-many text and image relationships, in the context of generative tasks. Here, we focus on the section summarization task that aims to generate a sentence that captures information about the contents of one section by understanding the multimodal content on each Wikipedia page. Through rigorous testing on WikiWeb2M, we provide intuitive empirical answers to research questions raised in MMGL.
In summary, our contributions are:
* **Multimodal Graph Learning (MMGL):** We introduce a systematic MMGL framework for processing multimodal neighbor information with graph structures among them, and generating free-form texts using pretrained LMs.
* **Principled Research Questions:** We introduce three research problems MMGL is required to answer: (1) how to provide multiple neighbor information to the pretrained LMs, (2) how to infuse graph structure information into LMs, and (3) how to fine-tune the LMs parameter-efficiently. This paves research directions for future MMGL research.
* **Extensive Empirical Results:** We show empirically that (1) neighbor context improves generation performance, (2) _SA-Text+Embeddings_ neighbor encoding shows the highest performance while sacrificing the scalability, (3) _GNN_ embeddings are the most effective graph position encodings, and (4) _SA-Text+Embeddings_ neighbor encoding with _LoRA_ and _CA-Embeddings_ neighbor encoding with _Flamingo tuning_ show the highest performance among different PEFT models.
Our code is publicly available at 1.
Footnote 1: [https://github.com/minjiyoon/MMGL](https://github.com/minjiyoon/MMGL)
## 2 Related Work
End-to-End Multimodal Learning:While many discriminative multimodal models [14; 22] have also been developed, we primarily consider related work on generative multimodal models, as this is most closely related with our approach. Several recent approaches tackle multimodal learning by building upon the Transformer [32] architecture. Multimodal extensions typically use either full self-attention over modalities concatenated across the sequence dimension [3; 28] or a cross-modal attention layer [30]. Self-supervised multimodal pretraining methods train these architectures from large-scale unlabeled multimodal data before transferring them to downstream multimodal tasks via fine-tuning [9; 19]. These methods perform end-to-end pre-training, incurring extremely high computation costs, especially as model parameters increase [17]. Moreover, this framework is relatively inflexible for end-to-end pre-trained models to leverage readily available unimodal pre-trained models, such as text-only LMs or pretrained vision models.
Multimodal Learning with Frozen Image Encoders and Large Language Models:Recently, various vision-language models have been proposed to leverage off-the-shelf pre-trained models and keep them frozen during pretraining [1; 17; 16]. To input visual information directly to a frozen text-only LLM, a key challenge is to align visual features to the text space. Motivated by Frozen [31], which finetunes a visual encoder to map images into the hidden space of a text-only LLM, Blip-2 [17] and GILL [16] finetune separate image mapping networks whose inputs are precomputed by frozen image encoders and outputs are directly used as soft prompts to LLMs. On the other hand, Flamingo [1] inserts new cross-attention layers into the LLM to inject visual features and pre-trains the new layers on image-text pairs. Note that all these methods primarily focus on processing _interleaved image and text inputs_ to generate text outputs.
Graph Neural Networks on Multimodal Graphs:Several approaches employ Heterogeneous Graph Neural Networks (HGNNs) [36] to learn from multimodal heterogeneous
graphs. This is done through precomputing input node embeddings using frozen encoders, and training the GNN to map different modality embeddings either at the input layer [25], intermediate [12], or late layers [35]. However, most HGNN models focus on node classification, and are difficult to adapt for generative tasks. Recently, various approaches have been proposed to fine-tune LLMs with GNNs on text-attributed graphs [4; 8; 38]. These methods specialize in node/edge classification tasks by putting GNN models after LLMs, making them difficult to adapt for use in generative tasks.
## 3 Multimodal Graph Learning for Generative Tasks
Given multimodal graphs with text or images on each node, we aim to generate text conditioned on each node and its neighbor nodes. More specifically, given text input on a target node, pretrained LMs generate free-form text conditioned on the input text and the multimodal context around the target node. In our multimodal graph learning (MMGL) framework, we first encode each neighbor's information individually using frozen encoders (Figure 2(b)). The frozen encoders could be pretrained ViT [5] or ResNeT [7] for images that map pixels to embeddings, and pretrained LMs [22] for texts that map texts to embeddings (similarly for other modalities). Then, we encode the graph structure around the target node using graph position encodings (Figure 2(c)). Finally, the encoded neighbor
Figure 2: **Multimodal Graph Learning (MMGL) framework**: (a) Multiple multimodal neighbors are given with the input text. (b) Multimodal neighbors are first encoded using frozen vision/text encoders and then aligned to the text-only LM space using 1-layer MLP mappers. The mappers are trained during LM fine-tuning. Based on the neighbor encoding scheme, texts could be used without any preprocessing (_Self-Attention with Text+Embeddings_) or encoded into embeddings (_Self-Attention with Embeddings_ or _Cross-Attention with Embeddings_). Images are always encoded into embeddings to align to the text-only LM space. (c) Graph structures among neighbors are encoded as graph position encodings. (d) Encoded neighbor information could be infused either by concatenating to the input sequences (_Self-Attention with Text+Embeddings_ or _Self-Attention with Embeddings_) or feeding into cross-attention layers (_Cross-Attention with Embeddings_). The graph position encodings are added to the input token/text/image embeddings.
information with graph position encodings is fed into the pretrained LMs with the input text to generate text conditioned on the multimodal input content (Figure 2(d)).
The framework leaves us with three design spaces: (1) how can we feed neighbor information to the LMs? (2) how can we infuse graph structure information among multimodal neighbors into LMs? (3) how can we finetune the pretrained LMs to learn from the neighbor context parameter-efficiently? In this section, we investigate each problem and discuss possible methodologies we can apply.
### Research Question 1: Neighbor Encoding
Unlike existing multimodal learning, which assumes a single image (corresponding to the input text) as input, multimodal graph learning considers an arbitrary number of neighbor images/texts as input; thus, scalability is the first problem to solve to learn from multiple multimodal neighbors. In vision-text models, a standard recipe is to first process images with an image encoder (e.g., ViT, ResNet) into image embeddings, then map the embeddings into the text-only LM space, and finally feed them into the LMs. Two popular ways to feed image embeddings into LMs are with full self-attention over modalities concatenated across the sequence dimension [31] or with cross-modal attention layers [30].
Motivated by these two approaches, we propose three neighbor encoding methods as follows:
* **Self-Attention with Text + Embeddings (SA-Text+Embeddings)**: Text neighbors are concatenated as raw texts, while other modalities are first processed by frozen encoders (e.g., ViT for images), and then their embeddings are concatenated to the input sequence. We add a linear mapper that aligns precomputed embeddings into the text space of LLMs.
* **Self-Attention with Embeddings (SA-Embeddings)**: Same as _SA-Text+Embeddings_ except text neighbors are also processed by separate frozen encoders, and their embeddings are concatenated to the input sequence. Text encoders could be the same or different from the base LLM model.
* **Cross-Attention with Embeddings (CA-Embeddings)**: All neighbors are processed by separate frozen encoders, mapped into the text space by linear mappers, and then fed into cross-attention layers.
In general, when we provide text embeddings instead of raw text, the amount of information the LLMs are able to exploit is bottlenecked by the precomputed embeddings. However, raw texts introduce scalability issues, as the attention mechanism of LMs requires \(O(T^{2})\) compute for sequence length \(T\). Thus, there is a trade-off between performance and scalability. For _SA-Text+Embeddings_ and _SA-Embeddings_, we have additional parameters only for the mappers, which are located outside of the LMs, while _CA-Embeddings_ inserts additional cross-attention layers into the pretrained LMs and trains them from scratch. This means _CA-Embeddings_ could result in an unstable initial state, as the pretrained LLM layers are affected by randomly initialized cross-attention layers. In Section 4.4, we explore these three approaches and discuss their empirical results.
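As a minimal illustration of the sequence-length trade-off, the SA-Embeddings idea can be sketched as below (assumed shapes and a toy mapper, not the paper's implementation): each neighbor costs a single position in the LM input, regardless of its raw length.

```python
# Each neighbor is reduced to one precomputed embedding, mapped into the
# LM hidden space by a 1-layer mapper, and prepended to the input token
# embeddings before self-attention.
import torch
import torch.nn as nn

d_enc, d_lm = 512, 768                     # frozen-encoder dim, LM hidden dim
mapper = nn.Linear(d_enc, d_lm)            # trained jointly with the LM

neighbor_emb = torch.randn(4, d_enc)       # 4 neighbors, 1 embedding each
input_tok_emb = torch.randn(1, 128, d_lm)  # input text, already embedded

mapped = mapper(neighbor_emb).unsqueeze(0)            # (1, 4, d_lm)
lm_input = torch.cat([mapped, input_tok_emb], dim=1)  # (1, 132, d_lm)
print(lm_input.shape)  # each neighbor costs one "token" of sequence length
```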
### Research Question 2: Graph Structure Encoding
Given neighbor information, we can simply concatenate neighbor information either as raw texts or embeddings and treat them as a sequence. But the neighbors have structures among them. For instance, sections have hierarchical structures, and images are included in certain sections in WikiWeb2M (Figure 1(b)). To encode this graph structure among the neighbor information, we borrow two popular graph position encodings from graph transformers and compare them with sequential position encoding.
* **Laplacian Position Encoding (LPE)**: We exploit Laplacian eigenvectors of neighbors computed from their graph structure as their position encodings.
* **Graph Neural Networks (GNN)**: We first compute neighbor embeddings from frozen encoders and run GNN over the embeddings using the graph structure. Then, we use the output GNN embeddings, which encode graph structure information as position encodings.
_LPE_ has an additional \(1\)-layer MLP mapper to map the Laplacian eigenvectors to the text space of LMs. Parameters used for graph structure encoding (e.g., mappers for _LPE_ or _GNN_ parameters) are trained with LMs in an end-to-end manner during LM fine-tuning. In Section 4.5, we explore how well these different position encodings bring additional graph structure information among neighbors into LMs and improve performance.
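A toy sketch of LPE under stated assumptions: a 4-node path graph stands in for the neighbor structure, and the \(k\) smallest non-trivial eigenvectors of the normalized Laplacian serve as per-neighbor position vectors before the MLP mapper.

```python
# Laplacian eigenvector position encoding on a toy 4-node path graph.
import numpy as np

A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)          # toy neighbor graph
deg = A.sum(1)
L = np.eye(4) - A / np.sqrt(np.outer(deg, deg))    # normalized Laplacian

eigval, eigvec = np.linalg.eigh(L)                 # ascending eigenvalues
k = 2
pos_enc = eigvec[:, 1:k + 1]   # skip the trivial first eigenvector
print(pos_enc.shape)           # one k-dim position vector per node
```

In practice the sign of each eigenvector is ambiguous, so implementations typically randomize or fix signs during training.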
### Research Question 3: Parameter-Efficiency
While we need to fine-tune the pretrained LM for the specific task and the newly added neighbor information, full fine-tuning incurs high computation costs and also makes it inconvenient to share MMGL modules when users decide to use neighbor information. Recently, various parameter-efficient fine-tuning (PEFT) methods have been proposed to fine-tune only a small number of parameters while preserving full fine-tuning performance. We choose three PEFT models suited to the three neighbor encoding approaches described above.
* **Prefix tuning**: When we choose _SA-Text+Embeddings_ or _SA-Embeddings_ for neighbor encoding, we do not have any newly added parameters but self-attention layers; thus, we can easily apply Prefix tuning [18], which keeps language model parameters frozen and instead optimizes a sequence of continuous task-specific vectors prepended to the original activation vectors across all layers.
* **LoRA**: Like _Prefix tuning_, low-rank adaptation (LoRA) [11] is suitable for _SA-Text+Embeddings_ or _SA-Embeddings_ neighbor encodings. LoRA injects trainable rank decomposition matrices into each layer while freezing the original parameters.
* **Flamingo**: For _CA-Embeddings_ neighbor encoding, we can directly apply _Flamingo_[1], which fine-tunes only newly added cross-attention layers with _tanh_ gating to keep the pretrained LM intact at initialization for improved stability and performance.
In Section 4.6, we explore how well PEFT models preserve the full fine-tuning performance by tuning a small number of parameters.
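For concreteness, a LoRA-adapted linear layer can be sketched as below. This is a generic illustration of LoRA [11], not the paper's code: the frozen weight is augmented with a trainable rank-\(r\) update, so only \(r(d_{in}+d_{out})\) parameters are tuned per adapted layer.

```python
# Minimal LoRA linear layer: frozen base weight + trainable low-rank update.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, d_in, d_out, r=8, alpha=16):
        super().__init__()
        self.base = nn.Linear(d_in, d_out)
        for p in self.base.parameters():     # pretrained weights stay frozen
            p.requires_grad_(False)
        self.A = nn.Parameter(torch.randn(r, d_in) * 0.01)
        self.B = nn.Parameter(torch.zeros(d_out, r))  # zero init: no-op at start
        self.scale = alpha / r

    def forward(self, x):
        # frozen path plus scaled low-rank trainable update (x A^T) B^T
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(768, 768)
out = layer(torch.randn(2, 10, 768))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(out.shape, trainable)
```

Zero-initializing `B` makes the adapted layer exactly reproduce the pretrained layer at the start of fine-tuning, which is what keeps training stable.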
## 4 Experiments
### WikiWeb2M dataset
WikiWeb2M dataset [2] is built for the general study of multimodal content understanding with many-to-many text and image relationships. Built upon the WIT dataset [27] which contains only image-caption pairs, WikiWeb2M includes the page title, section titles, section text, images and their captions, and indices for each section, their parent section, their children sections, and many more.
In this work, we focus on the section summarization task to generate a single sentence that highlights a particular section's content. The summary is generated given all images and (non-summary) text present in the target and context sections. We sample \(600\)k Wikipedia pages randomly from WikiWeb2M for the section summarization task. In total, the training/validation/test set sizes for the section summarization task are \(680\)k/\(170\)k/\(170\)k, respectively.
### Experimental Settings
From WikiWeb2M, we can get four types of information for section summarization: (1) section text, (2) section images, (3) text from the page description and other sections, and (4) images from the page description and other sections. We provide information incrementally to LMs to study the effectiveness of multimodal neighbor information: (1) _section text_, (2) _section all_ (text + image), (3) _page text_ (all text from the Wikipedia page the input section belongs to), and (4) _page all_ (all text and images from the Wikipedia page).
We use Open Pre-trained Transformer (OPT-125m) [37] for the base LM to read the input section text and generate a summary. For text and image encoders for neighbor information, we use text/image encoders from CLIP [22]. Following [23], we finetune OPT for \(10000\) steps of \(125\) batch size with learning rate \(10^{-4}\). The text/image encoders are frozen across all experiments. We measure BLEU-4 [21], ROUGE-L [20], and CIDEr [33] scores on the validation set. All experiments are run on \(4\) Nvidia-RTX 3090 GPUs with \(24\)GB memory.
### Effectiveness of Neighbor Information
We first examine the effectiveness of multimodal neighbor information. As described in Section 4.2, we provide more information incrementally to the base LM: (1) _section text_, (2) _section all_ (text + image), (3) _page text_, and (4) _page all_ (all texts and images). Here, we use _Self-Attention with Text+Embeddings (SA-Text+Embeddings)_ neighbor encoding across different input types. For images, we first compute the image embeddings from the frozen CLIP image encoder and concatenate them
right after the text of the section each image belongs to, in order to preserve the structure. The results in Table 1 indicate that _more multimodal neighbor information is helpful:_ performance significantly improves when going from _section_ to _page_ content, and further when adding _page all_ content, based on their BLEU-4, ROUGE-L, and CIDEr scores.
Discussion: Missing Modalities. Performance of _section all_ decreased slightly compared to _section text_, despite the addition of section images. In Wikipedia, not every section has corresponding images. Thus, in the _section all_ case, the input to the LMs is inconsistent: some samples have text and images, while other samples only have text. This points to an important unaddressed _missing modality issue_ that is common in the real world but not typically encountered in the conventional \(1\)-to-\(1\) multimodal setting, emphasizing the importance of developing MMGL approaches that are robust to missing modalities.
### Neighbor Encoding
We encode multiple multimodal neighbor information using three different neighbor encodings, _Self-Attention with Text+Embeddings_ (SA-TE), _Self-Attention with Embeddings_ (SA-E), and _Cross-Attention with Embeddings_ (CA-E). While SA-E and CA-E encode all modalities, including text, into embeddings using frozen encoders, SA-TE encodes text neighbors as they are by concatenating them to the input text sequence. Thus SA-TE requires longer input sequence lengths (\(1024\)) to encode additional texts, leading to potential scalability issues. On the other hand, SA-E and CA-E require one token length to encode one text neighbor, improving scalability with shorter input lengths (\(512\)). The results in Table 2 indicate that _scalability is traded off with performance:_ SA-TE consistently performs better than SA-E and CA-E on different input types at the cost of longer input lengths.
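The scalability gap can be made concrete with a little token-budget arithmetic (the section and neighbor lengths below are illustrative assumptions, not numbers from the paper):

```python
# SA-TE concatenates raw neighbor text, so the budget grows linearly with
# neighbor length; SA-E / CA-E spend one embedding token per text neighbor.
section_len, n_neighbors, neighbor_len = 128, 8, 112

sa_te_len = section_len + n_neighbors * neighbor_len  # raw text neighbors
sa_e_len = section_len + n_neighbors                  # one token each

print(sa_te_len, sa_e_len)  # 1024 136
```

Under these assumed lengths, SA-TE already saturates a 1024-token budget with 8 neighbors, while embedding-based encodings leave most of the context window free.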
Discussion: Information Loss. In conventional multimodal learning with \(1\)-to-\(1\) mappings, SA-TE is commonly used to infuse text input as it is, while image inputs are provided as embeddings precomputed by frozen encoders [1; 16; 17]. These methods successfully generate texts grounded on the input images, showing the effectiveness of image embeddings as input to pretrained LMs. However, the performance gap between SA-TE and SA-E in Table 2 indicates that text embeddings likely lead to _information loss_ in the LMs. This could be either because the \(1\)-layer MLP mapper that aligns precomputed text embeddings into the text space of the LMs is not expressive enough, or because longer input texts
\begin{table}
\begin{tabular}{l|c|c|c|c} \hline \hline
**Input type** & **Input length** & **BLEU-4** & **ROUGE-L** & **CIDEr** \\ \hline \hline
**Section text** & 512 & 8.31 & 40.85 & 79.68 \\
**Section all** & 512 & 8.03 & 40.41 & 77.45 \\ \hline
**Page text** & 1024 & 9.81 & 42.94 & 92.71 \\
**Page all** & 1024 & 9.96 & 43.32 & 96.62 \\ \hline \hline \end{tabular}
\end{table}
Table 1: **Effectiveness of neighbor information**: As more neighbor information is fed to LMs together with input texts (_section text, section all \(\Rightarrow\) page text, page all_), generation performance is improved. We increase the input sequence length to \(1024\) to encode _page text_ and _page all_ as more information is required to be encoded. The best results are colored in red, while the second-best results are colored in blue.
\begin{table}
\begin{tabular}{l|c c c|c c c|c c c} \hline \hline & \multicolumn{3}{c|}{**BLEU-4**} & \multicolumn{3}{c|}{**ROUGE-L**} & \multicolumn{3}{c}{**CIDEr**} \\ \hline
**Input type** & **SA-TE** & **SA-E** & **CA-E** & **SA-TE** & **SA-E** & **CA-E** & **SA-TE** & **SA-E** & **CA-E** \\ \hline \hline
**Section all** & 8.03 & 7.56 & 8.35 & 40.41 & 39.89 & 39.98 & 77.45 & 74.33 & 75.12 \\
**Page text** & 9.81 & 8.37 & 8.47 & 42.94 & 40.92 & 41.00 & 92.71 & 80.14 & 80.72 \\
**Page all** & 9.96 & 8.58 & 8.51 & 43.32 & 41.01 & 41.55 & 96.01 & 82.28 & 80.31 \\ \hline
**Max input length** & 1024 & 512 & 512 & 1024 & 512 & 512 & 1024 & 512 & 512 \\ \hline \hline \end{tabular}
\end{table}
Table 2: **Neighbor encodings in MMGL**: We encode multiple multimodal neighbor information using three different neighbor encodings, _Self-Attention with Text+Embeddings_ (SA-TE), _Self-Attention with Embeddings_ (SA-E), and _Cross-Attention with Embeddings_ (CA-E). While SA-TE shows the best performance, SA-TE requires a longer input length (\(1024\)) to encode texts from neighbors in addition to the original text input, leading to scalability issues. The best results are colored in red.
compared to the short texts used in conventional multimodal learning (e.g., one-sentence captions) make it hard for LMs to learn from precomputed text embeddings. From a practical angle, our results illuminate the trade-off between scalability and performance. At the same time, they emphasize the need for more MMGL research to address the challenging issue of information loss when using embeddings to capture text information.
### Graph Structure Encoding
In addition to the individual modalities of neighbors, multimodal graphs contain graph structure information among neighbors. We encode the graph structure among multimodal neighbors using sequential position encodings (_Sequence_), Graph Neural Network embeddings (_GNN_), and Laplacian position encodings (_LPE_). Computed position encodings are first mapped to the text space of the LMs by a \(1\)-layer MLP, added to the input token/text/image embeddings, and fed into the LMs. In Table 3, _GNN_ embeddings show the best performance. In particular, the improvement over _Sequence_ position encodings shows the _importance of graph-aware structure encoding methods_ in MMGL.
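For concreteness, here is a small NumPy sketch of one of these options, Laplacian position encodings, computed as eigenvectors of the graph Laplacian \(L=D-A\) for a toy 4-node structure graph (the helper name and the graph are illustrative, not the paper's code):

```python
import numpy as np

def laplacian_pe(adj, k):
    """k-dim Laplacian position encodings: eigenvectors of L = D - A for
    the k smallest non-trivial eigenvalues (one row per node)."""
    lap = np.diag(adj.sum(axis=1)) - adj
    vals, vecs = np.linalg.eigh(lap)     # eigenvalues in ascending order
    return vecs[:, 1:k + 1]              # skip the trivial constant mode

# toy structure graph: 4 page sections linked in a path 0-1-2-3
adj = np.zeros((4, 4))
for i, j in [(0, 1), (1, 2), (2, 3)]:
    adj[i, j] = adj[j, i] = 1.0

pe = laplacian_pe(adj, k=2)   # one 2-dim position encoding per node; these
print(pe.shape)               # would be MLP-mapped and added to embeddings
```

Unlike sequential position encodings, these vectors reflect the actual connectivity of the page graph, which matches the observed advantage of graph-aware encodings.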
\begin{table}
\begin{tabular}{l|l|c c|c|c|c} \hline \hline \multicolumn{2}{l|}{**Neighbor encoding (max length)**} & \multicolumn{2}{c|}{**SA-TE (1024)**} & \multicolumn{2}{c|}{**SA-E (512)**} & \multicolumn{2}{c}{**CA-E (512)**} \\ \hline \multicolumn{2}{l|}{**Metric**} & \multicolumn{1}{l|}{**Input type**} & \multicolumn{1}{l|}{**Prefix tuning**} & \multicolumn{1}{l|}{**LoRA**} & \multicolumn{1}{l|}{**Prefix tuning**} & \multicolumn{1}{l|}{**LoRA**} & \multicolumn{1}{l|}{**Flamingo**} \\ \hline \hline \multirow{3}{*}{**BLEU-4**} & **Section all** & 6.70 & 6.65 & 6.80 & 7.07 & 6.96 \\ & **Page text** & 7.84 & 7.94 & 6.88 & 7.09 & 7.81 \\ & **Page all** & 8.21 & 8.18 & 6.91 & 7.12 & 8.12 \\ \hline \multirow{3}{*}{**ROUGE-L**} & **Section all** & 38.67 & 38.84 & 38.97 & 39.30 & 39.43 \\ & **Page text** & 40.61 & 40.98 & 38.38 & 39.69 & 40.29 \\ & **Page all** & 41.08 & 41.25 & 38.98 & 39.05 & 40.95 \\ \hline \multirow{3}{*}{**CIDEr**} & **Section all** & 65.84 & 65.00 & 67.24 & 68.61 & 69.31 \\ & **Page text** & 78.12 & 78.60 & 66.55 & 69.26 & 76.20 \\ & **Page all** & 81.07 & 80.75 & 68.20 & 68.86 & 82.37 \\ \hline \hline \multicolumn{2}{l|}{**\# Finetuned parameters**} & 20M & 82M & 20M & 84M & 90M \\ \# Total parameters & 230M & 250M & 300M & 320M & 363M \\ \% Finetuned parameters & 9\% & 33\% & 7\% & 26\% & 25\% \\ \hline \hline \end{tabular}
\end{table}
Table 4: **Parameter-efficient finetuning in MMGL: We apply _Prefix tuning_ and _LoRA_ for _Self-Attention with Text+Embeddings_ (_SA-TE_) and _Self-Attention with Embeddings_ (_SA-E_) neighbor encodings. For _Cross-Attention with Embeddings_ (_CA-E_) neighbor encoding, we apply _Flamingo_-style finetuning that finetunes only newly added cross-attention layers with gating modules. Note that _SA-E_ and _CA-E_ neighbor encodings have more parameters than _SA-TE_ because (frozen) text encoders are added to encode text neighbors. The best results are colored in red, while the second-best results are colored in blue.**
\begin{table}
\begin{tabular}{l|l|c|c|c} \hline \hline \multicolumn{2}{l|}{**Metric**} & \multicolumn{1}{l|}{**PEFT**} & \multicolumn{1}{l}{**Sequence**} & \multicolumn{1}{l}{**GNN**} & \multicolumn{1}{l}{**LPE**} \\ \hline \hline \multirow{2}{*}{**BLEU-4**} & **Prefix tuning** & 6.91 & 6.98 & 6.80 \\ & **LoRA** & 7.12 & 7.30 & 7.13 \\ \hline \multirow{2}{*}{**ROUGE-L**} & **Prefix tuning** & 38.98 & 39.13 & 39.10 \\ & **LoRA** & 39.05 & 39.48 & 39.35 \\ \hline \multirow{2}{*}{**CIDEr**} & **Prefix tuning** & 68.20 & 69.29 & 68.15 \\ & **LoRA** & 68.86 & 70.86 & 69.34 \\ \hline \hline \end{tabular}
\end{table}
Table 3: **Graph structure encoding in MMGL: We encode graph structures among multimodal neighbors using sequential position encodings (_Sequence_), Graph Neural Network embeddings (_GNN_), and Laplacian position encodings (_LPE_). Computed position encodings are added to input token/text/image embeddings and fed into LMs. We use _Self-Attention with Embeddings_ (_SA-E_) neighbor encoding and _Prefix tuning_ in this experiment. The best results are colored in red.**
### Parameter-Efficient Fine-Tuning
Full fine-tuning of pretrained LMs requires high computational costs. For parameter-efficient fine-tuning for MMGL, we study _Prefix tuning_ and _LoRA_ for _Self-Attention with Text+Embeddings (SA-TE)_ and _Self-Attention with Embeddings (SA-E)_ neighbor encodings. For _Cross-Attention with Embeddings (CA-E)_ neighbor encoding, we apply _Flamingo-_style finetuning that finetunes only newly added cross-attention layers with gating modules.
The results in Table 4 show that _LoRA performs better than Prefix tuning_ for _SA-TE_ and _SA-E_ neighbor encodings, at the cost of more fine-tuned parameters (\(7-9\%\) for _Prefix tuning_ vs. \(26-33\%\) for _LoRA_). However, _Prefix tuning_ still shows performance comparable to _LoRA_ while using nearly \(4\) times fewer parameters with _SA-TE_ neighbor encoding. _Flamingo_ with _CA-E_ neighbor encoding shows performance comparable to _LoRA_ with _SA-TE_ neighbor encoding, employing similar numbers of fine-tuned parameters (\(82\)M for _LoRA_ and \(90\)M for _Flamingo_). Note that _SA-E_ and _CA-E_ neighbor encodings have more parameters than _SA-TE_, attributed to the inclusion of (frozen) text encoders for processing text neighbors.
In Table 2 (without PEFT), _CA-E_ neighbor encoding clearly lags behind _SA-TE_ neighbor encoding. However, with _Flamingo_-style fine-tuning, the gating modules ensure that the pretrained LMs remain unaffected by the randomly initialized cross-attention layers at the start of training, thereby improving the performance of _CA-E_, as shown in Table 4 (with PEFT). This underscores the pivotal role of careful initialization when introducing supplementary modules for neighbor encoding in MMGL and integrating them into pretrained LMs.
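A minimal sketch of this gating idea (a toy in the spirit of Flamingo's tanh gating, not the actual Flamingo code): the newly added branch enters through a residual scaled by \(\tanh(\text{gate})\), so a gate initialized at zero leaves the pretrained LM's computation untouched.

```python
import numpy as np

def gated_residual(x, new_branch, gate):
    """Output of a newly inserted layer: the pretrained hidden states x plus
    the new branch scaled by tanh(gate); gate = 0 gives the identity map."""
    return x + np.tanh(gate) * new_branch

x = np.ones((3, 4))                                    # pretrained hidden states
branch = np.random.default_rng(0).normal(size=(3, 4))  # untrained cross-attn output

print(np.allclose(gated_residual(x, branch, gate=0.0), x))  # True at init
```

As the gate is learned away from zero during fine-tuning, the cross-attention branch is blended in gradually instead of disrupting the pretrained model at initialization.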
## 5 Conclusion
In this work, we extend the conventional multimodal learning with \(1\)-to-\(1\) mappings between a pair of modalities into multimodal graph learning (MMGL) with \(many\)-to-\(many\) relations among multiple modalities. Our MMGL framework is systematically structured around three critical components: (1) neighbor encodings, (2) graph structure encodings, and (3) parameter-efficient fine-tuning. Through rigorous testing on the WikiWeb2M dataset, we explored different options for each component: (1) three variations of neighbor encodings, _Self-Attention with Text+Embeddings_, _Self-Attention with Embeddings_, and _Cross-Attention with Embeddings_, highlighting the balance between scalability and performance, (2) three different graph position encodings, _sequence_, _LPE_, and _GNN_, and (3) three PEFT models, _prefix tuning_, _LoRA_, and _Flamingo_, and their trade-off between parameter-efficiency and performance. Our in-depth analyses and findings aim to lay the groundwork for future MMGL research, igniting further exploration in this field.
|
2301.09314 | Hooke and Coulomb Energy of Tripod Spiders | Tripod spiders are the simplest examples of arachnoid mechanisms. Their
workspaces and configuration spaces are well known. For Hooke potential, we
give a complete description of the Morse theory and treat the robust control of
the spider. For the Coulomb energy, we use stationary charges and the trapping
domain to study the robust control of spiders. We show that, for a regular
triangle and positive charges, the domain of robust control is non-void. This
relates to questions about the Maxwell conjecture about point charges. We end
with several natural problems and research perspectives suggested by our
results. | Giorgi Khimshiashvili, Dirk Siersma | 2023-01-23T08:22:49Z | http://arxiv.org/abs/2301.09314v1 | # Hooke and Coulomb energy of tripod spiders
###### Abstract.
Tripod spiders are the simplest examples of arachnoid mechanisms. Their workspaces and configuration spaces are well known. For Hooke potential, we give a complete description of the Morse theory and treat the robust control of the spider. For the Coulomb energy, we use stationary charges and the trapping domain to study the robust control of spiders. We show that, for a regular triangle and positive charges, the domain of robust control is non-void. This relates to questions about the Maxwell conjecture about point charges. We end with several natural problems and research perspectives suggested by our results.
Key words and phrases:Spider linkage, workspace, Hooke energy, Coulomb energy, stationary charges, robust control 2020 Mathematics Subject Classification: 58K05, 70B15
## 1. Introduction
The geometry and topology of mechanical linkages play an important and increasing role in applied problems. Most of the previous studies were concerned with the workspace and the topology of the configuration space, which is only known in a few cases summarized in [KM], [Oh], [Mo]. The aim of this paper is to extend the known results to new classes of linkages and enrich them by considering potential functions on the workspace, with applications to control of the linkages considered. Our approach is based on Morse theory, which yields an explicit connection between the topology of the configuration space and the critical points of the potential.
We will be basically concerned with the so-called arachnoid mechanisms, the topology of which is a largely unexplored topic. Nowadays the same type of objects is often called "spidery linkage". Detailed information on the topology of arachnoid linkages is important for the design and control of certain types of spider robots. More concretely, we study the simplest arachnoid mechanism, the 3-leg spider also known as tripod spider [Oh], [Mo].
In Section 2 we use Hooke energy as a potential function and describe in detail the workspace and the critical point theory. This implies, in particular, that a weighted version of Hooke energy can be used to control such a linkage.
In Section 3 we study the Coulomb potential of point charges placed at the feet of a 3-leg spider linkage. We deal with the so-called stationary charges of the spider's center (the common point of the legs) and the so-called trapping domain of stationary charges determined in [GK1], [GK2]. This enables us to determine the domain in the workspace of the spider where the position of the spider's center can be robustly controlled by the values of stationary charges, using the Coulomb control scenario developed in [KPS], [GK1]. For a symmetric spider based on a regular triangle and having a contractible workspace, we show that the
domain of robust Coulomb control is a non-void open subset of the workspace containing the center of the reference triangle.
In the sequel we use Morse theory for manifolds with boundaries and corners. Most of it is "folklore" hidden in the literature. We mention here [JJT] and [GM] as the most general reference. For critical points of functions on manifolds with boundary and corners we refer to [Si]. For brevity, we refer to criteria of such critical points and corresponding topological changes in level surfaces as "standard rules".
In conclusion we mention several related problems and perspectives suggested by our results. In general, this paper may be considered as a first step in applying our approach to spider linkages and creating a paradigm for further research in this direction.
The results of this paper were obtained and written up in the framework of a "Research in Residence" project at the Centre International de Rencontres Mathematiques (CIRM, Luminy, France), realized in November 2022. It is our pleasure to acknowledge the support and excellent working conditions at CIRM, which greatly facilitated our research.
## 2. Hooke Energy as a potential function of tripod spider
We will consider the Hooke potential for three points in several situations: first without any constraint, next with constraints on maximal and minimal distances, and finally for a 3-leg spider.
### No constraints
We start with the following simplified situation. Given 3 points \(A,B,C\) (in vector notation \(a,b,c\)), the foot points of the spider, and its center (joint) the point \(X\). The legs \(AX\), \(BX\), \(CX\) are completely flexible around the foot, their length is allowed to change.
The Hooke Energy is defined by
\[H(x)=||x-a||^{2}+||x-b||^{2}+||x-c||^{2}\]
The stationary points are determined by \(\nabla H=6x-2(a+b+c)=0\), so \(x=\frac{1}{3}(a+b+c)\), the center of gravity. There are no other stationary points. Note that \(H(x)=3||x-z||^{2}+K\), where \(z=\frac{1}{3}(a+b+c)\) is the center of gravity \(Z\) and \(K=||a||^{2}+||b||^{2}+||c||^{2}-3||z||^{2}\). All level curves are circles.
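These facts admit a quick numeric check (our own illustrative verification, with an arbitrary choice of foot points): the gradient vanishes exactly at the center of gravity, and \(H\) differs from its minimum by three times the squared distance to \(Z\), so the level curves are indeed circles around \(Z\).

```python
import numpy as np

a, b, c = np.array([0.0, 0.0]), np.array([4.0, 0.0]), np.array([1.0, 3.0])
z = (a + b + c) / 3.0                      # center of gravity Z

H = lambda x: sum(float(np.dot(x - p, x - p)) for p in (a, b, c))
grad_H = lambda x: 6.0 * x - 2.0 * (a + b + c)

print(np.allclose(grad_H(z), 0.0))         # True: Z is the stationary point
for x in (np.array([2.0, 2.0]), np.array([-1.0, 0.5])):
    # H(x) - H(z) = 3 ||x - z||^2, so level curves are circles around Z
    assert np.isclose(H(x) - H(z), 3.0 * float(np.dot(x - z, x - z)))
```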
### Maximal length constraints
We require
\[|XA|\leq R_{A}\;,\;|XB|\leq R_{B}\;,\;|XC|\leq R_{C}.\]
The configuration space is the intersection of 3 discs \(D_{A},D_{B},D_{C}\) with centers resp. \(A,B,C\) and radii \(R_{A},R_{B},R_{C}\). We study the boundary extrema of \(H\).
**Lemma 1**.: _The map \(H\) restricted to the boundary of \(D_{A}\) has an extremum at \(Y\) iff \(AY\) has the direction of \(\pm(AB+AC)\), i.e., \(Y\) is one of the two intersection points of the line through \(A\) and \(Z\) with the boundary of \(D_{A}\): a minimum and a maximum._
Proof.: Suppose \(A\) is the origin. Let \(R_{A}e_{\phi}\) be a point on \(\partial D_{A}\). Using Lagrange multipliers:
\[\nabla H=6R_{A}e_{\phi}-2(b+c)=\lambda e_{\phi}\]
Therefore
\[(6R_{A}-\lambda)e_{\phi}=2(b+c),\]
so \(e_{\phi}\) is parallel to \(b+c=AB+AC\), which gives exactly the two stated intersection points of the line through \(A\) and \(Z\) with \(\partial D_{A}\).
**Proposition 1**.: _Let the workspace \(W=D_{A}\cap D_{B}\cap D_{C}\) contain an open neighborhood of \(Z\). Then \(Z\) is the only stationary point of \(H\), an absolute minimum. There are no boundary singularities._
Proof.: The center of gravity \(Z\) is clearly a minimum. Potential other critical points are the critical points of the restriction of \(H\) to the boundary circles (the intersections of the line \(ZA\) with the circle around \(A\), etc.) and the 3 'corner points', where 2 boundary circles intersect. The statement now follows from the standard rules. For boundary points they read as follows:
\begin{tabular}{l l l l} type of restriction & direction of normal & type & cell attached \\ \hline minimum & inward & no change & no \\ minimum & outward & minimum & 0-cell \\ maximum & inward & no change & no \\ maximum & outward & saddle & 1-cell \\ \end{tabular} By 'no change' we mean that the topological type of the lower level sets does not change.
For corner points there are similar rules. They give in our case: no change. Due to the special (circular) form of level curves and boundaries one can obtain the same result by 'inspection of pictures'.
### Two-sided length constraints
We require
\[0<R_{A}^{-}\leq|XA|\leq R_{A}\;,\;0<R_{B}^{-}\leq|XB|\leq R_{B}\;,\;0<R_{C}^{-}\leq|XC|\leq R_{C}.\]
The configuration space is the intersection of 3 discs with centers \(A,B,C\) and radii \(R_{A},R_{B},R_{C}\), from which some smaller (open) discs have been taken out. From the many different possibilities, we consider here the case that these 3 small discs have relatively small radii and the workspace \(W\) is a disc with 3 holes, containing an open neighborhood of the center of gravity \(Z\).
**Proposition 2**.: _In this situation the point \(Z\) is an absolute minimum of \(H\) on \(W\); moreover, there are 3 saddle points at the (outer) intersection points of the small discs with \(ZA\), \(ZB\), \(ZC\)._
Proof.: The center of gravity \(Z\) is an absolute minimum. Other potential critical points are the critical points of the restriction of \(H\) to the outer boundary, the 3 small circles, and the 3 'corner points', where 2 boundary circles intersect. The statement now follows from the standard rules for boundary and corner singularities as explained above.
**Remark 1**.: The phase portrait of the gradient of \(H\) consists of straight lines towards \(Z\), except for those lines that intersect one of the small discs. Such a trajectory, from the moment it hits one of these discs, follows part of the boundary circle until \(Z\) becomes 'visible' and then continues as a straight line. There are 3 intervals between the outer and the inner circles which are conflict strata for the gradient flow.
**Remark 2**.: The Morse theory in the 3 cases above is as follows:
In 2.1: \(b_{0}=1,b_{i}=0\) (\(i\geq 1\)) and \(\mu_{0}=1,\mu_{i}=0\) (\(i\geq 1\)),
In 2.2: \(b_{0}=1,b_{i}=0\) (\(i\geq 1\)) and \(\mu_{0}=1,\mu_{i}=0\) (\(i\geq 1\)),
In 2.3: \(b_{0}=1,b_{1}=3,b_{i}=0\) (\(i\geq 2\)) and \(\mu_{0}=1,\mu_{1}=3,\mu_{i}=0\) (\(i\geq 2\)).
In these three cases, \(H\) is a perfect Morse function.
### Robust control
A similar study can be made for the weighted Hooke Energy:
\[H_{\alpha,\beta,\gamma}=\alpha||x-a||^{2}+\beta||x-b||^{2}+\gamma||x-c||^{2}\]
Assume \(\alpha>0,\beta>0,\gamma>0\). This potential function has the point \(Z=(\alpha:\beta:\gamma)\) (barycentric coordinates) as an absolute minimum. The level curves are circles with the center at this point \(Z\). The critical point theory is similar to the case of \(H\), which corresponds to \((\alpha:\beta:\gamma)=(1:1:1)\).
A proper choice of the controls \((\alpha,\beta,\gamma)\) can be used to move the 3-leg spider to any point in the triangle \(ABC\) via minimum points of Hooke energy. This procedure yields a _robust control_ of the spider.
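A numeric sketch of this control scenario (our own illustration, with arbitrary foot points and hypothetical helper names): choosing weights \((\alpha,\beta,\gamma)\) parks the unique minimum of \(H_{\alpha,\beta,\gamma}\) exactly at the point with those barycentric coordinates.

```python
import numpy as np

a, b, c = np.array([0.0, 0.0]), np.array([4.0, 0.0]), np.array([1.0, 3.0])

def weighted_minimum(alpha, beta, gamma):
    """Minimizer of the weighted Hooke energy: the point with barycentric
    coordinates (alpha : beta : gamma) with respect to the triangle ABC."""
    return (alpha * a + beta * b + gamma * c) / (alpha + beta + gamma)

def grad(x, alpha, beta, gamma):
    """Gradient of H_{alpha,beta,gamma} at x."""
    return 2.0 * (alpha * (x - a) + beta * (x - b) + gamma * (x - c))

alpha, beta, gamma = 2.0, 1.0, 3.0          # the control knobs
target = weighted_minimum(alpha, beta, gamma)
print(np.allclose(grad(target, alpha, beta, gamma), 0.0))   # True
```

Equal weights \((1:1:1)\) recover the center of gravity, matching the unweighted case treated above.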
### The 3-leg spider
In this case the telescopic connections are replaced by 2-link arms with fixed lengths and a flexible turning point. We assume that the two parts of each arm have different lengths. So \(AX\) is replaced by the arm \(AP\cdot PX\), \(BX\) by \(BQ\cdot QX\) and \(CX\) by \(CR\cdot RX\). The configuration space \(\mathcal{C}\) is an 8-fold cover of the workspace \(W\) with certain identifications at the boundaries. The topology has been studied in full generality by P. Mounoud [Mo]. He showed that an \(n\)-leg spider with generic arm lengths has a smooth two-dimensional configuration space, and gave a formula for its Euler characteristic. In his paper he also solved some questions of J. O'Hara [Oh].
The configuration space of the 3-leg spider clearly projects to the workspace \(W\) from 2.3 (with the 3 small holes). \(R_{A}^{-}\) is equal to the difference of the arm lengths in \(AP\cdot PX\) and \(R_{A}\) to their sum; similarly for the points \(B\) and \(C\). We assume here that the arm lengths are such that they give the workspace mentioned above.
According to the formula of [Mo] the Euler characteristic of \(\mathcal{C}\) is -22. So we have a smooth surface with genus 12.
We consider the quadratic distance function \(H\) (see above).
**Proposition 3**.: _The Hooke energy \(H\) on the configuration space \(\mathcal{C}\) of the 3-leg spider has:_
* _8 minima,_
* _36 saddles,_
* _6 maxima._
Proof.: The potential \(H\) is defined on the configuration space via the projection to the workspace. We will use the results from Proposition 2 and (branched) covering arguments. The covering is 8-fold above the open part of the workspace; the branching takes place at the boundaries and corners. The potential critical points are just the preimages of the special points studied in Proposition 2.
We find in this way 8 critical pre-images of the center of gravity \(Z\); these are all absolute minima (with the same value). The 36 saddles come from the 9 special points on the boundary circles, where the covering degree is 4. Their types can be deduced by a topological study of the gluing of the two boundary pieces. Finally, we have 6 maxima, which correspond to the three corner points, where the covering is 2-fold and at each such point four pieces are glued together.
There is a dictionary between the Morse theory on the workspace treated above and that on the configuration space. Comparing the level curves in both cases via the gluing yields the Morse indices mentioned in the proposition.
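As a consistency check on Proposition 3 (simple arithmetic, added by us): the alternating sum of the Morse counts reproduces the Euler characteristic \(-22\) quoted from the formula of [Mo], and \(\chi=2-2g\) gives genus 12.

```python
minima, saddles, maxima = 8, 36, 6
chi = minima - saddles + maxima      # Morse counts determine chi
genus = (2 - chi) // 2               # chi = 2 - 2g for a closed orientable surface
print(chi, genus)                    # -22 12
```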
## 3. Coulomb Energy as a potential function of tripod spider
From the viewpoint of control theory, it is also interesting and practically important to consider the Coulomb potential of point charges which are placed at the fixed foot points of a spider and can be varied in order to change the position of its body endowed with a fixed charge. A similar scenario was studied in great detail in [GK1], [GK2], and our exposition in this section relies on the constructions and results given in those two papers.
We begin by recalling several concepts and constructions in the form needed in the sequel. Recall that the Coulomb potential \(E=E(Q@A)\) of \(n\) point charges of non-zero magnitudes \(q_{i}\) placed at \(n\) points \(A_{i}\) is defined as a function of a point \(X\) in the complement of the points \(A_{i}\) by the formula
\[E=\sum\frac{q_{i}}{d_{i}},\]
where \(d_{i}=d(X,A_{i})\) is the Euclidean distance between the points \(X\) and \(A_{i}\). As usual, a point \(X\) is called a stationary point (or equilibrium point) of the potential \(E\) if its gradient \(\nabla E\) vanishes at \(X\). There exists a huge number of results and problems concerned with the Coulomb potential and equilibrium points of point charges. For us it is important to mention that, as was proven by M. Morse himself, for a generic configuration and generic values of point charges the Coulomb potential is a Morse function [CM]. This suggests that the Coulomb potential is a reasonable candidate for developing Morse theory for a spider linkage along the same lines as in the case of Hooke's energy considered above. The aim of this section is to present a number of results in this direction and an application in the spirit of the robust Coulomb control discussed in [KPS], [GK1], [GK2]. Another relevant topic in our context is the so-called Maxwell conjecture on point charges, stating that the number of isolated equilibrium points of \(n\) point charges in 2-dimensional Euclidean space does not exceed \((n-1)^{2}\) (see, e.g., a recent review in [11]).
We next focus on the case of 3 point charges. Given a triangle \(\triangle\) with vertices \(A,B,C\), referred to as the reference triangle (or base triangle), and a point \(X\) in the same plane, the triple of normalized stationary charges \(Q(X)=\{q_{i}(X;\triangle)\}\) is defined as a triple of non-zero real numbers \(q_{i}\) such that \(\nabla E(X,Q)=0\). As was proven in [GK1], these charges are uniquely determined by the ratios
\[q_{1}:q_{2}:q_{3}=d_{1}^{3}\mathcal{A}_{1}:d_{2}^{3}\mathcal{A}_{2}:d_{3}^{3}\mathcal{A}_{3},\]
where \(\mathcal{A}_{1}\) is the area of \(\triangle BCX\), \(\mathcal{A}_{2}\) of \(\triangle CAX\), \(\mathcal{A}_{3}\) of \(\triangle ABX\). We denote the angles at X of these triangles by \(\alpha_{1},\alpha_{2},\alpha_{3}\) respectively.
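A numeric verification of this stationary-charge property (our own check, with an arbitrary interior point \(X\) in a regular reference triangle): charges proportional to \(d_{i}^{3}\mathcal{A}_{i}\) indeed make the gradient of the Coulomb potential vanish at \(X\).

```python
import numpy as np

def tri_area(p, q, r):
    """Unsigned triangle area via the 2x2 determinant."""
    return 0.5 * abs((q[0]-p[0])*(r[1]-p[1]) - (q[1]-p[1])*(r[0]-p[0]))

A = np.array([0.0, 0.0]); B = np.array([1.0, 0.0]); C = np.array([0.5, np.sqrt(3)/2])
X = np.array([0.45, 0.30])                      # interior point of the triangle

d = [float(np.linalg.norm(X - P)) for P in (A, B, C)]
areas = [tri_area(B, C, X), tri_area(C, A, X), tri_area(A, B, X)]  # A_1, A_2, A_3
q = [d[i]**3 * areas[i] for i in range(3)]      # stationary charges, up to scale

# gradient of E = sum q_i / d_i at X equals -sum q_i (X - A_i) / d_i^3
gradE = -sum(qi * (X - P) / di**3 for qi, P, di in zip(q, (A, B, C), d))
print(np.allclose(gradE, 0.0))                  # True: X is an equilibrium point
```

The cubes cancel the \(d_{i}^{3}\) in the gradient, reducing the equilibrium condition to the barycentric identity \(\sum_{i}\mathcal{A}_{i}(X-A_{i})=0\).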
In [GK1], [GK2] _the Coulomb trapping domain_ \(T(\triangle)\) is defined as the set of minimum points of the potential induced by the charges \(Q(X)\). The trapping domain is given by the inequality \(h(X)>0\), where \(h\) is the Hessian of \(E(X,Q(X))\), which we call the _trapping Hessian_. In geometric terms we have the following formula:
\[h(X)=-2\mathcal{A}+9(\prod_{i=1}^{3}\sin\alpha_{i})(\sum_{i=1}^{3}d_{i}^{2}\mathcal{A}_{i}),\]
where \(\mathcal{A}\) denotes the area of \(\triangle\).
It was conjectured in [GK2] that \(T(\triangle)\) is a non-empty subset containing the incenter of \(\triangle\). This conjecture has been verified in a number of cases, including the regular triangle.
As was mentioned, a complete description of the equilibria (stationary points) of \(E\) and their Morse indices is not known even in the case of three charges. However, rather complete results for special configurations of three point charges have been obtained in [Ts]. In particular, Maxwell's conjecture was proven there for arbitrary three non-zero point charges at the vertices of a regular triangle and the Morse indices were computed.
For this reason, in the rest of the text we always assume that the reference triangle is regular since it is the most important case for applications and, at the same time, in this case all necessary background results are rigorously proven in [GK2].
Further, given a 3-leg spider \(S\) with center at a (moving) point \(X\), (fixed) foot points \(A,B,C\) and identical legs with links of lengths \(a\) (thigh length) and \(b\) (foot length), we denote by \(W(S)\) its workspace considered above. To simplify the discussion we assume that \(R>a>b>0\), in which case the whole workspace of the spider is the intersection of three circular annuli with centers \(A,B,C\).
**Proposition 4**.: _The domain of robust Coulomb control of spider \(S\) in the above scenario is equal to the intersection \(D(S)=W(S)\cap T(\triangle)\)._
As said before, the topology of the workspace depends on the triangle and the arm lengths. This influences the intersection, which could even be void. For the regular triangle, we choose the arm lengths such that the workspace becomes a contractible region bounded by three circular arcs and containing the incenter. We also assume that all charges are positive. The trapping domain \(T(\triangle)\) is explicitly known as the interior of a compact convex region bounded by \(h=0\) and containing the incenter of the triangle. Under these conditions this yields the following conclusion.
**Corollary 1**.: _The domain of robust Coulomb control of spider \(S\) based on a regular triangle is non-void._
For a spider based on the regular triangle one can also carry out a complete Morse analysis in a similar way as for the Hooke potential. One knows that for given positive charges there is a single minimum of \(E\) in the trapping domain \(T(\triangle)\) and 3 saddle points in \(\triangle\) outside \(T(\triangle)\). The poles of \(E\) are not in the workspace and therefore not in the configuration space, which is an 8-fold cover of the workspace branched (glued) over boundary components. After obtaining information about the boundary singularities one gets a complete Morse analysis.
## 4. Concluding remarks
There are several natural problems and research perspectives suggested by our results. We mention some of them which seem most interesting and feasible.
First of all, note that all the concepts used in our paper make sense for \(n\)-leg spiders with arbitrary base \(n\)-gon and arbitrary lengths of the links. So a natural next step is to generalize the above results to Hooke and Coulomb potential of an \(n\)-leg spider.
Next, one can also consider other geometrically or physically meaningful potentials of \(n\)-leg spider like Riesz energies or the oriented area of the polygon formed by the moving joints of spider.
Further, one can formulate several extremal problems related to the design of spider linkages with desirable properties like the area or shape of the workspace.
Moreover, one can try to extend our results to the case of \(n\)-leg spiders with a moving \(n\)-gon platform instead of the center point.
Finally, it is natural to extend our discussion to spatial spiders where the legs can move in a three-dimensional Euclidean space.
It is also possible to consider "legs" with more than two links or with flexible non-extendible tether links. Each of the mentioned possibilities deserves a closer look and careful exploration which we intend to undertake in the future research.
Returning to 3-leg spiders, we add that an urgent topic is to clarify what happens with the domain of robust Coulomb control for an arbitrary configuration of foot points. In this case, there are several feasible problems concerned with the optimization of the workspace with given restrictions on the sum of the lengths of the links and the circumradius of the base triangle. For example, there is good evidence that the maximal area of \(W(S)\) arises for the most symmetric spider with equal links and a regular configuration of foot points.
arXiv 2302.07669: Unsupervised Hashing with Similarity Distribution Calibration

Abstract: Unsupervised hashing methods typically aim to preserve the similarity between data points in a feature space by mapping them to binary hash codes. However, these methods often overlook the fact that the similarity between data points in the continuous feature space may not be preserved in the discrete hash code space, due to the limited similarity range of hash codes. The similarity range is bounded by the code length and can lead to a problem known as similarity collapse. That is, the positive and negative pairs of data points become less distinguishable from each other in the hash space. To alleviate this problem, in this paper a novel Similarity Distribution Calibration (SDC) method is introduced. SDC aligns the hash code similarity distribution towards a calibration distribution (e.g., beta distribution) with sufficient spread across the entire similarity range, thus alleviating the similarity collapse problem. Extensive experiments show that our SDC outperforms significantly the state-of-the-art alternatives on coarse category-level and instance-level image retrieval. Code is available at https://github.com/kamwoh/sdc.

Authors: Kam Woh Ng, Xiatian Zhu, Jiun Tian Hoe, Chee Seng Chan, Tianyu Zhang, Yi-Zhe Song, Tao Xiang
Published: 2023-02-15T14:06:39Z
Link: http://arxiv.org/abs/2302.07669v2

# Unsupervised Hashing via Similarity Distribution Calibration
###### Abstract
Existing unsupervised hashing methods typically adopt a feature similarity preservation paradigm. As a result, they overlook the intrinsic similarity capacity discrepancy between the continuous feature and discrete hash code spaces. Specifically, since the feature similarity distribution is intrinsically biased (_e.g._, moderately positive similarity scores on negative pairs), the hash code similarities of positive and negative pairs often become inseparable (_i.e._, the similarity collapse problem). To solve this problem, in this paper a novel **Similarity Distribution Calibration** (SDC) method is introduced. Instead of matching individual pairwise similarity scores, SDC aligns the hash code similarity distribution towards a calibration distribution (_e.g._, beta distribution) with sufficient spread across the entire similarity capacity/range, to alleviate the similarity collapse problem. Extensive experiments show that our SDC outperforms the state-of-the-art alternatives on both coarse category-level and instance-level image retrieval tasks, often by a large margin. Code is available at [https://github.com/kamwoh/sdc](https://github.com/kamwoh/sdc).
## 1 Introduction
Hashing has been used extensively in real-world large-scale image retrieval systems. By converting continuous feature vectors into binary/discrete hash codes for indexing, hashing significantly reduces both computational cost and memory footprint. Recent deep supervised learning-to-hash methods [63, 66, 31, 36, 12, 38] have greatly outperformed conventional methods [29, 27, 58, 14]. However, supervised hashing is limited in scalability due to its reliance on a large quantity of labeled training data. A natural solution is to use _unsupervised hashing methods_ instead, which do not require costly training data annotation.
The current state-of-the-art unsupervised hashing methods [55, 32] are based on _preserving individual pairwise similarities_ between continuous feature vectors in the learned Hamming space. Compared to the alternative strategies (_e.g._, reconstruction [7], clustering [33], pseudo-labels [62], and contrastive learning [46]), pairwise similarity preservation is both easier to implement and more efficient, hence advantageous for large-scale applications [23].
However, for the first time, we point out that these similarity preservation-based hashing methods suffer from a _similarity collapse_ problem, as illustrated in Fig. 1b. That is, the hash code similarities of positive and negative pairs become inseparable. There are two causes: (i) The similarity distribution in the original continuous feature space is biased. In particular, most negative pairs take moderately positive similarity scores, as shown in Fig. 1a. (ii) The intrinsic similarity capacity discrepancy between the continuous feature and discrete hash code spaces. Here the capacity is defined as how fine-grained the similarity can be measured: the similarities between any two hash codes take values from a fixed set determined by the code length (_i.e._, limited capacity), whilst the original feature similarities are continuous (_i.e._, unlimited capacity). With limited hash code similarity scores to map to, the hashing process is given little chance to recover from the collapsing positive and negative feature similarity scores in the Hamming space (Fig. 1b), resulting in inferior retrieval results.
To alleviate this similarity collapse problem, in this work a novel _Similarity Distribution Calibration_ (**SDC**) method is introduced. Instead of preserving the original pairwise similarity scores individually, we match the hash code similarity distribution as a whole against a pre-defined calibration distribution (_e.g_., beta distribution) with sufficient capacity range. Due to this stretching effect, the learned Hamming space is no longer restricted severely by the original biased similarity distribution as in the existing methods. This enables the limited similarity capacity of Hamming space to be better leveraged, resulting in improved performance (Fig. 1c).
We make the following **contributions**: (i) We reveal the fundamental similarity collapse problem suffered by existing pairwise similarity preservation-based unsupervised hashing methods. (ii) To address this problem, we propose a Similarity Distribution Calibration (SDC) method by alleviating the severe restriction imposed by the original biased similarity scores. (iii) Extensive experiments validate the superiority of our SDC over state-of-the-art alternatives on four category-level and three instance-level image retrieval benchmarks.
## 2 Related Work
**Learning to hash.** Although earlier hashing methods [13, 14, 20, 24, 43, 58, 40, 43] are easy to apply in practice, their performance is typically inferior to more recent deep learning counterparts. Deep supervised hashing methods [6, 12, 31, 35, 57, 63, 66, 36] usually achieve better performance over unsupervised ones by using additionally the semantic class labels. However, they are limited in scalability as class label annotation is costly and even impossible in extreme cases (_e.g_., rare objects). Without this constraint, unsupervised methods are thus more scalable. Existing unsupervised hashing methods can be categorized into the following groups: similarity preservation [18, 19, 22, 29, 30, 32, 34, 39, 50, 55, 68], generative model [5, 10, 54, 70], reconstruction [7, 8, 51, 52], pseudo-labeling [67, 69, 15, 42, 60, 15], clustering [67, 65, 33, 69] and contrastive learning [33, 46, 64].
Among these, similarity preservation-based unsupervised hashing methods achieve the current state-of-the-art performance in image retrieval. They are also simple in design and efficient computationally. For example, PCA-H [24] and ITQ [14] project the features linearly into Hamming space which maximally preserves the original similarity. SSDH [60] and DistillHash [61] learn a hashing model with binary pairwise pseudo-labels inferred by the similarity scores. Instead, Binary Reconstructive Embeddings [29] optimizes a hash function by minimizing the difference between the Hamming distances and the original feature Euclidean distances on each training sample pair. Similarly, Angular Reconstructive Embeddings [18], Bi-half [32] and GreedyHash [55] all rely on minimizing the cosine similarity difference of training pairs across the hashing process. Despite the differences in their formulations, many of them suffer from the similarity collapse problem due to the common pairwise similarity preservation strategy adopted. _The objective of this work is to alleviate this limitation._
**Separating positive and negative pairs in hashing.** MIHash [2, 3] maximizes the quality of a hash function via the mutual information between Hamming distances and pairwise labels during learning. Similarly, RankMI [25] estimates the separation between the distributions of positive and negative similarities with mutual information and variational functions. However, both prior works rely on training labels, which are absent in unsupervised hashing, making them inapplicable. Conceptually, we expand the advantages of separating positive and negative pairs from supervised hashing to the unsupervised setting by aligning with a prior distribution.

Figure 1: **(a)** Original continuous features (before hashing) and the cosine similarity distribution of positive (green) and negative (red) pairs. **(b)** After mapping to the hash code space under feature similarity preservation, the similarity collapse phenomenon happens due to the limited similarity capacity in Hamming space and the original similarity bias. **(c)** This problem can be well alleviated by our _Similarity Distribution Calibration_ method.
## 3 Methodology
To obtain a hash code \(b\in\{-1,+1\}^{K}\) with \(K\) bits, we need a hash function \(h\) as:
\[\mathbf{b}=h(\mathbf{x})=\texttt{sign}(\phi(\mathbf{x})), \tag{1}\]
where \(\phi:\mathbf{x}\rightarrow\mathbf{f}\in\mathbb{R}^{K}\) is a (non-)linear mapping function compressing a \(d\)-dimensional feature vector \(\mathbf{x}\in\mathbb{R}^{d}\) into a \(K\)-dimensional continuous code \(\mathbf{f}\). \(h\) is learned by optimizing an objective function \(\mathcal{L}\). At test time for image retrieval, the Hamming distances between a query code, \(\mathbf{b}_{p}\), and the gallery codes, \(\mathbf{b}_{q}\), of a database can be mathematically computed as:
\[\mathcal{D}_{h}(\mathbf{b}_{p},\mathbf{b}_{q})=\frac{K}{2}(1-\cos\theta_{pq}), \tag{2}\]
where \(\cos\theta_{pq}=\cos(\mathbf{b}_{p},\mathbf{b}_{q})\) is the cosine similarity between \(\mathbf{b}_{p}\) and \(\mathbf{b}_{q}\).
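As a quick numerical sanity check on Eq. (2), the identity between the bit-count Hamming distance and its cosine form can be verified for random \(\{-1,+1\}^{K}\) codes. The sketch below is illustrative only; the function names are our own and not from the paper's released code.

```python
import math
import random

def hamming_distance(bp, bq):
    # Count positions where the two codes disagree.
    return sum(1 for a, b in zip(bp, bq) if a != b)

def cosine(bp, bq):
    dot = sum(a * b for a, b in zip(bp, bq))
    na = math.sqrt(sum(a * a for a in bp))
    nb = math.sqrt(sum(b * b for b in bq))
    return dot / (na * nb)

K = 64
random.seed(0)
bp = [random.choice((-1, 1)) for _ in range(K)]
bq = [random.choice((-1, 1)) for _ in range(K)]

# Eq. (2): D_h(bp, bq) = K/2 * (1 - cos(theta_pq)) for {-1,+1}^K codes.
d_direct = hamming_distance(bp, bq)
d_from_cos = K / 2 * (1 - cosine(bp, bq))
assert abs(d_direct - d_from_cos) < 1e-9
```

The identity holds because for \(\pm 1\) codes the dot product equals \(K-2d\) and both norms equal \(\sqrt{K}\).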
The optimization is not differentiable when directly using Eq. (1) in a hashing objective \(\mathcal{L}\) due to the non-differentiable sign function. A straightforward solution is to remove the sign function, whilst minimizing the quantization error between \(\mathbf{f}\) and its hash code \(\mathbf{b}\) during training [41] as:
\[\mathcal{L}_{q}=\frac{1}{N}\sum_{i=1}^{N}(1-\cos\theta_{i}),\ \ \text{and}\ \ \cos\theta_{i}=\cos(\mathbf{f}_{i},\mathbf{b}_{i}), \tag{3}\]
where \(\cos\theta_{i}\) is the cosine similarity between the continuous code \(\mathbf{f}_{i}\) and the hash code counterpart \(\mathbf{b}_{i}=\texttt{sign}(\mathbf{f}_{i})\) of \(i\)-th sample, and \(N\) specifies the training set size. This enables differentiable end-to-end hashing without a straight-through estimator [1, 55] or continuous relaxation [6]. Note, although the continuous codes \(\mathbf{f}\) are involved in learning, we describe the learning process directly with hash codes \(\mathbf{b}\) hereafter for convenience.
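Eq. (3) can be illustrated with a minimal pure-Python sketch: the loss nearly vanishes when continuous codes already sit close to \(\{-1,+1\}\) and grows for fractional codes. All names here are hypothetical; only the batch-mean structure follows the equation.

```python
import math

def sign(v):
    return [1 if x >= 0 else -1 for x in v]

def cos(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def quantization_loss(F):
    # Eq. (3): mean of 1 - cos(f_i, sign(f_i)) over the batch.
    return sum(1 - cos(f, sign(f)) for f in F) / len(F)

# Codes already close to {-1, +1} incur almost no loss; codes with
# very uneven magnitudes incur a visibly larger one.
near_binary = [[0.99, -1.01, 1.0], [-1.0, 1.0, -0.98]]
fractional = [[0.1, -0.9, 0.4], [-0.2, 0.3, -0.7]]
assert quantization_loss(near_binary) < quantization_loss(fractional)
```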
### Hashing by Conventional Similarity Preservation
Prior arts [14, 18, 20, 24, 29, 32, 55] preserve the pairwise similarities of the original continuous feature space during hashing. The loss function is often formulated as:
\[\mathcal{L}_{\text{p}}=\frac{1}{|\mathcal{N}|}\sum_{(i,j)\in\mathcal{N}}(t_{( i,j)}-s_{(i,j)})^{2}, \tag{4}\]
where \(t_{(i,j)}=\cos(\mathbf{x}_{i},\mathbf{x}_{j})\) is the similarity reconstruction target for a pair of samples \(\mathbf{x}_{i}\) and \(\mathbf{x}_{j}\) drawn from a training set \(\mathbf{X}\in\mathbb{R}^{N\times d}\), \(\mathcal{N}\) is a set of selected sample pairs, \(|\mathcal{N}|\) is the cardinality of \(\mathcal{N}\), and \(s_{(i,j)}=\cos(\mathbf{b}_{i},\mathbf{b}_{j})\) is the similarity of two hash codes. Each hash code is obtained with the hash function \(\mathbf{b}=h(\mathbf{x})\) (Eq. (1)).
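The pairwise preservation objective of Eq. (4) is a plain mean squared error between feature-space and hash-code similarities, as in this toy sketch (the three pairs and their scores are made up for illustration):

```python
def preservation_loss(targets, code_sims):
    # Eq. (4): mean squared difference between feature-space similarity
    # targets t_(i,j) and hash-code similarities s_(i,j).
    pairs = list(zip(targets, code_sims))
    return sum((t - s) ** 2 for t, s in pairs) / len(pairs)

# Three hypothetical pairs: feature cosine similarities vs. the
# similarities of their learned hash codes.
targets = [0.9, 0.2, -0.1]
code_sims = [0.75, 0.25, 0.0]
loss = preservation_loss(targets, code_sims)
assert abs(loss - 0.035 / 3) < 1e-9
```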
As discussed earlier, similarity preservation-based unsupervised hashing methods suffer from a _similarity collapse_ problem, as indicated by the severe overlapping in the hash code similarity scores of positive and negative pairs (Fig. 1b). Intuitively, this would lead to suboptimal retrieval performance.
As a concrete example, we examine the pairwise similarity distribution of CIFAR10 in the Hamming space. From Fig. 2 we observe that the distribution of hash code similarities is mainly concentrated in the positive region. This is because similarity preservation (_i.e._, Eq. (4)) directly inherits the similarity bias of the original feature space (VGG-16 features in this case). As a result, the similarity capacity of the Hamming space is leveraged only to a limited degree, giving rise to the similarity collapse problem.
### Similarity Distribution Calibration
_Similarity Distribution Calibration_ (SDC) is designed particularly for alleviating the similarity collapse problem. The idea is to align the empirical hash code similarity distribution of the training data with a calibration distribution with sufficient spread across the entire similarity capacity.
To measure the discrepancy between two probability distributions for similarity calibration, we adopt the Wasserstein distance with an elegant solution based on _inverse Cumulative Distribution Function_ (iCDF) [47, 49]. Formally, we consider the hash code similarity \(s\) as a random variable with the iCDF \(F\). Our Wasserstein distance-based calibration is formulated as:
\[\int_{0}^{1}|F(z)-C(z)|dz, \tag{5}\]
where \(z\) is the quantile with the interval of \([0,1]\), and \(C\) is the iCDF of the calibration distribution. The pipeline of SDC is depicted in Fig. 3.
Figure 2: Pairwise similarity distribution of CIFAR10 using 100k randomly chosen negative pairs and 10k positive pairs. **(Left)** VGG16 features. **(Right)** 64-bits hash codes of GreedyHash [55]. Note that the positive and negative labels are included for ease of explanation (_i.e._, descriptive purpose only), and were not deployed during the actual unsupervised training.
#### 3.2.1 Approximation
For mini-batch based deep learning, we can estimate \(F\) by collecting the pairwise similarities of hash codes and sorting them in the ascending order of _feature similarities_\(t\). Due to the limited batch size, Eq. (5) needs to be approximated. Concretely, we evenly divide the probability range \([0,1]\) into \(|\mathcal{N}|\) bins and then aggregate per-bin calibration as:
\[\mathcal{L}_{\text{sdc}}=\frac{1}{|\mathcal{N}|}\sum_{s_{i}\in\mathcal{N}}\big{|} s_{i}-C(\frac{2i-1}{2|\mathcal{N}|})\big{|}, \tag{6}\]
where \(s_{i}\) is \(i\)-th sorted hash code pairwise similarity. Orthogonal hash codes (_i.e._, \(s=0\)) will produce maximum average Hamming distance [17]. We hence clip the negative part of \(C\), which also helps improve training stability in practice.
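The approximation in Eq. (6) can be sketched in a few lines: sort the hash-code similarities by their feature-similarity rank and match the \(i\)-th one to the \((2i-1)/(2|\mathcal{N}|)\) quantile of the calibration distribution. For brevity, this toy uses a uniform calibration on \([-1,1]\) instead of the paper's beta distribution and omits the clipping of the negative part of \(C\); the variable names are our own.

```python
def sdc_loss(code_sims, feat_sims, calib_icdf):
    # Eq. (6): rank the hash-code similarities by their feature
    # similarities, then match the i-th one (i = 1..N) against the
    # quantile (2i - 1) / (2N) of the calibration distribution.
    ranked = [s for _, s in sorted(zip(feat_sims, code_sims))]
    n = len(ranked)
    return sum(abs(s - calib_icdf((2 * i - 1) / (2 * n)))
               for i, s in enumerate(ranked, start=1)) / n

# Toy calibration: uniform on [-1, 1], whose iCDF is 2z - 1.
uniform_icdf = lambda z: 2 * z - 1

feats = [0.1, 0.2, 0.3, 0.4]
collapsed = [0.78, 0.80, 0.82, 0.79]   # everything crowded near 0.8
spread = [-0.75, -0.25, 0.25, 0.75]    # covers the whole range
assert sdc_loss(spread, feats, uniform_icdf) < 1e-9
assert sdc_loss(collapsed, feats, uniform_icdf) > 0.5
```

A collapsed batch pays a large penalty because its similarities cannot match the spread-out quantile targets, which is exactly the stretching effect described above.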
#### 3.2.2 Instantiation
In general, any distribution with sufficient capacity spread is a candidate for the calibration distribution. As an instantiation, we consider _beta distribution_, \(\texttt{Beta}(\alpha,\beta)\) with \(\alpha\) and \(\beta\) the two positive shape parameters. This is because its iCDF is bounded to the range \([0,1]\) so that it can be easily transformed to the target similarity range (_e.g._, \([-1,1]\) for cosine similarity in our case). Other distributions (_e.g._, Gaussian) can also be considered (see Table 4).
We set \(\alpha=\beta\) for a simpler symmetric beta distribution. There is no prior knowledge about the optimal parameter value. However, as illustrated in Fig. 4(b), the shapes of the probability density functions (PDF) over different parameter values all seem to meet our requirements for a calibration distribution. Empirically, we find that \(\alpha=\beta=5\) works well overall (the default setting in experiments).
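In practice one would obtain the beta iCDF from a statistics library (e.g. `scipy.stats.beta.ppf`); the dependency-free sketch below inverts a numerically integrated CDF by bisection and rescales the result from \([0,1]\) to the cosine range \([-1,1]\). The implementation and tolerances are ours, not the paper's.

```python
def beta_icdf(z, a, n=2000):
    # Inverse CDF of the symmetric Beta(a, a) distribution, obtained by
    # bisection on a trapezoid-integrated CDF (scipy.stats.beta.ppf
    # would normally do this job).
    def pdf(x):
        x = min(max(x, 1e-9), 1 - 1e-9)
        return (x * (1 - x)) ** (a - 1)
    ys = [pdf(i / n) for i in range(n + 1)]
    total = sum((ys[i] + ys[i + 1]) / (2 * n) for i in range(n))
    def cdf(x):
        m = max(1, min(n, int(x * n)))
        return sum((ys[i] + ys[i + 1]) / (2 * n) for i in range(m)) / total
    lo, hi = 0.0, 1.0
    for _ in range(50):
        mid = (lo + hi) / 2
        if cdf(mid) < z:
            lo = mid
        else:
            hi = mid
    return lo

def calibration_target(z, a=5):
    # Rescale the Beta(a, a) iCDF from [0, 1] to the cosine range [-1, 1].
    return 2 * beta_icdf(z, a) - 1

# The median maps to similarity 0, and the mapping is monotone.
assert abs(calibration_target(0.5)) < 0.01
assert calibration_target(0.1) < calibration_target(0.9)
```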
#### 3.2.3 In the Hash Buckets Perspective
We justify the rationale of our SDC from the hash buckets perspective. It is assumed that an optimal hash function should encode similar items with the same hash code (preserved similarity) and fully utilize the Hamming space (decorrelated and balanced bits) [58]. In an ideal inverted file system [21], \(K\)-bits hash codes can form \(2^{K}\) hash buckets, and all the hash buckets should be utilized with equal size. Consequently, any two \(K\)-bits hash codes can be sampled with uniform probability.
Since the bits are balanced, the probability that one bit differs equals to \(0.5\). The probability that \(d\) bits differ1 and \(K-d\) bits do not differ thus equals to \((0.5)^{d}(0.5)^{K-d}\). Next, the number of ways that only \(d\) bits differ equals to \(\binom{K}{d}\). As a result, the probability that the Hamming distance between two uniformly sampled \(K\)-bits hash codes equals to \(d\) is:
Footnote 1: This is also the Hamming distance.
\[\binom{K}{d}(0.5)^{d}(0.5)^{K-d}=\frac{1}{2^{K}}\binom{K}{d}. \tag{7}\]
This is equivalent to the probability mass function of a binomial distribution \(\texttt{B}(K,0.5)\). We plot the result in Fig. 4a
Figure 3: Pipeline of our proposed Similarity Distribution Calibration (SDC). We first map the original features into Hamming space through a learnable hash function. Next, we construct the empirical hash code similarity distribution. Finally, we minimize the Wasserstein distance between the empirical distribution and a calibration distribution.
and the iCDF of Eq. (7) in Fig. 4c. From both figures, we see that as \(K\) varies from 2 bits to 64 bits, the similarity distribution is similar to the beta distribution with \(\alpha=\beta\rightarrow\infty\) (see Fig. 4d). This means that for learning an optimal hash function, we should produce hash codes whose pairwise similarity distribution is similar to a binomial distribution. If the Hamming space is not fully used, the hash bucket sizes become imbalanced, corresponding to a biased similarity distribution -- similarity collapse emerges.
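The bucket argument behind Eq. (7) is easy to check numerically: the closed-form probabilities sum to one, peak at \(d=K/2\), and match the empirical distance frequencies of uniformly sampled code pairs. The sample size and tolerance below are our choices.

```python
import math
import random

K = 16
# Eq. (7): P(Hamming distance = d) between two uniformly sampled
# K-bit codes is C(K, d) / 2^K, i.e. Binomial(K, 0.5).
pmf = [math.comb(K, d) / 2 ** K for d in range(K + 1)]
assert abs(sum(pmf) - 1.0) < 1e-12
assert pmf[K // 2] == max(pmf)   # mode at d = K/2

# Empirical check: frequencies of distances between random code pairs
# should track the binomial prediction.
random.seed(1)
trials = 20000
counts = [0] * (K + 1)
for _ in range(trials):
    a = [random.getrandbits(1) for _ in range(K)]
    b = [random.getrandbits(1) for _ in range(K)]
    counts[sum(x != y for x, y in zip(a, b))] += 1
mode_error = abs(counts[K // 2] / trials - pmf[K // 2])
assert mode_error < 0.05
```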
#### 3.2.4 Remarks
Note that the feature similarity scores are used to sort the hash code counterparts while constructing the iCDF. However, unlike the conventional strategy preserving _individual_ pairwise similarity scores rigidly, our SDC leverages the distribution of feature similarity scores _holistically_.
In practice, we observe that the pairwise feature similarities could vary over mini-batches. During calibration, we apply sorting to rank them before aligning their corresponding hash code similarities with the prior distribution. As a result, our method does not seek a one-to-one alignment between feature similarity and the prior distribution. This property can be understood as a type of stochastic noise during optimization, in the spirit of SGD.
### Overall Learning Objective
For model training, we deploy the overall objective loss function as:
\[\mathcal{L}=\mathcal{L}_{\text{sdc}}+\lambda_{q}\mathcal{L}_{q}, \tag{8}\]
where \(\mathcal{L}_{q}\) is the quantization loss as defined in Eq. (3), and \(\lambda_{q}\) is a weight hyper-parameter. We simply set \(\lambda_{q}=1\) unless mentioned otherwise. The algorithm is depicted in Alg. 1.
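The composition in Eq. (8) can be illustrated without any training loop: a toy two-sample batch that is both near-binary and orthogonal scores a much lower combined loss than a collapsed, fractional one. This is a sketch under our own simplifications (a single pair, a uniform calibration target of 0 for its 0.5 quantile, and \(\lambda_q=1\)); it is not the paper's Alg. 1.

```python
import math

def cosv(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv + 1e-12)

def total_loss(F, lam=1.0):
    # Eq. (8): L = L_sdc + lambda * L_q, on a toy batch of two
    # continuous codes.  With a single pair, the calibration target is
    # the 0.5 quantile of a uniform-[-1, 1] distribution, i.e. 0.
    B = [[1 if x >= 0 else -1 for x in f] for f in F]
    l_q = sum(1 - cosv(f, b) for f, b in zip(F, B)) / len(F)
    l_sdc = abs(cosv(B[0], B[1]) - 0.0)
    return l_sdc + lam * l_q

# Collapsed configuration: both hash codes identical (s = 1) and far
# from binary -> both loss terms are large.
collapsed = [[0.3, 0.2], [0.4, 0.1]]
# Calibrated configuration: near-binary and orthogonal (s = 0).
calibrated = [[0.99, 1.0], [1.0, -0.99]]
assert total_loss(calibrated) < total_loss(collapsed)
```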
## 4 Experiments
**Datasets.** We consider both coarse category-level and fine-grained instance-level image retrieval tasks in our experiments. Following [12, 32, 46, 52, 55], we use 4 category-level datasets: i) **CIFAR-10**[28], ii) **NUS-WIDE**[9], and iii) **MS-COCO**[37]. With an ImageNet pre-trained model, we also choose iv) **ImageNet100** (a subset of ImageNet [11] as first used by [6] and later by supervised deep hashing
\begin{table}
\begin{tabular}{l|c|c c c|c c c|c c c|c c} \hline \multirow{2}{*}{Methods} & \multirow{2}{*}{Venue} & \multicolumn{3}{c|}{CIFAR10} & \multicolumn{3}{c|}{ImageNet100} & \multicolumn{3}{c|}{NUSWIDE} & \multicolumn{3}{c}{MS-COCO} \\ \cline{3-13} & & 16 & 32 & 64 & 16 & 32 & 64 & 16 & 32 & 64 & 16 & 32 & 64 \\ \hline LsH [20] & STOC’98 & 23.9 & 29.6 & 37.6 & 14.7 & 29.7 & 48.7 & 51.0 & 59.3 & 67.1 & 45.2 & 51.6 & 59.8 \\ SH [58] & NeurIPS’08 & 41.8 & 42.1 & 43.5 & 35.1 & 50.9 & 60.9 & 63.0 & 60.9 & 64.0 & 59.4 & 64.8 & 66.2 \\ PCA-H [24] & - & 46.3 & 49.2 & 52.6 & 46.0 & 62.4 & 73.1 & 71.5 & 74.3 & 76.5 & 67.5 & 72.8 & 75.5 \\ ITQ [14] & TPAMI’12 & 46.8 & 51.3 & 54.4 & 45.5 & 62.1 & 72.7 & 73.2 & 75.0 & 77.1 & 67.6 & 72.9 & 75.4 \\ \hline SSDH [60] & IJCAI’18 & 41.0 & 39.6 & 38.5 & 32.3 & 40.1 & 44.6 & 66.8 & 67.8 & 66.7 & 53.9 & 56.7 & 57.4 \\ GreedyHash [55] & NeurIPS’18 & 44.9 & 51.9 & 55.7 & 54.4 & 68.7 & 74.7 & 70.0 & 76.2 & 79.3 & 66.8 & 73.2 & 77.4 \\ TBH [52] & CVPR’20 & 48.2 & 50.2 & 50.7 & 42.9 & 44.5 & 48.3 & 75.8 & 77.8 & 78.5 & 68.8 & 72.6 & 74.8 \\ CIBHash [46] & IJCAI’21 & 51.7 & 54.6 & 55.7 & 64.8 & 71.6 & 73.7 & **79.4** & 80.9 & 81.6 & **75.5** & 78.3 & 79.1 \\ BiHalf [32] & AAAI’21 & 54.7 & 58.1 & 60.6 & 60.7 & 71.2 & 76.0 & 77.4 & 80.1 & 81.9 & 71.2 & 75.6 & 78.0 \\
**SDC (Ours)** & - & **59.1** & **64.2** & **67.3** & **70.9** & **79.7** & **82.9** & 79.1 & **81.3** & **82.4** & 75.3 & **79.0** & **80.7** \\ \hline _Original Features_ & - & & 58.3 & & 78.1 & & & 82.3 & & & 80.0 & \\ \hline \end{tabular}
\end{table}
Table 1: Coarse category-level image retrieval results of different unsupervised hashing methods with 3 different hash code lengths (16, 32, and 64). Note that PCA-H is ITQ before the quantization error minimization. The last row reports the results of the original 4096d VGG-16 [53] features with the cosine similarity.
Figure 4: (**a**) Probability mass function (PMF) of \(\texttt{B}(K,0.5)\) and (**b**) probability density function (PDF) of \(\texttt{Beta}(\alpha,\beta)\) distribution with different \(\alpha/\beta\) values. (**c**) Inverse cumulative distribution function (iCDF) of \(\texttt{B}(K,0.5)\) and (**d**) \(\texttt{Beta}(\alpha,\beta)\) distribution with different \(\alpha/\beta\) values. Note that we set \(\alpha=\beta\) for symmetric PDF.
works), which was ignored by previous unsupervised hashing works. For evaluating instance-level retrieval tasks, three popular datasets are chosen, including i) **GLDv2** [59], ii) **\(\mathcal{R}\)Oxf** [44, 48], and iii) **\(\mathcal{R}\)Paris** [45, 48].
**Evaluation metrics.** Following previous works [32, 46, 55], we measure the model performance with mean Average Precision (mAP) at top 1000 (mAP@1K) for single-labeled datasets (_i.e_., ImageNet100 and CIFAR10), while top 5000 (mAP@5K) for multi-labeled datasets (_i.e_., NUS-WIDE and MS-COCO). Note, for instance-level retrieval tasks, we follow the evaluation protocol of [4, 17] and use mAP@100 as the evaluation metric. For statistical stability, we run 3 trials for each experiment and report the average of per-trial best results on the validation set. In addition, we also plot Precision-Recall curves (PR) to compare the precisions at different recall rates.
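For reference, the core of the mAP@\(k\) metric is the per-query average precision over the top-\(k\) ranked results, as in the sketch below. Exact normalization conventions vary across protocols (e.g. dividing by the number of relevant items instead of the number of hits), so this is only one common variant, with made-up data.

```python
def average_precision_at_k(ranked_relevance, k):
    # AP@k over a ranked list of 0/1 relevance flags: average of the
    # precision values at each rank where a relevant item appears.
    hits, precision_sum = 0, 0.0
    for i, rel in enumerate(ranked_relevance[:k], start=1):
        if rel:
            hits += 1
            precision_sum += hits / i
    return precision_sum / hits if hits else 0.0

# Relevant items retrieved at ranks 1 and 3 -> AP = (1/1 + 2/3) / 2.
ap = average_precision_at_k([1, 0, 1, 0], k=4)
assert abs(ap - 5 / 6) < 1e-9
```

mAP@\(k\) is then the mean of this quantity over all queries.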
**Competitors.** We compare our method with 4 classic unsupervised hashing methods [24, 20, 14, 58] still considered as strong baselines, and 5 recent SOTA unsupervised deep hashing methods [32, 55, 60, 46, 52].
**Implementation details.** For fair comparisons, we follow the existing experimental protocol [32, 46, 55, 52]. We use an ImageNet pre-trained VGG-16 [53] as the fixed and frozen feature extractor for main experiments, as the focus of our evaluation is on the hashing model. For more extensive evaluation, we also additionally use pre-trained ResNet50 [16] and DeiT [56]. We test three common code lengths: 16, 32, and 64 bits. We train all the compared methods using Adam [26] optimizer for 100 epochs with a learning rate of \(0.0001\) and a batch size of 64, with a single exception in TBH [52], for which a batch size of 400 is used and 1000 epochs are required. Note that we reimplemented all competing methods based on the original released codes. Our reimplementation can reproduce the reported performances under the original setting. This allows us to evaluate all the models fairly under a single setting sharing the same datasets, testing protocols, and network architectures. More experimental details including the training/query/gallery splits for each dataset and implementation details are given in the supplementary material.
### Comparative Results
**Coarse category-level retrieval results.** We report the retrieval results of our SDC and prior-art unsupervised hashing methods on four different datasets in Table 1. It is evident that our SDC significantly outperforms the best competitors (_e.g_., BiHalf and CIBHash), especially in the low-bit (_e.g_., 16) cases, _e.g_., by up to 6.7% and 10.2% on CIFAR10 and ImageNet100, respectively. In terms of training efficiency, with heavy data augmentation on raw training data, CIBHash takes _hours_ to train on a single RTX 3070 GPU (_e.g_., about 60 seconds per epoch and about 1.6 hours in total for ImageNet100). In contrast, both BiHalf and our SDC can use pre-extracted features, with the training taking _only minutes_ (_e.g_., about 4 seconds per epoch and 7 minutes in total for ImageNet100).
We also report PR curves in Fig. 5. It can be observed that our SDC (blue curves) consistently outperforms all competing methods, especially at low bit cases (_i.e_., 16-bits). This suggests that our SDC can learn hash codes for superior image retrieval across different recall rates.
\begin{table}
\begin{tabular}{l|c c c c|c c c c} \hline \hline \multirow{2}{*}{Methods} & \multicolumn{4}{c|}{ResNet50 [16]} & \multicolumn{4}{c}{DeiT-S [56]} \\ \cline{2-9} & CIFAR10 & ImageNet100 & NUSWIDE & MS-COCO & CIFAR10 & ImageNet100 & NUSWIDE & MS-COCO \\ \hline ITQ [14] & 64.6 & 73.9 & 79.5 & 75.3 & 80.4 & 74.7 & 79.2 & 78.4 \\ GreedyHash [55] & 61.4 & 77.9 & 83.0 & 77.6 & 81.5 & 79.7 & 80.8 & 80.8 \\ BiHalf [32] & 76.6 & 80.4 & 82.7 & 78.5 & 79.9 & **81.3** & 74.5 & 79.7 \\
**SDC (Ours)** & **78.7** & **85.1** & **83.7** & **81.3** & **89.5** & 80.7 & **83.3** & **82.9** \\ \hline _Original Features_ & 69.3 & 81.8 & 83.9 & 82.1 & 82.3 & 82.5 & 83.9 & 84.2 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Coarse category-level image retrieval results of representative unsupervised hashing methods with ResNet50 and DeiT-S backbone features. 64-bits hash codes are used. The last row reports the results of the original 2048d ResNet50 and 384d DeiT-S features.
Figure 5: PR curve on CIFAR10 and ImageNet100. Left and right figures correspond to 16-bits and 64-bits hash codes.
**Different feature representations.** For more extensive evaluation, we further evaluate two stronger feature models: ResNet50 [16] and DeiT-Small [56]. Table 2 shows that our SDC again outperforms the strong competitors ITQ and BiHalf. This suggests that the advantage of our method is feature representation agnostic.
**Instance-level retrieval results.** We also evaluate our model on instance-level image retrieval tasks. We follow the evaluation protocol of [17]2. We use three datasets, namely GLDv2 [59], \(\mathcal{R}\)Oxf, and \(\mathcal{R}\)Paris [48]. Note, since \(\mathcal{R}\)Oxf and \(\mathcal{R}\)Paris provide no training data, we use the training set of GLDv2 for model training for all datasets. As shown in Table 3, our SDC still outperforms consistently the state-of-the-art similarity preservation-based methods (_i.e_., GreedyHash [55] and BiHalf [32]) by a large margin. This indicates that the superiority of our SDC generalizes from coarse category retrieval to fine-grained instance retrieval, even in the presence of a distributional shift between the training and test sets.
Footnote 2: Please see supplementary material for implementation details.
### Further Analysis
**Similarity collapse analysis.** We first examine the similarity collapse problem. We study three representative hashing methods (ITQ, GreedyHash, BiHalf) in comparison with our SDC. To quantify this collapse, we compute the intersection between the cosine similarity histograms of positive and negative pairs. A higher intersection rate suggests a worse collapse with lower discriminating ability. We use 64-bits hash codes. To compute the two histograms, we sample 10k positive and 100k negative random pairs of ImageNet100. Fig. 6 presents the degree of similarity collapse in the order of ITQ \(>\) GreedyHash \(>\) BiHalf \(>\) SDC. This verifies again that our method is most effective in alleviating this collapse problem.
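The intersection metric used here can be sketched as follows: bin both similarity sets into normalized histograms over \([-1,1]\) and sum the bin-wise minima. The bin count and toy values are our own choices, not the paper's exact settings.

```python
def histogram_intersection(pos_sims, neg_sims, bins=20, lo=-1.0, hi=1.0):
    # Overlap between the normalised similarity histograms of positive
    # and negative pairs; a larger overlap means a worse collapse.
    def hist(vals):
        h = [0] * bins
        for v in vals:
            idx = min(int((v - lo) / (hi - lo) * bins), bins - 1)
            h[idx] += 1
        return [c / len(vals) for c in h]
    hp, hn = hist(pos_sims), hist(neg_sims)
    return sum(min(a, b) for a, b in zip(hp, hn))

# Well-separated similarity distributions barely intersect...
sep = histogram_intersection([0.8, 0.9, 0.85], [-0.5, -0.4, -0.6])
# ...while fully collapsed ones intersect completely.
col = histogram_intersection([0.5, 0.55, 0.6], [0.5, 0.55, 0.6])
assert sep == 0.0
assert abs(col - 1.0) < 1e-9
```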
**Hash code visualization.** For further examination, we visualize continuous hash codes in a proof-of-concept setting.
Figure 6: Analysis of the similarity collapse problem on ImageNet100. We plot the Hamming distance histograms for 10000 positive and 100000 negative random pairs with 64-bits hash codes. For similarity collapse quantification, we use the intersection between the two histograms as the metric; lower is better. Note that the positive and negative labels are included for ease of explanation (_i.e_., descriptive purpose only), and were not deployed during the actual unsupervised training.
\begin{table}
\begin{tabular}{l|c c|c c|c c} \hline \hline \multirow{2}{*}{Methods} & \multicolumn{2}{c|}{GLDv2} & \multicolumn{2}{c|}{\(\mathcal{R}\)Oxf} & \multicolumn{2}{c}{\(\mathcal{R}\)Paris} \\ \cline{2-7} & 128 & 512 & 128 & 512 & 128 & 512 \\ \hline ITQ [14] & 5.2 & 11.3 & 1.6 & 5.4 & 4.8 & 12.3 \\ GreedyHash [55] & 3.8 & 7.9 & 15.8 & 34.2 & 34.9 & 52.8 \\ BiHalf [32] & 4.0 & 6.7 & 20.2 & 33.3 & 42.0 & 52.0 \\
**SDC (Ours)** & **6.3** & **12.1** & **27.1** & **40.8** & **50.3** & **63.8** \\ \hline _Original features_ & \multicolumn{2}{c|}{13.8} & \multicolumn{2}{c|}{51.0} & \multicolumn{2}{c}{71.5} \\ \hline \hline \end{tabular}
\end{table}
Table 3: Instance-level image retrieval results of representative unsupervised hashing methods on GLDv2, \(\mathcal{R}\)Oxf-**Hard** and \(\mathcal{R}\)Paris-**Hard**. Original features: 2048D R50-DELG features [4] with the cosine similarity.
Figure 7: **(a)** The t-SNE visualization of VGG16 features of 4 selected classes of ImageNet100. **(b-e)** Continuous 2-bit codes before quantization derived by different unsupervised hashing methods. Dotted lines denote the separation of Hamming space.
Specifically, we examine the behaviour of ITQ, GreedyHash, BiHalf and SDC in learning a 2-bits hash function over 4 object classes (cock, indigo-bunting, loggerhead, bloodhound) from ImageNet100. We use the VGG-16 features. We observe from Fig. 7 that whilst the original features are already well separable, different methods behave differently. For example, by simply preserving the original similarity, GreedyHash collapses the similarity scores completely across all the classes. With a code balance layer on top, BiHalf partly reduces the degree of collapsing to two groups. By aligning the similarity distribution with a calibration distribution, our SDC solves this collapsing problem well, even further separating the originally confusing two classes (cock and bloodhound). This validates the potential of SDC to improve even over the original continuous features.
**Qualitative evaluation.** For visual analysis, we provide a couple of image retrieval examples on CIFAR10. It is evident in Fig. 8 that our SDC can identify the positive class more confidently with a more distinctive separation between positive and negative classes compared to GreedyHash. This indicates the superior discrimination ability of our similarity distribution calibration idea in unsupervised hashing.
### Ablation Study
**Calibration distribution.** We evaluate the effect of the calibration distribution. We further test a normalized Gaussian distribution (bounded within \([-1,1]\)) as well as a variety of beta distributions. We observe in Table 4 that (1) the performance of beta calibration is generally stable for \(\alpha=\beta\) in the range \([2,5]\); and (2) Gaussian calibration is similarly effective, suggesting the flexibility of our SDC in distribution selection.
**Orthogonality.** We evaluate the effect of orthogonality in our SDC loss design (Section 3.2.1). It is shown in Table 5 that the orthogonality constraint is useful, confirming our design choice and echoing a similar finding in [17].
\begin{table}
\begin{tabular}{l|c c c} \hline \hline Distributions & CIFAR10 & ImageNet100 & GLDv2 \\ \hline \(\texttt{Beta}(0.2,0.2)\) & 62.2 & 79.7 & 11.6 \\ \(\texttt{Beta}(0.5,0.5)\) & 63.6 & 80.6 & 11.9 \\ \(\texttt{Beta}(0.7,0.7)\) & 64.1 & 80.8 & 11.8 \\ \(\texttt{Beta}(1,1)\) & 66.2 & 80.8 & 12.0 \\ \(\texttt{Beta}(2,2)\) & 67.0 & 81.4 & **12.1** \\ \(\texttt{Beta}(3,3)\) & **67.3** & 81.9 & 12.0 \\ \(\texttt{Beta}(5,5)\) & **67.3** & 82.9 & **12.1** \\ \(\texttt{Beta}(7,7)\) & 66.8 & 82.8 & 11.8 \\ \(\texttt{Beta}(10,10)\) & 65.9 & **83.3** & 11.8 \\ \hline Gaussian & **67.3** & 82.3 & 11.5 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Effect of the calibration distribution. _Setting_: 64-bits hash codes on CIFAR10 and ImageNet100 and 512-bits on GLDv2.
Figure 8: Two qualitative object image retrieval examples on CIFAR10. Green: Positive class; Red: Negative class.
## 5 Conclusion
We have presented a simple yet effective _Similarity Distribution Calibration_ (SDC) method for unsupervised hashing. It is designed specifically to alleviate the largely ignored _similarity collapse_ problem suffered by existing similarity preservation-based unsupervised hashing methods. Concretely, we minimize the Wasserstein distance between the distribution of Hamming similarities and a calibration distribution with a sufficient capacity range. As a result, the limited similarity capacity of hash codes can be better exploited for improved discriminating ability. Extensive experiments on both coarse and fine-grained image retrieval tasks validated the advantage of our method over the state-of-the-art alternatives.
|
2308.12559 | Thermal-aware Workload Distribution for Data Centers with Demand
Variations | Thermal-aware workload distribution is a common approach in the literature
for power consumption optimization in data centers. However, data centers also
have other operational costs such as the cost of equipment maintenance and
replacement. It has been shown that server reliability depends on frequency of
their temperature variations, arising from workload transitions due to dynamic
demands. In this work, we formulate a nonlinear optimization problem that
considers the cost of workload transitions in addition to IT and cooling power
consumption. To approximate the solution, we first linearize the problem; the
result is a mixed integer programming problem. A modified heuristic is then
proposed to approximate the solution of the linear problem. Finally, a Model
Predictive Control (MPC) approach is integrated with the proposed heuristics
for automatic workload reconfiguration when future demand is not known exactly,
but predictions are available. Numerical results show that the proposed schemes
are attractive in different settings. | Somayye Rostami, Douglas G. Down, George Karakostas | 2023-08-24T04:53:45Z | http://arxiv.org/abs/2308.12559v1 | # Thermal-aware Workload Distribution for Data Centers with Demand Variations
###### Abstract
Thermal-aware workload distribution is a common approach in the literature for power consumption optimization in data centers. However, data centers also have other operational costs such as the cost of equipment maintenance and replacement. It has been shown that server reliability depends on frequency of their temperature variations, arising from workload transitions due to dynamic demands. In this work, we formulate a nonlinear optimization problem that considers the cost of workload transitions in addition to IT and cooling power consumption. To approximate the solution, we first linearize the problem; the result is a mixed integer programming problem. A modified heuristic is then proposed to approximate the solution of the linear problem. Finally, a Model Predictive Control (MPC) approach is integrated with the proposed heuristics for automatic workload reconfiguration when future demand is not known exactly, but predictions are available. Numerical results show that the proposed schemes are attractive in different settings.
switching cost, model predictive control, thermal-aware workload distribution, data center
## I Introduction
Energy consumption of data centers is increasing rapidly, due to growth in demand for internet services and cloud computing tasks. IT and cooling equipment are the main power consumers in a data center [1][2]. Due to heat recirculation effects, thermal-aware workload distribution is necessary to minimize the power consumption while respecting operational temperature constraints [3].
Data centers also have operational costs such as the cost of equipment maintenance and replacement. The reliability of servers depends on several factors, such as their inlet temperature and the frequency of temperature variations. In thermal-aware workload distribution, the inlet temperatures are typically bounded by a red-line temperature [4]. However, when the workload distribution is changed due to dynamic demands, the cost of varying the workload on a server (which we will call switching costs) has not been addressed in the literature. The varying workload leads to temperature variations that can impact the reliability of servers [5]. To be more precise, most existing approaches solve the thermal-aware workload distribution problem for a fixed demand. Considering time-varying demand is beneficial for reducing the switching costs [6]. In this case, it would be desirable to incorporate demand predictions into the problem. So, we are interested in a thermal-aware workload distribution problem where demand is time-varying and the effects of varying the utilizations of servers are taken into account when (re)distributing workload.
While dynamic workload allocation may have costs associated with decreased server lifetimes, workload migration may also be required. Migration mainly affects the quality of service by imposing a delay on processing when virtual machines migrate between different physical machines [7][8]. In this work, we do not address the virtual machine migration problem directly; however, the problem we solve can also account for the migration cost indirectly by adding a penalty for workload transitions. This penalty can help decrease the number of migrations.
There are a few works that consider switching costs in the workload distribution policy. Most of the literature addresses thermal-aware workload distribution for a constant demand (steady-state) [9][10][11][12][13][14]. In [6], switching costs are considered but cooling power consumption is not. In [15], switching costs are considered in the thermal-aware workload distribution policy, where a transient thermal model is used, and a particle-based optimization algorithm is used to solve the problem. In this work, we formulate a thermal-aware workload distribution problem in discrete time that considers switching costs in addition to IT and cooling power consumption. The proposed problem is a generalized form of the problem introduced in [16][17].
The problem proposed in [17] is a general power optimization problem with nonlinear cooling power consumption and a steady-state thermal model. As a power reduction scenario, they also consider two different red-line temperatures corresponding to idle and fully utilized servers, respectively. However, the demand value is fixed. The approach proposed to solve nonlinear power optimization problems is to linearize the problem. Depending on the type of variables (continuous or integral), the resulting problem is a linear programming problem or a (mixed) integer linear programming problem. This approach is especially beneficial because the nonlinear problem is general enough to represent a range of similar problems in this area. The approach is also time efficient, because the thermal models may be computationally expensive when they are used to calculate the temperatures, for example if they are based on solving differential equations. Using linear regression to construct a linear model helps reduce the time
complexity. Finally, a heuristic is proposed to approximate the solution of the linear problem that is then applied to the original problem.
In this work, we also linearize the problem and generalize the heuristic to approximate the solution of the resulting mixed integer programming problem. The solution is then used for the original problem. When demand predictions are available, we integrate a Model Predictive Control (MPC) approach with the proposed heuristic. Using MPC is common in the literature in the presence of transient thermal models [15][16][18][19][20]. In this work, we show that an MPC approach is useful for reducing the size of the problem and for incorporating updates to the predicted demand. Our contributions can be listed as follows:
* Generalization of the constant demand problem to a discrete-time, time-varying problem which also considers switching costs
* Generalization of the heuristic proposed in [17] for the resulting mixed integer linear programming problem and proving its applicability for the proposed problem
* Integration of an MPC approach with demand predictions for the proposed heuristic
* Evaluation of the proposed schemes, suggesting the potential for significant cost reductions, e.g. when compared to separating the problem into independent instances at each time step
In the remainder of the paper, we first describe the system model and introduce the optimization problem, in Section II. We also linearize the problem. An approximation algorithm to solve the problem is proposed in Section III. An MPC approach is introduced in Section IV. Section V covers the evaluation of the proposed schemes for the introduced integer linear programming problem. Concluding remarks are provided in Section VI.
## II System Model
The problem introduced in this paper is a thermal-aware workload distribution problem that considers both power consumption and switching costs in the presence of demand variations. We consider a discrete-time demand model in which there are \(K\) time slots and the demand at time slot \(k\), denoted by \(D_{k}\), is the number of required servers at that slot. The problem is a generalized form of the problem introduced in [16][17]. For the case of one time slot or fixed demand, presented in [17], a general nonlinear optimization problem is considered for minimizing the total power consumption in a data center. The system considered in this paper includes \(m\) cooling facilities and \(n\) servers. The decision variables are the cooling parameters and the server utilizations at time slot \(k,k=1,...,K\), denoted by the vectors \(v^{(k)}_{m\times 1}\) and \(\rho^{(k)}_{n\times 1}\), respectively. As a power reduction scenario, two red-line temperatures are considered, corresponding to idle and fully utilized servers, so the server utilizations are 0 or 1. This helps reduce the cooling effort because lightly loaded servers have a higher red-line temperature. The servers are assumed to be identical. The power consumption and thermal models are generally nonlinear. The cost function is the summation of cooling and IT power consumption along with the cost of workload migration and of switching servers between the idle and fully utilized states (or on and off states in the case of server consolidation) in consecutive time slots. The addition of the switching cost controls the utilization variation of the servers (and hence their temperature variations), which, as discussed in Section I, can lead to increased server lifetimes. So, the trade-off between power consumption and the impact on server reliability due to the frequency of varying server utilizations is considered. There are performance and temperature constraints for each time slot.
We assume that the initial workload distribution is denoted by \(\rho^{(0)}\). Thus, the problem that we wish to solve is problem (1), where \(F(v^{(k)})\) is the cooling power consumption corresponding to the cooling variable vector \(v^{(k)}_{m\times 1}\) at time slot \(k\), \(\rho^{(k)}_{n\times 1}\) is the vector of workload distribution at time slot \(k\), and \(M(v^{(k)},\rho^{(k)})\) is the function corresponding to the thermal model. Within each time slot, a steady-state thermal model is considered. In other words, we assume the time slots are long enough (in the range of minutes) so that a steady-state thermal model is appropriate. The first constraint is a performance constraint with the target demand \(D_{k}\), and the second constraint limits the inlet temperatures to be less than the corresponding red-line temperatures, \(T_{idle}\) and \(T_{busy}\) (according to [4], \(T_{idle}>T_{busy}\)). The cost of switching (and migration) per server for the \(k\)th time slot is denoted by \(w_{k}\). The computing (IT) power consumption of server
\(i\) in the \(k\)th time slot is denoted by \(P(\rho_{i}^{(k)},t_{i}^{(k)})\), where \(t_{n\times 1}^{(k)}=M(v^{(k)},\rho^{(k)})\) is the vector of server inlet temperatures at time slot \(k\). The vectors of lower bounds and upper bounds for the cooling variables are \(V_{LB}\) and \(V_{UB}\), respectively.
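The structure of the objective in problem (1) can be sketched as follows. The function names (`cooling_power`, `it_power`) are illustrative stand-ins for \(F\) and \(P\), which the paper keeps general; this is a sketch of the cost bookkeeping, not the authors' implementation.

```python
def total_cost(rho, rho0, v, w, cooling_power, it_power):
    """Total cost over K time slots for problem (1):
    cooling power F(v^(k)) + IT power sum_i P(rho_i^(k)) + switching cost
    w_k times the total utilization change between consecutive slots.
    rho: list of K utilization vectors; rho0: initial distribution;
    v: list of K cooling-variable vectors; w: list of K switching weights."""
    cost, prev = 0.0, rho0
    for k in range(len(rho)):
        cost += cooling_power(v[k])                       # F(v^(k))
        cost += sum(it_power(r) for r in rho[k])          # sum_i P(rho_i^(k))
        cost += w[k] * sum(abs(a - b) for a, b in zip(rho[k], prev))  # switching
        prev = rho[k]
    return cost
```

With 0/1 utilizations, the absolute-difference term simply counts the servers that switch state between consecutive slots.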
There are many possible models that could be used for IT power consumption, but we focus on one choice. We ignore the dependence of IT power consumption on the inlet server temperature. The model is \(P(\rho_{i}^{(k)})=c+d\rho_{i}^{(k)}\), where \(c\) and \(d\) are constants, but we assume that there is server consolidation, so that idle servers are turned off and \(P(\rho_{i}^{(k)})=0\) when \(\rho_{i}^{(k)}=0\). In general, server consolidation may change the thermal model but we leave that as a topic for future work. Server consolidation requires an extra step of linearizing the IT power consumption.
The approach proposed in [17] to solve nonlinear power optimization problems is to linearize the problem and in NP-complete cases (when there are integral variables) propose heuristics developed for approximating the solution of the resulting integer programming problems. The linearized version of the single time slot problem extracted in [17] (with some modifications, assumptions and normalization) is problem (2), where \(a=T_{idle}-T_{busy}>0\), \(b=T_{idle}\), \(A_{n\times m}\), \(B_{n\times n}\) and \(E_{n\times 1}\) are the cooling matrix, the heat-recirculation matrix and the constant part, respectively and \(A_{i,j},B_{i,j}\geq 0\) (nonnegative entries). In [17], we showed that problem (2) is NP-complete and proposed a heuristic to construct an approximate solution. Similarly, we first linearize problem (1) and then generalize the heuristic proposed in [17] to approximate the solution of the linear problem. Linearizing the switching cost is straightforward and leads to introducing the new variables \(s_{k,i}\). When the server utilizations are 0 or 1, linearizing the IT power consumption is also straightforward. In this case \(P(\rho_{i}^{(k)})=(c+d)\rho_{i}^{(k)}\). So, the integer linear programming problem is problem (3).
However, with the relaxation of server utilizations that is needed for the approximation algorithm, more work is needed to linearize the IT power consumption in problem (1). According to the IT power consumption model, in the case of consolidation there is a jump in \(P(\rho_{i}^{(k)})\) at \(\rho_{i}^{(k)}=0\). We approximate \(P(\rho_{i}^{(k)})\) with a piecewise linear function, as shown in Fig. 1. For a small value \(\epsilon\), the IT power consumption at the breakpoint is \(P_{\epsilon}=c+d\epsilon\). If \(\rho_{i}^{(k)}\leq\epsilon\), then the IT power consumption is approximated as \(P(\rho_{i}^{(k)})=\frac{P_{\epsilon}}{\epsilon}\rho_{i}^{(k)}\), and if \(\rho_{i}^{(k)}>\epsilon\), then \(P(\rho_{i}^{(k)})=c+d\rho_{i}^{(k)}\). To linearize these conditions, we divide \(\rho_{i}^{(k)}\) into two parts, \(\rho_{i}^{(k)}=\rho_{i}^{-(k)}+\rho_{i}^{+(k)}\), where one of the following cases is true, depending on the value of the new 0-1
variable \(y_{i}^{(k)}\). If \(y_{i}^{(k)}=1\), then \(\rho_{i}^{-(k)}\leq\epsilon\) and \(\rho_{i}^{+(k)}=0\), otherwise \(\rho_{i}^{-(k)}=0\) and \(\rho_{i}^{+(k)}\geq\epsilon\). This procedure leads to extra constraints being added to problem (3). Finally the relaxed form of the problem (\(0\leq y_{i}^{(k)}\leq 1\)) is problem (4).
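The piecewise-linear approximation can be written out directly. The sketch below assumes the affine model \(P(\rho)=c+d\rho\) from Section II, with the coefficients quoted later in Section V and an illustrative breakpoint \(\epsilon\); the function name and defaults are ours.

```python
def it_power(rho, c=223.4, d=154.5, eps=0.01):
    """Piecewise-linear approximation of IT power with server consolidation:
    P(rho) = c + d*rho for rho > eps (the affine server model), a linear
    ramp through the origin for 0 <= rho <= eps, and P(0) = 0 (server off).
    c, d follow the ~375 W full-load figure in Section V; eps is assumed."""
    if rho <= 0.0:
        return 0.0
    p_eps = c + d * eps           # power at the breakpoint: continuous join
    if rho <= eps:
        return (p_eps / eps) * rho
    return c + d * rho
```

The two pieces agree at \(\rho=\epsilon\), so the approximation is continuous everywhere except at the consolidation jump at \(\rho=0\), which the 0-1 variable \(y_{i}^{(k)}\) handles in the linearized problem.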
## III Approximation Algorithm
Our aim is to approximate the solution of problem (3) and use it for the original problem (1). We generalize the H2 heuristic in [17] to approximate the solution of problem (3). The proposed heuristic is based on gradual rounding of the fractional solution of the relaxed linear problem (4). Let us denote the solution of problem (4) by \((v^{*(k)},\rho^{*(k)}),\forall k=1,...,K\). In H2, the main idea is to link the original problem (2) to a problem that can be approximated more efficiently; the efficiency comes from reducing the number of constraints and decision variables. The main variables in problem (2) are the server utilizations, because once they are known, the cooling parameters can be found by solving a standard linear programming problem. In the problem that H2 approximates, the decision variables are only the server utilizations and there is only one constraint, on the number of working servers. In the gradual rounding of the server utilizations, at each step, the decision of which server to turn off (or keep idle) is made by redistributing the load of each candidate server to the other servers and calculating the cost of a new optimization problem that aims at minimizing the total increase (compared to the fractional cost) in the dominant cooling variables (the dominant cooling variable for server \(i\) is the variable with the largest entry in the \(i\)th row of the cooling matrix \(A\)). This is because the total increase in the dominant cooling variables is assumed to be a good approximation of the total increase in the cooling variables for the original problem (2). For our problem, the proposed heuristic, called DCVS (Dominant Cooling Variable with Switching cost), is similarly based on gradual rounding of the fractional server utilizations. However, instead of one problem, \(K\) problems are approximated.
The values of \(\hat{\rho}^{(k)}\) are computed consecutively, as the greatest correlation between demands will typically be between consecutive time slots. The problem for time slot \(k\) is problem (5), where \(B^{\prime}=B+I_{n\times n}\) (\(I\) is the identity matrix) and there are \(R\) dominant cooling variables (the variables with the largest corresponding coefficient for at least one row of \(A\)). \(S_{r}\) is the set of servers with corresponding dominant cooling variable \(r\), \(z_{l}\) is the corresponding coefficient of cooling variable \(r\) (in the \(l\)th row of \(A\)) for server \(l\in S_{r}\), and \(D_{k}^{*}=\lfloor\sum_{i=1}^{n}\rho_{i}^{*(k)}\rfloor\), where \(\lfloor\cdot\rfloor\) is the floor function. The cost function for (5) is an approximation of the component of the cost function of problem (3) that is affected by the value of \(\hat{\rho}^{(k)}\). There is no IT power consumption term because it is a constant when the number of working servers is fixed (equal to \(D_{k}^{*}\)). The problem for \(k=K\) does not include the last term in the cost function.

Fig. 1: a) The actual IT power consumption (Watts) b) The piecewise linear approximation of the IT power consumption
Similarly to H2, DCVS is greedy and includes three phases. The first (main) phase is modified to approximate the solution of problem (1) in terms of server utilizations, as described in Algorithm 1. At each step, the algorithm redistributes the load of each candidate server to the other servers (proportional to their current loads) and chooses the server to be turned off based on the cost for problem (5). The other two phases can be found in [17]. In phase 2, servers that are turned off in the fractional solution are also considered to be turned on in the integral solution. In phase 3, small perturbations of \(A\), \(B\) and \(E\) lead to multiple fractional solutions, from which the best solution is chosen.
## IV MPC Approach
An MPC approach is useful in the presence of demand predictions, although it can also be used when demands are known exactly, to reduce the size of the problem. We assume that instead of actual demands, we have a noisy version of the demands coming from a prediction scheme. In the presence of demand predictions, the weights \(w_{k}\) should depend on \(k\), as it is reasonable to assume that knowledge about future demands is more accurate for closer time slots. In general, it is reasonable to assume that \(w_{k}\leq w_{k^{\prime}}\) when \(k\geq k^{\prime}\).
In problem (3), it may not be efficient or sufficiently precise to solve the problem for the whole time interval of size \(K\). This is both due to the size of the problem and the fact that distant demand predictions may not be sufficiently accurate. One possibility to address these issues is using an MPC approach. The main idea of MPC is considering a window of size \(W\) and using the predictions for the next \(W\) time slots to compute the workload distribution in the next time slot. This reduces the greediness of the algorithm by using the information for several time slots. It also allows for updates to predicted demand values to be considered, each time the solution for the next time slot is calculated. We use the MPC scheme which is described in Algorithm 2. Each time, a problem of size \(W\) is solved and the solution for the first time slot is kept and used as the initial workload distribution for the next round.
```
1:Solve problem (4) and let the solution be \((v^{*(k)},\rho^{*(k)}),\forall k=1,...,K\)
2:\(\hat{\rho}^{(0)}=\rho^{(0)}\)
3:for\(k=1:K\)do
4:\(S=\{i\,|\,0<\rho_{i}^{*(k)}\leq 1\}\)
5:\(l=|S|-D_{k}^{*}\)
6:\(\hat{\rho}=\rho^{*(k)}\)
7:while\(l\neq 0\)do
8:for\(i\in S\)do\(\triangleright\) distributing the load of server \(i\) to the other servers, proportional to their current load
9:\(\rho^{[i]}=\hat{\rho}\)
10:\(S^{\prime}=S-\{i\}\)
11:\(r=\rho_{i}^{[i]}\)
12:\(\rho_{i}^{[i]}=0\)
13:for\(j\in S^{\prime}\)do
14:\(\rho_{j}^{[i]}=\rho_{j}^{[i]}+\frac{\rho_{j}^{[i]}}{\sum\limits_{k\in S^{ \prime}}\rho_{k}^{[i]}}r\)
15:endfor
16:while\(\exists s\in S^{\prime},\rho_{s}^{[i]}>1\)do\(\triangleright\) fixing the loads that are greater than \(1\)
17:\(r=\rho_{s}^{[i]}-1\)
18:\(\rho_{s}^{[i]}=1\)
19:\(S^{\prime}=S^{\prime}-\{s\}\)
20:for\(j\in S^{\prime}\)do
21:\(\rho_{j}^{[i]}=\rho_{j}^{[i]}+\frac{\rho_{j}^{[i]}}{\sum\limits_{l\in S^{ \prime}}\rho_{l}^{[i]}}r\)
22:endfor
23:endwhile
24:\(x_{i}=\) value of the cost function of problem (5) for \(\rho^{[i]}\)
25:endfor
26: Remove \(i\) with the smallest \(x_{i}\) from \(S\) and \(\hat{\rho}=\rho^{[i]}\)
27:\(l=l-1\)
28:endwhile
29:\(\hat{\rho}^{(k)}=\hat{\rho}\)
30:endfor
31:return\(\hat{\rho}^{(k)},\forall k=1,...,K\)
```
**Algorithm 1** Calculation of \(\hat{\rho}^{(k)},\forall k=1,...,K\)
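The inner redistribution step of Algorithm 1 (lines 8-23) can be sketched in Python as follows. This is our paraphrase, not the authors' implementation: excess load above 1 is re-spread in batches rather than one overloaded server at a time, a minor simplification of the same idea.

```python
def redistribute(rho, i):
    """Turn off server i and spread its load over the remaining active
    servers in proportion to their current loads, clipping at 1 and
    re-spreading any excess (cf. lines 8-23 of Algorithm 1).
    rho: dict {server: load in (0, 1]}; returns a new dict with load i = 0."""
    new = dict(rho)
    active = [j for j in new if j != i and new[j] > 0]
    excess = new[i]
    new[i] = 0.0
    while excess > 1e-12 and active:
        total = sum(new[j] for j in active)
        shares = {j: new[j] + new[j] / total * excess for j in active}
        excess = 0.0
        for j, load in shares.items():
            if load > 1.0:                 # fix loads that are greater than 1
                excess += load - 1.0
                new[j] = 1.0
            else:
                new[j] = load
        active = [j for j in active if new[j] < 1.0]
    return new
```

When the remaining servers have enough capacity, the total load is conserved, as Algorithm 1 requires.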
```
1: update the (predicted) demand values
2: solve problem (3) for \(k=s,...,s+W-1\) and call the solution \((v^{(k)},\rho^{(k)}),\forall k=s,...,s+W-1\)
3:\(\hat{\rho}^{(s)}=\rho^{(s)},\hat{v}^{(s)}=v^{(s)}\)
4:return\(\hat{\rho}^{(s)}\) and \(\hat{v}^{(s)}\)
```
**Algorithm 2** Calculation of \(\hat{\rho}^{(s)},\hat{v}^{(s)}\) using MPC approach with window size \(W\)
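The rolling-horizon loop around Algorithm 2 might look like the following sketch; `predict_demand` and `solve_window` are placeholders (our names) for the prediction scheme and the \(W\)-slot solver (e.g. DCVS), respectively.

```python
def mpc_run(K, W, rho0, predict_demand, solve_window):
    """Rolling-horizon MPC (Algorithm 2): at each slot s, refresh the demand
    predictions for slots s..s+W-1, solve the W-slot problem starting from
    the current distribution, and commit only the first slot's solution.
    predict_demand(s, W) -> list of W predicted demands (refreshed each call);
    solve_window(rho_init, demands) -> list of W utilization vectors."""
    plan, rho = [], rho0
    for s in range(K):
        demands = predict_demand(s, W)        # step 1: update predictions
        window = solve_window(rho, demands)   # step 2: solve the W-slot problem
        rho = window[0]                       # step 3: keep only slot s
        plan.append(rho)
    return plan
```

Committing only the first slot is what lets updated predictions enter at every step while keeping each solved problem of fixed size \(W\).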
## V Evaluation
The system we use for evaluation comes from an experimental data center at McMaster University that is modeled in [21]. The data center has 25 servers located in 5 racks and two cooling facilities. The cooling variables are the chilled water temperature and the total air flow generated by two fans at either end of the racks. The top view of the data center is shown in Fig. 2. The functions \(M\) and \(F\) in problem (1) are simulated based on the model in [21] for inlet temperatures and cooling power consumption. The platform we used was MATLAB R2021b running on a 64-bit system with an i7-1185G7 processor and 8 GB RAM. The function \(M\) is not explicitly given and the inlet temperatures are calculated by solving a set of differential equations, an operation that is computationally intensive (each call takes around 1.4 seconds). The next step is regression on the functions \(F\) and \(M\) to linearize the problem. The function _regress_ in MATLAB is used, and the data points are chosen uniformly at random from the defined ranges for the cooling variables and server utilizations (the server utilizations are continuous). We set \(T_{idle}=35\) and \(T_{busy}=27\) (degrees Celsius), \(V_{LB}=[1300,10]\) and \(V_{UB}=[2300,20]\). Additional details are provided in [17]. So, the matrices \(A\), \(B\) and \(E\) in problem (3) are known. We perform a (small) random perturbation of the matrices each time the algorithms are run. We assume the IT power consumption model follows server consolidation, with coefficients also coming from the model in [21]. In the simulation results presented in [17], we showed that the solution of the linear system approximated by the proposed heuristic works well for the original nonlinear system, so here we focus on the evaluation of the heuristics for the linearized system. We also assume \(w_{k}=w,\forall k=1,...,K\).
We use simple rounding (SR) as the baseline algorithm. In simple rounding, for each time slot \(k\), the \(D_{k}^{*}\) largest values in \(\rho^{*(k)}\) are rounded to one. We also solve the single time slot problem for each of the \(K\) time slots (without switching cost) using the _intlinprog_ function in MATLAB and calculate the cost of the resulting solution for the multiple time slot problem (with switching cost); this scheme is called Sep in the results. For this data center example, we show that although the performance of SR and H2 is very close for the one time slot problem according to [17], for the multiple time slot problem DCVS clearly works better.
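The SR baseline is straightforward to state in code; a minimal sketch (names ours):

```python
def simple_rounding(rho_frac, d_star):
    """SR baseline: round the d_star largest fractional utilizations to 1
    and the rest to 0 (Python's stable sort breaks ties by index)."""
    order = sorted(range(len(rho_frac)), key=lambda i: rho_frac[i], reverse=True)
    on = set(order[:d_star])
    return [1 if i in on else 0 for i in range(len(rho_frac))]
```

Note that SR treats each time slot independently, which is exactly why it can switch a server unnecessarily when the top-\(D_{k}^{*}\) sets of consecutive slots disagree.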
The evaluation includes three parts. In the first part, the sensitivity of the approaches to the value of \(w\) is evaluated. In the second part, the performance is evaluated in the presence of demand fluctuations with different patterns. Finally, the performance of the integrated MPC approach is evaluated for actual and noisy demand values. We present the average and the worst-case ratios (the avg and wcr columns in the results) when the solution is compared with the solution of the relaxed problem (4).
The first results correspond to the sensitivity to \(w\). As \(w\) increases, the switching cost becomes more dominant. Starting from \(w=1\), the value of \(w\) is doubled at each step. The number of intervals \(K\) is equal to 3. The pair of demands \((D_{1},D_{3})\) covers all possible combinations, where the values for the demand are chosen from \(D=\{1,2,\ldots,24\}\). For each combination, \(D_{2}\) is randomly chosen from \(D\). The results are reported in Table I, with an extra column OPTi corresponding to solving the problem using the _intlinprog_ function in MATLAB. Although OPTi has the best performance, in [17] we showed that its running time does not scale well to larger problem sizes. The results show that the performance of DCVS is more resilient to changes in \(w\), and for larger values of \(w\), SR has poor performance with respect to the worst-case ratio. The reason is that for the fractional optimal solution, the largest \(D_{k}^{*}\) utilizations in time slot \(k\) are not necessarily a subset of the largest \(D_{k+1}^{*}\) in time slot \(k+1\), or vice versa. So, when the switching cost is very large, SR incurs large costs due to the switching cost component. For example, we saw this effect for the input \([7,9,9]\) and \(w=1000\), when we used SR for several systems that are perturbed versions of the main system introduced in this section. This effect was seen for roughly one out of 10 of these systems. The utilizations for one of the servers were the same for the first and second time slots, but SR rounded the first to 1 and the second to 0.
Another observation from Table I is that although the average ratio is better for DCVS, for smaller values of \(w\) the worst-case ratio is higher for DCVS than for SR. When DCVS redistributes the load, it considers both the cooling power consumption and the switching cost, so in the process of rounding, some redistributions that generate a lower switching cost may be chosen even though they have greater cooling power consumption. This may be problematic because, once the rounding is complete, other redistributions with lower cooling power consumption may turn out to incur the same switching cost, even though their switching cost was greater during the process. We saw this effect for the demand sequence \([1,3,1]\) and \(w=4\). So, the greediness of DCVS may be problematic in some cases. The results also show that the performance of the Sep scheme is not as good as the others, especially for larger values of \(w\), as a result of Sep ignoring correlations between consecutive time slots that provide opportunities to reduce switching costs.

Fig. 2: The data center’s top view according to [21]
In the second part, we evaluate the performance of the algorithms in the presence of workload fluctuations with different rates. The number of time slots is \(K=9\). For each round of simulations, three values are chosen from \(D\), so there are three possibilities for the demand values. The rate of fluctuations is then controlled by a probability \(p\in\{0.1,0.5,0.9\}\). \(D_{1}\) is chosen randomly from the set of three demand values. For the next demand values \(D_{2}\) to \(D_{9}\), we pick the previous demand value with probability \(1-p\) or pick a different value randomly with probability \(p\). This procedure is repeated 100 times for each value of \(p\) in each round of simulations. We also set \(w=1000\). There are 20 rounds of simulations, as reported in Table II. According to the results in Table II, DCVS has the best performance and is the most resilient to workload fluctuations with different rates. In addition, Table III reports the running times for \(K=20\), where in each round of simulations (each row in Table III) the demand sequences are generated as explained for the previous results in Table II. The results show that the running times for DCVS and SR are close and both are clearly faster than the _intlinprog_ function in MATLAB.
The final results correspond to the integrated MPC approach. The number of time slots is \(K=50\) and the size of the planning window \(W\) is varied between 1 and 10. To calculate the solution over \(K=50\) time slots, the MPC approach uses a total of \(K+W-1\) demand values. So with \(W_{max}=10\), the length of the required demand sequence is \(50+10-1=59\). We consider six scenarios for the generation of demand sequences. There are three cases for the range of demand values. Case 1 corresponds to choosing the demand values from \(D=\{1,...,24\}\) uniformly at random. In Case 2, the next demand \(D_{k+1}\) is chosen from the range \([max(D_{k}-5,1),min(D_{k}+5,24)]\). In Case 3, the range for choosing \(D_{k+1}\) is \([max(D_{k}-2,1),min(D_{k}+2,24)]\). There is also a probability \(p\), the probability of changing the demand value for the next time slot. We chose the two values 0.2 and 0.8 for \(p\). For example, in Case 3 with \(p=0.2\), with probability 0.2 the next demand value \(D_{k+1}\) is different from \(D_{k}\) and is chosen from the range \([max(D_{k}-2,1),min(D_{k}+2,24)]\setminus\{D_{k}\}\). We also set \(w=1000\). The total optimal cooling power consumption for each time slot ranges between 1500 and 2000. The IT power consumption for each working server is around 375 Watts (223.4 + 154.5). The simulation is repeated 10 times for each scenario. For each round, the values in \(\rho^{(0)}\) are chosen randomly from \(\{0,1\}\).
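The demand sequences for the six scenarios can be generated as in the following sketch (function name and structure are ours); Case 1 uses the full range as its neighbourhood, so a change amounts to a uniform redraw.

```python
import random

def demand_sequence(length, case, p, d_min=1, d_max=24, seed=0):
    """Generate a demand sequence for the MPC experiments.
    With probability p the demand changes at the next slot; the new value is
    drawn uniformly from the whole range (case 1) or from a +/- delta
    neighbourhood of the current demand (delta = 5 for case 2, 2 for case 3),
    excluding the current value."""
    rng = random.Random(seed)
    delta = {1: d_max - d_min, 2: 5, 3: 2}[case]
    d = rng.randint(d_min, d_max)
    seq = [d]
    while len(seq) < length:
        if rng.random() < p:
            lo, hi = max(d - delta, d_min), min(d + delta, d_max)
            d = rng.choice([v for v in range(lo, hi + 1) if v != d])
        seq.append(d)
    return seq
```

With `length=59` this yields exactly the \(K+W_{max}-1\) demand values the MPC approach consumes.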
In addition to using the actual demands, we assume that we have a noisy version of the demands coming from demand predictions. For a window of size \(W\) starting from time slot \(s\), we assume that \(D_{s}\) is the actual value. However, for \(D_{s+1},...,D_{s+W-1}\), noise is added to the actual demand. The value of the noise for time slot \(s+k-1\), \(k=2,...,W\), is randomly chosen from the interval \([-\eta\times k,\eta\times k]\), where \(\eta\) is the basic noise value. So, when the window shifts to the next time slot, the added noise is resampled with an updated
\begin{table}
\begin{tabular}{|c c c c c c c|} \hline & \multicolumn{2}{c}{Sep} & \multicolumn{2}{c}{SR} & \multicolumn{2}{c|}{DCVS} \\ \hline & avg & wrc & avg & wrc & avg & wrc \\ \hline
[MISSING_PAGE_POST]
\hline \end{tabular}
\end{table} TABLE II: Performance of the Algorithms in the Presence of Workload Fluctuations with Different Rates
\begin{table}
\begin{tabular}{|c c c c c c c c c|} \hline & \multicolumn{2}{c}{OPT1} & \multicolumn{2}{c}{Sep} & \multicolumn{2}{c}{SR} & \multicolumn{2}{c|}{DCVS} \\ \hline \(w\) & avg & wrc & avg & wrc & avg & wrc & avg & wrc \\ \hline
1 & 1.01 & 1.02 & 1.01 & 1.10 & 1.02 & 1.05 & 1.01 & 1.05 \\
2 & 1.01 & 1.02 & 1.01 & 1.06 & 1.02 & 1.06 & 1.01 & 1.06 \\
4 & 1.01 & 1.02 & 1.01 & 1.10 & 1.02 & 1.04 & 1.01 & 1.07 \\
8 & 1.01 & 1.02 & 1.02 & 1.06 & 1.02 & 1.05 & 1.01 & 1.07 \\
16 & 1.01 & 1.02 & 1.01 & 1.08 & 1.02 & 1.05 & 1.01 & 1.06 \\
32 & 1.01 & 1.02 & 1.01 & 1.07 & 1.01 & 1.05 & 1.01 & 1.08 \\
64 & 1.01 & 1.02 & 1.02 & 1.13 & 1.01 & 1.04 & 1.01 & 1.05 \\
128 & 1.01 & 1.01 & 1.02 & 1.13 & 1.01 & 1.05 & 1.01 & 1.05 \\
512 & 1.01 & 1.01 & 1.04 & 1.24 & 1.01 & 1.04 & 1.01 & 1.04 \\
1024 & 1.00 & 1.01 & 1.11 & 1.55 & 1.01 & 1.15 & 1.00 & 1.04 \\
2048 & 1.00 & 1.01 & 1.21 & 1.87 & 1.01 & 1.16 & 1.01 & 1.02 \\
4096 & 1.00 & 1.01 & 1.36 & 2.04 & 1.01 & 1.09 & 1.00 & 1.03 \\
8192 & 1.00 & 1.00 & 1.49 & 2.63 & 1.01 & 1.16 & 1.00 & 1.02 \\
16384 & 1.00 & 1.00 & 1.56 & 2.81 & 1.00 & 1.18 & 1.00 & 1.01 \\
32768 & 1.00 & 1.00 & 1.63 & 2.96 & 1.01 & 1.23 & 1.00 & 1.00 \\ \hline \end{tabular}
\end{table} TABLE I: Performance of the Algorithms for Different Values of \(w\)
distribution, because \(k\) changes to \(k-1\) for the same time slot (\(s\) changes to \(s+1\)). We apply the floor function to the noise value and add it to the actual demand. If the predicted demand is less than 1 or greater than 24, we clamp it to 1 or 24, respectively. We consider three cases, \(\eta=0\), \(\eta=1\), and \(\eta=3\), where \(\eta=0\) corresponds to the actual values without noise. For each value of \(\eta>0\), we repeat the procedure five times. The values reported in Table IV are the ratios to the cost calculated for the whole interval with actual demand values by using DCVS; this reference cost is very close to optimal.
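The demand-generation and noise-injection procedure described above can be sketched in a few lines (a minimal illustration with our own function names, not the authors' simulator):

```python
import math
import random

def next_demand(d, p, spread, lo=1, hi=24):
    """With probability p the demand changes, drawn uniformly from the
    values within +/- spread of the current demand (clamped to [lo, hi])
    and different from the current one; otherwise it stays the same."""
    if random.random() >= p:
        return d
    lo_k, hi_k = max(d - spread, lo), min(d + spread, hi)
    candidates = [v for v in range(lo_k, hi_k + 1) if v != d]
    return random.choice(candidates)

def noisy_window(demands, s, W, eta, lo=1, hi=24):
    """Predicted demands for the window starting at slot s: the first value
    is exact; slot s+k-1 gets uniform noise in [-eta*k, eta*k], floored
    and clamped to the admissible demand range."""
    out = [demands[s]]
    for k in range(2, W + 1):
        noise = math.floor(random.uniform(-eta * k, eta * k))
        out.append(min(max(demands[s + k - 1] + noise, lo), hi))
    return out
```

With \(\eta=0\) the predicted window reduces to the actual demands, matching the noise-free baseline; resampling the window at every slot reproduces the "updated distribution" behavior described above.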
When \(w\) is very small, the correlation between time slots is weak, so using the MPC approach is not necessary; the problem can be solved by examining a single time slot at a time. When \(w\) is larger, it becomes more important that the sets of working servers in consecutive time slots are correlated. At first, the load in each time slot equals the corresponding demand, and between consecutive time slots the set of servers serving the smaller demand is a subset of the set serving the larger demand. For larger values of \(w\), however, the number of working servers may also exceed the corresponding demand in order to decrease the switching cost. Finally, as \(w\) becomes very large, the working servers for all time slots are the same; in this case, it is only necessary to solve the time slot with the largest demand and use that solution for all other time slots. We expect the MPC approach to be beneficial when there is an actual trade-off between the switching costs and the other components of the cost function.
The results for \(w=1000\) are shown in Table IV. We focus on \(w=1000\) because it clearly exhibits the trade-off between the cost components and thus tests the MPC approach. The results for \(\eta=0\) show that for small window sizes (in particular \(W=1\)) the performance is poor for all scenarios, while good performance is achieved by \(W=4\). We can infer that in the short term the switching cost may be dominant; however, when \(W\) increases, the IT (and cooling) power consumption does not allow extra servers to keep working over several time slots. As an example, for the demand sequence \([15,5,5]\), it may be beneficial to keep more than 5 servers working in the second and third time slots to decrease the switching cost; for the sequence \([15,5,5,5,5,5,15]\), however, it might not be beneficial to raise the load in all time slots with demand 5, because of the increase in IT (and cooling) power consumption. In general, the long-term and short-term solutions may differ. As long as the window size is not too short, the MPC approach is beneficial, as shown for \(W=3\) or \(W=4\). A window size of 4 (or 3) also remains acceptable in the presence of noise. Larger window sizes may not help and, in the presence of noise, may even degrade performance, since the predicted demand is then far from the actual demand. We also see how the performance degrades as the noise increases, so updating information in the MPC approach also helps to improve performance.
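Abstractly, the receding-horizon loop evaluated here can be sketched as follows (our own simplification; `solve_window` stands in for the linearized heuristic of the previous sections):

```python
def mpc_schedule(demands, K, W, solve_window, predict):
    """Receding-horizon control: at each slot s, solve over the next W
    (predicted) demands, commit only the first decision, then shift the
    window one slot forward and repeat with updated information."""
    decisions, committed = [], None
    for s in range(K):
        window = predict(demands, s, W)          # W predicted demand values
        plan = solve_window(window, committed)   # decisions for the whole window
        committed = plan[0]                      # keep only the first slot
        decisions.append(committed)
    return decisions
```

Committing only the first decision of each window is what lets the controller exploit updated (less noisy) predictions as the window slides, which is why refreshing information improves performance in the noisy scenarios.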
## VI Conclusion
In this work, we formulated a nonlinear optimization problem for data centers that considers the switching costs in
\begin{table}
\begin{tabular}{|l|r r r r r r|} \hline \multicolumn{7}{|c|}{Case 1 with \(p=0.2\)} \\ \hline & \multicolumn{2}{c|}{\(\eta\)=0} & \multicolumn{2}{c|}{\(\eta\)=1} & \multicolumn{2}{c|}{\(\eta\)=3} \\ \hline \(W\) & avg & wrc & avg & wrc & avg & wrc \\ \hline
[MISSING_PAGE_POST]
\hline \multicolumn{7}{|c|}{Case 2 with \(p=0.8\)} \\ \hline
1 & 1.25 & 1.69 & 1.25 & 1.69 \\
2 & 1.06 & 1.15 & 1.06 & 1.15 & 1.07 & 1.15 \\
3 & 1.02 & 1.04 & 1.03 & 1.06 & 1.05 & 1.09 \\
4 & 1.02 & 1.03 & 1.03 & 1.04 & 1.04 & 1.07 \\
5 & 1.02 & 1.04 & 1.03 & 1.05 & 1.04 & 1.08 \\
8 & 1.02 & 1.03 & 1.03 & 1.05 & 1.04 & 1.09 \\
10 & 1.02 & 1.04 & 1.03 & 1.05 & 1.04 & 1.07 \\ \hline \multicolumn{7}{|c|}{Case 3 with \(p=0.2\)} \\ \hline
1 & 1.41 & 1.98 & 1.41 & 1.98 \\
2 & 1.04 & 1.14 & 1.04 & 1.18 & 1.05 & 1.25 \\
3 & 1.00 & 1.01 & 1.01 & 1.02 & 1.01 & 1.04 \\
4 & 1.00 & 1.01 & 1.01 & 1.03 & 1.02 & 1.07 \\
5 & 1.00 & 1.01 & 1.01 & 1.02 & 1.02 & 1.08 \\
8 & 1.00 & 1.01 & 1.01 & 1.04 & 1.02 & 1.09 \\
10 & 1.00 & 1.01 & 1.01 & 1.02 & 1.03 & 1.12 \\ \hline \multicolumn{7}{|c|}{Case 3 with \(p=0.8\)} \\ \hline
1 & 1.31 & 1.52 & 1.31 & 1.52 & 1.31 & 1.52 \\
2 & 1.04 & 1.08 & 1.05 & 1.10 & 1.05 & 1.10 \\
3 & 1.00 & 1.02 & 1.03 & 1.06 & 1.04 & 1.08 \\
4 & 1.01 & 1.03 & 1.03 & 1.06 & 1.04 & 1.08 \\
5 & 1.01 & 1.03 & 1.02 & 1.05 & 1.04 & 1.09 \\
8 & 1.01 & 1.03 & 1.02 & 1.05 & 1.03 & 1.07 \\
10 & 1.01 & 1.03 & 1.03 & 1.05 & 1.04 & 1.06 \\ \hline \end{tabular}
\end{table} TABLE IV: Performance of the Integrated MPC Approach for Different Window Sizes and Noise Levels
addition to cooling and IT power consumption. Workload transitions among the servers due to dynamic demands are not beneficial in terms of the switching costs, but they may decrease the power consumption. Next, we used a linearization approach proposed in [17] to approximate the solution: the steps were the linearization of the problem and the development of a heuristic to approximate the solution of the linear problem. Finally, we proposed an integrated MPC approach based on our heuristics, which helps to decrease the size of the problem and to incorporate demand predictions. The simulation results show that the proposed schemes efficiently find near-optimal solutions. We also showed that using an appropriate window size in the MPC approach is important and beneficial. As future work, integrating transient thermal models with dynamic demands would be of interest. Modifying the proposed heuristic to address further settings, for example heterogeneous data centers, is also a possibility. A procedure should also be defined for determining appropriate weights for the switching costs, and finding a suitable window size for the MPC approach needs more investigation.
|
2307.05878 | Exponential relaxation data analysis by parametrized regularization of
severely ill-posed Fredholm integral equations of the first kind | This paper presents a novel approach to construct regularizing operators for
severely ill-posed Fredholm integral equations of the first kind by introducing
parametrized discretization. The optimal values of discretization and
regularization parameters are computed simultaneously by solving a minimization
problem formulated based on a regularization parameter search criterion. The
effectiveness of the proposed approach is demonstrated through examples of
noisy Laplace transform inversions and the deconvolution of nuclear magnetic
resonance relaxation data. | Vladimir V Kryzhniy | 2023-07-12T02:44:55Z | http://arxiv.org/abs/2307.05878v3 | Exponential relaxation data analysis by parametrized regularization of severely ill-posed Fredholm integral equations of the first kind
###### Abstract
This paper presents a novel approach to construct regularizing operators for severely ill-posed Fredholm integral equations of the first kind by introducing parametrized discretization. The optimal values of discretization and regularization parameters are computed simultaneously by solving a minimization problem formulated based on a regularization parameter search criterion. The effectiveness of the proposed approach is demonstrated through examples of noisy Laplace transform inversions and the deconvolution of nuclear magnetic resonance relaxation data.
## 1 Introduction
The analysis of exponential relaxation data poses a significant challenge in various fields such as experimental physics, chemistry, electrochemistry, and biophysics [2]. This problem involves determining the distribution function \(f(t)\) from the experimentally measured function \(g(s)\) by performing the inversion of the Laplace transformation (1):
\[g(s)=\int_{0}^{\infty}\mathrm{e}^{-st}f(t)\mathrm{d}t, \tag{1}\]
A similar problem arises in nuclear magnetic resonance relaxometry (NMR) [8], where the desired distribution is obtained by deconvolving the following integral equation: 1
Footnote 1: Although the equation (2) can be written as a Laplace transform with the help of substitution \(\tau=t^{-1}\), a direct regularized solution of equation (2) is advantageous when \(g(s)\) is noisy.
\[g(s)=\int_{0}^{\infty}\mathrm{e}^{-s/t}f(t)\mathrm{d}t. \tag{2}\]
These equations are inherently ill-posed and extremely sensitive to small perturbations in the right-hand side \(g(s)\)[3]. Due to the smoothing property of the exponential kernel, equations (1) and (2) are severely ill-posed problems [3].
Following [5], let us illustrate why these equations are severely ill-posed. By discretizing the integral equation (1) or (2) using an appropriate quadrature formula, we obtain the corresponding matrix equation:
\[Kf=g, \tag{3}\]
where \(K\) is an \(m\times n\) matrix, \(f\) and \(g\) are column vectors of length \(n\) and \(m\), respectively.
By representing matrix \(K\) in equation (3) using singular value decomposition (SVD):
\[K=UDV^{T},\]
where \(U\) and \(V\) are orthogonal matrices, \(D=\mbox{diag}(s_{i})\), and \(s_{1}\geq s_{2}\geq\cdots\geq 0\), we obtain the formal solution of Eq. (3):
\[f=K^{-1}g=\sum_{i=1}^{n}s_{i}^{-1}(u_{i}^{T}g)v_{i}. \tag{4}\]
Due to the division by decreasing singular values \(s_{i}\) in equation (4), the formal solution is highly oscillatory and meaningless. The truncated SVD solution regularizes the problem by limiting the number of terms in equation (4) to a certain number \(n_{0}\):
\[f_{r}=K_{r}^{-1}g=\sum_{i=1}^{n_{0}}s_{i}^{-1}(u_{i}^{T}g)v_{i}, \tag{5}\]
where \(f_{r}\) and \(K_{r}^{-1}\) represent the regularized solution and regularizing operator, respectively.
The number of terms in equation (5) depends on the noise level in the data \(\epsilon\) and the rate of decrease of singular values \(s_{i}\)[3, 5]. For an exponential kernel, the singular values decrease rapidly, resulting in only a few terms in the sum of equation (5). Consequently, the regularized solution \(f_{r}\) is represented as a linear combination of a small number of vectors \(v_{i}\):
\[f_{r}=\frac{\beta_{1}}{s_{1}}v_{1}+\frac{\beta_{2}}{s_{2}}v_{2}+\ldots, \tag{6}\]
where \(\beta=U^{T}g\).
It is evident that such a computed regularized solution is generally inaccurate. This conclusion holds true for all known regularization techniques [5].
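The severity described above is easy to reproduce numerically. The sketch below (our own discretization choices, not taken from the paper: midpoint-rule grids on a small interval) builds a discretized exponential kernel, exposes the rapid collapse of its singular values, and implements the truncated-SVD solution of equation (5):

```python
import numpy as np

# Midpoint-rule discretization of g(s) = integral of e^{-st} f(t) dt on [0.1, 2].
n = 40
t = np.linspace(0.1, 2.0, n)
dt = t[1] - t[0]
s = np.linspace(0.1, 2.0, n)
K = np.exp(-np.outer(s, t)) * dt

def tsvd_solve(K, g, n0):
    """Truncated-SVD regularized solution keeping only the n0 largest terms."""
    U, sv, Vt = np.linalg.svd(K, full_matrices=False)
    coeff = (U.T @ g)[:n0] / sv[:n0]
    return Vt[:n0].T @ coeff

# The singular values collapse after a handful of terms (severe ill-posedness),
# so only a few terms of equation (5) survive in the presence of noise.
sv = np.linalg.svd(K, compute_uv=False)
```

Inspecting `sv` shows the spectrum dropping by many orders of magnitude within the first ten indices, which is exactly why the sum in (5) must be cut so short.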
## 2 Towards regularization of severely ill-posed problems
All terms in equation (6) depend on the SVD of the kernel matrix \(K\), which, in turn, depends on the quadrature formula, selected nodes, and the number of points. Although we cannot change the fact that the singular values of the exponential kernel rapidly tend to zero, we can anticipate that a few terms in equation (6) will yield more accurate results with carefully tailored discretization.
Thus, for obtaining a reasonable solution of matrix equation (3), we need to find an appropriate discretization and value of the regularization parameter.
This idea of introducing flexible discretization seems very natural. The most robust quadrature programs were developed a long time ago [6], and they are based on fine-tuned partitioning of the interval of integration. Nevertheless, seemingly all modern recommendations for discretizing the integral equations are based on the standard quadrature formulae, and regularization methods do not include discretization into consideration [3]. That is, currently, discretization and regularization are two disjointed steps for solving integral equations.
The reason for the disjointed treatment of discretization and regularization may be the absence of a theory for constructing regularizing operators that include additional parameters along with the regularization parameter [1]. At present, we can only rely on a precedent.
For inverting real Laplace transforms, the author has derived an integral form of the regularizing operator that includes two additional parameters \(a,b\) along with the regularization parameter \(r\)[9]:
\[f_{r}(t)=\int_{0}^{\infty}g(u)\Pi(r,a,b;tu)\mathrm{d}u, \tag{7}\]
where \(f_{r}(t)\) represents the regularized solution, and \(f_{r}(t)\to f(t)\) as \(r\rightarrow\infty\). The exact formula for the kernel \(\Pi\) can be found in the referenced article or in software available on GitHub [11].
The accuracy of the computed regularized inverse \(f_{r}(t)\) significantly depends not only on the optimal value of the regularization parameter but also on the appropriate values of additional method parameters. The author has proposed a heuristic criterion for determining acceptable values of all method parameters by minimizing the difference between two closely related regularized solutions \(f_{r}^{(1)}\) and \(f_{r}^{(2)}\)[10]:
\[\min_{r,a,b}\sum_{i=1}^{n}\left\{f_{r}^{(1)}(a,b,r;t_{i})-f_{r}^{(2)}(a,b,r;t_{i})\right\}^{2}. \tag{8}\]
In cases where the additional parameters \(a\) and \(b\) are fixed, the comparison of two regularized solutions (8) becomes a criterion for finding the regularization parameter.
## 3 Parametrized regularization
Calculation of the regularized inverse \(f_{r}(t)\) with the help of formulae (7), (8) suggests that using semilogarithmic coordinates is more suitable for discretizing the integral (1) [11]. In this case, we have three discretization parameters \(n,t_{\min},t_{\max}\), representing the number of points \(n\) logarithmically distributed on an integration interval \((t_{\min},t_{\max})\). As a result, the dimension and elements of the kernel matrix \(K\) in equation (3) will depend on discretization parameters.
Consequently, for computing the regularized solution of equation (3) using Tikhonov regularization with a stabilizing matrix \(\Omega\):
\[\min\left\{||Kf-g||^{2}+\alpha||\Omega f||^{2}\right\}, \tag{9}\]
we need to compute the optimal values of the regularization parameter \(\alpha\) as well as the optimal values of discretization parameters \(n,t_{\min},t_{\max}\).
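For a fixed discretization, the Tikhonov problem (9) reduces to its normal equations; a minimal NumPy sketch (our own, with \(L_{2}\) taken as the standard three-point second-difference stencil) reads:

```python
import numpy as np

def tikhonov_solve(K, g, alpha, Omega):
    """Minimizer of ||K f - g||^2 + alpha ||Omega f||^2 via the normal
    equations (K^T K + alpha Omega^T Omega) f = K^T g."""
    A = K.T @ K + alpha * (Omega.T @ Omega)
    return np.linalg.solve(A, K.T @ g)

def second_difference(n):
    """(n-2) x n three-point stencil approximating the second derivative
    on a uniform grid: a common choice for the stabilizer L_2."""
    L = np.zeros((n - 2, n))
    for i in range(n - 2):
        L[i, i:i + 3] = [1.0, -2.0, 1.0]
    return L
```

With \(\Omega=I\) this is standard (zeroth-order) Tikhonov regularization; with \(\Omega=L_{2}\) the penalty leaves constants and linear trends untouched and favors smooth solutions.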
From results described in the previous section, we can conjecture that the quasi-optimal values of all method parameters can be found by solving a minimization problem formulated based on a criterion for finding the regularization parameter. Then, for the chosen parametrized discretization, we encounter a four-dimensional minimization problem involving the regularization and discretization parameters.
In particular, the generalized cross-validation criterion [7] is convenient in the case of a variable number of integration points. In this case, the quasi-optimal values of discretization and regularization parameters are obtained by solving the following minimization problem [4]:
\[\min_{\alpha,n,t_{\min},t_{\max}}\frac{||Kf_{r}-g||^{2}}{\text{trace}(I-KK^{ \sharp})^{2}}, \tag{10}\]
where the regularized solution \(f_{r}=K^{\sharp}g\), and \(I\) is the identity matrix.
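For a given discretization, the inner GCV score of (10) can be evaluated directly. The sketch below (our own, specialized to \(\Omega=I\)) shows the quantity that would be minimized over \(\alpha\) and, in an outer loop, over the discretization parameters \(n,t_{\min},t_{\max}\):

```python
import numpy as np

def gcv_score(K, g, alpha):
    """GCV score ||K f_r - g||^2 / trace(I - K K^#)^2 for Tikhonov
    regularization with Omega = I, where K^# = (K^T K + alpha I)^{-1} K^T."""
    m, n = K.shape
    Ksharp = np.linalg.solve(K.T @ K + alpha * np.eye(n), K.T)
    residual = K @ (Ksharp @ g) - g
    denom = np.trace(np.eye(m) - K @ Ksharp) ** 2
    return float(residual @ residual) / denom
```

Scanning a log-spaced grid of \(\alpha\) values and repeating the scan for each candidate discretization yields the quasi-optimal parameter set of (10).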
The minimization problem for finding the quasi-optimal values of discretization and regularization parameters can also be formulated based on other criteria for searching the regularization parameter [1, 4] and/or other approaches for parameterizing discretization.
As we will see in the next section, the quasi-optimal values of discretization and regularization parameters computed with the help of (10) allow us to obtain quite satisfactory results and to extract significantly more information from the data at hand than conventional regularization techniques.
## 4 Numerical Examples
To illustrate the effectiveness of the proposed approach, we present a few examples of inverting Laplace transformations (1) and solving problems of nuclear magnetic resonance relaxometry (2).
All examples were computed by using Tikhonov regularization (9) with stabilizing matrices \(\Omega=I\) and \(\Omega=L_{2}\), where \(I\) is the identity matrix and \(L_{2}\) is
a matrix that approximates the second derivative of the solution. It is evident that matrix \(L_{2}\) also depends on discretization parameters.
As a first example, let's consider the restoration of the pre-image function \(f(t)=\frac{1}{2t\sqrt{2\pi t}}\exp(-1/8t)\) from its Laplace transform \(g(s)=\exp(-\sqrt{s/2})\) contaminated by adding normally distributed noise with \(\sigma=0.001\). As can be seen from the graph in Figure 1, we obtain a highly satisfactory solution for the selected level of noise.
The rest of the examples were constructed using a sum of log-normal distributions:
\[f(t)=\sum_{i=1}^{n}a_{i}f(t;S_{i},\theta_{i}), \tag{11}\]
where
\[f(t;S,\theta)=\frac{1}{t\sqrt{2\pi/S}}\exp\left(-\frac{S\left(\log t-\log\theta\right)^{2}}{2}\right), \tag{12}\]
For \(S>1\), the log-normal distribution reaches its maximum at \(t\approx\theta\), and as \(S\rightarrow\infty\), \(f(t;S,\theta)\) tends to \(\delta(t-\theta)\).
The image \(g(s)\) was computed numerically, and the experimental data have been simulated by adding normally distributed noise with standard deviation \(\sigma=const\).
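The synthetic data used in the following examples can be generated as below (a sketch using the sharpness convention of Eq. (12), in which larger \(S\) means a narrower peak; grids and function names are ours):

```python
import numpy as np

def lognormal(t, S, theta):
    """One component of Eq. (12): a unit-area peak centered near theta,
    with S acting as a sharpness parameter (large S gives a narrow spike)."""
    return np.exp(-S * (np.log(t) - np.log(theta)) ** 2 / 2) / (t * np.sqrt(2 * np.pi / S))

def simulate(a, S, theta, s_grid, sigma, rng):
    """g(s) = integral of e^{-st} f(t) dt for f a sum of log-normal peaks,
    contaminated with additive Gaussian noise of standard deviation sigma."""
    t = np.logspace(-3, 3, 4000)
    f = sum(ai * lognormal(t, Si, thi) for ai, Si, thi in zip(a, S, theta))
    g = np.array([np.trapz(np.exp(-sv * t) * f, t) for sv in s_grid])
    return g + rng.normal(0.0, sigma, size=g.shape)
```

The quadrature here is plain trapezoidal integration on a logarithmic grid, which resolves peaks spread over several decades of \(t\) without an excessive number of points.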
The graph in Figure 2 shows the restoration of a three-peak function from its Laplace transform. As can be seen, all peaks are resolved, and smoother peaks are restored more accurately. Moreover, the two regularized solutions are close to each other.
The graph in Figure 3 shows the restoration of a sum of two slightly overlapping log-normal distributions. The results shown in the figure are quite satisfactory.
Moving on to the application of the method in nuclear magnetic resonance relaxometry, Figure 4 demonstrates the deconvolution of NMR relaxation data. We simulate a three-peak distribution as a sum (11) with parameters \(a=[1,4,10]\), \(S=[10,13,15]\), \(\theta=[0.1,3,10]\), and \(\sigma=0.01\). The results of deconvolving equation (2) are shown in Figure 4, indicating a successful restoration of all peaks. As can be seen, the kernel of the integral equation (2) suppresses information about the function \(f(t)\) for small \(t\); consequently, the results for small \(t\) are less reliable.
The graph in Figure 5 shows the results of the deconvolution of NMR relaxation data in the case where the components of a sum of distributions are highly overlapping. As can be seen from the figure, in this case the restored distribution smooths out fine details of the exact distribution. Note that the closeness of the solutions \(f_{r}^{(I)}\) and \(f_{r}^{(L)}\) does not necessarily imply closeness to the exact solution.
Finally, Figure 6 showcases the deconvolution of experimentally measured NMR relaxation data. The noise-like residuals depicted in the lower frame of the figure confirm that the software performs as expected, providing a good fit to the data. The disagreement between the two regularized solutions \(f_{r}^{(I)}\) and \(f_{r}^{(L)}\) for small \(t\) indicates that the results are not reliable in that region.
Additional testing results can be found in Jupyter notebooks available on Google Drive [12].
Figure 3: Restoration of a two-peak distribution by inverse Laplace transformation. \(\sigma=0.01\), \(a=[1,6]\), \(\theta=[0.2,1]\), \(S=[10,5]\).

## 5 Conclusion

In this paper, through examples of the inversion of noisy Laplace transforms and NMR relaxation data, we have demonstrated that it is essential to introduce an appropriate parametrized discretization into the process of constructing regularizing operators for solving severely ill-posed problems. The optimal values for the regularization and discretization parameters can be computed simultaneously by solving a minimization problem formulated based on a regularization parameter search criterion.
Hence, it is reasonable to expect that other Fredholm integral equations of the first kind can be resolved more accurately by introducing an appropriate parametrized discretization.
**Acknowledgments**
The author is grateful to his wife, Helen Kryzhnyaya, for supporting his decades-long voluntary research on the inversion of real-valued Laplace transforms.
The author is also thankful to Green Imaging Technology, Inc. for providing experimental data for testing purposes and granting permission to use it in this paper.
|
2308.02992 | Binary Code Similarity Detection | Binary code similarity detection is to detect the similarity of code at
binary (assembly) level without source code. Existing works have their
limitations when dealing with mutated binary code generated by different
compiling options. In this paper, we propose a novel approach to addressing
this problem. By inspecting the binary code, we found that generally, within a
function, some instructions aim to calculate (prepare) values for other
instructions. The latter instructions are defined by us as key instructions.
Currently, we define four categories of key instructions: calling subfunctions,
comparing instruction, returning instruction, and memory-store instruction.
Thus if we symbolically execute similar binary codes, symbolic values at these
key instructions are expected to be similar. As such, we implement a prototype
tool, which has three steps. First, it symbolically executes binary code;
Second, it extracts symbolic values at defined key instructions into a graph;
Last, it compares the symbolic graph similarity. In our implementation, we also
address some problems, including path explosion and loop handling. | Zian Liu | 2023-08-06T02:24:42Z | http://arxiv.org/abs/2308.02992v1 | # Binary Code Similarity Detection
###### Abstract
Binary code similarity detection is to detect the similarity of code at binary (assembly) level without source code. Existing works have their limitations when dealing with mutated binary code generated by different compiling options. In this paper, we propose a novel approach to addressing this problem. By inspecting the binary code, we found that generally, within a function, some instructions aim to calculate (prepare) values for other instructions. The latter instructions are defined by us as _key_ instructions. Currently, we define four categories of key instructions: calling subfunctions, comparing instruction, returning instruction, and memory-store instruction. Thus if we symbolically execute similar binary codes, symbolic values at these key instructions are expected to be similar. As such, we implement a prototype tool, which has three steps. _First_, it symbolically executes binary code; _Second_, it extracts symbolic values at defined key instructions into a graph; _Last_, it compares the symbolic graph similarity. In our implementation, we also address some problems, including path explosion and loop handling.
Binary code, code analysis, symbolic execution
## I Introduction
My PhD research aims at analyzing binary code to improve binary code security. This includes similarity detection, code diffing, bug finding, etc., at the binary level. I plan to achieve this goal by implementing novel approaches in a binary-code similarity-detection tool. At present, the tool's framework is finished, and I am fine-tuning the tool before evaluating it on a large-scale dataset.
With the rapid development of the software industry, binary code similarity detection is playing an increasingly critical role. Software producers release their products as binary code mainly to protect their source code. Binary code similarity detection can be used in many fields such as bug search, malware detection, malware clustering, malware image, patch generation and analysis, porting information, software theft detection [1], etc. However, binary code similarity detection is challenging, mainly because: 1) the same source code can be compiled for different architectures, resulting in different binary code using different instruction sets; 2) during the compiling process, different compilers and compiling options may produce significantly different binary code for the same source code; and 3) producers can use code-protection tools to obfuscate the code. There are some existing solutions to this problem, which can be divided into three categories under different standards. In Section II, we mainly introduce these works according to their comparison type: syntactic, semantic, and structural similarity. A typical syntactic approach represents the code by counting the occurrences of specific strings in the code. Alternatively, one can use a machine-learning-based method to learn a vectorized representation of one or more sentences in the code. A semantic approach evaluates whether two pieces of code have similar functionality or impact. A structural approach compares graph features such as the control flow graph and the call graph. However, these solutions still have limitations. For example, syntactic similarity focuses on the 'appearance' of binary code and thus sometimes fails to capture semantic similarity. Semantic similarity, however, is not scalable and may not cover all of the code or all situations. Moreover, Ren et al. pointed out that existing binary code similarity detection overlooked the impact of compiling options [2].
They claimed that the testing sets used in current work lack sufficient compiling-option mutation. They tested several state-of-the-art binary similarity detection tools with their crafted dataset, and the detection results all turned out to be less accurate. Structural similarity is vulnerable in cross-architecture comparison, since such approaches assume that basic blocks retain their features and their relationships with other basic blocks under all circumstances. However, by utilizing Ren et al.'s method, one can tune the compiling configuration to reduce structural similarity. Therefore, our research question is how to detect binary code similarity even when these compiling options are deployed.
By analyzing binary code, we observed that some instructions calculate values for other instructions. The latter instructions are more important for deciding similarity, while the former should be given less consideration. This is because the symbolic values of the former propagate to the latter instructions, so inspecting only the symbolic values of the latter preserves the main symbolic meaning of the function. We thus define these important instructions as Key Instructions. In our tool, they are translated to an Intermediate Representation (IR) and connected based on control flow to form a Key IR graph. We implemented our idea in a prototype tool, which is able to lift binary code of different architectures into Key IR graphs and compare their similarity. The tool consists of three phases: 1) symbolic execution, 2) Key IR graph construction, and 3) Key IR graph similarity detection. In summary, our contributions are: 1) We proposed a Key IR graph to abstract the semantics of binary code. 2) We implemented a prototype tool, including the
symbolic execution engine, the IR lifting engine, and the comparison engine. The tool currently supports two architectures, i.e., x86-64 and ARM. 3) We proposed a novel method to compare the similarity of Key IR graphs. We have finished the implementation and are now working on the evaluation.
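To make the notion of key instructions concrete, a toy classifier over simplified disassembly might look as follows (mnemonics and the operand format are illustrative x86-64 simplifications of ours, not the tool's actual implementation):

```python
# Map mnemonics to the four key-instruction categories defined above.
KEY_KINDS = {
    "call": "subfunction-call",
    "cmp": "compare",
    "test": "compare",
    "ret": "return",
}

def classify(mnemonic, operands):
    """Return the key-instruction category, or None for value-preparing
    instructions whose symbolic effect merely propagates to key ones."""
    if mnemonic in KEY_KINDS:
        return KEY_KINDS[mnemonic]
    # A store is a move whose destination operand is a memory reference,
    # written here as a bracketed expression like "[rbp-0x8]".
    if mnemonic == "mov" and operands and operands[0].startswith("["):
        return "memory-store"
    return None
```

Instructions that classify to `None` (arithmetic, loads, register moves) only prepare values, so their symbolic effect is captured at the key instructions they feed into.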
## II Background
As [1] mentioned in their survey, existing works on binary code similarity can be classified according to different standards. In terms of comparison granularity, these works can be divided into instruction level, basic block level, function level, and whole-program level. In binary similarity comparison, one input is used as a query to compare against a target; thus, according to the input-target numbers, works can be classified into one-to-one, one-to-many, and many-to-many comparisons. According to the supported architectures, they can be divided into single-architecture and cross-architecture: single-architecture approaches support only one assembly instruction set, while cross-architecture approaches support more than one. According to the analysis type, they can be categorized into static analysis, dynamic analysis, and hybrid analysis. Static analysis analyzes the code without running it, while dynamic analysis executes the code with some input. The advantages of static analysis are scalability and code coverage; dynamic analysis is resource-consuming but helpful for gaining semantic information. According to the comparison type, they can be classified into syntactic, structural, and semantic similarity comparison. Syntactic similarity captures instruction-representation similarity, while structural similarity focuses on similarity of graph representations of the binary code. Semantic similarity highlights the similarity of the code's effects.
We now introduce the techniques used for the different comparison types. For syntactic similarity, common strategies include hashing, embedding, and alignment. [3, 4, 5] all use hashing techniques that map variable-length instruction sequences to fixed-length hash values; equal hash values imply syntactic similarity. [6, 7, 8, 9] generate an embedding from instruction sequences. [10, 11, 12, 13] automatically learn embeddings for each instruction and use them to produce basic-block-level or function-level embeddings. [14, 15, 16] align two sequences and decide their similarity. For structural similarity, common methods are optimization-based matching, k-subgraph matching, path similarity, and graph embedding. [17, 18, 19, 3] transform the problem into finding the mapping between two CFGs with minimum cost (an optimization solution). [20, 21, 22, 18] divide the graph into k-subgraphs, each with k connected nodes; the number of matched subgraphs indicates the degree of similarity. [23, 24, 16] determine similarity based on paths. [25, 26, 19] extract features from graphs into feature vectors and determine the vector similarity. For semantic similarity, general techniques are instruction classification, input-output pairs, symbolic execution, theorem proving, and semantic hashes. [20, 22, 24] classify instructions based on their semantics, in terms of arithmetic, logic, or data-transfer purpose. [27, 28, 29, 30, 5, 31] check whether two pieces of code produce the same outputs for the same inputs. [28, 23, 27, 28, 32, 33] use symbolic formulas to represent the binary code. Given the symbolic formulas, [32] uses a theorem prover to check whether two different symbolic formulas have similar outputs, while [27, 34, 35] use symbolic hashes as an alternative to the theorem prover. [36, 37] determine the edit distance of the tree/graph of the symbolic formula.
## III Methodology
The comparison granularity of our tool is the function level. Given two functions from two different input binaries, our tool works in three phases: 1. symbolic execution, 2. Key IR graph construction, and 3. Key IR graph similarity comparison. We first symbolically execute each function several times to collect the possible symbolic values of its instructions. Then, for the instructions most important for comparison, we lift them together with their symbolic values into Key IRs, which are combined to produce the Key IR graph. Lastly, we compare the similarity of the two Key IR graphs.
### _Symbolic execution_
In this phase, we randomly select paths and symbolically execute the function to infer the symbolic value of each operand in each instruction. As shown in Fig. 1 and Fig. 2, a function may contain many paths, so exhaustive symbolic execution suffers from the path explosion problem. We therefore symbolically execute the function many times. In each run, we randomly select a main path using a depth-first traversal and symbolically execute it. However, if we only executed one main path per run, we might still miss many instructions even after many runs (e.g., we may miss instructions 3, 4, 5 in Fig. 1, and instructions 2, 4, 5 in Fig. 2). To cover all the instructions in each run, we therefore also select additional paths beyond the main path. Within a run, we do not propagate into instructions we have already encountered, which avoids the path explosion problem.

Fig. 1: 1st run

Fig. 2: 2nd run
Since we randomly select paths in each run, some instructions demonstrate different symbolic values across runs. In fact, not every instruction can have multiple states: only the instructions at the joint of several paths can hold different symbolic values (e.g., instruction 6 in Figs. 1 and 2); all the other instructions hold a single possible symbolic value (e.g., every other instruction in Figs. 1 and 2). The fraction of possible symbolic values revealed depends on the number of runs: generally, the more runs, the better the chance of revealing all the possible values.
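The per-run path selection described above can be sketched as follows. This is an illustrative sketch, not the tool's actual API: the CFG encoding (a dict mapping each basic block to its successors) and all names are assumptions made for this example.

```python
import random

def select_paths(cfg, entry, rng=random):
    """One symbolic-execution run: pick a random main path, then extra
    paths until every block is covered.  Blocks already encountered are
    never re-entered, avoiding path explosion within a run.

    cfg: {block: [successor blocks]}; entry: the function's entry block.
    """
    visited = set()

    def walk(block):
        # Follow a random chain of unvisited successors.
        path = []
        while block is not None and block not in visited:
            visited.add(block)
            path.append(block)
            succs = cfg.get(block, [])
            block = rng.choice(succs) if succs else None
        return path

    paths = [walk(entry)]                       # the random main path
    while True:                                 # cover leftover blocks
        uncovered = [b for b in cfg if b not in visited]
        if not uncovered:
            break
        paths.append(walk(uncovered[0]))
    return paths
```

Because `walk` marks blocks as visited, each run touches every block exactly once, matching the coverage goal stated above.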
### _Key IR graph construction_
As we observed in binary code, many instructions make preparations for certain other instructions, such as subfunction calls or comparing instructions, as shown in Fig. 3. In Fig. 3(a), parameters are first loaded into registers _rcx_ and _edx_ before calling the EVP_CIPHER_CTX_ctrl subfunction. In Fig. 3(b), the value in register _rbp_ is loaded from register _r13_ and the value in register _r14_ is subtracted from it; the value in register _rax_ is loaded from a memory address before being compared with _rbp_. In Fig. 3(c), register _eax_ is set to 0FFFFFFFF before the stack-balancing instructions (i.e., pop instructions) and the return instruction. In Fig. 3(d), a value is loaded from memory into register _rax_, added to the value in register _r14_, and then written to another memory location [rsp+1D8h+var_170]. In all these examples, some instructions prepare values for later instructions. We call these preparing instructions non-Key Instructions and the other instructions Key Instructions. From our observation, during execution the non-Key Instructions propagate their values into the Key Instructions, and the Key Instructions better describe the behavior of the binary code: similar binary code should contain similar Key Instructions with similar values (e.g., calling the same subfunction with similar parameters, or comparing similar values against similar values). We define two types of Key Instructions: control-flow impacting and control-flow irrelevant. The control-flow impacting type includes three categories: 1. calling a subfunction, 2. comparing instructions, and 3. returning instructions. We select them because calling a subfunction leads the execution flow into other parts of the binary code, the result of a comparing instruction decides which branch is taken next, and a returning instruction leads the execution flow back to the caller function.
The control-flow irrelevant type refers to memory-writing instructions, since they do not affect the control flow.
Using the results of the first module, symbolic execution, we translate the Key Instructions into Key IRs annotated with their symbolic values. Each Key Instruction corresponds to one Key IR node. We then connect the Key IRs according to their control flow to form the Key IR graph. It is important to note that some Key IR nodes contain only one possible symbolic value, while others may contain multiple symbolic values because they sit at the joint of multiple paths.
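A minimal sketch of this data structure follows. It assumes a node holds a set of symbolic values (possibly several, when the instruction sits at a path joint) plus its control-flow successors; the class and function names are hypothetical, not the tool's actual API.

```python
class KeyIRNode:
    """One Key Instruction lifted to IR.  sym_values may hold several
    symbolic values when the instruction is the joint of multiple paths."""
    def __init__(self, addr, kind):
        self.addr = addr          # instruction address
        self.kind = kind          # 'call' | 'cmp' | 'ret' | 'store'
        self.sym_values = set()   # symbolic values collected across runs
        self.succs = []           # control-flow successor nodes

def build_key_ir_graph(key_insns, flow):
    """key_insns: {addr: kind}; flow: [(src_addr, dst_addr)] restricted
    to Key Instructions.  Returns the graph as {addr: KeyIRNode}."""
    nodes = {a: KeyIRNode(a, k) for a, k in key_insns.items()}
    for src, dst in flow:
        nodes[src].succs.append(nodes[dst])
    return nodes
```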
### _Key IR graph comparison_
Because of cross-architecture compilation, different compiling options, and other factors, the Key IR graph may contain reordered nodes, newly inserted nodes, and duplicated nodes. It may also lose nodes or undergo other mutations caused by the compilation options described in [2]. We therefore propose a fuzzy method to match two Key IR graphs: to compare their similarity, we count how many similar nodes the two graphs share. Comparing a pair of nodes proceeds in two phases: 1. single-node textual similarity, and 2. context similarity. In the first phase, we pick node pairs from the two Key IR graphs as potentially similar pairs if their symbolic values have high textual similarity; the symbolic values are simplified beforehand by msynth [7]. Next, we compare the similarity of the node pair's contexts, where the context of a node refers to its neighbor nodes within a given bound; again, we compare the textual similarity of the neighbors' simplified symbolic values. An example of this graph comparison is shown in Fig. 4. Suppose we find a potentially similar node pair (blue circles) with high similarity, and the context bound is 1. We then examine the neighbor nodes within this bound (green circles) and check their similarity. When checking context similarity, the presence of similar nodes within the contexts indicates similarity: the more similar nodes there are, the more similar the binaries. We ignore the relations between these nodes, because the complexity of compilation options can significantly mutate these relations.

Fig. 3: Four types of Key Instructions
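A toy version of the two-phase node comparison can be sketched as follows. It assumes `difflib`'s ratio stands in for the textual-similarity measure, that the context is taken as the nodes reachable within `bound` control-flow edges, and that the 0.8 threshold is an illustrative choice; none of this is the tool's actual implementation.

```python
from difflib import SequenceMatcher

def text_sim(a, b):
    """Textual similarity of two simplified symbolic values, in [0, 1]."""
    return SequenceMatcher(None, a, b).ratio()

def neighbors(graph, addr, bound):
    """Nodes reachable from addr within `bound` control-flow edges.
    graph: {addr: [successor addrs]}."""
    frontier, seen = {addr}, {addr}
    for _ in range(bound):
        frontier = {s for n in frontier for s in graph[n]} - seen
        seen |= frontier
    return seen - {addr}

def pair_score(ga, gb, a, b, vals_a, vals_b, bound=1, thr=0.8):
    """Phase 1: the pair itself must be textually similar; phase 2:
    count similar nodes among the two contexts, ignoring edge relations."""
    if text_sim(vals_a[a], vals_b[b]) < thr:
        return 0.0
    ctx_a, ctx_b = neighbors(ga, a, bound), neighbors(gb, b, bound)
    hits = sum(1 for x in ctx_a
               if any(text_sim(vals_a[x], vals_b[y]) >= thr for y in ctx_b))
    return hits / max(len(ctx_a), 1)
```

Note that `hits` only asks whether a similar node exists somewhere in the other context, mirroring the decision above to ignore the relations between context nodes.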
## IV Implementation
Our implementation is an IDA Pro plugin written in C++ and Python, built on IDA Pro's control-flow graph and its API. We have finished the implementation and are currently carrying out the evaluation.
### _Symbolic execution_
#### IV-A1 Symbolic values
At the beginning of each function, we assign tags VAR \(0\) to VAR \(N-1\) to parameters \(1\) to \(N\) of the function. As each instruction is executed, these initial tags are combined into various expressions, which propagate to other instructions as the values of their operands. Similar expressions reveal similar semantics; thus we transform binary code similarity detection into expression similarity detection.
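A minimal sketch of this seeding step follows; mapping the first parameters to the x86-64 SysV argument registers is an assumption made for illustration and is not stated by the paper.

```python
def init_state(n_params, arg_regs=('rdi', 'rsi', 'rdx', 'rcx', 'r8', 'r9')):
    """Seed the symbolic state: parameter i receives the opaque tag VARi.
    The register order assumes the x86-64 SysV calling convention."""
    return {arg_regs[i]: f"VAR{i}"
            for i in range(min(n_params, len(arg_regs)))}
```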
#### IV-A2 Handling loops
During execution, we may encounter loops. Since the number of iterations of a loop is often difficult to decide, it is challenging to propagate the values of instructions inside the loop to the Key Instructions. It is also non-trivial to analyze which operands are invariant during the loop and which are updated in each iteration. To address these problems, we symbolically execute the instructions in a loop twice. Invariant operands keep their constant symbolic value, while updated operands have their value wrapped in an 'ITER()' notation to highlight that it changes across iterations.
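The two-pass loop summary can be sketched as follows, with `exec_body` standing in for symbolic execution of the loop body over a state dict; the names and the state representation are hypothetical.

```python
def summarize_loop(exec_body, state):
    """Run the loop body twice on a copy of the symbolic state.  Operands
    whose value changed between the two passes are loop-carried and get
    wrapped in an ITER() marker; invariant operands keep their value."""
    first = exec_body(dict(state))
    second = exec_body(dict(first))
    out = {}
    for op, val in second.items():
        out[op] = val if first.get(op) == val else f"ITER({val})"
    return out
```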
### _Key IR graph construction_
We recover both control-flow impacting and control-flow irrelevant instructions into Key IRs, identifying them by pattern matching implemented as rules in the tool. For control-flow impacting instructions, we recover three types: 1. subfunction-calling instructions, 2. comparing instructions, and 3. returning instructions. For Type 1, we find the corresponding parameters of the subfunction. The patterns for matching Type 2 instructions are more complicated than for the other types, since some compiler options translate comparison instructions from the source code into equivalent but implicit code, as described in [2]. For Type 3, we find the last modification of register _rax_ in x86-64 or register _R0_ in ARM before the function returns. For control-flow irrelevant instructions, i.e., memory-writing instructions, we identify them when the mnemonic is 'mov' in x86-64 or 'STR' in ARM and the destination operand is a memory address.
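The identification rules might look roughly like the sketch below. The mnemonic lists and the operand encoding (a list of `(kind, text)` pairs) are illustrative assumptions, not the tool's actual rule set.

```python
def classify_key_insn(arch, mnemonic, ops):
    """Rule sketch for spotting Key Instructions on x86-64 / ARM.
    ops: [(kind, text)] with kind in {'reg', 'mem', 'imm'}.
    Returns a category name, or None for a non-Key Instruction."""
    m = mnemonic.lower()
    if m in ('call', 'bl', 'blx'):
        return 'call'                       # subfunction call
    if m in ('cmp', 'test'):
        return 'cmp'                        # explicit comparison
    if m in ('ret', 'retn') or (arch == 'arm' and m == 'bx'
                                and ops and ops[0][1].lower() == 'lr'):
        return 'ret'                        # return to caller
    if m == 'str':
        return 'store'                      # ARM store always writes memory
    if m == 'mov' and ops and ops[0][0] == 'mem':
        return 'store'                      # x86 mov with memory destination
    return None
```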
Each Key Instruction is then transformed into a node of the Key IR graph. It is important to note that each Key Instruction may have several possible symbolic values; we retain all of them. After recovering all the nodes, we connect them based on the control flow provided by IDA Pro. The resulting Key IR graph thus has the same control flow as IDA Pro's, except that all the non-Key Instructions are removed.
### _Key IR graph comparison_
To compare the similarity of two symbolic formulas, we first simplify them using msynth [7]. Msynth was originally a binary code deobfuscation tool that simplifies very complex binary code formulas into simple ones. We use it here to facilitate our comparison; moreover, it mitigates the effect of various compilation options, since the resulting formulas become more similar even when the original formulas differ.
## V Evaluation
We have collected several open-source benchmarks used widely in existing works, i.e., OpenSSL, Coreutils, SPEC CPU2006, and SPEC CPU2017. We will answer two research questions in our experiments: 1. Can we effectively detect similar binary code across architectures? 2. Can we mitigate the influence of compiling options? To answer the first question, we will compile the benchmark dataset on the ARM and x64 architectures with different compilers such as GCC and Clang, then prepare random pairs of similar and dissimilar functions with labels indicating their similarity. To answer the second question, we will compile the benchmark dataset using the compilation options described in [2] and likewise prepare similar and dissimilar function pairs. In both experiments, the percentage of correctly detected similar and dissimilar function pairs is taken as the accuracy, indicating the performance of our approach.
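The planned accuracy metric amounts to the fraction of correctly classified pairs. A minimal sketch, with a hypothetical decision threshold on the similarity score:

```python
def accuracy(labeled_scores, threshold=0.5):
    """labeled_scores: [(label, score)] where label is True for a truly
    similar pair.  A pair is detected correctly when the thresholded
    score agrees with the label."""
    correct = sum(1 for label, score in labeled_scores
                  if (score >= threshold) == label)
    return correct / len(labeled_scores)
```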
Fig. 4: Key IR graph comparison |
2301.07339 | The Self-energy of Nucleon for the Photoproduction of Pion in the
Low-energy Region | The scattering amplitude of the photoproduction of $\pi^{+}$ is calculated
using the helicity formalism. The pseudovector coupling $\pi$-N interaction
with the non-perturbative term is applied to the perturbative expansion. The
parameters for the self-energy are taken from the results of the $\pi$-N
scattering in our previous study. | Susumu Kinpara | 2023-01-18T07:06:02Z | http://arxiv.org/abs/2301.07339v1 | ###### Abstract
The scattering amplitude of the photoproduction of \(\pi^{+}\) is calculated using the helicity formalism. The pseudovector coupling \(\pi\)-N interaction with the non-perturbative term is applied to the perturbative expansion. The parameters for the self-energy are taken from the results of the \(\pi\)-N scattering in our previous study.
The Self-energy of Nucleon for the Photoproduction of Pion
in the Low-energy Region
Susumu Kinpara
_Institute for Quantum Medical Science_
_Chiba 263-8555, Japan_
## 1 Introduction
The nucleon is a fundamental object and plays a central role in many nuclear phenomena. This view is substantiated by elucidating the interaction between particles and the mechanism of the reaction processes. To proceed with such a study the field-theoretical method is essential, and our main purpose is to examine the validity of the calculational techniques by applying them to phenomena established by experiment. Since the nucleon is subject to the nuclear force, described by the meson-exchange model, the elements of the system are mainly the nucleon and the pion, which, having the lightest mass, mediates the longest-range part of the force.
Once the interaction is prepared, the calculation of observables proceeds by the perturbative expansion of the T-product. The procedure of quantum electrodynamics serves, to some extent, as an example for the investigation of the pion-nucleon system, but there are differences between them. In the latter case, which is the subject of the present study, the interaction is of derivative type and the usual renormalization is not applicable. Therefore the non-perturbative relation based on the equation of motion is valuable for drawing conclusions about the vertex part. Since the axial-vector current is not conserved, a non-perturbative term arises apart from the cancellation between the non-covariant terms in the perturbative calculation. It is interesting that the remaining non-perturbative term includes the self-energy, and the divergence is removed along with the counter terms.
The renormalized propagator and the vertex with the non-perturbative term make it possible to calculate any process involving the nucleon. While the results are free from divergences, the coupling constant remains uncertain. Its strength is an adjustable parameter associated with the degree of off-shell behavior, although the on-shell condition relates the pseudovector interaction to the pseudoscalar one. A shift of the pseudovector coupling constant from the standard value \(f\sim 1\) should be allowed owing to the effect of the non-perturbative term. As far as our numerical results are concerned, the magnitude of \(f\) must be made smaller in order to account for the experimental results.
Although the \(\pi\)-N interaction is modified in preparing the renormalized propagator, this is not useful for probing the \(\pi\)-N coupling constant, since the self-energy does not depend on it in the present lowest-order calculation. Scattering processes with external pion lines are then appropriate for examining its magnitude and its dependence on variables such as the energy. Pion-nucleon scattering has two pion legs in its diagram and is a basic process amenable to a systematic formulation, as shown in our previous study [1].
The non-perturbative model is suitable for describing the intermediate-energy region. The low-energy region below the \(\Delta\) resonance, however, needs another approach, to avoid including higher-order terms that could drastically change the form of the self-energy. Matrix inversion is a practical means of expressing the low-energy parameters of \(\pi\)-N scattering in terms of the self-energy. Our purpose in this study is to investigate its role in the photoproduction of the pion.
## 2 The self-energy for the helicity amplitude
The pion-nucleon system is formulated by a Lagrangian of the fields, with a \(\pi\)-N interaction of pseudovector coupling type. It is known that the system can also be described by the pseudoscalar coupling. The former interaction remains a controversial issue because renormalization does not appear to yield a finite result for the higher-order corrections to the nucleon propagator. In the vertex part of the interaction, these two types are connected by the non-perturbative term containing part of the self-energy. In general the self-energy takes the form
\[\Sigma(p)=Mc_{1}(p^{2})-\gamma\cdot p\,c_{2}(p^{2}) \tag{1}\]
in terms of the coefficients \(c_{i}(p^{2})\) (\(i\)=1,2) as a function of the four-momentum \(p\) of nucleon. The \(M\) is the nucleon mass and the \(\gamma_{\mu}\) is the gamma matrix.
Assuming an expansion of \(c_{i}(p^{2})\) in the vicinity of \(p^{2}=M^{2}\), the form of \(\Sigma(p)\) is determined by fixing the coefficients of the series expansion. In our previous study, the \(S\)-wave scattering parameters of pion-nucleon elastic scattering were used to construct \(\Sigma(p)\) by solving a matrix-inversion equation for four unknown coefficients up to order \((p^{2}-M^{2})^{2}\). The remaining two variables depend on the solutions, and the relations between them are fixed by the renormalization condition. Since only the \(S\)-wave survives in the limit of vanishing pion momentum \(\vec{p}\,^{2}\to 0\) of \(\pi\)-N scattering, choosing these parameters as input is thought to be suitable for describing the low-energy region.
The set of the output values (\(c\equiv c_{1}^{(0)}=c_{2}^{(0)}\,,c_{2}^{(1)}\,,c_{1}^{(2)}\,,c_{2}^{(2)}\,\)) is responsible for the construction of the propagator \(G(p)\) through \(\alpha(p^{2})\) and \(\beta(p^{2})\) in \(G(p)=(\alpha(p^{2})\gamma\cdot p+\beta(p^{2})M)/(p^{2}-M^{2})\). It is verified that the condition of the renormalization \(\alpha(M^{2})=\beta(M^{2})=1\) is satisfied automatically. The \(\alpha(p^{2})\) and \(\beta(p^{2})\) are expanded in powers of \(p^{2}-M^{2}\) similar to \(c_{i}(p^{2})\). Applying the renormalized propagator to the calculation of the amplitude the exact one is replaced with the approximate one \(\alpha(p^{2})\approx 1+\alpha^{(1)}(p^{2}-M^{2})\) eliminating the terms of the order \(O((p^{2}-M^{2})^{2})\). The procedure is also used to obtain \(\beta(p^{2})\).
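Written out, inserting the first-order expansions of \(\alpha(p^{2})\) and \(\beta(p^{2})\) into \(G(p)\) separates the propagator into the free part plus a constant off-shell correction:

\[G(p)\approx\frac{\gamma\cdot p+M}{p^{2}-M^{2}}+\alpha^{(1)}\,\gamma\cdot p+\beta^{(1)}M\]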
These are expressed by the set of the coefficients and given as follows
\[\alpha^{(1)}=\frac{1}{4M^{2}}\cdot\frac{c^{2}}{1+c}-c_{2}^{(1)}+M^{2}(c_{1}^{ (2)}-c_{2}^{(2)}) \tag{2}\]
\[\beta^{(1)}=\alpha^{(1)}+\frac{1}{2M^{2}}\cdot\frac{c}{1+c} \tag{3}\]
In the previous study, the \(c_{i}^{(2)}\) terms were not included [1]. Including them makes the numerical results worse, so higher-order terms would be required to improve the results. With the scalar meson \(\sigma\) turned off, the coefficient becomes \(c\sim-1\) and the propagator \(G(p)\) becomes too large to yield reasonable values, as seen in the \(\pi\)-N elastic scattering.
The process mediated by the \(\sigma\)-meson is essential to calculate the parameters of the \(\pi\)-N elastic scattering.
The self-energy affects the vertex part, correcting the scattering amplitudes of various phenomena. In quantum electrodynamics (QED) the vertex part is connected to the propagator by the Ward-Takahashi (W-T) identity. This relation is also satisfied for the nucleon interacting with the photon and the pion; in particular, the anomalous part of the photon-nucleon-nucleon vertex is ascribed to the identity. Using the coupling parameter \(f\sim 0.8\) and choosing one of the models of the self-energy, a simultaneous fit to the strengths of the anomalous parts of the proton and the neutron is attained.
Similar to the prescription of the non-perturbative relation in QED, the \(\pi\)-N-N vertex \(\Gamma(p,q)\) is found to acquire additional terms as
\[\Gamma(p,q)=\Gamma_{0}(p,q)+G(p)^{-1}\,\gamma_{5}+\gamma_{5}\,G(q)^{-1} \tag{4}\]
\[\Gamma_{0}(p,q)=\gamma_{5}\,\gamma\cdot(p-q)+O((f/m)^{2}) \tag{5}\]
where \(\Gamma_{0}(p,q)\) is the proper vertex defined by the perturbative expansion in the ratio of \(f\) to the pion mass \(m\). Hence, imposing the on-shell conditions \(\gamma\cdot p-M\to 0\) and \(\gamma\cdot q-M\to 0\) for the incoming (\(q\)) and outgoing (\(p\)) momenta, the lowest order of \(\Gamma(p,q)\) reduces to the pseudoscalar vertex, \(\Gamma(p,q)\rightarrow-2M\gamma_{5}\), with the coupling constant \(G\equiv 2Mf/m\), apart from the isospin factor.
Our interest is in the calculation of single-pion photoproduction and in examining whether the self-energy plays a decisive role. The process \(\gamma+p\rightarrow\pi^{+}+n\) is taken as an example; the other photoproduction channels can be treated in the same way and are classified by the isospin state. For \(\pi^{+}\) production the initial state is the same as in Compton scattering, so the optical theorem is available. That procedure is tractable when the forward cross section is understood quantitatively in terms of photon scattering by the cloud of virtual pions, i.e. the photon-photon-proton-proton vertex. Alternatively, the scattering amplitude is calculated by the lowest-order perturbative expansion, approximating the perturbative part as \(\Gamma_{0}(p,q)\approx\gamma_{5}\,\gamma\cdot(p-q)\) and neglecting higher-order corrections.
The perturbative expansion of the \(S\)-matrix is used to calculate the \(\pi^{+}\) production process; the Feynman diagrams can be seen in Ref. [2]. The central part \(v_{s^{\prime}s}(p^{\prime},p)\) consists of a set of four terms sandwiched between the Dirac spinors of the initial proton state, with four-momentum \(p\) and spin \(s\), and the final neutron state, with four-momentum \(p^{\prime}\) and spin \(s^{\prime}\),
\[v_{s^{\prime}s}(p^{\prime},p)\equiv\bar{u}^{(s^{\prime})}(p^{\prime})\gamma_{5 }\left[\,a+b\,\gamma\cdot k\,\gamma\cdot\epsilon_{\chi}+c\,\gamma\cdot\epsilon _{\chi}+d\,\gamma\cdot k\,\right]u^{(s)}(p) \tag{6}\]
\[a=a^{\prime}\,q\cdot\epsilon_{\chi}=\frac{1}{q\cdot k}\,q\cdot\epsilon_{\chi} \tag{7}\]
\[b=-\frac{1}{2}\left(\,\frac{1}{p\cdot k}+\alpha+\beta\,\right)+\frac{\kappa_{ p}+\kappa_{n}}{4M^{2}}\]
\[-\frac{1}{2p\cdot k}[\,\kappa_{p}(1+\frac{p\cdot k}{2M^{2}})-\kappa_{n}\frac {p\cdot k}{p^{\prime}\cdot k}(1-\frac{p\cdot q}{2M^{2}}+\frac{m^{2}}{4M^{2}}) \,]\]
\[-\frac{\kappa_{p}+\kappa_{n}}{2}(\alpha+\beta)-\frac{(\kappa_{p}-\kappa_{n}) \alpha}{2M^{2}}(p\cdot q-\frac{m^{2}}{2})-\frac{\kappa_{p}\alpha\,q\cdot k}{2M ^{2}} \tag{8}\]
\[c=-\,\frac{p\cdot k}{M}\,\alpha-\frac{\kappa_{p}-\kappa_{n}}{2M}-\frac{\kappa_ {p}+\kappa_{n}}{2M}(\alpha+\beta)p\cdot k+\frac{\kappa_{n}(\alpha+\beta)\,q \cdot k}{2M} \tag{9}\]
\[d=d^{\,\prime}\,q\cdot\epsilon_{\chi}=\frac{\kappa_{n}}{2M}\left(\,\frac{1}{ p^{\,\prime}\cdot k}-\alpha-\beta\,\right)q\cdot\epsilon_{\chi} \tag{10}\]
in which \(k\) and \(q\) are the four-momenta of the photon and the pion, and \(\epsilon_{\chi}\) is the polarization vector of the photon, the subscript \(\chi\) specifying the two axes perpendicular to the direction of the photon momentum. The superscripts of \(\alpha^{(1)}\) and \(\beta^{(1)}\) are omitted for brevity. When \(\alpha=\beta=0\), \(v_{s^{\prime}s}(p^{\prime},p)\) reduces to that of the pseudoscalar coupling. The strengths of the anomalous interactions of the proton and the neutron are taken from the experimental values \(\kappa_{p}=1.79\) and \(\kappa_{n}=-1.91\).
To connect \(a\sim d\) to the elements of the helicity formalism [3][4], the quantization axis of the final spin state must be made to coincide with that of the formalism. The final helicity state is transformed to the usual \(z\)-axis by applying the \(D\)-function, and the resulting scattering amplitude is a linear combination of the helicity amplitudes with the neutron spin parallel and anti-parallel to the direction of its momentum. Consequently, the elements \(A_{+}^{J}\equiv A^{J}\), \(A_{-}^{J}\equiv B^{J}\), \(B_{-}^{J}\equiv C^{J}\) and \(B_{+}^{J}\equiv D^{J}\) for \(J=l+\frac{1}{2}\) (\(l\geq 0\)) are obtained by integrating the integrands \(\xi_{i\pm}\) and \(\zeta_{i\pm}\) (\(i\!=\!0,1\)) over the scattering angle \(z=\cos\theta\) as
\[A_{\pm}^{J}=\frac{A}{2}\int_{-1}^{1}dz\,(\,P_{l}(z)\pm P_{l+1}(z)\,)(\,\xi_{0 \pm}+\xi_{1\pm}\,z\,) \tag{11}\]
\[B_{\pm}^{J+1}=\pm\sqrt{\frac{l(l+1)}{(l+2)(l+3)}}\,B_{\pm}^{J}\]
\[+\frac{A}{2}\sqrt{\frac{l+1}{l+3}}\int_{-1}^{1}dz\,(\,P_{l}(z)-P_{l+2}(z)\,)( \,\zeta_{0\pm}+\zeta_{1\pm}\,z\,) \tag{12}\]
in which the \(P_{l}(z)\) is the Legendre polynomial of the \(l\)-th order. The \(\xi_{i\pm}\) and \(\zeta_{i\pm}\) are given as
\[\xi_{0\pm}=W-Y\pm(X-2Z_{0}) \tag{13}\]
\[\xi_{1\pm}=-X\mp(2Z_{1}+W-Y) \tag{14}\]
\[\zeta_{0\pm}=\mp\,\zeta_{1\pm}=-X\pm(W+Y) \tag{15}\]
\[W=\frac{\omega\,q}{\sqrt{2}}\cdot\frac{c+b(E+M+\omega)}{(E^{\prime}+M)(E+M)}=- Z_{1} \tag{16}\]
\[X=\frac{q^{2}}{\sqrt{2}}\cdot\frac{a^{\,\prime}+d^{\,\prime}(E-M+\omega)}{E^{ \,\prime}+M} \tag{17}\]
\[Y=-\frac{\omega\,q}{\sqrt{2}}\,\{\frac{a^{\,\prime}}{E+M}-d^{\,\prime}(1+ \frac{\omega}{E+M})\}-W \tag{18}\]
\[Z_{0}=\frac{1}{\sqrt{2}}\{c-b(E-M+\omega)\} \tag{19}\]
and the overall constant is
\[A=\sqrt{\frac{\alpha_{e}\,G^{\,2}(E^{\,\prime}+M)\,q}{8\pi\,(E+M)\,\omega}} \tag{20}\]
with the neutron energy \(E^{\,\prime}\), the proton energy \(E\) and the photon energy \(\omega\) in the center of mass system (\(q\equiv|\,\vec{q}\,|\)). The \(\alpha_{e}\) is the fine structure constant.
## 3 The results of the calculation in the threshold region
When the scattering angle appears in the denominator, the integral is performed term by term in the expansion series. Each of the coefficients \(a^{\prime}\), \(b\), \(c\) and \(d^{\,\prime}\) depends on \(\cos\theta\) only through \(x\equiv q_{0}-q\cos\theta\). This property is helpful in the threshold region (\(q\sim 0\)), where the replacement \(x\approx q_{0}\) is appropriate and the coefficients become independent of \(z=\cos\theta\).
Under this approximation, the series of elements terminates at \(J\!=\!3/2\). The surviving elements are \(A_{\pm}^{3/2}=A\,\xi_{1\pm}/3\), \(A_{\pm}^{1/2}=\pm A_{\pm}^{3/2}+A\,\xi_{0\pm}\) and \(B_{\pm}^{3/2}=A\,\zeta_{0\pm}/\sqrt{3}\); the other elements (\(J\geq\!5/2\)) vanish. The relation \(B_{\pm}^{5/2}=0\) arises unexpectedly from the cancellation between the first and second terms of the recursion relation in Eq. (12). The higher partial-wave components dropped by the approximation recover their original values as the energy increases.
For the helicity elements in the low energy region the restriction \(J\leq 3/2\) is allowed and the cross section \(\sigma\) of the process \(\gamma+p\to\pi^{+}+n\)
\[\sigma=\pi\sum_{l=0}^{\infty}\,(l+1)\,(\,|A_{+}^{J}|^{2}+|A_{-}^{J}|^{2}+|B_{ +}^{J}|^{2}+|B_{-}^{J}|^{2}\,) \tag{21}\]
consists of a few terms, with the summation of the \(B_{\pm}^{J}\) part starting from \(l=1\). In this region the \(J=3/2\) elements satisfy the approximate relation \(A^{3/2}+B^{3/2}\approx C^{3/2}+D^{3/2}\approx 0\), making it possible to reduce the number of terms further. This arises because the sum \(\sim X\sim O(q^{2})\) is smaller than the main terms \(W\), \(Y\) and \(Z_{i}\) (\(i\!=\!0,1\)) in each element.
Using the \(x\approx q_{0}\) approximation, the calculation of \(\sigma\) is applicable only to the low-energy region. The lowest-order pseudoscalar model without the anomalous interaction shows an energy dependence with a broad peak below the resonance energy, which is favorable for investigating the correction. In magnitude, however, it is about half the experimental value, and additional effects are necessary to describe the mechanism of pion production quantitatively. The anomalous interaction does not remedy the missing magnitude; it changes the energy dependence of \(\sigma\) to a gradual increase extending beyond the resonance region.
To supply the required strong positive effect, the self-energy is taken into account in the diagram containing the Dirac part of the electromagnetic-interaction vertex. In other words, the self-energy parameters \(\alpha\), \(\beta\) and the anomalous couplings \(\kappa_{i}\,(i=p,n)\) are considered to be of roughly the same order and are retained up to first order in \(b\), \(c\) and \(d\,^{\prime}\); \(a^{\prime}\) does not contain these parameters from the outset. In the actual calculation of the cross section, the second-order terms in \(\alpha\) and \(\beta\) (\(\sim\kappa_{i}\alpha\) and \(\sim\kappa_{i}\beta\)) are replaced with the previous values, which are associated with the derivation of the anomalous magnetic moment and reproduce the experimental values.
The cross section \(\sigma\) is calculated as a function of the laboratory energy. While the curve shows an increasing tendency, it does not turn to a decrease above the resonance region. This means that the set of \(\alpha\) and \(\beta\) suitable for that region differs from the threshold-region set determined by \(\pi\)-N scattering; the intermediate-energy region is in fact described by another set based on the perturbative calculation with the non-perturbative term. Indeed, the theoretical value of the anomalous magnetic moment of the nucleon is obtained by the vertex correction with the previous set of the self-energy, whose numerical values are small (\(\alpha,\beta\leq 1\)) compared with those of low-energy \(\pi\)-N scattering. Removing or replacing the \(\sim\alpha\kappa_{i}\) and \(\sim\beta\kappa_{i}\) (\(i=\!p,n\)) terms takes into account the anomalous interaction arising from the joint use of the vertex correction and the nucleon self-energy. It is reasonable that the theoretical curve overestimates, because the coupling constant \(f\) is reduced by about 20% from the standard value to adjust the magnetic moments of the proton and the neutron to the experimental values simultaneously, as shown in our previous study.
As the incident energy increases, it becomes difficult to disregard the second term of \(x\); in particular, when \(q\gg m\), \(x\to 0\) in the forward direction. The correction to \(a\,^{\prime}\) is important compared with that to \(d\,^{\prime}\), whose denominator is \(p^{\prime}\cdot k=\omega(E+\omega-x)\), so that its effect is weakened by the other terms below the intermediate-energy region. Expanding \(a\,^{\prime}\) in powers of \(z\,q/q_{0}\) gives \(a\,^{\prime}=\omega^{-1}\,q_{0}^{-1}+q\,\omega^{-1}\,q_{0}^{-2}\,z+O(z^{2})\). The correction \(\delta\,a\,^{\prime}=q\,\omega^{-1}\,q_{0}^{-2}\,z\) is important since it is comparable to \(\omega^{-1}\,q_{0}^{-1}\) at \(q_{0}\sim q\).
The present calculation reproduces the magnitude of \(\sigma\) at laboratory energies \(\sim 0.3\,\)GeV and enables us to study the connection to the \(\Delta(1232)\) resonance. The application to helicity states is interesting for examining the branching ratio observed experimentally. By time-reversal invariance, the amplitude \(T_{\lambda}\) of the decay of the \(\Delta(1232)\) resonance to the helicity-\(\lambda\) state is given by \(T_{1/2}\propto A_{+}^{3/2}-A_{-}^{3/2}\) and \(T_{3/2}\propto B_{-}^{3/2}-B_{+}^{3/2}\), excluding a common factor. The overall phase is fixed arbitrarily through the extra phase of the polarization vector.
Owing to the relation \(Z_{1}=-W\) in Eq. (16), the ratio is \(T_{3/2}\,/\,T_{1/2}=-\sqrt{3}\), irrespective of the details of the amplitude under \(x\approx q_{0}\). For \(\lambda=3/2\) the branching ratio is \(|T_{3/2}|^{2}/(|T_{1/2}|^{2}+|T_{3/2}|^{2})=0.75\), a little smaller than the experimental value 0.78 - 0.79 [5]. The \(\delta\,a^{\,\prime}\) correction changes the value from 0.75 to 0.84 by breaking the simple relation between the elements. Recently it has been verified that using the \(\sim\alpha\kappa_{i}\) and \(\sim\beta\kappa_{i}\) terms, with \(\alpha\) and \(\beta\) obtained from the matrix-inversion method for \(\pi\)-N scattering, yields the value 0.78 when the \(\delta\,a^{\,\prime}\) correction is included up to order \(O(z^{3})\).
## 4 Summary and remarks
The strength of the anomalous interaction is not merely a parameter; it has a dynamical origin related to the vertex correction by the pion propagator. The pseudovector coupling constant is adjusted because of the non-perturbative term, which generates the self-energy appropriate to the anomalous terms. In order to construct the photoproduction of the pion from threshold to the resonance region, the amplitude needs the nucleon self-energy determined by the matrix-inversion method with the \(\sigma\)-meson exchange process. The dispersion-theoretical method, together with the previous self-energy models for \(\pi\)-N scattering, would be effective for understanding the decrease of the cross section above the resonance region, in addition to the perturbative calculation for the intermediate region.
# Characteristic lengthscales of the electrically-induced insulator-to-metal transition

Theodor Luibrand, Adrien Bercher, Rodolfo Rocco, Farnaz Tahouni-Bonab, Lucia Varbaro, Carl Willem Rischau, Claribel Domínguez, Yixi Zhou, Weiwei Luo, Soumen Bag, Lorenzo Fratino, Reinhold Kleiner, Stefano Gariglio, Dieter Koelle, Jean-Marc Triscone, Marcelo J. Rozenberg, Alexey B. Kuzmenko, Stefan Guénon, Javier del Valle

arXiv:2301.00456v1, 2023-01-01. http://arxiv.org/abs/2301.00456v1
###### Abstract
Some correlated materials display an insulator-to-metal transition as the temperature is increased. In most cases this transition can also be induced electrically, resulting in volatile resistive switching due to the formation of a conducting filament. While this phenomenon has attracted much attention due to potential applications, many fundamental questions remain unaddressed. One of them concerns its characteristic lengths: what sets the size of these filaments, and how does this impact the resistive switching properties? Here we use a combination of wide-field and scattering-type scanning near-field optical microscopies to characterize filament formation in NdNiO\({}_{3}\) and SmNiO\({}_{3}\) thin films. We find a clear trend: smaller filaments increase the current density, yielding sharper switching and a larger resistive drop. With the aid of numerical simulations, we discuss the parameters controlling the filament width and, hence, the switching properties.
## Introduction
Many correlated materials, such as the vanadate and rare-earth nickelate families, are well-known for their insulator-to-metal transition (IMT) [1, 2, 3]. The transition into the metallic state can be
induced by increasing temperature, adding dopants or applying high pressures [4, 5, 6], but it can also be triggered electrically [7, 8, 9, 10, 11]. A large enough applied voltage or current can create a percolative metallic filament due to Joule heating [12], drastically reducing the resistance of the system [13, 14, 15, 16, 17, 18, 19]. We must note that this filament is not caused by the diffusion of ions under strong electric fields, as commonly observed in resistive RAMs (random access memories) [20], but rather by a local phase transition from insulator to metal. When the voltage (current) is removed, the filament disappears, resulting in volatile resistive switching [21]. This phenomenon has attracted a lot of attention due to promising applications in emerging technologies, such as emulating neuronal spiking for neuromorphic computing [22, 23, 24, 25, 26, 27, 28], probabilistic bits for stochastic computing [29, 30, 31], or serving as electrooptical switches for optoelectronics [32, 33, 34, 35, 36]. In spite of this, many fundamental aspects of the electrically induced IMT are poorly understood. One of the most salient issues is the typical lengthscale of this process: what sets the size of the metallic filaments, or their number? And, similarly, how do these characteristic lengths affect the resistive switching properties, i.e., the sharpness of the switch (\(\partial V/\partial I\)) or the total resistive drop? Understanding this is not only of fundamental interest, but also key for designing device applications.
Here, we use a combination of wide-field optical microscopy [37] and scattering-type scanning near-field optical microscopy (s-SNOM) [38] to characterize filament lengthscales during the electrically-induced IMT in NdNiO\({}_{3}\) and SmNiO\({}_{3}\) microdevices. These compounds are two well-known members of the rare earth nickelate family [2, 6]. They both display an IMT concomitant with a structural phase transition, but there are rather important differences between the two. NdNiO\({}_{3}\) has a sharp IMT around 120 K (depending on epitaxial strain) with a resistivity drop of more than two orders of magnitude (Fig. 1a). SmNiO\({}_{3}\) on the other hand, displays a smooth IMT around 400 K, with a one order of magnitude resistivity change [2, 6]. Such different IMTs allow us to contrast the results from both materials and to determine which parameters govern filament lengthscales.
## Methods
We fabricated two-terminal microdevices on top of our NdNiO\({}_{3}\) and SmNiO\({}_{3}\) films, as depicted in Fig. 1b. The nickelate films (\(\sim\)40 nm thick, Fig. S1) were grown on LaAlO\({}_{3}\) (001) substrates using off-axis magnetron sputtering. We used a combination of optical lithography and ion etching to define isolated 360 \(\upmu\)m x 130 \(\upmu\)m nickelate islands, on top of which we patterned two planar Pt electrodes using a second optical lithography and on-axis Pt sputtering. The electrodes are 20 \(\upmu\)m wide, with a 20 \(\upmu\)m gap between them (10 \(\upmu\)m x 10 \(\upmu\)m for the s-SNOM experiments). The IMT can be triggered by applying a large enough voltage or current across the gap. To image this phenomenon, we take advantage of the large reflectivity change across the IMT [10]. We use optical microscopy to capture the distribution of metallic/insulating domains in the gap between electrodes [17]. We do so _in operando_ i.e. while applying a variable bias current, which allows us to capture clear images of the percolating filament. More details about the fabrication process, device characterization, and the experimental methods can be found in the supplementary information.
Figure 1: Sample characteristics. (a) Resistivity vs temperature for \(\sim\)40 nm thick NdNiO\({}_{3}\) (blue) and SmNiO\({}_{3}\) (red) thin films. Inset: Resistivity plotted as a function of _T-T\({}_{IMT}\)_. \(T_{IMT}\) was calculated by finding the maximum of \(\partial\text{log}\left(\rho\right)/\partial T\), where \(\rho\) is the resistivity. Only the warm-up branch is shown. (b) Schematic representation of the two-terminal devices. Nickelate islands (brown) were patterned on top of a LaAlO\({}_{3}\) substrate (blue). Two platinum electrodes (grey) were used to electrically trigger the IMT. The schematic is not to scale.
## Experimental results
Fig. 2a shows the voltage \(V\) as a function of the current \(I\) in a NdNiO\({}_{3}\) microdevice at several temperatures below the IMT temperature, i.e., the film is in the insulating state when no current is applied. As the current is ramped up, the voltage rises steeply until a threshold is reached, after which a steep voltage reduction takes place. This drop marks the moment when the electrically-induced IMT occurs and a filament percolates between the electrodes. This is a well-known phenomenon [13, 14, 15, 16, 17, 18, 19], and it can be readily observed with our imaging set-up. The filament widens when current is further increased, shrinks for decreasing current and disappears at low enough current values (Fig. S2 and supplementary videos), in accordance with the volatile _V-I_ curves in Fig. 2a. We must note that some of the _V-I_ curves feature two discontinuities. In the current range between them, the system is not stationary but rapidly oscillates between a high and a low resistance state, as further discussed in section 3 of the supplementary information. As expected, we find the filament width to be strongly dependent on the bias current, similarly to what has been reported in previous works [14, 15, 17]. The bulk of this paper focuses on how other factors, such as temperature or material properties, also play a key role in setting filament size.
Fig. 2a shows a clear trend: the voltage drop becomes sharper and larger as the temperature is lowered. This feature is also observed in SmNiO\({}_{3}\) microdevices (Fig. 2d), where resistive switching at room temperature is modest, very gradual and without discontinuities. As the temperature is lowered, the drop becomes steeper and discontinuous. Comparing NdNiO\({}_{3}\) and SmNiO\({}_{3}\), it is clear that resistive switching is sharper for the former. Figs. 2b and 2e show optical microscopy images of filaments in NdNiO\({}_{3}\) and SmNiO\({}_{3}\), respectively, for the same applied current, \(I\) = 20 mA. Images at two different temperatures are displayed for each material, showing distinctly thinner filaments at lower temperatures in both cases. This is better appreciated in Figs. 2c and 2f, where filament width (for \(I\) = 20 mA) is plotted as a function of temperature. Lower temperatures yield thinner filaments and, therefore, higher current densities. Moreover, comparing NdNiO\({}_{3}\) and SmNiO\({}_{3}\), we can see that filaments are much thinner in NdNiO\({}_{3}\). Therefore, for a fixed bias current, the filament size is strongly dependent on the material and the base temperature. Fig. 2 as a whole establishes a strong connection between filament size and _V-I_ characteristics: thinner filaments (higher current densities) lead to sharper and larger resistive switching.
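The link between width and current density is simple geometry: \(J=I/(w\,d)\) for a filament of width \(w\) spanning a film of thickness \(d\). A quick numerical check at the 20 mA bias of Fig. 2 and the \(\sim\)40 nm film thickness; the two widths below are illustrative placeholders, not the measured values:

```python
# J = I / (w * d): at fixed bias, a 5x narrower filament carries a
# 5x higher current density.  Filament widths here are illustrative.
I = 20e-3            # bias current in A, as in Fig. 2
d = 40e-9            # film thickness in m (~40 nm films)
for w_um in (2.0, 10.0):
    J = I / (w_um * 1e-6 * d)    # current density in A/m^2
    print(w_um, f"{J:.1e}")      # -> 2.5e+11 and 5.0e+10 A/m^2
```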
While optical microscopy is a versatile tool to visualize metallic/insulating areas at multiple temperatures and currents, its spatial resolution is limited by diffraction. In order to get more detailed images, we performed _in-operando_ cryogenic s-SNOM measurements in NdNiO\({}_{3}\) devices, using a set-up such as the one depicted in Fig. 3a. The spatial resolution of this atomic force microscopy (AFM)-based technique is limited only by the tip radius (\(\sim\)20 nm) [38], allowing us to obtain high-resolution AFM and near-field images of the filaments. Figs. 3b and 3c show topography and SNOM images at 18 K and 70 K, respectively. Images for 0, 10 and 20 mA are displayed. For similar currents, filaments are thinner at lower temperatures, in accordance with the wide-field optical images in Fig. 2. Moreover, s-SNOM allows us to resolve clear qualitative differences between both temperatures. Images taken at 18 K show a single, intense filament percolating
between electrodes, while at higher temperature multiple filaments appear. Thus, lower temperatures favor a _winner-takes-all_ situation in which a single filament carries all the current.
Figure 2: Connection between resistive switching properties and filament size. (a), (d) Voltage vs current curves for NdNiO\({}_{3}\) and SmNiO\({}_{3}\) microdevices, respectively. Several temperatures are plotted for each material. (b), (e) Wide-field optical microscopy images of filaments in NdNiO\({}_{3}\) and SmNiO\({}_{3}\), respectively. Current is 20 mA for all four images. Two different temperatures are shown for each material. For the NdNiO\({}_{3}\) images, reflectivity was normalized using a region far from the gap. For SmNiO\({}_{3}\), differential images are shown, where at each point the reflectivity at _I_=0 mA is subtracted. (c), (f) Filament width vs temperature at _I_=20 mA. The width was determined using Gaussian fits of linescans perpendicular to the filament direction, taking the full width at half maximum as the filament width. The error bars show the standard deviation of the distribution of widths.
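The width-extraction procedure described in the Fig. 2 caption (Gaussian fit of linescans perpendicular to the filament, FWHM taken as the width) can be sketched as follows. To keep the example dependency-free, the Gaussian \(\sigma\) is estimated from the profile's second moment rather than a nonlinear fit (the two coincide for a clean Gaussian peak), and the linescan is synthetic:

```python
import numpy as np

def filament_fwhm(x, profile):
    """Width of one reflectivity linescan taken perpendicular to the filament.
    The flat background is removed and the Gaussian sigma is estimated from
    the second moment of the remaining peak; FWHM = 2*sqrt(2*ln 2)*sigma."""
    w = profile - profile.min()            # background-subtracted weights
    x0 = np.sum(w * x) / np.sum(w)         # filament center
    sigma = np.sqrt(np.sum(w * (x - x0) ** 2) / np.sum(w))
    return 2.0 * np.sqrt(2.0 * np.log(2.0)) * sigma

# synthetic linescan: a filament of 3 um FWHM on a flat background
x = np.linspace(-10.0, 10.0, 401)          # position across the filament, um
sigma_true = 3.0 / (2.0 * np.sqrt(2.0 * np.log(2.0)))
profile = 0.2 + 0.8 * np.exp(-(x ** 2) / (2.0 * sigma_true ** 2))
print(round(filament_fwhm(x, profile), 3))  # recovers the 3 um width
```

Averaging this estimate over many linescans along the filament, as done for the error bars in Fig. 2c and 2f, would give both the mean width and its spread.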
Figure 3: High resolution s-SNOM imaging and presence of multiple percolating filaments. (a) Schematic representation of the s-SNOM set-up. Infrared light (wavelength 10 μm) is focused at a metal-coated AFM tip, which further focuses light into an area comparable to the tip radius (\(\sim\)20 nm). The tip-scattered signal is determined by the optical conductivity of the area of the material directly underneath the tip, allowing us to obtain high-resolution images of metallic (high signal) and insulating (low signal) domains. The AFM is operated in tapping mode and the s-SNOM signal is detected at the 3rd order harmonic to filter out the far-field component, as described in the supplementary information. (b), (c) Topography and s-SNOM amplitude images for NdNiO\({}_{3}\) microdevices at \(T\)=18 K and \(T\)=70 K, respectively. The s-SNOM signal was normalized using the Pt electrode as reference. s-SNOM data for three different current values is shown.
## Resistor network simulations
To understand these results, we performed numerical simulations in which we model our system as a two-dimensional resistor network (Fig. 4a) [10, 24]. Each node in the network can be either metallic or insulating, depending on the local temperature and a Landau free energy functional that mimics the IMT. The insulating state resistivity increases as the temperature is decreased, following a variable range hopping dependence (Inset Fig. 4b). Currents and voltages at each node are calculated by solving Kirchhoff's laws. Local temperature is updated in each simulation time step, considering Joule heating and heat conduction. A more detailed description of the simulations can be found in the methods section of the supplementary material. This simple model reproduces experimental results with only a few parameters (Fig. 4b), allowing us to identify which ones play a key role.
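The ingredients listed above (state-dependent node resistance, Kirchhoff's laws, Joule heating, heat conduction) can be caricatured in a few dozen lines. The sketch below is not the paper's model: it replaces the Landau free energy functional with a sharp transition temperature and the full 2-D Kirchhoff solve with independent parallel columns coupled only through lateral heat diffusion, and all parameter values are arbitrary units chosen only so that a single-column filament forms:

```python
import numpy as np

# --- illustrative parameters, arbitrary units (NOT the paper's values) ---
NX, NY = 20, 10              # NX parallel columns across the gap, NY nodes each
T_BASE, T_IMT = 50.0, 120.0  # bath temperature and node transition temperature
R_MET = 1.0                  # metallic node resistance
R0, T0 = 0.05, 4.0e5         # variable-range-hopping prefactor and scale
C, G, K = 1.0, 0.5, 0.05     # heat capacity, cooling to bath, lateral coupling
DT, I_BIAS = 0.02, 14.0      # time step and bias current

def node_resistance(T):
    """Sharp IMT: variable-range hopping below T_IMT, constant metal above."""
    return np.where(T > T_IMT, R_MET, R0 * np.exp((T0 / T) ** 0.25))

def step(T):
    """One time step: Kirchhoff for current-biased parallel columns, then heat."""
    r = node_resistance(T)            # (NX, NY) node resistances
    r_col = r.sum(axis=1)             # series resistance of each column
    v = I_BIAS / np.sum(1.0 / r_col)  # common voltage across the gap
    i_col = v / r_col                 # current carried by each column
    p = i_col[:, None] ** 2 * r       # Joule power dissipated per node
    lap = np.zeros_like(T)            # lateral heat diffusion between columns
    lap[1:-1] = T[:-2] + T[2:] - 2.0 * T[1:-1]
    return T + DT * (p / C - G * (T - T_BASE) + K * lap), v

T = np.full((NX, NY), T_BASE)
T[NX // 2] += 5.0                     # small hot spot that seeds the filament
v_hist = []
for _ in range(4000):
    T, v = step(T)
    v_hist.append(v)

# a single hot column ends up carrying the current, and the voltage collapses
hot_columns = np.flatnonzero(T.mean(axis=1) > T_IMT)
print(len(hot_columns), v_hist[0] > 10 * v_hist[-1])
```

Sweeping `I_BIAS` up and down instead of holding it fixed would trace out a volatile _V-I_ loop analogous to Fig. 2a.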
Fig. 4c shows simulated 2-dimensional resistivity maps of the devices for three different values of current and base temperature. As expected, filaments are strongly dependent on the bias current. Also, similarly to the experiments, filaments become narrower as the base temperature is lowered. The material confines current flow into a smaller region at lower temperatures. This in turn induces higher current densities, increasing local Joule heating and greatly affecting the temperature distribution across the device, as can be seen in Fig. 4d. As the filament narrows, its inner temperature increases. Therefore, thinner filaments induce a stronger current-temperature feedback, which is a key factor controlling switching dynamics [10, 18, 29]. A strong feedback makes the device susceptible to runaway effects which manifest as discontinuities in the experimental _V-I_ curves [29]. As a result, the thinner the filament and the higher its current-temperature feedback, the larger the discontinuities in the _V-I_ curves. Experimentally, such discontinuities are observed to be bigger and more frequent at lower temperatures, and especially for NdNiO\({}_{3}\). These are the same conditions in which the thinnest filaments are observed.
## Discussion
Since filament size determines switching properties, it is key to identify which parameters control the material's ability to confine current into smaller or larger areas. Here we analyze two contributions: the resistivity difference across the IMT and the thermal conductivity of the substrate. The resistivity contrast between the insulating (\(\rho_{\text{ins}}\)) and metallic (\(\rho_{\text{met}}\)) phases is expected to play a major role, since it corresponds to the resistivities outside and inside the filament. When \(\rho_{\text{ins}}\gg\rho_{\text{met}}\), the current is strongly focused into the filament, reducing Joule heating outside. This keeps the insulating areas cold and confines the filament to a small region. But as \(\rho_{\text{ins}}\) decreases, the insulator becomes leaky, allowing current flow and power dissipation outside the filament and reducing its confinement.
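The current-focusing argument can be made quantitative with a two-conductor caricature: treat the filament and the insulating background as parallel conductors of equal length and thickness. The gap and filament widths below are illustrative placeholders, with the resistivity ratios loosely inspired by the \(\sim\)2-decade (NdNiO\({}_{3}\)-like) and \(\sim\)1-decade (SmNiO\({}_{3}\)-like) drops across the IMT:

```python
def filament_current_fraction(w, W, rho_met, rho_ins):
    """Fraction of the bias current carried by a filament of width w inside a
    gap of width W, modeling filament and insulating background as two
    parallel conductors of equal length and thickness."""
    g_fil = w / rho_met          # conductance of the filament
    g_ins = (W - w) / rho_ins    # conductance of the leaky insulating background
    return g_fil / (g_fil + g_ins)

# 2 um filament in a 20 um gap, for two resistivity contrasts across the IMT
for ratio in (100.0, 10.0):
    print(ratio, round(filament_current_fraction(2.0, 20.0, 1.0, ratio), 3))
```

With a 100x contrast the filament carries over 90% of the current; at 10x the leaky background already carries almost half, consistent with the weaker confinement argued above.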
Figure 4: Resistor network simulations and current focusing effect. (a) Schematic representation of the simulated resistor network (size \(W\) x \(L\)). Low resistance electrodes (yellow) define an oxide gap where individual nodes could be either metallic (orange) or insulating (dark brown), as described in the methods section of the supplementary material. (b) Simulated voltage vs. current curves for three different temperatures: 18 a.u. (black), 70 a.u. (red) and 90 a.u. (blue). Inset: Simulated resistance vs. temperature of the device. (c) Simulated, two-dimensional resistivity plots for all combinations of three currents (1, 3 and 5 a.u.), and three device base temperatures (18, 70 and 90 a.u.). Resistivity is plotted in logarithmic colorscale. (d) Simulated, two-dimensional temperature plots for all combinations of three currents (1, 3 and 5 a.u.), and three device base temperatures (18, 70 and 90 a.u.). Temperature is plotted in linear colorscale and normalized to the transition temperature (120 a.u.).
The temperature dependence of the resistivity implies that as temperature increases, so does power dissipation outside the filament, increasing its width. For instance, the voltage at _I_=5 a.u. is half as large for _T_=90 a.u. as for _T_=18 a.u. (Fig. 4b), but the insulating resistivity is nearly 8 times smaller. As a result, power dissipation outside the filament is increased by a factor of 2 at _T_=90 a.u. vs _T_=18 a.u. These differences in resistivity explain not only the temperature dependence of the filament size, but also the differences observed between NdNiO\({}_{3}\) and SmNiO\({}_{3}\). The former has a larger resistivity change across the IMT (Fig. 1a) [6], and is therefore expected to focus current into thinner filaments. Furthermore, the presence of single or multiple percolating filaments (Fig. 3) can be understood in a similar light. For \(\rho_{\rm ins}\gg\rho_{\rm met}\), the current will crowd through the first hotspot that metallizes, favoring a _winner-takes-all_ scenario.
But the resistivity drop across the IMT is not the only mechanism that can explain the differences in filament size. Thermal properties are also expected to play a crucial role, and they can provide a similarly satisfactory explanation. The thermal conductivity of LaAlO\({}_{3}\) is not constant, but decreases from 0.6-0.7 W/cm\(\cdot\)K at 60 K to \(\sim\)0.15 W/cm\(\cdot\)K at 300 K [39]. This means that the substrate can evacuate heat more efficiently at lower temperatures, keeping the areas surrounding the filament cold. The filament is therefore confined to a smaller region, as simulations have recently shown [18]. Therefore, thermal properties can likewise explain the smaller filament size at lower temperatures, as well as the overall differences between NdNiO\({}_{3}\) and SmNiO\({}_{3}\), which have IMTs in very different temperature ranges.
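The heat-evacuation argument has a simple lengthscale behind it: in a 1-D steady-state balance \(\kappa d\,T'' - g\,(T-T_{\text{base}}) + q(x) = 0\), where \(g\) lumps heat evacuation into the substrate, the lateral temperature profile decays over \(\lambda=\sqrt{\kappa d/g}\), so a better heat sink confines the hot region. A numerical sketch in arbitrary units (this fin-type model and all its values are illustrative, not the paper's simulation):

```python
import numpy as np

def temperature_profile(g, kappa_d=1.0, n=801, L=60.0):
    """Steady-state 1-D balance  kappa_d*T'' - g*T + q(x) = 0,  with T measured
    relative to the base temperature.  g models heat evacuation into the
    substrate; the free decay length is lambda = sqrt(kappa_d / g)."""
    x = np.linspace(-L / 2.0, L / 2.0, n)
    dx = x[1] - x[0]
    q = np.where(np.abs(x) < 0.5, 1.0, 0.0)        # narrow line source (filament)
    main = np.full(n, -2.0 * kappa_d / dx**2 - g)  # tridiagonal finite differences
    off = np.full(n - 1, kappa_d / dx**2)
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    T = np.linalg.solve(A, -q)                     # T = 0 imposed at the edges
    return x, T

def hot_width(x, T):
    """Full width at half maximum of the hot region."""
    hot = x[T > 0.5 * T.max()]
    return hot.max() - hot.min()

for g in (0.04, 1.0):   # efficient vs poor heat sink into the substrate
    x, T = temperature_profile(g)
    print(g, round(hot_width(x, T), 2))
```

A sink that is 25 times better (larger \(g\)) shrinks the hot region by roughly a factor of 4 here; the box-shaped source keeps the ratio from being exactly \(\sqrt{25}\).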
Unfortunately, it is difficult to disentangle the contributions of the resistivity change across the IMT and the substrate thermal conductivity. Both parameters decrease as temperature is increased. Within the nickelate family, it is observed that as the transition temperature increases (with the reduction of the rare-earth radius), the resistivity drop across the IMT decreases [2, 6]. A similar trend is observed in the V\({}_{2}\)O\({}_{3}\), VO\({}_{2}\), V\({}_{3}\)O\({}_{5}\) family [10]. Therefore, it is not feasible to compare a material featuring a high-temperature IMT and a large resistivity change with another system having a low-temperature IMT and a small resistivity change.
A way to overcome this drawback is to compare, for the same material, samples with different IMT quality. We fabricated two NdNiO\({}_{3}\) films: one with a high quality IMT, and a second one subjected to a 30-minute annealing at 120\({}^{\circ}\)C in vacuum. The annealing creates oxygen vacancies, reducing the resistivity change across the IMT, as seen in Fig. 5a. This has clear consequences for the resistive switching properties, which are smoother for the annealed sample (Fig. 5b). Furthermore, there are notable differences in the filament, as can be seen in Figs. 5c and 5d. The annealed device shows much less contrast and homogeneity within the metallic area, perhaps due to the formation of multiple filaments. This differs from the well-defined, intense filament in the non-annealed sample, and points to a smaller degree of metallization and current focusing. This confirms that the resistivity change across the IMT is a key parameter controlling resistive switching and filament characteristics, although it does not rule out important contributions from thermal conductivity.
Figure 5: Resistive switching and filament characteristics in pristine and annealed NdNiO\({}_{3}\). (a) Two-probe device resistance vs temperature on pristine (blue) and annealed (red) NdNiO\({}_{3}\) films. (b) Voltage vs current at \(T=50\) K for a pristine (blue) and annealed (red) sample. (c) Wide-field microscopy image of filament formation in pristine (left) and annealed (right) NdNiO\({}_{3}\). \(T=50\) K and \(I=20\) mA in both cases. Reflectivity was normalized using an area far from the gap region. (d) Zoomed image of the central part of the filament for pristine (top) and annealed (bottom) NdNiO\({}_{3}\). \(T=50\) K and \(I=20\) mA in both cases.
## Conclusions
In summary, we have used a combination of _in-operando_ standard and scanning near-field optical microscopies to study the characteristic lengths of filament formation during the electrically-induced IMT. We found that, in addition to bias current, filament width is strongly dependent on base temperature and the specific material. Lower base temperatures yield thinner filaments, increasing current density and local temperature, leading to sharper resistive switching properties. With the aid of resistor network simulations, we discussed the material properties that control filament size, underlining the importance of the resistivity drop across the IMT as well as the substrate's thermal conductivity.
Our results support recent works concerning another fundamental aspect of the electrically-induced IMT: switching dynamics [10, 11]. It was proposed that a large resistivity ratio between insulator and metal would induce higher current focusing, increasing local Joule heating within the filament and explaining the different switching timescales observed in V\({}_{2}\)O\({}_{3}\), VO\({}_{2}\), V\({}_{3}\)O\({}_{5}\), NdNiO\({}_{3}\) and SmNiO\({}_{3}\). However, direct evidence of this has been lacking so far. The present work provides a systematic study of the characteristic lengthscales of the electrically-induced IMT, unveiling a strong connection between resistivity, thermal properties, filament size and resistive switching characteristics. The mechanisms outlined here are simple and general, and could be applicable to other types of resistive switching, such as ReRAM. Considered together with recent developments in the field [10, 11, 21], our work completes a simple and unified picture of the length- and timescales of filament nucleation, growth and relaxation, and underlines their importance for developing new technologies based on the IMT.
## Acknowledgements
The authors thank Marco Lopes for his support during the fabrication and measurement of these samples. The sample fabrication and project coordination were funded by the Swiss National Science Foundation through an Ambizione Fellowship (#PZ00P2_185848). The oxide growth was supported by the European Research Council under the European Union's Seventh Framework Program (FP7/2007-2013)/ERC Grant Agreement 319286 Q-MAC and the Swiss National Science Foundation Project no. 200020-179155. W.R. was supported by the U.S. Office of Naval Research through the NICOP Grant N62909-21-1-2028. s-SNOM measurements were supported by the Swiss National Science Foundation through a Research Grant #200020_201096. Simulations were funded by the French ANR project "MoMA" ANR-19-CE30-0020. T. L. acknowledges support by the Cusanuswerk, Bischöfliche Studienförderung. J.d.V acknowledges support from the Spanish Ministry of Science through a Ramón y Cajal Fellowship (#RYC2021-030952-I).
## References
* [1] F. J. Morin, Oxides Which Show a Metal-to-Insulator Transition at the Neel Temperature, _Phys. Rev. Lett._**3**, 34 (1959).
* [2] J. B. Torrance, P. Lacorre, A. I. Nazzal, E. J. Ansaldo, and C. Niedermayer, Systematic study of insulator-metal transitions in perovskites RNiO\({}_{3}\) (R=Pr,Nd,Sm,Eu) due to closing of charge-transfer gap, _Phys. Rev. B_**45**, 8209 (1992).
* [3] M. Imada, A. Fujimori, and Y. Tokura, Metal-insulator transitions, _Rev. Mod. Phys._**70**, 1039 (1998).
* [4] S. Lupi, L. Baldassarre, B. Mansart, A. Perucchi, A. Barinov, P. Dudin, E. Papalazarou, F. Rodolakis, J. P. Rueff, J. P. Itie, S. Ravy, D. Nicoletti, P. Postorino, P. Hansmann, N. Parragh, A. Toschi, T. Saha-Dasgupta, O. K. Andersen, G. Sangiovanni, K. Held, and M. Marsi, A microscopic view on the Mott transition in chromium-doped V\({}_{2}\)O\({}_{3}\), _Nat. Commun._**1**, 105 (2010).
* [5] J. H. Park, J. M. Coy, T. Serkan Kasirga, C. Huang, Z. Fei, S. Hunter, and D. H. Cobden, Measurement of a solid-state triple point at the metal-insulator transition in VO\({}_{2}\), _Nature_**500**, 431 (2013).
* [6] S. Catalano, M. Gibert, J. Fowlie, J. Íñiguez, J. M. Triscone, and J. Kreisel, Rare-earth nickelates RNiO\({}_{3}\): thin films and heterostructures, _Reports Prog. Phys._**81**, 046501 (2018).
* [7] G. Stefanovich, A. Pergament, and D. Stefanovich, Electrical switching and Mott transition in VO\({}_{2}\), _J. Phys. Condens. Matter_**12**, 8837 (2000).
* [8] G. Seo, B. J. Kim, C. Ko, Y. Cui, Y. W. Lee, J. H. Shin, S. Ramanathan, and H. T. Kim, Voltage-Pulse-Induced Switching Dynamics in VO\({}_{2}\), _IEEE Electron Device Lett._**32**, 1582 (2011).
* [9] P. Diener, E. Janod, B. Corraze, M. Querre, C. Adda, M. Guilloux-Viry, S. Cordier, A. Camjayi, M. Rozenberg, M. P. Besland, and L. Cario, How a dc Electric Field Drives Mott Insulators Out of Equilibrium, _Phys. Rev. Lett._**121**, 016601 (2018).
* [10] J. del Valle, N. M. Vargas, R. Rocco, P. Salev, Y. Kalcheim, P. N. Lapa, C. Adda, M.-H. Lee, P. Y. Wang, L. Fratino, M. J. Rozenberg, and I. K. Schuller, Spatiotemporal characterization of the field-induced insulator-to-metal transition, _Science_**373**, 907 (2021).
* [11] J. Del Valle, R. Rocco, C. Dominguez, J. Fowlie, S. Gariglio, M. J. Rozenberg, and J. M. Triscone, Dynamics of the electrically induced insulator-to-metal transition in rare-earth nickelates, _Phys. Rev. B_**104**, 165141 (2021).
* [12] A. Zimmers, L. Aigouy, M. Mortier, A. Sharoni, S. Wang, K. G. West, J. G. Ramirez, and I. K. Schuller, Role of Thermal Heating on the Voltage Induced Insulator-Metal Transition in VO\({}_{2}\), _Phys. Rev. Lett._**110**, 056601 (2013).
* [13] S. Kumar, M. D. Pickett, J. P. Strachan, G. Gibson, Y. Nishi, and R. S. Williams, Local Temperature Redistribution and Structural Transition During Joule-Heating-Driven Conductance Switching in VO\({}_{2}\), _Adv. Mater._**25**, 6128 (2013).
* [14] S. Guenon, S. Scharinger, S. Wang, J. G. Ramirez, D. Koelle, R. Kleiner, and I. K. Schuller, Electrical breakdown in a V\({}_{2}\)O\({}_{3}\) device at the insulator-to-metal transition, _Europhys. Lett._**101**, 57003 (2013).
* [15] H. Madan, M. Jerry, A. Pogrebnyakov, T. Mayer, and S. Datta, Quantitative Mapping of Phase Coexistence in Mott-Peierls Insulator during Electronic and Thermally Driven Phase Transition, _ACS Nano_**9**, 2009 (2015).
* [16] S. Kumar and R. S. Williams, Separation of current density and electric field domains caused by nonlinear electronic instabilities, _Nat. Commun._**9**, 2030 (2018).
* [17] M. Lange, S. Guenon, Y. Kalcheim, T. Luibrand, N. M. Vargas, D. Schwebius, R. Kleiner, I. K. Schuller, and D. Koelle, Imaging of Electrothermal Filament Formation in a Mott Insulator, _Phys. Rev. Applied_**16**, 54027 (2021).
* [18] R. Rocco, J. Del Valle, H. Navarro, P. Salev, I. K. Schuller, and M. Rozenberg, Exponential Escape Rate of Filamentary Incubation in Mott Spiking Neurons, _Phys. Rev. Applied._**17**, 24028 (2022).
* [19] C. Adda, M.-H. Lee, Y. Kalcheim, P. Salev, R. Rocco, N. M. Vargas, N. Ghazikhanian, C.-P. Li, G. Albright, M. Rozenberg, and I. K. Schuller, Direct Observation of the Electrically Triggered Insulator-Metal Transition in V\({}_{3}\)O\({}_{5}\) far below the Transition Temperature, _Phys. Rev. X_**12**, 11025 (2022).
* [20] J. J. Yang, D. B. Strukov, and D. R. Stewart, Memristive devices for computing, _Nat. Nanotechnol._**8**, 13 (2013).
* [21] J. del Valle, P. Salev, F. Tesler, N. M. Vargas, Y. Kalcheim, P. Wang, J. Trastoy, M. H. Lee, G. Kassabian, J. G. Ramirez, M. J. Rozenberg, and I. K. Schuller, Subthreshold firing in Mott nanodevices, _Nature_**569**, 388 (2019).
* [22] M. D. Pickett, G. Medeiros-Ribeiro, and R. S. Williams, A scalable neuristor built with Mott memristors, _Nat. Mater._**12**, 114 (2013).
* [23] M. Ignatov, M. Ziegler, M. Hansen, A. Petraru, and H. Kohlstedt, A memristive spiking neuron with firing rate coding, _Front. Neurosci._**9**, 376 (2015).
* [24] P. Stoliar, J. Tranchant, B. Corraze, E. Janod, M.-P. P. Besland, F. Tesler, M. Rozenberg, and L. Cario, A Leaky-Integrate-and-Fire Neuron Analog Realized with a Mott Insulator, _Adv. Funct. Mater._**27**, 1604740 (2017).
* [25] J. del Valle, P. Salev, Y. Kalcheim, and I. K. Schuller, A caloritronics-based mott neuristor, _Sci. Rep._**10**, 4292 (2020).
* [26] W. Yi, K. K. Tsang, S. K. Lam, X. Bai, J. A. Crowell, and E. A. Flores, Biological plausibility and stochasticity in scalable VO\({}_{2}\) active memristor neurons, _Nat. Commun._**9**, 4661 (2018).
* [27] S. M. Bohaichuk, S. Kumar, G. Pitner, C. J. McClellan, J. Jeong, M. G. Samant, H. S. P. Wong, S. S. P. Parkin, R. S. Williams, and E. Pop, Fast Spiking of a Mott VO\({}_{2}\)-Carbon Nanotube Composite Device, _Nano Lett._**19**, 6751 (2019).
* [28] S. Oh, Y. Shi, J. del Valle, P. Salev, Y. Lu, Z. Huang, Y. Kalcheim, I. K. Schuller, and D. Kuzum, Energy-efficient Mott activation neuron for full-hardware implementation of neural networks, _Nat. Nanotechnol._**16**, 680 (2021).
* [29] S. Kumar, J. P. Strachan, and R. S. Williams, Chaotic dynamics in nanoscale NbO\({}_{2}\) Mott memristors for analogue computing, _Nature_**548**, 318 (2017).
* [30] M. Jerry, K. Ni, A. Parihar, A. Raychowdhury, and S. Datta, Stochastic Insulator-to-Metal Phase Transition-Based True Random Number Generator, _IEEE Electron Device Lett._**39**, 139 (2018).
* [31] J. del Valle, P. Salev, S. Gariglio, Y. Kalcheim, I. K. Schuller, and J.-M. Triscone, Generation of Tunable Stochastic Sequences Using the Insulator-Metal Transition, _Nano Lett._**22**, 1251 (2022).
* [32] S. Cueff, J. John, Z. Zhang, J. Parra, J. Sun, R. Orobtchouk, S. Ramanathan, and P. Sanchis, Optical switching in hybrid VO\({}_{2}\)/Si waveguides thermally triggered by lateral microheaters, _APL Photonics_**5**, 110901 (2020).
* [33] G. Li, D. Xie, H. Zhong, Z. Zhang, X. Fu, Q. Zhou, Q. Li, H. Ni, J. Wang, E. jia Guo, M. He, C. Wang, G. Yang, K. Jin, and C. Ge, Photo-induced non-volatile VO\({}_{2}\) phase transition for neuromorphic ultraviolet sensors, _Nat. Commun._**13**, 1729 (2022).
* [34] I. Olivares, J.-P. Locquet, J. Parra, L. D. Sanchez, M. Menghini, P. Sanchis, and P. Homm, Experimental demonstration of a tunable transverse electric pass polarizer based on hybrid VO\({}_{2}\)/silicon technology, _Opt. Lett._**43**, 3650-3653 (2018).
* [35] K. J. Miller, K. A. Hallman, R. F. Haglund Jr, S. M. Weiss, Q. Xu, B. Schmidt, S. Pradhan, and M. Lipson, Silicon waveguide optical switch with embedded phase change material, _Opt. Express_**25**, 26527-26536 (2017).
* [36] D. Lee, J. Lee, K. Song, F. Xue, S. Y. Choi, Y. Ma, J. Podkaminer, D. Liu, S. C. Liu, B. Chung, W. Fan, S. J. Cho, W. Zhou, J. Lee, L. Q. Chen, S. H. Oh, Z. Ma, and C. B. Eom, Sharpened VO\({}_{2}\) Phase Transition via Controlled Release of Epitaxial Strain, _Nano Lett._**17**, 5614 (2017).
* [37] M. Lange, S. Guenon, F. Lever, R. Kleiner, and D. Koelle, A high-resolution combined scanning laser and widefield polarizing microscope for imaging at temperatures from 4 K to 300 K, _Rev. Sci. Instrum._**88**, 123705 (2017).
* [38] K. W. Post, A. S. McLeod, M. Hepting, M. Bluschke, Y. Wang, G. Cristiani, G. Logvenov, A. Charnukha, G. X. Ni, P. Radhakrishnan, M. Minola, A. Pasupathy, A. V Boris, E. Benckiser, K. A. Dahmen, E. W. Carlson, B. Keimer, and D. N. Basov, Coexisting first- and second-order electronic phase transitions in a correlated oxide, _Nat. Phys._**14**, 1056 (2018).
* [39] W. Schnelle, R. Fischer, and E. Gmelin, Specific heat capacity and thermal conductivity of NdGaO\({}_{3}\) and LaAlO\({}_{3}\) single crystals at low temperatures, _J. Phys. D. Appl. Phys._**34**, 846 (2001).
Supplementary Information
**Characteristic lengthscales of the electrically-induced insulator-to-metal transition**
Theodor Luibrand\({}^{\dagger 1}\), Adrien Bercher\({}^{\dagger 2}\), Rodolfo Rocco\({}^{\dagger 3}\), Farnaz Tahouni-Bonab\({}^{1}\), Lucia Varbaro\({}^{2}\), Carl Willem Rischau\({}^{2}\), Claribel Dominguez\({}^{2}\), Yixi Zhou\({}^{2}\), Weiwei Luo\({}^{2}\), Soumen Bag\({}^{3}\), Lorenzo Fratino\({}^{3,4}\), Reinhold Kleiner\({}^{1}\), Stefano Gariglio\({}^{2}\), Dieter Koelle\({}^{1}\), Jean-Marc Triscone\({}^{2}\), Marcelo J. Rozenberg\({}^{3}\), Alexey B. Kuzmenko\({}^{2}\), Stefan Guenon\({}^{1}\) and Javier del Valle\({}^{*2,5}\)
\({}^{1}\)Physikalisches Institut, Center for Quantum Science (CQ) and LISA+, Eberhard Karls Universitat Tubingen, Auf der Morgenstelle 14, Tubingen 72076, Germany
\({}^{2}\)Department of Quantum Matter Physics, University of Geneva, 24 Quai Ernest-Ansermet, 1211 Geneva, Switzerland
\({}^{3}\)Laboratoire de Physique des Solides, UMR8502 CNRS - Universite Paris-Sud, Universite Paris-Saclay, 91405 Orsay Cedex, France
\({}^{4}\)Laboratoire de Physique Theorique et Modelisation, CNRS UMR 8089, CY Cergy Paris Universite, 95302 Cergy-Pontoise Cedex, France
\({}^{5}\)Department of Physics, University of Oviedo, C/ Federico Garcia Lorca 18, 33007 Oviedo, Spain
\({}^{\dagger}\)These authors contributed equally to this work
*Corresponding author: [email protected]
## 1 Methods
### Thin film and device fabrication
We grew NdNiO\({}_{3}\) and SmNiO\({}_{3}\) oxide films on (001) oriented LaAlO\({}_{3}\) substrates using off-axis magnetron sputtering in an Ar:O\({}_{2}\) (3.5:1) mixture at a pressure of 180 mTorr and a substrate temperature of 460 \({}^{\circ}\)C. Films are \(\sim\)40-45 nm thick and grow epitaxially, as confirmed by X-ray diffraction (Figures S1c and S1d).
For microdevice fabrication, a combination of techniques was used. First, we patterned isolated NdNiO\({}_{3}\) and SmNiO\({}_{3}\) islands using optical lithography and Ar ion beam milling. This allows us to measure each device independently, since it is electrically isolated from the others. After this we patterned Pt electrodes on top of these islands. For that we used optical lithography followed by on-axis Pt sputtering at room temperature and a lift-off in acetone. Pt thickness is around 40 nm and the gap size is 20 \(\upmu\)m x 20 \(\upmu\)m. For the s-SNOM measurements, a further lithographic step was used. Optical lithography does not create smooth electrode edges. This is very challenging for SNOM, since the tip is tripped by the electrode irregularities. To improve this, we used electron beam lithography and a second Pt evaporation to define 10 \(\upmu\)m x 10 \(\upmu\)m electrodes with smooth edges.
### _In operando_ standard optical microscopy
We used an optical wide-field microscope that facilitates simultaneous imaging and electrical transport measurements [2]. The device under investigation is mounted in vacuum, on the cold finger of a liquid Helium continuous flow cryostat with a temperature range between 4.2 K and 300 K. The microscope has a spatial resolution of 500 nm, the illumination is a monochromatic
LED with a wavelength of 532 nm, and the maximum field of view is approximately 500 \(\mu\)m x 500 \(\mu\)m.
The electric transport properties were measured in a two-probe configuration. For NdNiO\({}_{3}\), we used a Keithley 2400 SourceMeter configured as a current source, whereas we used a highly stable self-built current source for SmNiO\({}_{3}\).
_Image Processing_: For NdNiO\({}_{3}\), the grey values were normalized to a NdNiO\({}_{3}\) area that is not influenced by the resistive switching (not in-between the electrodes). For SmNiO\({}_{3}\), all the images are differential: the image at zero bias current is subtracted from the image at each current.
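The two processing steps described above can be sketched with simple array operations; the reference-region mask and the synthetic frames used below are illustrative assumptions, not the actual data.

```python
import numpy as np

def normalize(img, ref_mask):
    """Normalize grey values to the mean of a region that is not
    influenced by the resistive switching (NdNiO3 processing)."""
    return img / img[ref_mask].mean()

def differential(img, img_zero_bias):
    """Subtract the zero-bias frame to isolate bias-induced changes
    (SmNiO3 processing)."""
    return img.astype(float) - img_zero_bias.astype(float)
```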
### _In operando_ s-SNOM
A cryogenic s-SNOM system (cryo-neaSNOM from neaspec/attocube GmbH) was used for nanoscopic imaging of the filaments in the devices. Infrared radiation from a Quantum Cascade Laser (Daylight Solutions) was focused at a metal-coated AFM tip (ARROW-NCPt-50 from NanoAndMore GmbH), which was grounded in order to reduce the electrostatic interaction between the tip and the sample. Despite that, we had to avoid applying voltages higher than 10 V in the microdevices, as higher voltages would disturb the AFM in tapping mode. The tip size determines the spatial resolution (20 nm in this case). Pseudoheterodyne detection allows separating far-field and near-field contributions to the signal by using higher-order tapping harmonics (the 3rd harmonic is used in the present paper). The detected near-field signal has an excellent spatial contrast between the insulating and metallic phases because of the large change of the optical conductivity across the IMT. More information about the s-SNOM operation can be found in [3].
### Resistor network simulations
In the simulations presented in this work we use a phenomenological mesoscopic model known as the Mott Resistor Network [4, 5]. The model describes the material as a matrix of cells, each containing four resistors, which connect the cell to its four nearest neighbors. Each cell corresponds to a small region of the material of the order of 10 nm. This scale is chosen in order to define a phase for the cell, which can be insulating or metallic. At first, all the resistors are initialized to a high insulating value, and all the cells are in the insulator phase. A voltage is applied to the mesh through the metallic electrodes that are situated at the top and the bottom, and currents begin to circulate in the resistor network. These currents can be computed, knowing the initial resistance and the applied voltage, using Kirchhoff's laws. When the current \(I\) flows through the resistors \(R_{ij}\), these generate heat according to Joule's law, with power \(P=I^{2}R\). The heat generated by a cell is given by the sum of the contributions of its four resistors
\[P_{ij}(t)=\left(I_{1}^{2}(t)+I_{2}^{2}(t)+I_{3}^{2}(t)+I_{4}^{2}(t)\right)R_{ ij}(t)\]
where \(t\) indicates time in units of the simulation time-step, \(ij\) are the indexes which identify the cell, \(P_{ij}\) is the power generated by the cell, \(R_{ij}\) is the resistance of the four resistors (which are always assumed to share the same value) and \(I_{1},I_{2},I_{3},I_{4}\) are the four currents flowing through them. Setting to unity the geometrical dimension of the cell, we can identify the resistivity of the cell with \(R_{ij}\). The temperature of the cell will be the result of two contributions: Joule heating and
a dissipative term that includes the dissipation to the nearest neighbour cells and the dissipation to a substrate at a fixed temperature \(T_{0}\), with which all the cells are in contact. Therefore, using the heat transfer equation, we can write the temperature of the cell as follows
\[T_{ij}(t)=T_{ij}(t-1)+\frac{P_{ij}(t)}{C}-\frac{K}{C}\Bigg{(}5T_{ij}(t-1)-\sum_ {kl}^{NN}T_{kl}(t-1)-T_{0}\Bigg{)}\]
where \(K\) is the thermal conductivity, \(C\) the thermal capacity and the sum with indexes \(kl\) runs over the nearest neighbour cells. We note that we have made the non-essential assumption of choosing the same thermal conductivity for the dissipation to the substrate and to the nearest neighbours, and that the time-step of the simulation \(\Delta t\) is set to unity.
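The heating and temperature updates above can be sketched as one explicit time-step over the cell grid; the grid size and the values of \(K\), \(C\) and \(T_{0}\) below are illustrative assumptions, and edge cells would need proper boundary handling in a full implementation.

```python
import numpy as np

def thermal_step(T, P, K, C, T0):
    """One time-step of the cell temperature T_ij: Joule heating P/C minus
    dissipation to the four nearest neighbours and to a substrate held at
    T0, with the simulation time-step set to unity (interior cells only)."""
    nn_sum = np.zeros_like(T)
    nn_sum[1:, :] += T[:-1, :]   # neighbour above
    nn_sum[:-1, :] += T[1:, :]   # neighbour below
    nn_sum[:, 1:] += T[:, :-1]   # neighbour left
    nn_sum[:, :-1] += T[:, 1:]   # neighbour right
    return T + P / C - (K / C) * (5.0 * T - nn_sum - T0)
```

With zero dissipated power and all cells at the substrate temperature, an interior cell stays put, as the balance of the five dissipation terms cancels.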
The first order transition of a cell from the insulator to the metal phase (and vice-versa) is described as a thermally activated behavior with a probability that depends on the temperature of the cell according to the following Arrhenius-like law
\[p_{ij}^{a\to b}(t)=\exp\left(\frac{-E_{B}^{a\to b}\big{(}T_{ij}\big{)}}{T_{ij}(t)}\right)\]
where \(a\) and \(b\) are the states of the cell (insulator or metal) and \(E_{B}\) is the energy barrier that separates the two corresponding local minima, as described by the following Landau-type free energy (which is appropriate for a \(1^{\mathrm{st}}\)-order thermally driven transition)
\[f(\eta)=h\eta+p\eta^{2}+c\eta^{4}\]
\[h=h_{1}\frac{T-T_{c}}{T_{c}}+h_{2}\]
\[p=p_{1}\frac{T-T_{c}}{T_{c}}+p_{2}\]
where \(\eta\) is the order parameter and \(T_{c}\), \(h_{1}\), \(h_{2}\), \(p_{1}\) and \(p_{2}\) are constants. The resistivity of the cell is then chosen according to the state of the cell: low and constant (\(\rho_{met}\)) in the metal state and high and temperature dependent in the insulator state (\(\rho_{ins}(T)\)). In particular, we choose Mott's equation for variable range hopping [6] to describe the temperature dependence of the resistivity in the insulating state, since it has already been used to fit the resistivity of NdNiO\({}_{3}\) samples [7, 8]
\[\rho_{ins}(T)=\rho_{0}e^{\Delta\big{(}\frac{1}{T}-\frac{1}{T_{IMT}}\big{)}^{1 /4}}\]
where \(\Delta\) is a constant, \(T_{IMT}\) is the metal-insulator transition temperature and \(\rho_{0}=\rho(T_{IMT})\) is the resistivity at the transition temperature. Nevertheless, the specific choice of the functional form does not change the main qualitative features of the results. Once the resistivity of the cell has been computed we can update the resistance of the resistors within it. When all the cells have been updated, the time-step is increased by one and the simulation continues as described above, starting again from the computation of Kirchhoff currents.
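The stochastic phase update and the insulating-state resistivity can be sketched as below; the barrier and the VRH parameters (\(\rho_{0}\), \(\Delta\), \(T_{IMT}\)) are illustrative placeholders rather than fitted values, and the VRH expression applies for \(T<T_{IMT}\).

```python
import numpy as np

rng = np.random.default_rng(0)

def try_switch(state, T_cell, E_B):
    """Arrhenius-like thermally activated switching: flip the cell state
    (0 = insulator, 1 = metal) with probability exp(-E_B / T_cell)."""
    return 1 - state if rng.random() < np.exp(-E_B / T_cell) else state

def rho_ins(T, rho0=1.0, Delta=50.0, T_imt=150.0):
    """Mott variable-range-hopping resistivity of the insulating state:
    rho0 * exp(Delta * (1/T - 1/T_IMT)^(1/4)), valid for T < T_IMT."""
    return rho0 * np.exp(Delta * (1.0 / T - 1.0 / T_imt) ** 0.25)
```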
## 2 Filament size vs. Applied current
Apart from temperature and specific material choice (the focus of the paper), filament size depends strongly on the applied current. This is a well-known effect [9-13]. It can be best appreciated in the supplementary videos 1-10. Fig. S2 shows images of a NdNiO\({}_{3}\) microdevice at different points of a _V-I_ measurement cycle, for a temperature \(T=60\) K. As can be seen, the filament width grows with applied current, appearing and disappearing at the discontinuities of the ramp-up and ramp-down _V-I_ curves, respectively.
## 3 Self-oscillation of microdevices
Interpretation of filament size is not straightforward for all currents. As can be seen in Fig. 2a and 2d in the main text, for some temperatures there are two voltage discontinuities when current is ramped up (for instance, at \(T=50\) K in NdNiO\({}_{3}\)). In between the two, there is a range of currents where the \(V\)-\(I\) curve is smooth. In that range, the system is not stationary, but rather oscillates between two configurations, one with a percolating filament and one without it. These self-oscillations, which are in the 10 kHz range, are a well-known effect [14, 15] and can be observed with an oscilloscope. The parasitic capacitance and the slow reaction time of the current source are the main factors determining the oscillation regime.
All the analysis of filament size at different temperatures and materials shown in the paper is done for \(I=20\) mA. This is well above the self-oscillation range, where the system is stationary again, so it does not affect our conclusions.
**Supplementary references**
# Laser-Induced Cavitation for Controlling Crystallization from Solution

Nagaraj Nagalingam, Aswin Raghunathan, Vikram Korede, Christian Poelma, Carlas S. Smith, Remco Hartkamp, Johan T. Padding, Huseyin Burak Eral

arXiv:2302.01218v3, published 2023-02-02, http://arxiv.org/abs/2302.01218v3
###### Abstract
We demonstrate that a cavitation bubble initiated by a Nd:YAG laser pulse below breakdown threshold induces crystallization from supersaturated aqueous solutions with supersaturation and laser-energy dependent nucleation kinetics. Combining high-speed video microscopy and simulations, we argue that a competition between the dissipation of absorbed laser energy as latent and sensible heat dictates the solvent evaporation rate and creates a momentary supersaturation peak at the vapor-liquid interface. The number and morphology of crystals correlate to the characteristics of the simulated supersaturation peak.
Controlling crystallization from solution, which is central to technological applications ranging from nanomaterial synthesis to pharmaceutical manufacturing [1; 2; 3], is still challenging our understanding of nucleation [4; 5; 6]. Among the strategies proposed to control kinetics and emerging crystal properties [7; 8; 9], non-photochemical laser-induced nucleation (NPLIN), where one or more unfocused laser pulses trigger accelerated nucleation in supersaturated solutions [10; 11; 12], emerged as a promising approach due to its presumed non-chemical nature and ability to influence polymorphic form [13; 14]. At the reported laser pulse duration (\(\sim\)ns), wavelengths (532/1064 nm) and laser intensity (\(\sim\)MW/cm\({}^{2}\)), neither the solute nor the solvent has sufficiently strong absorption bands to induce photochemical effects. Several putative mechanistic hypotheses, ranging from molecular phenomena relying on (an)isotropic polarization and isotropic electronic polarizability of solute clusters [15] to microscale phenomena based on impurity heating and consequent cavitation, have been proposed in an attempt to explain the observations [16]. However, the exact mechanism behind NPLIN remains elusive [16].
Among the many phenomenological mechanisms proposed for NPLIN [16], the impurity heating hypothesis suggests that laser energy absorbed by inherent insoluble impurities (nanoparticles) locally evaporates the surrounding solvent. It is hypothesized that this light-induced evaporation creates a cavitation bubble triggering nucleation. However, no direct measurements of the hypothesized bubble have been performed due to experimental difficulties. Transient micro vapor bubbles can be created in liquid environments through the absorption of laser pulses by dyes [17], nonlinear absorption [18] and nanoparticles [19; 20]. In the literature, cavitation experiments via multiphoton absorption using focused ultrashort laser pulses (\(\sim\)fs) have been reported to trigger crystal nucleation [21]. However, the viable route to crystal nucleation remains shrouded in speculations involving photochemistry, shockwaves and enhanced solute concentration surrounding the vapor cavity due to evaporation [22]. Despite recent progress [23], attempts to test the impurity heating hypothesis using numerical modeling have been limited by a lack of concomitant experimental data. Additionally, fast fluorescence imaging of a protein in gel solution provided evidence of high-concentration regions surrounding the cavitation bubble, yet it did not quantify the local supersaturation or the laser-energy-dependent kinetics [24].
In this Letter, using high-speed microscopy and 1D finite element simulations, we demonstrate that a laser-induced cavitation bubble can act as a precursor to crystallization from solution. In experiments, using supersaturated aqueous solutions of potassium chloride (KCl), we record the size of the vapor bubbles created, the resulting number and morphology of the crystal(s) formed, and the cumulative nucleation probability at a fixed time lag. Correspondingly, in simulations, we estimate the local temperature, solute concentration and solute supersaturation surrounding the bubble to complement the experiments. The quantitative agreement between experimental and simulated bubble dynamics validates the proposed model. Leveraging the model, we argue that a competition between the dissipation of absorbed laser energy as latent and sensible heat dictates the instantaneous solvent evaporation rate. A spike in the evaporation rate during the cavitation bubble expansion creates a momentary supersaturation peak at the vapor-liquid interface (hereinafter referred to as "interface"). The experimentally acquired nucleation probabilities, number, and morphology of crystals formed correlate with the characteristics of the short-lived [\(O(\upmu\mathrm{s})\)] supersaturation peak surrounding the bubble obtained from simulations. For the first time, we quantitatively link the likelihood of laser-induced crystal nucleation, with no photochemical reaction expected, to an increase in the solute concentration at the interface.
We use a frequency-doubled Nd:YAG pulsed laser with 532 nm wavelength and 4 ns pulse duration to induce the formation of a vapor bubble. Unlike the classical NPLIN experiments, where the position and composition of impurities are random within the volume irradiated [\(O(100\,\upmu\)l)] using a collimated laser [16], we fix the location of bubble formation by focusing the laser. A small amount of potassium permanganate (KMnO\({}_{4}\)), \(3.26\,\mathrm{mg}\) per \(100\,\mathrm{g}\) water, is added to ensure that a known quantity of laser energy is absorbed by the solution at \(532\,\mathrm{nm}\). Adding this light-absorbing soluble impurity removes a key uncertainty of traditional NPLIN experiments that is critical for the impurity heating hypothesis. The minute amount of KMnO\({}_{4}\) added is comparable to soluble ppm-level impurities in NPLIN experiments [26] and does not alter the solubility of KCl (see SI [27]). KCl solutions with a supersaturation range of \(0.999-1.029\) were used (solubility = \(35.97\,\mathrm{g}/100\,\mathrm{g}\)-H\({}_{2}\)O at \(25^{\circ}\mathrm{C}\)) with no pre-treatment for dissolved gases or filtration. A \(40\times\) objective (numerical aperture = 0.6) is employed to both focus the laser and image the sample. Fig. 1 shows the architecture of the inverted microscope, which employs two cameras: a high-speed camera operated at 330,000 frames per second (fps) to record the evolution of the bubble size and a low-speed camera operated at 50 fps that records the appearance of crystals. The laser is focused to a point within \(10\,\upmu\)m above the bottom surface. All hemispherical bubbles formed in this work have a standoff distance, \(h/R_{\mathrm{max}}>10\), to prevent the effect of side walls on the bubble dynamics [28]. The cover glass acts as a plane of symmetry for the semi-unbounded fluid surrounding the hemispherical bubble, allowing us to analyze the bubble as spherically unbound. Additionally, the negatively buoyant crystals sediment to the bottom, allowing in-situ observation.
A layer of silicone oil (density = \(930\,\mathrm{kg}/\mathrm{m}^{3}\)) floating on top of the supersaturated solution prevents evaporation of the solution.
Figure 2(a) depicts the primary bubble formation, its subsequent expansion and collapse immediately after laser irradiation. The primary bubble then disintegrates into secondary bubbles, followed by the emergence of crystals surrounding the laser focal point (Fig. 2(c)). After the primary bubble collapses, we also observe a complex flow pattern that transports secondary bubbles and crystals. The direction of the resulting flow was observed to be random, consistent with previous observations [29]. Figure 2(b) displays a clear increase in the maximum radius (\(R_{\mathrm{max}}\)) and bubble lifetime with the supplied laser energy (\(E\)). For details on the experimental methodology and validation, see SII [27].
We quantify laser-induced crystallization following the bubble formation by plotting nucleation probability and crystal count for varying laser energy and supersaturation in the bulk, Fig. 3. The cumulative nucleation probability (\(p\)) is defined as the ratio of the number of trials that resulted in crystal formation two minutes after laser irradiation to the number of trials performed. Overall, the nucleation probability increases with increasing laser energy and solution supersaturation in the bulk (\(S_{\rm bulk}\)). From Fig. 3(a), we observe a minimum threshold laser energy for crystal formation related to \(S_{\rm bulk}\) and vice versa, an observation repeatedly reported in NPLIN experiments [22]. We recorded a very low crystallization probability (\(p\leq 0.1\)) for the nearly saturated solution (\(S_{\rm bulk}=0.999\)), as the lack of supersaturation would inhibit crystal growth. We attribute the non-zero \(p\) value to the uncertainty in \(S_{\rm bulk}\) [\(O(10^{-3})\)] pertaining to the variation in room temperature (\(24.8-26.1\,^{\circ}\)C). No experiment was performed beyond \(S_{\rm bulk}=1.029\), as it was difficult to keep the solution stable during handling. In Fig. 3(b), similar to the nucleation probability, we see an increase in the number of crystals formed (\(N\)) with both laser energy and bulk supersaturation above the minimum laser intensity threshold. We predominantly observed cubic crystals, with the probability of finding a rectangular or needle-like crystal increasing with \(E\) and \(S_{\rm bulk}\) (see SIII [27]). This observed change in morphology is in line with previous observations [30; 31], deduced from the limited solvent availability per nucleus. In our experiments, we cannot measure local fluid properties surrounding the bubble, such as temperature and solute concentration, due to the small length and time scales involved. Therefore, we performed numerical simulations to calculate temporal and spatial values of these variables, where the experimentally measured bubble radii and crystal count are used to validate the fluid flow and local supersaturation, respectively.

Figure 1: Sketch of the experimental setup to generate a photothermal microbubble. The setup construction is detailed in our previous work [25]. The green arrow indicates the direction of the laser pulse.

Figure 2: (a) Primary vapor bubble formation using a focused laser pulse of \(30\,\upmu\)J recorded at 330,000 fps with a reduced spatial resolution. The numbers on the right (2-13) represent the frame numbers. (b) Dynamic radius of the hemispherical bubble for different laser energies \(E\). The error bars represent the standard error on the mean of at least 20 independent trials. A bubble radius beyond \(\approx 300\,\upmu\)m exceeded the field of view of the camera. The symbols and lines correspond to experiments and simulations, respectively. (c) Secondary bubbles and emergence of crystals after collapse of the primary vapor bubble surrounding the laser focal spot, visualized at 50 fps using the low-speed camera. The experiment is for \(E=75\,\upmu\)J and \(S=1.019\).

Figure 3: Experimentally observed nucleation statistics: (a) cumulative nucleation probability (\(p\)) and (b) mean crystal count (\(N\)), for different laser energies (\(E\)) and solution supersaturations in the bulk (\(S_{\mathrm{bulk}}\)). The results are for 10 trials, each with a fixed lag of 2 minutes from the time of laser irradiation. The red dotted curve is a guide to the eye representing the threshold where the crystallization probability is \(\geq 0.5\). See SIII [27] for morphologies.
In the numerical simulations, we solve for combined momentum, heat and solute transport. For each phenomenon, the governing equations for an unbound sphere are used due to the plane of symmetry offered by the cover glass. We employ the Rayleigh-Plesset equation [32] to solve for the momentum surrounding the bubble,
\[R\frac{\mathrm{d}^{2}R}{\mathrm{d}t^{2}}+\frac{3}{2}\left(\frac{\mathrm{d}R}{ \mathrm{d}t}\right)^{2}=\frac{1}{\rho_{\rm L}}\left(p_{\rm V}-p_{\infty}-\frac {2\sigma}{R}-\frac{4\mu}{R}\frac{\mathrm{d}R}{\mathrm{d}t}\right), \tag{1}\]
where \(\rho_{\rm L}=1175\,\)kg/m\({}^{3}\) is the solution density, \(p_{\infty}=1.013\,\)bar is the ambient pressure, \(p_{\rm V}\) is the pressure within the bubble, \(\sigma\) is the surface tension, \(\mu\) is the dynamic viscosity of the solution and \(R\) the distance of the interface from the laser focal point. The spherically symmetric heat dissipation surrounding the bubble is modeled using,
\[\frac{\partial T}{\partial t}+\frac{R^{2}}{r^{2}}\frac{\mathrm{d}R}{\mathrm{d }t}\frac{\partial T}{\partial r}=\frac{1}{r^{2}}\frac{\partial}{\partial r} \left(r^{2}\alpha\frac{\partial T}{\partial r}\right), \tag{2}\]
in which \(T\) is the temperature, \(\alpha\) is the thermal diffusivity of the solution and \(r\) (\(>R\)) the radial position from the bubble center. For solute transport, we use an analogous equation to Eq. (2) by substituting \(T\) with \(C^{*}\), the solute concentration in kg/kg of solution, and \(\alpha\) with \(D\), the mass diffusivity of the solute.
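A minimal forward-Euler integration of Eq. (1) illustrates the bubble dynamics; here the internal pressure \(p_{\rm V}\) is held fixed, whereas the full model couples it to the heat and mass balances below. All parameter values are rough, water-like assumptions rather than the fitted ones.

```python
import numpy as np

rho_L = 1175.0     # solution density, kg/m^3 (from the text)
p_inf = 1.013e5    # ambient pressure, Pa
sigma = 0.072      # surface tension, N/m (assumed water-like)
mu = 1.0e-3        # dynamic viscosity, Pa s (assumed water-like)
p_V = 5.0e5        # constant vapor pressure, Pa (illustrative)

R, R_dot, dt = 0.5e-6, 0.0, 1e-10   # initial radius, velocity, time-step
for _ in range(200_000):            # integrate for 20 microseconds
    # Rayleigh-Plesset: R*Rddot + 1.5*Rdot^2 = (p_V - p_inf
    #                   - 2*sigma/R - 4*mu*Rdot/R) / rho_L
    R_ddot = ((p_V - p_inf - 2.0 * sigma / R - 4.0 * mu * R_dot / R) / rho_L
              - 1.5 * R_dot**2) / R
    R_dot += R_ddot * dt
    R += R_dot * dt
```

With a fixed overpressure the bubble grows toward the Rayleigh asymptotic velocity of roughly \(\sqrt{\Delta p/(1.5\rho_{\rm L})}\), giving radii of order \(10^{2}\,\upmu\)m over tens of microseconds, consistent in magnitude with Fig. 2(b).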
For simplicity, we assume the bubble to be a lumped system with an energy balance given by
\[\frac{\mathrm{d}(m_{\rm V}c_{\rm p}T_{\rm V})}{\mathrm{d}t}+\frac{\mathrm{d}m _{\rm V}}{\mathrm{d}t}H_{\rm L}=A_{\rm V}k\left(\frac{\partial T}{\partial r} \right)_{r=R}, \tag{3}\]
where \(m_{\rm V}\), \(A_{\rm V}\) and \(c_{\rm p}\) are the mass, surface area and specific heat capacity of the vapor bubble, respectively. \(H_{\rm L}\) is the latent heat of vaporization and \(k\) the thermal conductivity of the solution. At the interface, we enforce the boundary condition \(T_{\rm V}=T|_{r=R}\) at all times, where \(T_{\rm V}\) is the bubble temperature. The change in mass of the bubble is estimated using the corrected Hertz-Knudsen equation [33], \(\mathrm{d}m_{\rm V}/\mathrm{d}t=-(16A_{\rm V}/9\sqrt{2\pi\mathrm{R}_{\rm g}T})[ p_{\rm V}-p_{\rm sat}(T|_{r=R})]\), where \(\mathrm{R}_{\rm g}\) is the specific gas constant for water vapor and \(p_{\rm sat}(T|_{r=R})\) the saturation pressure of the solution at the interface. The \(p_{\rm V}\) is estimated using the ideal gas law, \(p_{\rm V}V_{\rm V}=m_{\rm V}\mathrm{R}_{\rm g}T_{\rm V}\), where \(V_{\rm V}\) is the bubble volume.
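The interface mass flux and bubble pressure can be sketched as follows; the Antoine-type fit for \(p_{\rm sat}(T)\) below is an illustrative stand-in for pure water, not the solution's actual saturation curve.

```python
import numpy as np

R_g = 461.5  # specific gas constant of water vapor, J/(kg K)

def p_sat(T):
    """Approximate saturation pressure of pure water in Pa
    (Antoine fit, T in kelvin; reasonable between ~280 and ~440 K)."""
    return 10.0 ** (10.196 - 1730.63 / (T - 39.72))

def dm_dt(A_V, T_interface, p_V):
    """Corrected Hertz-Knudsen rate of change of bubble mass (kg/s):
    positive (evaporation) when p_sat at the interface exceeds p_V."""
    return -(16.0 * A_V / (9.0 * np.sqrt(2.0 * np.pi * R_g * T_interface))) \
        * (p_V - p_sat(T_interface))

def p_bubble(m_V, V_V, T_V):
    """Ideal-gas pressure of the vapor inside the bubble."""
    return m_V * R_g * T_V / V_V
```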
Energy is supplied to the system at the start of the simulations by initializing a thermal boundary layer profile surrounding an initial bubble of radius \(R|_{t=0}=0.5\,\upmu\)m. The initial temperature distribution is \(T(\xi)=T_{\infty}+(T_{\rm V}-T_{\infty})\exp[-(\xi/\delta_{\rm T})^{25}]\), where \(T_{\infty}\) is the ambient temperature, \(\delta_{\rm T}\) is the thermal boundary layer thickness and \(\xi=r-R\) is the radial distance from the interface. A high exponent of 25 is used to approximate a step function, while still being smooth enough to avoid numerical instabilities near \(\xi\approx\delta_{\rm T}\). The thermal energy supplied in the simulation is transformed to latent heat (vapor), sensible heat (vapor and liquid) and kinetic energy of the solution. Thus, by varying the \(\delta_{\rm T}\) value, we strictly control the system energy that dictates the bubble dynamics. The initial interface velocity is assumed to be zero and the surrounding solute concentration to be the same as in the bulk. For details on the numerical model and the parameter values used, see SIV [27].
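The near-step initial profile can be evaluated directly; the values of \(T_{\rm V}\) and \(\delta_{\rm T}\) below are illustrative assumptions, not the fitted ones.

```python
import numpy as np

def T_init(xi, T_inf=298.15, T_V=500.0, delta_T=25e-6):
    """Initial boundary-layer profile T(xi) = T_inf +
    (T_V - T_inf) * exp(-(xi / delta_T)**25), with xi = r - R the
    distance from the interface; exponent 25 approximates a step
    that is still smooth near xi ~ delta_T."""
    return T_inf + (T_V - T_inf) * np.exp(-(xi / delta_T) ** 25)
```

The profile stays essentially at \(T_{\rm V}\) well inside the layer and drops to ambient just beyond \(\xi=\delta_{\rm T}\).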
Figure 2(b) shows the numerically obtained bubble size for \(\delta_{\rm T}=21\), 25.5, 29.5, 32.5, 34.5 and 36 \(\upmu\)m, corresponding to the increasing laser energy values from the experiments (see SIV [27] for calculations). The deviation between experiments and simulations in \(R\) for higher energies can be attributed to possible plasma formation due to non-linear absorption [34; 35]. The plasma can initiate high temperatures and pressures, leading to higher interface velocities [36]. Alternatively, it could be the effect of pressure waves reflected by the bottom wall for higher energies. The probability of bubble incidence with and without KMnO\({}_{4}\) in water was investigated for non-linear absorption, which supports the reasoning made for the deviations in \(R\) (see SV [27]).
To get insight into the crystal formation surrounding the bubble, we look at the factors affecting the solute supersaturation using simulation. Figure 4(a,b) shows the temporal evolution of the solute concentration and temperature at the dynamic interface for three different laser energies at fixed bulk supersaturation. Initially, the temperature drops abruptly, in conjunction with a
steep rise in concentration due to high evaporation rates, \(O(100\,\mathrm{kg/(m^{2}s)})\). Then, the decrease in temperature is more gradual, while the decrease in concentration is steep. The drop in temperature can be attributed to heat diffusion away from the interface and advection resulting from bubble dynamics. Similarly, for the solute, there is dilution occurring at the interface due to condensation of the vapor in addition to diffusion and advection. During the latter half of the bubble lifetime, the concentration and temperature have minimal change due to lower driving potentials and short time range, \(O(10~{}\upmu\mathrm{s})\). The temperatures during bubble collapse estimated from the simulations are in good agreement with the empirical calculations (see SII [27]).
Figure 4(c) shows the temporal supersaturation at the interface calculated using profiles given in Fig. 4(a,b). We observe a peak in the local supersaturation ratio when the bubble is rapidly expanding, after which the supersaturation decreases and the interface stays undersaturated (\(S<1\)) within the bubble lifetime. This observation of a momentarily supersaturated state (\(S>1\)), highlighted in the close-up in Fig. 4(c), is a favorable condition for crystal nucleation. Moreover, both the peak supersaturation (\(S_{\mathrm{max}}\)) and the time during which the interface remains supersaturated (\(t_{\mathrm{S}}\)) increase with increasing \(E\). In the above analysis, we only look at the interface since heat diffuses faster than the solute and thus the maximum supersaturation ratio can exist only in the region closest to the bubble, i.e., at the interface. However, this supersaturation ratio at the interface is dynamic and is quantified only when the bubble exists. The induced flow and resulting temperature and solute distribution surrounding the laser focal point after the bubble collapses are outside the scope of this work. The simulated trends observed in Fig. 4(d,e) agree well with the presented experimental results in Fig. 3.
Subsequently, we correlate the simulated crystallization parameters, \(S_{\mathrm{max}}\) and \(t_{\mathrm{S}}\), with the experimentally acquired parameter, \(N\) (Fig. 3b). The nucleation rate (the number of nuclei formed per unit time per unit volume) can be expressed as [37],
\[J\propto S\exp[-16\pi v^{2}\gamma^{3}/(3k_{\mathrm{B}}^{3}T^{3}\log^{2}(S))], \tag{4}\]
where \(\gamma\) is the solute-solution interfacial tension, \(k_{\mathrm{B}}\) is Boltzmann's constant and \(v\) the molecular volume. We relate \(J\propto N/t_{\mathrm{S}}\). Since the size of the bubbles for the time region where \(S>1\) is almost the same within the range of energies used, we leave out the shell volume surrounding the interface in the proportionality for \(J\). Using the slope from Fig. 5(a), we estimate \(\gamma\) in Eq. (4) to be \(3.7^{+0.47}_{-0.65}\,\mathrm{mJ/m^{2}}\) (at \(\approx 185-191\,^{\circ}\mathrm{C}\)). This value, when calculated for \(25\,^{\circ}\mathrm{C}\) (\(3.51\,\mathrm{mJ/m^{2}}\)), is within the reported values of \(2.19\)-\(5.283\,\mathrm{mJ/m^{2}}\)[38; 39; 40] (see SVI [27]). Note that the elevated temperature is also a favorable condition for crystal nucleation in addition to supersaturation (Eq. 4).
Figure 4: (a,b) Simulated temporal change in solute concentration (\(C\)) and temperature (\(T\)) at the interface for \(S_{\mathrm{bulk}}=1.019\) (at \(25^{\circ}\mathrm{C}\)). Since the laser pulse duration (\(4\,\mathrm{ns}\)) is negligible compared to the time scale of the phenomena (\(\upmu\mathrm{s}\)), we consider the energy transfer from the laser (\(E\)) to the solution to be instantaneous at \(t\)=0 (\(x\)-axis is scaled quadratically). (c) The supersaturation ratio calculated using the concentration and temperature plotted in (a) and (b), respectively. The \(x\)-axis scale is quadratic, while the \(y\)-axis scale is cubic. \(t_{\mathrm{S}}\) represents the time period for which \(S>1\). (d) The simulated maximum \(S\) values obtained for all the conditions in this work, similar to the examples from (c). The red dotted curve is the guide to the eye from Fig. 3, representing the crystallization probability \(\geq 0.5\) in experiments. (e) The time period for which the simulated \(S>1\), similar to the examples from (c).
Figure 5: (a) Estimate of nucleation rate \(J\) against simulated peak supersaturation (\(S_{\mathrm{max}}\)). \(J\propto N/t_{\mathrm{S}}\), where \(N\) is the mean crystal count from experiments and \(t_{\mathrm{S}}\) is the time for which \(S>1\) in simulations. (b) Maximum vapor bubble radius against energy supplied. The dotted and dashed lines represent the power law fit for the data from experiments and simulations, respectively. Error bars represent the standard error on the mean.
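The temperature dependence of Eq. (4) can be sketched numerically. This is only an illustration of the proportionality: the physical prefactor is unknown here, and the molecular volume \(v\) and temperature used below are illustrative assumptions, not values taken from this work.

```python
import math

def nucleation_rate(S, T, gamma, v):
    """Relative CNT-style nucleation rate of Eq. (4); the physical
    prefactor is omitted, so only ratios of rates are meaningful."""
    kB = 1.380649e-23  # Boltzmann constant [J/K]
    barrier = 16 * math.pi * v**2 * gamma**3 / (3 * kB**3 * T**3 * math.log(S)**2)
    return S * math.exp(-barrier)

# Illustrative values: gamma ~ 3.7 mJ/m^2 (as estimated above),
# v ~ 6.2e-29 m^3 (assumed molecular volume), T ~ 460 K.
r_low = nucleation_rate(1.05, 460.0, 3.7e-3, 6.2e-29)
r_high = nucleation_rate(1.10, 460.0, 3.7e-3, 6.2e-29)
assert r_high > r_low  # the rate grows steeply with supersaturation
```

Both factors of Eq. (4) act in the same direction for \(S>1\): the linear factor grows and the exponential barrier shrinks, which is why a brief supersaturation spike can dominate nucleation.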
Figure 5(b) is an equivalent representation of Fig. 2(b), showing the dependence of maximum bubble size (\(R_{\mathrm{max}}\)) for varying supplied energies. The closely matching trends between experiments and simulations support the reliability of the boundary conditions and assumptions employed in the simulation. Thus, the control parameter in simulation, \(\delta_{\mathrm{T}}\), characterizes the energy available for a bubble to grow. In relation to the impurity heating mechanism proposed for NPLIN, we predict the size of microbubbles hypothesized to appear around heated nano-impurities using an empirical relation from literature [20] employing Mie theory [41]. Comparing the predicted bubble size to those observed in this work provides a sanity check. Using the nano-impurity size distribution, impurity composition, laser intensity and bulk supersaturation provided by Kacker _et al_[26], the empirically determined bubble size is \(\approx 170\,\upmu\mathrm{m}\) (SVII [27]). This value is in good agreement with the order of magnitude of bubble radius observed in this work for the inferred supersaturation from Kacker _et al_[26].
In summary, we have shown that primary nucleation in supersaturated aqueous KCl solution can be triggered by a thermo-cavitation bubble induced by a single Nd:YAG laser pulse below optical breakdown threshold. The nucleation probability as well as the number and morphology of crystals formed depends on bulk supersaturation and laser energy used. Combining high-speed microscopy experiments and finite element simulations, we propose a nucleation mechanism based on the solute accumulation at the interface due to solvent evaporation into the growing bubble. Simulations reveal that a momentary spike in supersaturation with a lifetime [\(O(\upmu\mathrm{s})\)] proportional to the bulk supersaturation and the supplied laser energy facilitates nucleation. It is argued that the mechanism proposed is distinct from other speculated routes to crystal nucleation in laser-induced cavitation experiments, for example, due to photochemistry [42] and shock waves created above optical breakdown threshold [43] (see SVII [27]). The proposed mechanism, verified by combining experiments and simulations, may shed light on the discussion of the working mechanism(s) behind NPLIN and sonocrystallization via cavitation [44; 45].
This work was funded by LightX project under NWO Open Technology Programme (project number 16714). We thank Dr. D. Irimia and Ing. E.F.J. Overmars for supporting the experiments, and Sara Banovska for supporting the solubility tests. A special thanks to Dr. H.J.M. Kramer, Dr. A.E.D.M. van der Heijden and members of the LightX user committee for the productive discussions.
|
2308.14128 | A Novel Reconfigurable Vector-Processed Interleaving Algorithm for a
DVB-RCS2 Turbo Encoder | Turbo-Codes (TC) are a family of convolutional codes enabling
Forward-Error-Correction (FEC) while approaching the theoretical limit of
channel capacity predicted by Shannons theorem. One of the bottlenecks of a
Turbo Encoder (TE) lies in the non-uniform interleaving stage. Interleaving
algorithms require stalling the input vector bits before the bit rearrangement
causing a delay in the overall process. This paper presents performance
enhancement via a parallel algorithm for the interleaving stage of a Turbo
Encoder application compliant with the DVB-RCS2 standard. The algorithm
efficiently implements the interleaving operation while utilizing attributes of
a given DSP. We will discuss and compare a serial model for the TE, with the
presented parallel processed algorithm. Results showed a speed-up factor of up
to 3.4 Total-Cycles, 4.8 Write and 7.3 Read. | Ohad Boxerman, Moshe Bensimon, Shlomo Greenberg, Yehuda Ben-Shimol | 2023-08-27T15:10:48Z | http://arxiv.org/abs/2308.14128v2 | # A Novel Reconfigurable Vector-Processed Interleaving Algorithm for a DVB-RCS2 Turbo Encoder
###### Abstract
Turbo-Codes (TC) are a family of convolutional codes enabling Forward-Error-Correction (FEC) while approaching the theoretical limit of channel capacity predicted by Shannon's theorem. One of the bottlenecks of a Turbo Encoder (TE) lies in the non-uniform interleaving stage. Interleaving algorithms require stalling the input vector bits before the bit rearrangement causing a delay in the overall process. This paper presents performance enhancement via a parallel algorithm for the interleaving stage of a Turbo Encoder application compliant with the DVB-RCS2 standard. The algorithm efficiently implements the interleaving operation while utilizing attributes of a given DSP. We will discuss and compare a serial model for the TE, with the presented parallel processed algorithm. Results showed a speed-up factor of up to 3.4 Total-Cycles, 4.8 Write and 7.3 Read.
Digital Signal Processing, Digital Video Broadcasting-Return Channel Satellite, Permutations, Turbo Codes, Vector Processor, Very Large Instruction Word.
## I Introduction
Modern technology and the widespread use of communication devices dictate the need for efficient communication protocols, fast data transfers and effective data manipulation. Nowadays, applications such as video broadcasting and satellite communications rely upon fast encoding and calculation tools. Parallelization of data processing requires, in some cases, innovative thinking and memory access routines that differ from the common ordered memory structures, and such approaches are the source of major performance enhancements in many applications. Modern processors use multiple cores for processing data, but in many cases the program that implements the processing algorithm is sequential and unsuited to the multi-core and vector attributes of the processor, hence not exploiting its full potential. For these reasons, there is a need for generic tools and techniques for converting known sequential algorithms into more efficient parallel implementations on the vector and multi-core processors existing in today's market.
### _Error Correction_
Wireless channels introduce noise that corrupts the transmitted information. High data transfer rates therefore require fast and efficient encoding that provides both rapid data transmission and high data integrity. Data integrity is achieved by data encoding that enables both error detection and correction at the receiver, that is, correction of the received data without the need to resend damaged data packets.
TC implements an approach to the mathematical bounds of data rates with respect to the given noise over given channels, found by C.E. Shannon [1, 2]. TC adds redundant bits to the original message to enable Forward Error Correction (FEC), detection and correction of errors at the receiving-end, thus improving data integrity. These techniques are based on the fact that some communication channels suffer from noises over certain segments of the frequency spectrum, and adding redundant bits with a known permutation enables a reliable detection and correction of defected messages with high probability. Additional examples of FEC algorithms include Reed-Solomon codes, Low-Density Parity-Check, Repeat-Accumulate codes and Product codes [3].
### _Turbo Encoders_
TC, a subdivision of the Parallel Concatenated Convolutional Codes family, are widely used, mainly because of their adjustability for real-time implementations (e.g., in satellite communications [4]). TC are in use in wireless ATMs, Third-Generation systems and video broadcasting [5]. Discussions of the topic can be found in [6, 7, 8]. Every TE is constructed from two main operation blocks: an interleaver and a convolutional encoder. While encoding is the purpose of a TE, interleaving causes substantial operation latency and performance skew. Interleaving is the action of creating a permutation of the input data by a deterministic shuffle. Modulo calculations are a common tool in programming implementations for keeping the data in line with a finite vector size or memory-allocated array. Modulo calculations are a hardware obstacle that needs to be addressed, especially when dealing with varied and large modulo bases. In the TE interleaving block, modulo is a repeated operation consuming both power and computation time. This article presents a vector-oriented implementation of the permutation stage of a specific TE described by the DVB-RCS2 standard. Sec. II elaborates the specifications of the turbo code implemented in this research.
### _Vector Processor and Parallel Processing_
A Vector Processor (VP) differs from a Scalar Processor (SP) by its ability to process a single instruction on multiple data (SIMD), usually a one-dimensional array (vector). That is, a single operation is performed simultaneously on N independent elements (N being the given length of the vector). The VP can fetch N elements using a single load operation, thus saving time in both fetching and decoding the data and instructions. In a VP's architecture it is essential to redefine operations while indicating whether the operation is performed on a scalar or on a vector of a certain size. The vectors can be of constant or dynamic length. Amongst the applications that can benefit from the accelerated performance of a VP are multimedia, signal processing, cryptography, etc. Since the interleaving stage of the TE is a manipulation of long bit-arrays, the choice of a VP is a natural one. Requiring the entire packet of bits before the interleaving stage can proceed makes it a bottleneck in the TE application. Our simulations revealed that 78% of the processing cycles were spent in this stage; therefore, improving the performance of the permutation algorithm will speed up the whole application. Using a VP allows the input data to be manipulated efficiently by adequate vector and parallel operations, thus enhancing the performance of the runtime application both in memory access (R/W) and in total cycles. The algorithmic solution presented in this paper is general and can be reconfigured through the basic DVB-RCS2 parameters.
### _Related Work_
With the growing demand for fast data transfers, recent years have seen an increase in hardware accelerators specifically designed for turbo encoding. A number of hardware accelerators exist: (1) Texas Instruments offers the TCI6618 Multicore SoC DSP with up to 582 Mbps throughput. (2) NXP Semiconductors developed the MSC8157 SoC DSP, which can reach up to 330 Mbps throughput [9]. Algorithms matching vector abilities and smart SIMD processing to a given VP can achieve a processing speedup of up to 35% [10], while more recent work comparing vectorized vs. non-vectorized execution achieved up to 57.71% performance improvement [11]. This article presents a novel algorithm solving the modulo calculation problem by exploiting the advantages of a VP, thus avoiding the modulo calculation itself. The organization of this paper is as follows: Sec. II describes the specific TE implemented, set by the DVB-RCS2 standard; Sec. III describes the original serial permutation algorithm and implementation, which serves as the reference for the speed-up factor calculations; Sec. IV describes the proposed vectorial and parallel permutation algorithm; Sec. V describes the simulation-based analysis and depicts the results graphically; finally, Sec. VI concludes the results and achievements and proposes future research possibilities.
## II The Turbo Encoder
Fig. 1 depicts the TE implemented by the DVB-RCS2 standard. It has a coding rate of 1:3, meaning for every input bit there are 3 output bits. The input is a 2-bit stream denoted A and B. The output codeword is combined of 3 couplelets:
1. The original A and B couple unchanged.
2. A and B are encoded by the 'Encoder Core', a convolutional calculation block.
3. \(N\) couplelets are delayed and rearranged in the 'Permutation' block, then encoded via the 'Encoder Core' block.
The permutation stage of the TE is determined by five parameters, \(P,Q_{0},Q_{1},Q_{2}\) and \(Q_{3}\), with ranges defined by the DVB-RCS2 standard, and vector size \(N\) (the number of couplelets in bits). The specific parameters used in the implementation are in compliance with the DVB-RCS2 standard and are detailed in Table I[4].
Initial simulations running a simple existing serial implementation of this TE resulted in the permutation stage taking up to 78% of the application (cycle-wise) and showed potential parallelism features, such as repetitive modulo calculations, which could be reconfigured to enhance performance. Fig. 2 depicts an example of permuted indexes versus original indexes (for \(N=776\) and matching parameters from Table I) and shows the constant \(4P\) incrementation leading to the new parallel approach described in Sec. IV. The turbo encoder permutation stage is carried out in two levels: (1) describes a swap of the bit couplelets for every odd-indexed component
\[\mathrm{if}\{j\ (\mathrm{mod}\ 2)=1\}\Rightarrow(A_{j},B_{j})\rightarrow(B_{j},A_{j }), \tag{1}\]
\begin{table}
\begin{tabular}{c|c|c|c|c|c} \hline N[bits] & P & \(Q_{0}\) & \(Q_{1}\) & \(Q_{2}\) & \(Q_{3}\) \\ \hline \hline
56 & 9 & 2 & 2 & 8 & 0 \\
152 & 17 & 9 & 5 & 14 & 1 \\
236 & 23 & 10 & 2 & 11 & 1 \\
384 & 25 & 1 & 2 & 0 & 1 \\
432 & 29 & 1 & 4 & 1 & 1 \\
492 & 31 & 0 & 3 & 1 & 0 \\
520 & 31 & 0 & 1 & 2 & 0 \\
776 & 39 & 7 & 0 & 0 & 0 \\
1056 & 43 & 0 & 0 & 6 & 2 \\
1192 & 49 & 0 & 3 & 5 & 0 \\
2396 & 81 & 1 & 2 & 5 & 2 \\ \hline \end{tabular}
\end{table} TABLE I: Example of used Turbo Encoder parameter-sets
Fig. 1: Turbo Encoder Scheme
(2) describes a calculated rearrangement of the indexes based on the chosen parameters. Further elaboration of the latter can be found in Section IV.
\[i\equiv\Pi(j)=(P\cdot j+Q+3)\ (\mathrm{mod}\ N) \tag{2}\]
where
\[Q=\begin{cases}0,&j\ (\mathrm{mod}\ 4)=0\\ 4Q_{1},&j\ (\mathrm{mod}\ 4)=1\\ 4Q_{0}\cdot P+4Q_{2},&j\ (\mathrm{mod}\ 4)=2\\ 4Q_{0}\cdot P+4Q_{3},&j\ (\mathrm{mod}\ 4)=3\end{cases} \tag{3}\]
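As a sketch (function names here are illustrative, not part of the standard), Eqs. (2)-(3) can be evaluated directly; the parameter set \(N=776\) from Table I is used below to check that the mapping is a valid permutation of the couplelet indexes:

```python
def interleave_indices(N, P, Q0, Q1, Q2, Q3):
    """Permuted index i = Pi(j) of Eqs. (2)-(3) for every j in [0, N)."""
    Q = [0, 4 * Q1, 4 * Q0 * P + 4 * Q2, 4 * Q0 * P + 4 * Q3]
    return [(P * j + Q[j % 4] + 3) % N for j in range(N)]

# Parameter set N=776, P=39, Q0=7, Q1=Q2=Q3=0 from Table I
perm = interleave_indices(776, 39, 7, 0, 0, 0)
assert sorted(perm) == list(range(776))  # Pi is a bijection on the indexes
assert perm[:4] == [3, 42, 397, 436]     # first four permuted indexes
```

Note also that \(\Pi(j+4)-\Pi(j)=4P\ (\mathrm{mod}\ N)\) for every \(j\), since \(Q\) depends only on \(j\ (\mathrm{mod}\ 4)\); this is the constant \(4P\) stride visible in Fig. 2 and exploited by the vector algorithm of Sec. IV.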
## III Serial Implementation
The serial implementation serves as the reference for speedup and relies on Look-Up Tables (LUT). For every input vector, given its length and matching parameters (taken from Table I), the program (i) creates a LUT by deterministic calculations matching the permutation indexes of each input bit, (ii) loads each bit and its designated new index and (iii) stores the bit in a new output vector. This process is inefficient for two main reasons:
1. The LUT calculations are determined by the DVB-RCS2 standard. The modulo-based calculations are complex for any processor, and the modulo base is a changing parameter, precluding simple hardware solutions for these dynamic calculations. Given a finite collection of sets of vector lengths and parameters, saving all pre-calculated LUT is an option. However, real-time processors run on a finite and usually small program memory space (MS), bounding the MS that can be reserved for LUT.
2. Saving the pre-calculated LUT in memory doesn't solve the following issue. Once the LUT are calculated, the program loads each bit with its matching new index from the LUT, and the bit is then stored at its designated index in the output vector. Loading and storing bit-by-bit (and index-by-index) results in excessive memory access.
The presented vector algorithm solves these problems by processing the data in max-sized streams (loading and storing more than one bit at a time) and by replacing the modulo-based calculations of the LUT with data and memory manipulations, resulting in lower run-time and memory access. The asymptotic tight bound computational complexity of the serial algorithm is \(O(N^{2})\).
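The serial flow above might be sketched as follows. This is a hedged illustration: the names are hypothetical, and the LUT is assumed to hold the destination index of each input couplelet, matching steps (ii)-(iii):

```python
def serial_interleave(couples, lut):
    """Serial reference: swap odd-indexed couplelets (Eq. 1), then store
    each couplelet one-by-one at its LUT destination (Eq. 2)."""
    out = [None] * len(couples)
    for j, (a, b) in enumerate(couples):
        pair = (b, a) if j % 2 == 1 else (a, b)
        out[lut[j]] = pair  # one load and one store per couplelet
    return out

# Toy 4-couplelet example with a hand-made destination LUT
result = serial_interleave([(0, 1), (1, 0), (1, 1), (0, 0)], [2, 0, 3, 1])
assert result == [(0, 1), (0, 0), (0, 1), (1, 1)]
```

The per-element load/store in the loop body is exactly the excessive memory access that the vector algorithm of Sec. IV removes.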
## IV New Approach: Vector Algorithm
Using a Vector DSP with Very Large Instruction Word (VLIW) architecture requires a different approach compared to the original straightforward approach detailed in Sec. III. The main idea is based on manipulating the data vectorwise instead of bit-by-bit and taking advantage of the modulo attributes originating from (2) and (3). Fig. 2 indicates modulo induced strides from one index to the next. There are four different groups of indexes derived from the four cases in (3). These strides constitute the basis of the developed vectorwise algorithm described in the following subsections. The algorithm is based on configurable parameters defined by a standard ensuring a comprehensive and generic solution, thus enabling further research possibilities elaborated in Sec. VI.
Basing the algorithm on _Load-Execute-Store_ operations fitted to the CEVA-XC4500 DSP attributes organizes it into phases. Each phase consists of iterations of execution operations that are done in one work-cycle of the DSP. The following subsections elaborate these operations, which combined implement the permutation block of the TE. The presented algorithm was constructed for a specific VP and a specific TE but could easily be modified for various VPs and/or different TEs. We chose to test and demonstrate the method on the CEVA-XC4500 and implement the TE described by the DVB-RCS2 standard.
### _Bit to Word_
As in other DSP architectures, the CEVA-XC4500 DSP requires padding of the original input-data for more efficient data manipulation. Hence, the first phase in the algorithm is padding every input-data bit with 15 zeros. Consequently, every original input-data bit is now represented as one Word (16 bits), as shown in Fig. 3.
The CEVA-XC4500 DSP is capable of storing 512 bits in one store operation. Following that \(16\cdot 32=512\), loading 32 bits of the original input-data stream will resolve in 512 bits of padded data as the result of one iteration of this phase. The number of iterations needed for all the original inputdata stream to be processed \(\left\lceil\frac{2N}{32}\right\rceil\), where N is the number of couplets in bits (see Fig. 1). Hence, the output vector of this phase is of size \(16\times 2N[bits]\) or \(2N[words]\). Notes:
Fig. 3: Bit to Word - Padding every bit with 15 zeros.
Fig. 2: Permuted indexes for example vector length of 776. \(\Delta=4\cdot P\) is the offset between two consecutive indexes after permutation.
The CEVA-XC4500 DSP is capable of storing 512 bits in one _store_ operation. Following that \(16\cdot 32=512\), loading 32 bits of the original input-data stream results in 512 bits of padded data in one iteration of this phase. Therefore, the number of iterations needed for the whole original input-data stream to be processed is \(\left\lceil\frac{2N}{32}\right\rceil\), where \(N\) is the number of couplelets in bits (see Fig.1). Hence, the output vector of this phase is of size \(16\times 2N[bits]\) or \(2N[words]\).
Notes:
1. The permutation stage is identical for the two bits of each original input couplet (A and B), and so we refer to every padded couplet as a Double-Word (DW). In addition, the architecture of the CEVA-XC4500 DSP is designed to process DWs; therefore, the next two phases of the permutation are performed on \(N\) DWs.
2. The Bit-to-Word phase is added as an input rearrangement requirement of the specific DSP implementation. Other processors may or may not require different input rearrangements, which will affect results.
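The padding step can be sketched as below; the bit's position inside the word (LSB-first here) is an assumption, since only the one-word-per-bit layout of Fig. 3 matters:

```python
WORD = 16  # bits per word on the target DSP

def bit_to_word(bits):
    """Pad every input bit with 15 zeros so each bit occupies one word."""
    out = []
    for b in bits:
        out.append(b)
        out.extend([0] * (WORD - 1))
    return out

padded = bit_to_word([1, 0, 1])
assert len(padded) == 3 * WORD
assert padded[0::WORD] == [1, 0, 1]  # original bits sit one word apart
```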
### _Transpose_
Referring to the input DW vector as a virtual 2D matrix of \(\left[\left\lceil\frac{N}{2P}\right\rceil\right]\times[4P]\), the vector is accessed with constant strides defined by the number of columns \([4P]\) as seen in Fig. 4. This virtual 2D matrix is transposed such that indexes \(j+4P\) are the successors of indexes \(j\). The CEVA-XC4500 DSP best transposes blocks of \(4\times 16[DWs]\), therefore a block of 4 rows, each of \(16[DWs]\), is loaded from the memory. Fig. 5 shows an example first block of the virtual 2D matrix illustrated in Fig. 4. The \(i^{th}\) block to be transposed is defined by four rows of 16 consecutive DWs. The \(k^{th}\) row of block \(i\) starts at:
\[16(i-1)+k\cdot 4P;\quad k=0,1,2,3;\quad 1\leq i\leq\left\lceil\frac{4P}{16}\right\rceil\]
and \(\left\lceil\frac{4P}{16}\right\rceil\) is the number of blocks to be transposed. The number of stuffed rows in the last block is \(16\cdot\left\lceil\frac{4P}{16}\right\rceil-4P\). Fig. 6 shows the same block illustrated in Fig. 5. Each row is stored in a continuous manner. The next row is stored at the end of the previous row. The next block rows are stored at the end of the previous block. Notes:
* An important part of this phase is swapping the words of the odd-indexed (\(1,3,5,...,N-1\)) DWs, meaning that in every odd-indexed DW the high-word and the low-word are swapped. This swapping implements the permutation inside the bit-couplet, as required. The CEVA-XC4500 DSP can swap words while loading any DW, meaning the bit-level swapping is implemented without influencing processing time or memory access.
* The chosen vector size and parameter sets used in simulations require transposing matrices of up-to eight rows. An expansion of this algorithm for larger sized matrices might be possible but was not simulated.
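The effect of this phase might be modeled as below (a sketch only: the word swap inside odd-indexed DWs and the block-wise \(4\times 16\) hardware access are omitted; only the resulting \(4P\)-stride ordering is shown):

```python
def stride_transpose(dws, P):
    """Reorder the DW stream so that index j is followed by index j + 4P:
    a column-major read of a virtual matrix with 4P columns (cf. Fig. 4).
    The last partial row is zero-stuffed."""
    N, cols = len(dws), 4 * P
    rows = -(-N // cols)  # ceiling division
    padded = list(dws) + [0] * (rows * cols - N)
    return [padded[r * cols + c] for c in range(cols) for r in range(rows)]

# Toy example: 10 DWs, P = 1 -> 4 columns, 3 rows, 2 stuffed DWs
out = stride_transpose(list(range(10)), 1)
assert out == [0, 4, 8, 1, 5, 9, 2, 6, 0, 3, 7, 0]
```

After this reordering, couplelets whose permuted indexes differ by \(4P\) (the stride of Fig. 2) sit next to each other in memory, which is what enables wide vector loads in the following phases.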
### _Concatenation_
This phase creates four vectors by concatenating rows of the transposed 2D matrix of the previous phase. The four initial rows are chosen by four pre-calculated offsets determined by the DVB-RCS2 standard parameters \(Q_{0},Q_{1},Q_{2},Q_{3}\) and \(P\):
* \(\mathrm{offset}_{1}=3\)
* \(\mathrm{offset}_{2}=(3+4Q_{1}+P)\ (\mathrm{mod}\ N)\)
* \(\mathrm{offset}_{3}=\left(3+4(Q_{0}\cdot P+Q_{2})\right)\ (\mathrm{mod}\ N)+2P\)
* \(\mathrm{offset}_{4}=\left(3+4(Q_{0}\cdot P+Q_{3})\right)\ (\mathrm{mod}\ N)+3P\)
Fig. 4: Example of a virtual 2D matrix for: \(N=776;P=39\).
Fig. 5: The first block of the virtual 2D matrix (highlighted in Fig. 4) before transposition.
Fig. 6: Block in Fig. 5 after transposition.
The stride taken from one row to the next is determined by \(G=(4P\cdot\left\lceil\frac{N}{4P}\right\rceil)\ (\mathrm{mod}\ N)\). Each vector is then constructed by concatenating, to its initial row, the rows that follow at stride \(G\). Since there are \(4P\) rows in the transposed 2D matrix and 4 vectors, there are \(P-1\) concatenations to be done per vector:
In Fig. 7 all row numbers must wrap around, keeping the index in range: \(0\leq row_{idx}\leq 4\cdot(P-1)\). Some parameter sets dictate a starting point for concatenation that is not the first DW of the first chosen row. The first DW for each \(\mathrm{vector}_{i}\) is defined by:
\[L_{k}=\left\lfloor\frac{\mathrm{offset}_{k}}{4P}\right\rfloor;\quad k=1,2,3,4\]
Fig. 8 depicts the virtual 2D matrix illustrated in Fig. 4. In this example \(N=776;P=39;Q_{0}=7;Q_{1}=0;Q_{2}=0;Q_{3}=0\). Hence, using the expressions above one may obtain: \(\mathrm{offset}_{1}=3\); \(\mathrm{offset}_{2}=42\); \(\mathrm{offset}_{3}=397\); \(\mathrm{offset}_{4}=436\); \(G=4\); \(L_{1}=0\) (always true since \(\mathrm{offset}_{1}\equiv 3\)); \(L_{2}=0\), \(L_{3}=2\); \(L_{4}=2\). Fig. 9 shows an example of four result vectors in positions \(\mathrm{offset}_{1}-\mathrm{offset}_{4}\) which are the output of the concatenation phase and serve as the input for the ordering phase.
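The offsets and the row stride \(G\) can be cross-checked against this example (a sketch with a hypothetical function name):

```python
from math import ceil

def concat_params(N, P, Q0, Q1, Q2, Q3):
    """Initial-row offsets and row stride G for the concatenation phase."""
    offs = [3,
            (3 + 4 * Q1 + P) % N,
            (3 + 4 * (Q0 * P + Q2)) % N + 2 * P,
            (3 + 4 * (Q0 * P + Q3)) % N + 3 * P]
    G = (4 * P * ceil(N / (4 * P))) % N
    return offs, G

offs, G = concat_params(776, 39, 7, 0, 0, 0)
assert offs == [3, 42, 397, 436]  # matches the example above
assert G == 4
```

Note that these four offsets are exactly \(\Pi(0),\Pi(1),\Pi(2),\Pi(3)\) of Eqs. (2)-(3) for this parameter set, i.e., the destinations of the first four couplelets.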
### _Ordering_
The ordering phase organises its input as one long vector. This is done by loading the four vectors and transposing them using a transposition process similar to the one described in the transpose phase. Here, every iteration transposes blocks of \(4\times 16\) [DWs], with the following differences:
1. The rows transposed are now stored in an orderly fashion which match the desired output vector (no need for reconstruction of the transposed matrix).
2. The vectors are of the same size \((\frac{N}{4})\), meaning that stuffed data will exist only to fill the last transposed block.
Therefore, the number of blocks transposed is \(\left\lceil\frac{N}{4\cdot 16}\right\rceil\), and the number of stuffed columns in the last block is \(16\cdot\left\lceil\frac{N}{4\cdot 16}\right\rceil-\frac{N}{4}\). Fig. 10 illustrates the \(4\times\frac{N}{4}\) [DWs] matrix before reordering and the output of this phase, \(N\) [DWs] after permutation (example).
### _Word To Bit_
The final operation of the algorithm is un-padding the output vector of the ordering phase from the redundant zeros that were added to the original data in the "bit to word" phase.
This concludes the vector-wise implementation of the permutation stage. Given \(N\) bit couplelets and matching parameters, following these phases, the result vector will contain the \(N\) bit couplelets after permutation. The phases described are compatible with the DVB-RCS2 standard and are as generic as possible; apart from small data adjustments that best fit CEVA's DSP, they can be reconstructed to fit any VP with very few and minor changes to the overall algorithm.
The asymptotic tight bound computational complexity of the vector-wise algorithm can be written as:
\[O(N+\left\lceil\frac{N}{P}\right\rceil\cdot P)\simeq O(N)\]
## V Simulations And Results
Performance analysis was carried out using the CEVA-TOOLBOX profiler designed for CEVA's DSPs. The profiler creates a table detailing (1) the cycle count of each function in the code and total cycle count results and (2) the number of _Read/Write_ operations performed in each function and total _Read/Write_ results. Unlike the approach of enhancing each
Fig. 8: Transposed 2D matrix (in Fig. 4) with offsets
Fig. 7: Concatenated \(\mathrm{Vector}_{i};1\leq i\leq 4\).
Fig. 9: Four concatenated vectors (generated from Fig. 8).
function of a code/algorithm individually, the proposed vector implementation revised the whole permutation process. Hence, the serial and vector algorithms are fundamentally incompatible; therefore, we consider only the overall results of the permutation stage, detailed in Tables II and III.
The simulations were executed for 11 vector sizes and matching parameter sets given in Table I. For each parameter set, we compared the serial and vector-wise results and calculated the speedup using:
\[Speedup=\frac{L_{old}-L_{new}}{L_{new}}\cdot 100\%.\]
where \(L_{old}\) and \(L_{new}\) are the counts of Read, Write and Total Cycles of the serial and vector-wise implementations, respectively.
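As an illustration of this formula with hypothetical cycle counts (not values from Tables II-III):

```python
def speedup(L_old, L_new):
    """Percentage speedup as defined above."""
    return (L_old - L_new) / L_new * 100

# A hypothetical reduction from 340 to 100 cycles is a 3.4x factor,
# i.e. a 240% speedup under this definition.
s = speedup(340, 100)
assert abs(s - 240.0) < 1e-9
```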
The results of Tables II and III are also presented in Fig. 11. We notice a positive speed-up factor for all vector sizes. The reason for the non-monotonic plots is that the execution of the vector algorithm differs with the parameters matched to each vector size, not only with the vector size itself. Re-analyzing the vector algorithm phases by the parameters, we see, for example, that the number of blocks transposed is a function of \(P\) and can shorten/lengthen the runtime of the application by reducing/adding blocks to transpose. See Sec. VI for more.
## VI Conclusion and Future Research
### _Conclusion_
The vector-wise algorithm implementation outperforms the serial implementation in all tested cases. Speedups were achieved even though the DSP hardware required padding and unpadding of the original data and the final result, respectively. Hardware better fitted to the parallel algorithm, enabling the _Transpose_, _Concatenation_ and _Ordering_ phases to be executed on bits rather than words, would further increase the superiority of the parallel algorithm over the serial implementation.
### _Future Research_
Research possibilities include different paths:
1. Additional permutation algorithms can be implemented and tested on the DSP.
2. The parallel algorithm can be implemented or simulated on other processors (e.g. GPUs, CPUs, etc.).
3. The encoding performed by this application is targeted at achieving maximum channel capacity with minimum bit error rate (BER). The parameters used in this research (shown in Table I) were given as constants. A wider approach would be to optimize the vector length with matching parameters, taking into account the presented algorithm and its vector properties, while striving to achieve better BER and throughput.
|
2310.01009 | Neyman-Pearson and equal opportunity: when efficiency meets fairness in
classification | Organizations often rely on statistical algorithms to make socially and
economically impactful decisions. We must address the fairness issues in these
important automated decisions. On the other hand, economic efficiency remains
instrumental in organizations' survival and success. Therefore, a proper dual
focus on fairness and efficiency is essential in promoting fairness in
real-world data science solutions. Among the first efforts towards this dual
focus, we incorporate the equal opportunity (EO) constraint into the
Neyman-Pearson (NP) classification paradigm. Under this new NP-EO framework, we
(a) derive the oracle classifier, (b) propose finite-sample based classifiers
that satisfy population-level fairness and efficiency constraints with high
probability, and (c) demonstrate statistical and social effectiveness of our
algorithms on simulated and real datasets. | Jianqing Fan, Xin Tong, Yanhui Wu, Shunan Yao | 2023-10-02T09:02:09Z | http://arxiv.org/abs/2310.01009v1 | # Neyman-Pearson and equal opportunity: when efficiency meets fairness in classification
###### Abstract
Organizations often rely on statistical algorithms to make socially and economically impactful decisions. We must address the fairness issues in these important automated decisions. On the other hand, economic efficiency remains instrumental in organizations' survival and success. Therefore, a proper dual focus on fairness and efficiency is essential in promoting fairness in real-world data science solutions. Among the first efforts towards this dual focus, we incorporate the equal opportunity (EO) constraint into the Neyman-Pearson (NP) classification paradigm. Under this new NP-EO framework, we derive the oracle classifier, propose finite-sample based classifiers that satisfy population-level fairness and efficiency constraints with high probability, and demonstrate statistical and social effectiveness of our algorithms on simulated and real datasets.
classification, fairness, efficiency, Neyman-Pearson, equal opportunity
J. Fan, Y. Wu, and S. Yao
## 1 Introduction
Recently, the U.S. Justice Department and the Equal Employment Opportunity Commission warned employers that used artificial intelligence to hire workers for potential unlawful racial discrimination.1 Earlier, Amazon was accused of gender bias against women in its deployment of machine learning algorithms to search for top talents.2 Evidence that algorithmic decision-making exhibits systematic bias against certain disadvantageous social groups has been accumulating in labor markets (Chalfin et al., 2016; Lambrecht and Tucker, 2019) and also growing in
many other areas, including credit lending, policing, court decisions, and healthcare treatment (Arnold et al., 2018; Kleinberg et al., 2018; Bartlett et al., 2022; Obermeyer et al., 2019; Fuster et al., 2022). To address the public concern of algorithmic fairness, a number of studies propose to regulate algorithmic design such that disadvantageous groups must receive non-disparate treatments (Barocas and Selbst, 2016; Kleinberg et al., 2017; Corbett-Davies et al., 2017; Barocas et al., 2019). Statistically, this means that, in carrying out its predictive task, an algorithm ought to prioritize the fairness-related construction, such as purposefully equalizing certain error types of concern. However, efficiency loss could occur as these fairness-related designs may limit the prediction accuracy (Kleinberg et al., 2017).
Consider that a bank uses an algorithmic classifier to decide whether to approve a loan application based on default status prediction. Here, fairness is a primary concern of society and regulators; concretely, disparity between the denial rates of qualified applicants across sensitive attributes, such as gender or race, is not tolerated. The bank, however, is intrinsically more concerned with efficiency, which can be decoupled into two parts: the false negative rate (i.e., the probability of misclassifying a default case as non-default) and the false positive rate (i.e., the probability of misclassifying a non-default case as default). The false negative rate, due to its connection to financial security, has a higher priority for the bank than the false positive rate. Here and in many other examples, social fairness and economic efficiency could be in conflict. To address this conflict, we propose a novel framework that accommodates a _dual focus_ on efficiency and fairness, as well as the asymmetric importance within the efficiency consideration.
The _efficiency_ part of our framework is based on the Neyman-Pearson (NP) classification paradigm (Cannon et al., 2002; Scott and Nowak, 2005). This paradigm controls the type I error (i.e., the probability of misclassifying a 0 instance as 1) under some desired level \(\alpha\) (referred to as the NP constraint) while minimizing the type II error (i.e., the probability of misclassifying a 1 instance as 0). In the loan application example, if we label the default status as 0 and non-default status as 1, the type I error is the false negative rate and the type II error is the false positive rate. The asymmetric treatment of the NP paradigm permits a flexible control over the more-consequential error type. The _fairness_ part of our framework borrows a relaxation of the equality of opportunity (EO) concept (Hardt et al., 2016). Assuming class 1 is the favored outcomes, the EO constraint requires achieving the same type II error in all sensitive groups (e.g., race or gender); in the context of loan application, this means that denial rates of qualified applicants should be equalized in different groups. The relaxation we adopt eases the exact rate-equality requirement by allowing a pre-specified \(\varepsilon\) difference (Donini et al., 2018; Agarwal et al., 2018). In verbal discussion, we will still refer to this relaxation as the EO constraint.
Fusing the above efficiency and fairness parts together, we have the new NP-EO paradigm. A natural question is: for any given \(\alpha,\varepsilon\in(0,1)\), are the NP constraint for economic efficiency and the EO constraint for social fairness feasible simultaneously? We provide a positive answer to this question. Moreover, leveraging the generalized Neyman-Pearson Lemma, we derive an NP-EO oracle classifier.
Guided by the NP-EO oracle, we construct finite-sample based classifiers that respect the
population-level NP and EO constraints with high probability. The solution inspires us to take an umbrella algorithm perspective; that is, we wish to adjust the commonly used methods (e.g., logistic regression, random forest, gradient boosting trees, neural nets) to the NP-EO paradigm in a universal way, and we propose a provable algorithm for this overarching goal. Similar in spirit to the original NP umbrella algorithm developed in Tong et al. (2018) and its variant for corrupted labels in Yao et al. (2022), we employ an order statistics approach and do not impose distributional assumptions on the data in the algorithmic development. But the technicalities here are much more involved than in the NP umbrella algorithms, because we need to determine two thresholds (instead of one) simultaneously. In simulation studies, we demonstrate that NP-EO classifiers are the only classifiers that guarantee both NP and EO constraints with high probability. This advantage of the NP-EO classifiers is further demonstrated on a credit card dataset.
This paper contributes to the emerging literature on algorithmic fairness. The overall goal of this scholarly endeavor is to promote algorithmic decision making for the social good, especially for the protection of socially disadvantageous groups. Existing studies have focused on algorithmic bias due to data sampling and engineering (Rambachan and Roth, 2019; Cowgill and Tucker, 2020), the construction of fairness conditions (Hardt et al., 2016; Kleinberg et al., 2017), and the way of incorporating ethical concerns into algorithmic optimization (Corbett-Davies et al., 2017), among others.
The fundamental social science problem, the tradeoff between economic efficiency and social equality, however, has not yet been adequately addressed. Some researchers advocate a social-planning approach, in which the algorithmic designer models a social welfare function that captures an explicit preference for a certain socially desirable objective (Kleinberg et al., 2018; Rambachan et al., 2020). While this approach provides a useful benchmark to evaluate social welfare in the presence of ethical considerations, putting it into practice is a great challenge. Social preferences are often difficult to measure and have to be approximated by some measurable outcomes. These proxies can be mismeasured and lead the predictive outcomes astray, as demonstrated in Mullainathan and Obermeyer (2017) and Obermeyer et al. (2019).
Alternative to the social-planning approach, our approach is from a regulatory perspective, in which a decision maker can pursue their own objective after obeying a certain regulatory constraint. Existing algorithmic designs under the regulatory framework (Corbett-Davies et al., 2017) do not explicitly cope with the efficiency-equality tradeoff. Regulatory failure is likely to occur when the efficiency loss caused by the fairness constraint is significant. Our proposed NP-EO approach provides a framework to detect algorithmic bias, evaluate the social loss caused by self-interested algorithms, and regulate algorithms to maintain the regulatory goal while permitting users sufficient freedom to achieve efficiency.
In the algorithmic fairness literature, many criteria have been proposed to define "fairness"; see Barocas et al. (2019) and references within. Our work does not intend to introduce another new fairness criterion. Rather, our framework is flexible enough that the EO constraint can potentially be replaced by other well-defined fairness criteria, and the NP constraint can also be replaced by other efficiency priorities. Such efficiency-fairness dual constraints have the potential
to be implemented as long as their population versions are simultaneously feasible.
The rest of the paper is organized as follows. The mathematical setting of the Neyman-Pearson equal opportunity (NP-EO) paradigm is introduced in Section 2. Then, Section 3 presents the NP-EO oracle classifier. We introduce two NP-EO umbrella algorithms and provide theoretical justification in Section 4. Numerical studies are presented in Section 5. Finally, we conclude with a discussion. Lemmas, proofs, and other technical materials are relegated to the Appendix.
## 2 Neyman-Pearson equal opportunity (NP-EO) paradigm
### Mathematical setting and preliminaries
Let \((X,S,Y)\) be a random triplet where \(X\in\mathcal{X}\subset\mathrm{I\!R}^{d}\) represents \(d\) features, \(S\) denotes a sensitive attribute that takes values from \(\{a,b\}\), and \(Y\) denotes the class label that takes values from \(\{0,1\}\). It is not necessary that every feature in \(X\) is _neutral_; we partition the features into \(X\) and \(S\) to emphasize that we will specifically consider a classifier's societal impacts related to \(S\). We denote by \(\mathrm{I\!P}\) a generic probability measure whose meaning will be clear in context, and denote respectively by \(\mathrm{I\!P}_{Z}\) and \(\mathrm{I\!P}_{\mathcal{B}}\) the probabilities taken with respect to the randomness of \(Z\) and \(\mathcal{B}\), for any random variable \(Z\) and random set \(\mathcal{B}\). Let \(\phi:\mathcal{X}\times\{a,b\}\mapsto\{0,1\}\) be a classifier. The (population-level) type I error and type II error of \(\phi\) are defined as
\[R_{0}(\phi):=\mathrm{I\!P}\left(\phi(X,S)\neq Y\mid Y=0\right)\quad\text{and }\quad R_{1}(\phi):=\mathrm{I\!P}\left(\phi(X,S)\neq Y\mid Y=1\right)\,,\]
respectively. Next, we denote the type I/II error conditional on the sensitive attribute by
\[R_{y}^{s}(\phi):=\mathrm{I\!P}\left(\phi(X,S)\neq Y\mid Y=y,S=s\right)\,,\]
for \(y\in\{0,1\}\) and \(s\in\{a,b\}\). Then it follows that,
\[R_{y}(\phi)=\mathrm{I\!P}(\phi(X,S)\neq Y|Y=y)=R_{y}^{a}(\phi)\cdot p_{a|y}+R _{y}^{b}(\phi)\cdot p_{b|y}\,, \tag{1}\]
where \(p_{s|y}=\mathrm{I\!P}(S=s\mid Y=y)\) for \(s\in\{a,b\}\). Each \(p_{s|y}\) is assumed to be non-zero, and we use \(X^{y,s}\) as a shorthand for \(X\mid\{Y=y,S=s\}\) for \(y\in\{0,1\}\) and \(s\in\{a,b\}\). Throughout the paper, we consider class 1 as the 'favored' outcome for _individuals_, such as 'being hired', 'receiving promotion', 'admission to a college', or 'non-default', and class 0 as the less-favored outcome for _individuals_. In the meantime, we understand class 0 as the class that _organizations_ are concerned about and try to avoid, such as 'default'.
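The decomposition in (1) is an identity on the empirical measure as well, so it can be checked directly on data. Below is a minimal synthetic sketch (the classifier, noise rate, and sample size are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
s = rng.choice(["a", "b"], size=n)              # sensitive attribute
y = rng.integers(0, 2, size=n)                  # true label Y
yhat = np.where(rng.random(n) < 0.8, y, 1 - y)  # a noisy classifier's output

def cond_error(y0, s0=None):
    """Empirical P(yhat != Y | Y=y0 [, S=s0])."""
    m = (y == y0) if s0 is None else (y == y0) & (s == s0)
    return float(np.mean(yhat[m] != y[m]))

# decomposition (1): R_1 = R_1^a * p_{a|1} + R_1^b * p_{b|1}
p_a1 = float(np.mean(s[y == 1] == "a"))
lhs = cond_error(1)
rhs = cond_error(1, "a") * p_a1 + cond_error(1, "b") * (1 - p_a1)
assert abs(lhs - rhs) < 1e-10   # exact up to floating point
```

The equality holds without any modeling assumption: it is just the law of total probability applied to the empirical distribution.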
### Equality of opportunity (EO)
Let \(L_{y}(\phi):=\big{|}R_{y}^{a}(\phi)-R_{y}^{b}(\phi)\big{|}\). In the literature of algorithmic fairness, a popular notion of fairness, coined as 'equalized odds' (or 'separation'), requires absolute equality across social groups for any outcome, or \(L_{0}(\phi)=L_{1}(\phi)=0\) in our notation; see Barocas et al. (2019) and the references therein. Hardt et al. (2016) formulated a less-stringent condition, referred to as 'equality of opportunity', which only requires \(L_{1}(\phi)=0\). That is, qualified people from different social groups have equal opportunities to obtain the 'favored' outcome. This weaker notion of
fairness is consistent with the advocacy of productive equity in social science and is acceptable in a wide range of social contexts.
The requirement of absolute equality is, however, not practical for finite-sample based classifiers: due to the randomness of data, the population-level condition \(L_{1}(\phi)=0\) can hardly be achieved from any finite-sample training procedure. Thus, researchers (e.g., Donini et al. (2018); Agarwal et al. (2018)) worked on a relaxed criterion:
\[L_{1}(\phi)\leq\varepsilon\,, \tag{2}\]
for some pre-specified small \(\varepsilon\). This condition states that equality of opportunity is satisfied if, for the two groups, the difference in the probabilities of falsely classifying a "favored" outcome as "unfavored" is sufficiently small. This less stringent criterion offers a flexible level of tolerance and could be achieved by finite sample procedures with high probability. In this paper, we adopt the relaxed EO condition described by equation (2), and refer to it as the EO constraint. Furthermore, we refer to \(L_{1}(\phi)\) as the type II error disparity of the classifier \(\phi\).
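In code, checking the relaxed criterion (2) amounts to one comparison of group-conditional type II error estimates. A small sketch with toy data (function names are ours):

```python
import numpy as np

def type2_disparity(yhat, y, s):
    """Empirical L_1 = |R_1^a - R_1^b|: gap in P(yhat=0 | Y=1, S=s) across groups."""
    r1 = {g: float(np.mean(yhat[(y == 1) & (s == g)] == 0)) for g in ("a", "b")}
    return abs(r1["a"] - r1["b"])

def eo_satisfied(yhat, y, s, eps):
    """Relaxed EO constraint (2): disparity at most eps."""
    return type2_disparity(yhat, y, s) <= eps

# toy data: group b's qualified (Y=1) members are rejected more often
y    = np.array([1, 1, 1, 1, 1, 1, 0, 0])
s    = np.array(["a", "a", "a", "b", "b", "b", "a", "b"])
yhat = np.array([1, 1, 1, 1, 0, 0, 0, 0])
# R_1^a = 0/3, R_1^b = 2/3, so L_1 = 2/3
assert not eo_satisfied(yhat, y, s, eps=0.1)
assert eo_satisfied(yhat, y, s, eps=0.7)
```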
### Neyman-Pearson (NP) paradigm
Like other fairness criteria, the EO constraint draws a boundary to incorporate the societal concern of fairness in algorithmic decision making. In the fairness literature, it was combined with some general loss functions (e.g., Woodworth et al. (2017)). For example, it was incorporated into the _classical_ classification paradigm, which minimizes the overall classification error, i.e., a weighted average of type I and type II errors, with the weights equal to the marginal probabilities of the two classes. In many applications, however, these weights do not reflect the relative importance of different error types; as a consequence, classifiers under the classical paradigm could have undesirably high type I error (or type II error). The inclusion of a fairness criterion can further complicate the problem by resulting in an (unintended) redistribution of the two types of classification errors, as will be shown by Example 1 in Section 3.
Recall the loan application example. A bank wishes to classify loan applicants so as to control the default risk (controlling the type I error) and gain ample business opportunities (maximizing \(1-\) type II error). The problem is that the two types of errors are statistically in conflict, and the bank has to balance the trade-off between the two goals. Regulation from fairness concerns (e.g., through the EO constraint) may help lift the bank's bias against certain social groups and enlarge its business opportunities (lower type II error), but it could also expose the bank to greater default risk (higher type I error).
To cope with the above problem, we propose using the Neyman-Pearson (NP) paradigm (Cannon et al., 2002; Scott and Nowak, 2005; Rigollet and Tong, 2011), which solves:
\[\min_{\phi:R_{0}(\phi)\leq\alpha}R_{1}(\phi)\,, \tag{3}\]
where \(\alpha\in(0,1)\) is a user-specified constant. In the loan example, an NP oracle classifier would control the risk of classifying a default applicant as a non-default one, helping banks manage their financial risk; after securing the financial safety, it minimizes the chances of classifying a non-default applicant as a default one, giving banks the maximum possible business opportunities.
### NP-EO paradigm
We propose the NP-EO paradigm as follows:
\[\min_{R_{0}(\phi)\leq\alpha,L_{1}(\phi)\leq\varepsilon}R_{1}(\phi)\,, \tag{4}\]
where \(\alpha,\varepsilon\in(0,1)\) are pre-specified numbers. Program (4) has joint constraints: the NP constraint \(R_{0}(\phi)\leq\alpha\) which ensures the most important part of economic efficiency, and the EO constraint \(L_{1}(\phi)\leq\varepsilon\) which enforces the social fairness restriction. In this arrangement, the direct impact of the EO constraint on the type I error \(R_{0}\) is isolated and the conflict between efficiency and equality is absorbed by the type II error \(R_{1}\), which is assumed to be economically less consequential. On the population level, we will derive an NP-EO oracle classifier, i.e., a solution to program (4). On the sample level, we will construct finite sample based classifiers that respect the two constraints in (4) with high probability.
Returning to the loan application example, a bank is concerned with two private goals--controlling the default risk (\(R_{0}\)) and expanding business opportunity (\(R_{1}\))--and a social goal of maintaining equal opportunity (a small difference between \(R_{1}^{a}\) and \(R_{1}^{b}\)). With the NP-EO paradigm, the risk-control goal is achieved by the constraint \(R_{0}(\phi)\leq\alpha\), where \(\alpha\) is a risk level chosen by the bank, and the social goal is achieved by the constraint \(L_{1}(\phi)\leq\varepsilon\), where \(\varepsilon\) is determined by regulation or social norms. With these two goals, the bank has to be modest in the business-expansion goal -- potentially paying the cost of having a larger chance of misclassifying non-defaulters as defaulters. While this cost could be more significant for startup banks at the stage of customer expansion, it is small for established banks that have a large customer base.
## 3 NP-EO oracle classifier
In this section, we establish an NP-EO oracle classifier, a solution to the constrained optimization program (4). The establishment of an NP-EO oracle classifier demands effort because (i) the simultaneous feasibility of the NP and EO constraints is not clear at first sight, and (ii) the functional form of the oracle is unknown.
Let \(f_{y,s}(\cdot)\) be the density function of \(X^{y,s}\) and \(F_{y,s}(z)=\mathrm{I\!P}\left(f_{1,s}(X)\leq zf_{0,s}(X)\mid Y=y,S=s\right)\), for each \(y\in\{0,1\}\) and \(s\in\{a,b\}\). Moreover, we denote, for any \(c_{a},c_{b}\),
\[\phi_{c_{a},c_{b}}^{\#}(X,S)=\mathrm{I\!I}\{f_{1,a}(X)>c_{a}f_{0,a}(X)\}\cdot \mathrm{I\!I}\{S=a\}+\mathrm{I\!I}\{f_{1,b}(X)>c_{b}f_{0,b}(X)\}\cdot\mathrm{I \!I}\{S=b\}\,. \tag{5}\]
Then, the following theorem holds.
**Theorem 1**: _For each \(y\in\{0,1\}\) and \(s\in\{a,b\}\), we assume (i) \(f_{y,s}\) exists, (ii) \(F_{y,s}(z)\) is continuous on \([0,\infty)\), and (iii) \(F_{y,s}(0)=0\) and \(\lim_{z\to\infty}F_{y,s}(z)=1\). Then there exist two non-negative constants \(c_{a}^{*}\) and \(c_{b}^{*}\) such that \(\phi_{c_{a}^{*},c_{b}^{*}}^{\#}\) is an NP-EO oracle classifier._
The solution is intuitive: within each sensitive-attribute group, the score should be a likelihood ratio, and two different thresholds are required in order to satisfy the two constraints. The proof of Theorem 1 is relegated to the Appendix. Here, we briefly sketch the idea. The existence assumption on the \(f_{y,s}\)'s is necessary to write down a classifier in the form of equation (5). The assumptions on \(F_{0,a}\) and
\(F_{0,b}\) ensure that \(R_{0}^{a}\) and \(R_{0}^{b}\) can take any value in \((0,1)\) by varying the thresholds \((c_{a},c_{b})\). Therefore, \(R_{0}\), as a convex combination of \(R_{0}^{a}\) and \(R_{0}^{b}\), can achieve an arbitrary level \(\alpha\in(0,1)\). Similarly, the conditions on \(F_{1,a}\) and \(F_{1,b}\) guarantee that \(R_{1}^{a}\) and \(R_{1}^{b}\) can take any value in \((0,1)\). Thus, \(L_{1}=\varepsilon\) can be achieved for arbitrary \(\varepsilon\in(0,1)\). In sum, the conditions in Theorem 1 ensure that proper choices of thresholds suffice to satisfy either the NP or the EO constraint alone. The reasoning for simultaneous feasibility is more involved, and we will demonstrate it on a special case shortly.
Note that the Neyman-Pearson lemma implies that the NP oracle classifier (i.e., the solution to program (3)) is of the form
\[\phi(x,s)=\mathds{1}\left\{\frac{f_{1,s}(x)\cdot p_{s|1}}{f_{0,s}(x)\cdot p_{s|0}}>c\right\}=\mathds{1}\left\{\frac{f_{1,a}(x)}{f_{0,a}(x)}>c\frac{p_{a|0}}{p_{a|1}}\right\}\cdot\mathds{1}\{s=a\}+\mathds{1}\left\{\frac{f_{1,b}(x)}{f_{0,b}(x)}>c\frac{p_{b|0}}{p_{b|1}}\right\}\cdot\mathds{1}\{s=b\}\,,\]
for some constant \(c\) such that the NP constraint takes the boundary condition. It is easy to see that the last expression in the above display is of the form in equation (5). If the NP oracle classifier satisfies the EO constraint, then it is also an NP-EO oracle. If the NP oracle classifier fails to satisfy the EO constraint, the generalized Neyman-Pearson lemma (Theorem 6 in Appendix) indicates that the oracle NP-EO classifier is of the form in equation (5), given the existence of a pair of thresholds \((c_{a},c_{b})\) that achieves \(R_{0}=\alpha\) and \(L_{1}=\varepsilon\).
The existence of such a pair in one scenario is illustrated by Figure 1, where we assume that \(R_{1}^{a}-R_{1}^{b}>\varepsilon\) for the NP oracle. More general discussion can be found in the proof of Theorem 1.

Figure 1: Feasibility of the NP-EO oracle. The downward curve represents the critical values \(c_{a}\) and \(c_{b}\) in the classifier (5) such that the probability of type I error is \(\alpha\), whereas the upward curve depicts the classifiers satisfying \(R_{1}^{a}-R_{1}^{b}=\varepsilon\). The intersection of these two curves gives the critical values for the NP-EO classifier.

In Figure 1, the vertical and horizontal axes are \(c_{a}\) and \(c_{b}\), representing respectively the \(S=a\) and \(S=b\) parts of the thresholds in the classifier in (5). Thus, every point in the first quadrant represents such a classifier. In this figure, \(c_{b}^{\prime}\) is the constant such that its corresponding \(R_{1}^{b}=1-\varepsilon\). The solid downward curve represents pairs \((c_{a},c_{b})\) such that \(R_{0}=\alpha\); note that
\[R_{0}(\phi_{c_{a},c_{b}}^{\#})=(1-F_{0,a}(c_{a}))\cdot p_{a|0}+(1-F_{0,b}(c_{b}) )\cdot p_{b|0}\,,\]
so when \(R_{0}\) is fixed at \(\alpha\), \(c_{a}\) is non-increasing as \(c_{b}\) increases, which is shown in Figure 1. At the same time, the solid upward curve represents the threshold pairs \((c_{a},c_{b})\) such that \(R_{1}^{a}-R_{1}^{b}=\varepsilon\). Since \(R_{1}^{a}(\phi_{c_{a},c_{b}}^{\#})-R_{1}^{b}(\phi_{c_{a},c_{b}}^{\#})=F_{1,a}(c_{a})-F_{1,b}(c_{b})\), when \(R_{1}^{a}-R_{1}^{b}\) is fixed at \(\varepsilon\), \(c_{a}\) is non-decreasing as \(c_{b}\) increases, and hence the curve is upward. As indicated in Figure 1, it can be shown that there must be an intersection of the two curves, which satisfies both the NP and EO constraints. Then, the generalized Neyman-Pearson lemma implies that the intersection must be an NP-EO oracle classifier.
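The intersection argument can also be carried out numerically. The sketch below bisects along the \(R_{0}=\alpha\) curve for the Gaussian specification of Example 1 below (group \(b\) has standard deviation 3; in that specification the active side of the EO constraint is \(R_{1}^{b}-R_{1}^{a}=\varepsilon\)), and recovers thresholds close to the ones reported there. The solver itself is our illustration, not part of the paper:

```python
from statistics import NormalDist

# Gaussian model of Example 1 below: X^{0,a}~N(0,1), X^{1,a}~N(4,1),
# X^{0,b}~N(0,9), X^{1,b}~N(4,9), with p_{a|y} = p_{b|y} = 1/2.
F0a, F1a = NormalDist(0, 1), NormalDist(4, 1)
F0b, F1b = NormalDist(0, 3), NormalDist(4, 3)   # NormalDist takes sd, not variance
alpha, eps = 0.1, 0.1

def ca_on_np_curve(cb):
    """The c_a paired with c_b on the downward curve R_0 = alpha of Figure 1."""
    # solve 0.5*(1 - F0a.cdf(ca)) + 0.5*(1 - F0b.cdf(cb)) = alpha for ca
    return F0a.inv_cdf(2 - 2 * alpha - F0b.cdf(cb))

def gap(cb):
    """Type II error disparity minus eps along R_0 = alpha; zero at the intersection."""
    return F1b.cdf(cb) - F1a.cdf(ca_on_np_curve(cb)) - eps

# bisect for the intersection of the two curves of Figure 1
lo = F0b.inv_cdf(1 - 2 * alpha) + 1e-9   # smallest c_b keeping the curve feasible
hi = 4.0                                  # gap(lo) < 0 < gap(hi)
for _ in range(80):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if gap(mid) < 0 else (lo, mid)
cb = (lo + hi) / 2
ca = ca_on_np_curve(cb)
# close to the thresholds (3.20, 2.53) reported in Example 1
assert abs(ca - 3.20) < 0.02 and abs(cb - 2.53) < 0.01
```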
Now we rationalize the results of Theorem 1 on an intuitive level. Theorem 1 states that an NP-EO oracle can be formed by two separate parts, namely, an \(S=a\) component and an \(S=b\) component. This is understandable because, as long as a classifier \(\phi\) takes into consideration the protected attribute \(S\), it can always be rewritten in a two-part form, i.e., \(\phi(X,S)=\phi^{a}(X)\cdot\mathds{1}\{S=a\}+\phi^{b}(X)\cdot\mathds{1}\{S=b\}\), where \(\phi^{a}(\cdot)=\phi(\cdot,a)\) and \(\phi^{b}(\cdot)=\phi(\cdot,b)\). Then, given the two-part form, it is not surprising that the best \(\phi^{a}\) and \(\phi^{b}\), in terms of group-wise type II error performance for a given type I error level, adopt density ratios as scoring functions. Thus, as long as the two thresholds are adjusted so that the NP and EO constraints are satisfied, the classifier in the form of equation (5) will have smaller \(R_{1}^{a}\) and \(R_{1}^{b}\) than other feasible classifiers, and thus a smaller \(R_{1}\).
We now present a simple example to illustrate the NP-EO oracle.
**Example 1**: _Let \(X^{0,a},X^{1,a},X^{0,b}\) and \(X^{1,b}\) be \(\mathcal{N}(0,1),\mathcal{N}(4,1),\mathcal{N}(0,9)\) and \(\mathcal{N}(4,9)\) distributed random variables, respectively, and set \(\mathrm{I\!P}(S=a,Y=0)=\mathrm{I\!P}(S=a,Y=1)=\mathrm{I\!P}(S=b,Y=1)=0.25\). Then, the Bayes classifier is \(\phi_{\text{Bayes}}=\mathds{1}\{X>2\}\) and the NP oracle classifier for \(\alpha=0.1\) is \(\phi_{\text{NP}}=\mathds{1}\{X>2.58\}\).3 If \(\alpha=\varepsilon=0.1\), the NP-EO oracle classifier is \(\phi_{\text{NP-EO}}=\mathds{1}\{X>3.20\}\mathds{1}\{S=a\}+\mathds{1}\{X>2.53\}\mathds{1}\{S=b\}\). The graphical illustration of this example is depicted in Figure 2. We can calculate that \(R_{0}(\phi_{\text{Bayes}})=0.137\), \(R_{1}(\phi_{\text{Bayes}})=0.137\) and \(L_{1}(\phi_{\text{Bayes}})=0.23\), violating both NP and EO constraints. The NP oracle, compared with the Bayes classifier, has a larger threshold. Consequently, \(R_{0}(\phi_{\text{NP}})=0.1\), \(R_{1}(\phi_{\text{NP}})=0.198\) and \(L_{1}(\phi_{\text{NP}})=0.24\). The NP oracle classifier satisfies the NP constraint but violates the EO constraint. The NP-EO oracle classifier is more subtle. Its \(S=a\) part threshold is larger than that of the NP oracle classifier whereas the \(S=b\) part threshold is slightly smaller, resulting in \(R_{0}(\phi_{\text{NP-EO}})=0.100\), \(R_{1}(\phi_{\text{NP-EO}})=0.262\) and \(L_{1}(\phi_{\text{NP-EO}})=0.1\), so that the NP-EO oracle classifier satisfies both NP and EO constraints._
Footnote 3: In this example, the sensitive attribute \(S\) does not appear in the Bayes classifier or in the NP oracle classifier because the thresholds are the same for the \(S=a\) and \(S=b\) components. Thus, \(S\) can be omitted due to the specific setup of this model.
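The error values reported in Example 1 can be reproduced from the standard normal CDF. A minimal check using Python's stdlib `statistics.NormalDist` (recall that group \(b\) has standard deviation 3):

```python
from statistics import NormalDist

Phi = NormalDist().cdf   # standard normal CDF

def errors(ca, cb):
    """(R_0, R_1, L_1) in Example 1 for the classifier with thresholds (ca, cb)."""
    R0a, R0b = 1 - Phi(ca), 1 - Phi(cb / 3)      # P(X^{0,s} > c_s)
    R1a, R1b = Phi(ca - 4), Phi((cb - 4) / 3)    # P(X^{1,s} <= c_s)
    return 0.5 * (R0a + R0b), 0.5 * (R1a + R1b), abs(R1a - R1b)

reported = {
    "Bayes": ((2.00, 2.00), (0.137, 0.137, 0.23)),
    "NP":    ((2.58, 2.58), (0.100, 0.198, 0.24)),
    "NP-EO": ((3.20, 2.53), (0.100, 0.262, 0.10)),
}
for name, ((ca, cb), target) in reported.items():
    for got, want in zip(errors(ca, cb), target):
        assert abs(got - want) < 5e-3, name   # matches the rounded values above
```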
An NP-EO oracle classifier has a nice property: it is invariant to the changes in the proportions of class labels. This insight is concretized by the following proposition.
Proposition 1.: _Under the conditions of Theorem 1, an NP-EO oracle classifier is invariant to changes in \(\mathrm{I\!P}(Y=0)\) (or equivalently \(\mathrm{I\!P}(Y=1)\)), as long as the distributions of \(X\mid(Y=y,S=s)\) (i.e., \(X^{y,s}\)) and \(S\mid(Y=y)\) stay the same for each \(y\in\{0,1\}\) and \(s\in\{a,b\}\)._
## 4 Methodology
In this section, we propose two sample-based NP-EO umbrella algorithms. Theorem 1 indicates that the density ratios are the best scores, with proper threshold choices. Hence, plugging density ratio estimates into equation (5) would lead to classifiers with good theoretical properties. In practice and more generally, however, practitioners might prefer to use scores from canonical classification methods (e.g., logistic regression and neural networks), which we also refer to as _base algorithms_. Inspired by (5), we construct classifiers of the generic form
\[\widehat{\phi}(X,S)=\mathds{1}\{T^{a}(X)>c_{a}\}\cdot\mathds{1}\{S=a\}+\mathds{1}\{T^{b}(X)>c_{b}\}\cdot\mathds{1}\{S=b\}\,, \tag{6}\]
where \(T^{a}(\cdot)\) and \(T^{b}(\cdot)\) are given scoring functions for groups \(S=a\) and \(S=b\), respectively, and our task is to choose proper data-driven thresholds \(c_{a}\) and \(c_{b}\) that take into account the NP and EO constraints. This form is inspired by the NP-EO oracle classifier in the previous section by regarding \(T^{a}\) and \(T^{b}\) as the density ratios. We leave the more theory-oriented investigation on density ratio plug-ins for the future.

Figure 2: Plots of the three classifiers in Example 1. The three rows, from top to bottom, illustrate the Bayes classifier, the NP oracle classifier and the NP-EO oracle classifier, respectively. The left panel illustrates the densities of \(X^{0,a}\) and \(X^{1,a}\) and the right panel those of \(X^{0,b}\) and \(X^{1,b}\). In every sub-figure, the green curve represents the class \(0\) density and the orange curve the class \(1\) density. In each row, the two thresholds of the classifier are indicated by the two black vertical lines. The type I and type II errors conditional on the sensitive attribute are depicted respectively as the light green and light orange regions in every sub-figure, with their values marked.
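A classifier of the generic form (6) is just a pair of thresholded scores dispatched on the sensitive attribute. A minimal sketch (the scoring functions and thresholds are placeholders):

```python
def make_np_eo_classifier(Ta, Tb, ca, cb):
    """Classifier of form (6): threshold the group-specific score."""
    def phi(x, s):
        return int(Ta(x) > ca) if s == "a" else int(Tb(x) > cb)
    return phi

# toy scores (identity in both groups), with the Example 1 NP-EO thresholds
phi = make_np_eo_classifier(lambda x: x, lambda x: x, 3.20, 2.53)
assert phi(3.0, "a") == 0   # below the S=a threshold
assert phi(3.0, "b") == 1   # above the S=b threshold
```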
The classifier \(\widehat{\phi}\) in (6) is trained on finite sample; thus it is random due to randomness of the sample, and the constraints in program (4) cannot be satisfied with probability \(1\) in general. Therefore, we aim to achieve high-probability NP and EO constraints as follows,
\[\mathrm{I\!P}\left(R_{0}(\widehat{\phi})>\alpha\right)\leq\delta\,, \tag{7}\]
\[\mathrm{I\!P}\left(L_{1}(\widehat{\phi})>\varepsilon\right)\leq\gamma\,, \tag{8}\]
for pre-specified small \(\delta,\gamma\in(0,1)\). Here, \(\mathrm{I\!P}\) is taken over the randomness of the training sample.
In Sections 4.1 and 4.2, we will present two umbrella algorithms: NP-EO\({}_{\mathrm{OP}}\) and NP-EO\({}_{\mathrm{MP}}\). The meaning of their names will become clear later. NP-EO\({}_{\mathrm{OP}}\) is simpler and computationally lighter than NP-EO\({}_{\mathrm{MP}}\). It is also "safer" in the sense that it achieves at least \(1-\delta\) probability type I error control, whereas NP-EO\({}_{\mathrm{MP}}\) is only theoretically guaranteed to achieve at least \(1-\delta^{+}\) probability control for some \(\delta^{+}\searrow\delta\) as the sample size grows. However, NP-EO\({}_{\mathrm{OP}}\) sacrifices some power. In contrast, NP-EO\({}_{\mathrm{MP}}\) achieves a smaller type II error and does not violate the exact high-probability NP constraint in our numerical analysis, as demonstrated in Section 5. Moreover, NP-EO\({}_{\mathrm{MP}}\) is a generalization of NP-EO\({}_{\mathrm{OP}}\) in terms of threshold selection. Thus, it is convenient for readers to encounter NP-EO\({}_{\mathrm{OP}}\) first.
### The NP-EO\({}_{\mathrm{OP}}\) umbrella algorithm
We now construct an algorithm that respects (7) and (8)4, and achieves type II error as small as possible. Denote by \(\mathcal{S}^{y,s}\) the set of \(X\) feature observations whose labels are \(y\) and sensitive attributes are \(s\), where \(y\in\{0,1\}\) and \(s\in\{a,b\}\). We assume that all the \(\mathcal{S}^{y,s}\)'s are independent, and instances within each \(\mathcal{S}^{y,s}\) are i.i.d. Each \(\mathcal{S}^{y,s}\) is divided into two halves: \(\mathcal{S}^{y,s}_{\mathrm{train}}\) for training scoring functions, and \(\mathcal{S}^{y,s}_{\mathrm{left-out}}\) for estimating the thresholds in the classifier (6).
Footnote 4: Strictly speaking, we only achieve \(\gamma^{+}\) in (8), where \(\gamma^{+}\searrow\gamma\) as sample size diverges.
First, all \(\mathcal{S}^{y,s}_{\mathrm{train}}\)'s are combined together to train a scoring function (e.g., sigmoid function in logistic regression) \(T:\mathcal{X}\times\{a,b\}\mapsto\mathrm{I\!R}\); then we take \(T^{a}(\cdot)=T(\cdot,a)\) and \(T^{b}(\cdot)=T(\cdot,b)\). To determine \(c_{a}\) and \(c_{b}\), we select pivots to fulfill the NP constraint first and then adjust them for the EO constraint. A prior result leveraged to achieve the high-probability NP constraint is the _NP umbrella algorithm_ developed by Tong et al. (2018). This algorithm adapts to all scoring-type classification methods (e.g., logistic regression and neural-nets), which we now describe. For an arbitrary (random) scoring function \(S:\mathcal{X}\mapsto\mathrm{I\!R}\) and i.i.d. class \(0\) observations \(\{X_{1}^{0},X_{2}^{0},\cdots,X_{n}^{0}\}\), a classifier that controls type I error under \(\alpha\) with probability at least \(1-\delta\) and achieves small type II error can be built as \(\mathrm{I\!I}\{S(X)>s_{(k^{*})}\}\), where \(s_{(k^{*})}\) is the \((k^{*})^{\mathrm{th}}\) order statistic of \(\{s_{1},s_{2},\cdots,s_{n}\}:=\{S(X_{1}^{0}),S(X_{2}^{0}),\cdots,S(X_{n}^{0})\}\) and \(k^{*}\) is the smallest \(k\in\{1,2,\cdots,n\}\) such
that \(\sum_{j=k}^{n}\binom{n}{j}(1-\alpha)^{j}\alpha^{n-j}\leq\delta.\) The smallest such \(k\) is chosen to achieve the smallest type II error. The only condition for this high-probability type I error control is \(n\geq\lceil\log\delta/\log(1-\alpha)\rceil\), a mild sample size requirement. More details of this algorithm are recollected from Tong et al. (2018) and provided in Appendix A.1.
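The pivot rule above is directly computable from the binomial tail sum. A sketch of the order-statistic choice (function names are ours):

```python
from math import comb, ceil, log

def np_order(n, alpha, delta):
    """Smallest k with sum_{j=k}^{n} C(n,j)(1-alpha)^j alpha^(n-j) <= delta."""
    if n < ceil(log(delta) / log(1 - alpha)):        # minimum sample size condition
        raise ValueError("class-0 sample too small for this (alpha, delta)")
    def tail(k):                                     # violation probability bound
        return sum(comb(n, j) * (1 - alpha) ** j * alpha ** (n - j)
                   for j in range(k, n + 1))
    return next(k for k in range(1, n + 1) if tail(k) <= delta)

def np_threshold(scores0, alpha, delta):
    """Order-statistic threshold: 1{S(X) > t} has R_0 <= alpha w.p. >= 1 - delta."""
    k = np_order(len(scores0), alpha, delta)
    return sorted(scores0)[k - 1]                    # the k-th order statistic

# with n = 100 class-0 scores and alpha = delta = 0.05, k* = 99
assert np_threshold(list(range(100)), 0.05, 0.05) == 98
```

Choosing the smallest such \(k\) keeps the threshold as low as the guarantee allows, which is what makes the resulting type II error small.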
Motivated by the NP umbrella algorithm, we apply \(T^{s}(\cdot)\) to each instance in \(\mathcal{S}^{y,s}_{\text{left-out}}\) to obtain \(\mathcal{T}^{y,s}=\{t_{1}^{y,s},t_{2}^{y,s},\cdots,t_{n_{s}^{y}}^{y,s}\}\), where \(n_{s}^{y}=|\mathcal{S}^{y,s}_{\text{left-out}}|\), \(y\in\{0,1\}\), and \(s\in\{a,b\}\). A natural starting point is to apply the NP umbrella algorithm (Tong et al., 2018) to the data with sensitive attributes \(a\) and \(b\) separately so that they both satisfy the NP constraint (7). Concretely, from the sorted set \(\mathcal{T}^{0,a}=\{t_{(1)}^{0,a},t_{(2)}^{0,a},\cdots,t_{(n_{a}^{0})}^{0,a}\}\), the pivot \(t_{(k_{*}^{0,a})}^{0,a}\) is selected as the \(\left(k_{*}^{0,a}\right)^{\text{th}}\) order statistic in \(\mathcal{T}^{0,a}\), where \(k_{*}^{0,a}\) is the smallest \(k\in\{1,\cdots,n_{a}^{0}\}\) such that \(\sum_{j=k}^{n_{a}^{0}}\binom{n_{a}^{0}}{j}(1-\alpha)^{j}\alpha^{n_{a}^{0}-j}\leq\delta\). The pivot \(t_{(k_{*}^{0,b})}^{0,b}\) is selected similarly on \(\mathcal{T}^{0,b}\). If \(c_{a}\geq t_{(k_{*}^{0,a})}^{0,a}\) and \(c_{b}\geq t_{(k_{*}^{0,b})}^{0,b}\), then the classifier \(\widehat{\phi}\) in (6) satisfies
\[\mathrm{I\!P}\left(R_{0}^{a}(\widehat{\phi})>\alpha\right)\leq\delta\quad\text { and }\quad\mathrm{I\!P}\left(R_{0}^{b}(\widehat{\phi})>\alpha\right)\leq\delta\,, \tag{9}\]
by Proposition 1 in Tong et al. (2018). In view of (1), the above inequalities guarantee that the NP constraint can be achieved with probability at least \(1-2\delta\). If we want to strictly enforce the \(1-\delta\) probability type I error control in theory, as in inequality (7), the \(\delta\) parameter in our algorithm can be replaced by \(\delta/2\).5
Footnote 5: However, numerical results in Section 5 suggest that this extra cautionary measure does not seem to be necessary in practice, because the subsequent EO adjustment step gears our algorithm towards the more conservative direction for type I error control.
The next step is to adjust the thresholds so that the resulting classifier also satisfies inequality (8), i.e., the high-probability EO constraint. To keep the NP constraint, we increase the values of thresholds for both groups. Similar to \(\mathcal{T}^{0,a}\) and \(\mathcal{T}^{0,b}\), we denote the sorted \(\mathcal{T}^{1,s}=\{t_{(1)}^{1,s},t_{(2)}^{1,s},\cdots,t_{(n_{s}^{1})}^{1,s}\}\) for \(s\in\{a,b\}\) and select \(c_{a}\) from \(\mathcal{T}^{1,a}\) and \(c_{b}\) from \(\mathcal{T}^{1,b}\) in order to facilitate the power calculation. Let
\[l_{a}=\sum_{j=1}^{n_{a}^{1}}\mathds{1}\left\{t_{j}^{1,a}\leq t_{(k_{*}^{0,a})}^{0,a}\right\}\quad\text{ and }\quad l_{b}=\sum_{j=1}^{n_{b}^{1}}\mathds{1}\left\{t_{j}^{1,b}\leq t_{(k_{*}^{0,b})}^{0,b}\right\}\,. \tag{10}\]
Then, \(c_{a}\) is selected from \(\{t_{(j)}^{1,a}:l_{a}<j\leq n_{a}^{1}\}\) and \(c_{b}\) is selected from \(\{t_{(j)}^{1,b}:l_{b}<j\leq n_{b}^{1}\}\) so that (9) holds. To this end, we investigate the distributions of
\[r_{1}^{a}(i)=\mathrm{I\!P}_{X^{1,a}}\left(T^{a}(X^{1,a})\leq t_{(i)}^{1,a} \right)\quad\text{ and }\quad r_{1}^{b}(j)=\mathrm{I\!P}_{X^{1,b}}\left(T^{b}(X^{1,b})\leq t_{(j)}^ {1,b}\right)\,,\]
for \(i>l_{a}\) and \(j>l_{b}\). They are respectively the \(S=a\) and \(S=b\) components of the type II error of the classifier in (6) if we take \(c_{a}=t_{(i)}^{1,a}\) and \(c_{b}=t_{(j)}^{1,b}\); they are random because the probabilities \(\mathrm{I\!P}_{X^{1,a}}\) and \(\mathrm{I\!P}_{X^{1,b}}\) are taken only over the randomness of \(X^{1,a}\) and \(X^{1,b}\), so the randomness of the order statistics remains. We need to understand these two quantities so as to choose, among all eligible pairs \((i,j)\), one that satisfies the EO constraint.
The left hand side of the inequality in equation (8) can be written as \(\mathrm{I\!P}\left(\left|r_{1}^{a}(i)-r_{1}^{b}(j)\right|>\varepsilon\right),\) since we can consider the scoring function \(T(\cdot)\) (and hence \(T^{a}(\cdot)\) and \(T^{b}(\cdot)\)) as fixed due to independent pretraining of \(T(\cdot)\). Since the random variables \(r_{1}^{a}(i)\) and \(r_{1}^{b}(j)\) are independent and admit similar definitions, we need only to study one of them as follows.
Let \(X\) and \(Y_{1},Y_{2},\cdots,Y_{n}\) be continuous, independent and identically distributed random variables. Moreover, let \(c\) be a random variable that is independent of \(X,Y_{1},\cdots,Y_{n}\), and define \(l=\sum_{j=1}^{n}\mathds{1}\{Y_{j}\leq c\}\). Our goal is to approximate the distribution of \(\mathds{P}_{X}(X\leq Y_{(k)})\) conditional on \(l\) for \(k>l\), which is needed for \(r_{1}^{a}(i)\) and \(r_{1}^{b}(j)\). Note that the conditional probability does not depend on the original distribution of \(X\) and
\[\mathds{P}_{X}(X\leq Y_{(k)}\mid l)=\mathds{P}_{X}(X\leq Y_{(l)}\mid l)+ \mathds{P}_{X}(Y_{(l)}<X\leq Y_{(k)}\mid l)\,.\]
By using the property of the uniform order statistics, it can be shown that the above quantity has the same distribution as \(g_{c,l}+\left(1-g_{c,l}\right)B_{k-l,n-k+1}\) for \(k>l\), with independent random variables \(g_{c,l}=\mathds{P}(Y_{1}\leq c\mid l)\) and \(B_{k-l,n-k+1}\sim\mathrm{Beta}(k-l,n-k+1)\). It remains to approximate the distribution of \(g_{c,l}\), which would simply be \(l/n\) if \(c\) were a constant. Recall that \(c\) is a random variable and \(g_{c,l}=\mathbb{E}(F(c)\mid l)\), where \(F\) is the cdf of \(Y_{1}\). Writing \(\theta=F(c)\), from the Bayesian point of view, the distribution of \(g_{c,l}\) is the posterior distribution of \(\theta\) given \(n\) i.i.d. Bernoulli\((\theta)\) observations with sufficient statistic \(l\). By the Bernstein-von Mises theorem, \(g_{c,l}\) is "close" to normally distributed with mean \(l/n\) (the MLE in the frequentist view) and variance equal to the inverse Fisher information of the Bernoulli trials at the MLE: \(n^{-1}(l/n)(1-l/n)\).
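The Beta law of uniform order statistics underlying this decomposition is easy to check numerically. The sketch below (our own illustration, not from the paper) verifies that, for uniform data, \(\mathds{P}_{X}(X\leq Y_{(k)})=Y_{(k)}\) concentrates around the \(\mathrm{Beta}(k,n-k+1)\) mean \(k/(n+1)\):

```python
import random

random.seed(1)
n, k, reps = 50, 10, 20000

# For X, Y_1..Y_n i.i.d. Uniform(0,1), P_X(X <= Y_(k)) is just Y_(k),
# whose exact law is Beta(k, n - k + 1) with mean k / (n + 1).
samples = []
for _ in range(reps):
    ys = sorted(random.random() for _ in range(n))
    samples.append(ys[k - 1])  # k-th order statistic (1-indexed)

mean = sum(samples) / reps
print(round(mean, 3), round(k / (n + 1), 3))
```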
The above discussion reveals that the distribution of \((r_{1}^{a}(i)\mid l_{a})\) can be approximated by \(G^{1,a}+\left(1-G^{1,a}\right)B_{i-l_{a},n_{a}^{1}-i+1}\) where \(G^{1,a}\sim\mathcal{N}\left(\frac{l_{a}}{n_{a}^{1}},\frac{l_{a}/n_{a}^{1}(1- l_{a}/n_{a}^{1})}{n_{a}^{1}}\right).\) Similarly, the distribution of \((r_{1}^{b}(j)\mid l_{b})\) can be approximated. Let \(F^{1,a}(i)\) and \(F^{1,b}(j)\) be two independent random variables such that \(F^{1,a}(i)=G^{1,a}+\left(1-G^{1,a}\right)B_{i-l_{a},n_{a}^{1}-i+1}\), in distribution and \(F^{1,b}(j)\) is defined analogously. Then, we can pick \((i,j)\) such that
\[\mathds{P}\left(\left|F^{1,a}(i)-F^{1,b}(j)\right|>\varepsilon\right)\leq \gamma\,. \tag{11}\]
Among these feasible pairs, the one that minimizes the empirical type II error, which can be calculated as \(\left((i-1)+(j-1)\right)/(n_{a}^{1}+n_{b}^{1})\), should be selected; i.e., we select
\[(k_{a}^{*},k_{b}^{*})=\operatorname*{arg\,min}_{\text{all feasible }(i,j)\text{ that satisfy (11)}}\ \frac{i+j-2}{n_{a}^{1}+n_{b}^{1}}\,. \tag{12}\]
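The search over feasible pairs can be sketched with a small Monte Carlo routine: sample from the approximate laws of \(r_{1}^{a}(i)\) and \(r_{1}^{b}(j)\), keep the pairs whose estimated violation probability is at most \(\gamma\), and return the pair with the smallest empirical type II error. The helper names (`sample_F1`, `select_pair`) are ours, and this is a sketch rather than the paper's exact implementation:

```python
import numpy as np

def sample_F1(i, l, n, size, rng):
    """Approximate law of r_1(i) given l: G + (1 - G) * Beta(i - l, n - i + 1),
    with G ~ N(l/n, (l/n)(1 - l/n)/n), clipped to [0, 1]."""
    p = l / n
    G = np.clip(rng.normal(p, np.sqrt(p * (1 - p) / n), size), 0.0, 1.0)
    B = rng.beta(i - l, n - i + 1, size)
    return G + (1 - G) * B

def select_pair(la, lb, na1, nb1, eps, gamma, size=20_000, seed=0):
    """Return (i, j) minimizing (i + j - 2) / (na1 + nb1) among the pairs with
    estimated P(|F^{1,a}(i) - F^{1,b}(j)| > eps) <= gamma."""
    rng = np.random.default_rng(seed)
    best = None
    for i in range(la + 1, na1 + 1):
        Fa = sample_F1(i, la, na1, size, rng)
        for j in range(lb + 1, nb1 + 1):
            Fb = sample_F1(j, lb, nb1, size, rng)
            if np.mean(np.abs(Fa - Fb) > eps) <= gamma:
                cand = (i + j - 2, i, j)
                if best is None or cand < best:
                    best = cand
    return None if best is None else (best[1], best[2])
```

Returning `None` corresponds to the exceptional case, discussed below, in which no pair satisfies (11).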
The process to arrive at \((k_{a}^{*},k_{b}^{*})\) is illustrated in Figure 3. We propose an NP-EO classifier
\[\widehat{\phi}^{*}(X,S)=\mathds{1}\{T^{a}(X)>t_{(k_{a}^{*})}^{1,a}\}\cdot \mathds{1}\{S=a\}+\mathds{1}\{T^{b}(X)>t_{(k_{b}^{*})}^{1,b}\}\cdot\mathds{1 }\{S=b\}\,.\]
Note that, if none of \(i\in\{l_{a}+1,\cdots,n_{a}^{1}\}\) and \(j\in\{l_{b}+1,\cdots,n_{b}^{1}\}\) satisfy inequality (11), we say our algorithm does not provide a viable NP-EO classifier. This kind of exception has not occurred in our simulation or real data studies.
We summarize the above NP-EO umbrella algorithm in Algorithm 1. Note that in Step 8, the NP violation rate control at \(\delta/2\) is needed for theoretical purposes (c.f. Theorem 2 and its proof). We will demonstrate through numerical analysis that it suffices to use \(\delta\) instead. We also note that the steps to reach \((k_{a}^{*},k_{b}^{*})\) are summarized as the _EO violation algorithm_ (Step 10) inside Algorithm 1, and also presented separately as Algorithm 3 in the appendix for clarity. The next theorem provides a theoretical guarantee for \(\widehat{\phi}^{*}(X,S)\).
**Theorem 2**: _Let \(\widehat{\phi}^{*}(\cdot,\cdot)\) be the classifier output by Algorithm 1 with parameters \((\alpha,\delta/2,\varepsilon,\gamma)\). Assume that the scoring function \(T(\cdot,\cdot)\) is trained such that \(T^{s}(X^{y,s})\) is a continuous random variable whose distribution function is strictly monotone for each \(y\in\{0,1\}\) and \(s\in\{a,b\}\), and that all distribution functions for \(T^{s}(X^{y,s})\) have the same support. Furthermore, assume that
Figure 3: A cartoon illustration of the choices of \(k_{a}^{*}\) and \(k_{b}^{*}\). They are moved within the NP-constrained feasible region (to the left) to search for the pairs that satisfy the EO constraint and to pick the most powerful pair. For every \(\mathcal{T}^{y,s}\), the circles, or squares, in its corresponding row represent its sorted elements, ascending from left to right.
\(\min\{n_{a}^{0},n_{b}^{0}\}\geq\log(\delta/2)/\log(1-\alpha)\). Then it holds simultaneously that_
\[\text{(a)}\ \mathrm{I\!P}\left(R_{0}(\widehat{\phi}^{*})>\alpha\right)\leq \delta\quad\text{ and }\quad\text{(b)}\ \mathrm{I\!P}\left(|R_{1}^{a}(\widehat{\phi}^{*})-R_{1}^{b}(\widehat{\phi}^{ *})|>\varepsilon\right)\leq\gamma+\xi(n_{a}^{1},n_{b}^{1})\,,\]
_in which \(\xi(n_{a}^{1},n_{b}^{1})\) converges to \(0\) as \(n_{a}^{1}\) and \(n_{b}^{1}\) diverge._
In Theorem 2, the conditions on the distributions of \(T^{s}(X^{y,s})\) ensure that the Bernstein-von Mises theorem can be invoked. Indeed, taking the \(S=a\) component as an example, the theorem is applied to the binomial sample \(l_{a}\) defined in equation (10), whose success probability is \(\mathrm{I\!P}_{X^{1,a}}\left(T^{a}(X^{1,a})\leq t_{(k_{*}^{0,a})}^{0,a}\right)\). The key issue here is that this random probability needs to be in the interior of \([0,1]\) with probability \(1\), which is guaranteed by the assumptions on \(T^{s}(X^{y,s})\). Next, the assumptions on \(n_{a}^{0}\) and \(n_{b}^{0}\), adapted from Tong et al. (2018), are mild sample size requirements that ensure the high-probability NP constraint (c.f. part (a) of Theorem 2). We note that part (b) of Theorem 2 states that the type II error disparity violation rate can be controlled by \(\gamma\) plus a term that vanishes asymptotically. This asymptotically negligible extra term is the price paid for the error of the Gaussian approximation to the distributions of \(r_{1}^{a}\) and \(r_{1}^{b}\).
### The NP-EO\({}_{\text{MP}}\) umbrella algorithm
Algorithm 1 (NP-EO\({}_{\text{OP}}\)) employs a "conservative" approach. Concretely, one pair of pivots, selected to ensure high-probability control on \(R_{0}^{a}\) and \(R_{0}^{b}\) simultaneously, serves as the lower bounds for the final thresholds. However, it could be suboptimal to control both \(R_{0}^{a}\) and \(R_{0}^{b}\), as our goal is to control \(R_{0}\); indeed, it can induce unnecessarily small \(R_{0}\), leading to large \(R_{1}\) and hurting the power of the classifier. To amend this, we can start from a sensitive-attribute-agnostic NP classifier, and then adjust the thresholds for both groups while maintaining the overall type I error control. This gives us a wider class of pivots (than in the NP-EO\({}_{\text{OP}}\) algorithm), and thus enables us to search for a more powerful classifier.
In our second and more general version of the NP-EO umbrella algorithm, we assume a slightly different sampling scheme for theoretical purposes. Denote by \(\mathcal{S}^{y}\) the set of \((X,S)\) feature observations whose labels are \(y\), where \(y\in\{0,1\}\). We assume that \(\mathcal{S}^{0}\) and \(\mathcal{S}^{1}\) are independent and the instances within each \(\mathcal{S}^{y}\) are i.i.d. Let \(\mathcal{S}^{y,s}\) be the set of \(X\) feature observations within \(\mathcal{S}^{y}\) whose sensitive attribute is \(s\), where \(s\in\{a,b\}\). Under this sampling scheme, we assume that \(n^{y}=|\mathcal{S}^{y}|\) is deterministic for \(y\in\{0,1\}\). Denote by \(n_{s}^{y}=|\mathcal{S}^{y,s}|\); then \(n_{a}^{y}\) and \(n_{b}^{y}\) are random, and \(n^{y}=n_{a}^{y}+n_{b}^{y}\). Recall that we also denote \(p_{s|y}=\mathrm{I\!P}(S=s\mid Y=y)\). Each \(\mathcal{S}^{y,s}\) is split equally into \(\mathcal{S}^{y,s}_{\text{train}}\) and \(\mathcal{S}^{y,s}_{\text{left-out}}\). Training of the scoring function \(T\) (and thus \(T^{a}\) and \(T^{b}\)) is the same as in Algorithm 1, and the scoring function \(T\) is again applied to all elements in \(\mathcal{S}^{y,s}_{\text{left-out}}\) to obtain the set of scores \(\mathcal{T}^{y,s}\), where \(y\in\{0,1\}\) and \(s\in\{a,b\}\). Similar to the approach outlined in Section 4.1, we first address the NP constraint. However, instead of two sensitive-attribute-specific thresholds, we start with an intermediate classifier that has the same threshold for both groups:
\[\widehat{\phi}_{*}(X,S) = \mathrm{I\!I}\{T(X,S)>t_{(k_{*})}^{0}\} \tag{13}\] \[= \mathrm{I\!I}\{T^{a}(X)>t_{(k_{*})}^{0}\}\mathrm{I\!I}\{S=a\}+ \mathrm{I\!I}\{T^{b}(X)>t_{(k_{*})}^{0}\}\mathrm{I\!I}\{S=b\}\,,\]
where \(t^{0}_{(k_{*})}\) is the \((k_{*})^{\text{th}}\) order statistic in \(\mathcal{T}^{0}=\mathcal{T}^{0,a}\cup\mathcal{T}^{0,b}\) and \(k_{*}\) is selected by the NP umbrella algorithm on \(\mathcal{T}^{0}\). This threshold selection guarantees that \(R_{0}(\widehat{\phi}_{*})\) is controlled under \(\alpha\) with high probability. We will use \(\widehat{\phi}_{*}\) as a bridge. Concretely, if a classifier of the form in (6) admits the same empirical type I error on \(\mathcal{T}^{0}\) as \(\widehat{\phi}_{*}\), their population-level type I errors should be close, and thus they can both be controlled under \(\alpha\) with probability close to \(1-\delta\). One can see that \(\widehat{\phi}_{*}\) makes \(k_{a}^{0}+k_{b}^{0}\) correct classifications on \(\mathcal{T}^{0}\), where
\[k_{a}^{0}=\sum_{j=1}^{n_{a}^{0}}\mathds{1}\{t_{j}^{0,a}\leq t_{(k_{*})}^{0}\}\quad\text{and}\quad k_{b}^{0}=\sum_{j=1}^{n_{b}^{0}}\mathds{1}\{t_{j}^{0,b}\leq t_{(k_{*})}^{0}\}\,. \tag{14}\]
In fact, if any \(t_{(k_{a})}^{0,a}\in\mathcal{T}^{0,a}\) and \(t_{(k_{b})}^{0,b}\in\mathcal{T}^{0,b}\), where \(k_{a}\in[n_{a}^{0}]\) and \(k_{b}\in[n_{b}^{0}]\), are chosen as the thresholds for \(T^{a}\) and \(T^{b}\) respectively, then as long as \(k_{a}+k_{b}=k_{a}^{0}+k_{b}^{0}\), the resulting classifier has the same empirical type I error on \(\mathcal{T}^{0}\) as \(\widehat{\phi}_{*}\). Thus, to respect the high-probability NP constraint, we may choose any pair of thresholds \(c_{a},c_{b}\) such that \(c_{a}\geq t_{(k_{a})}^{0,a}\) and \(c_{b}\geq t_{(k_{b})}^{0,b}\), where the pivots \(t_{(k_{a})}^{0,a}\) and \(t_{(k_{b})}^{0,b}\) satisfy \(k_{a}+k_{b}=k_{a}^{0}+k_{b}^{0}\). This larger collection of pivot pairs makes power improvement possible.
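The pooled threshold \(t^{0}_{(k_{*})}\) and the counts \(k_{a}^{0},k_{b}^{0}\) in (14) can be computed directly. Below is a minimal sketch with our own helper names, including a compact re-implementation of the NP umbrella order-statistic rule; it assumes all scores are distinct, in which case \(k_{a}^{0}+k_{b}^{0}=k_{*}\):

```python
from math import comb

def np_order(n, alpha, delta):
    """Smallest k with P(Binomial(n, 1-alpha) >= k) <= delta (NP umbrella rule)."""
    for k in range(1, n + 1):
        if sum(comb(n, j) * (1 - alpha)**j * alpha**(n - j)
               for j in range(k, n + 1)) <= delta:
            return k
    raise ValueError("sample size too small")

def pooled_pivot(scores0_a, scores0_b, alpha, delta):
    """Pooled threshold t^0_(k_*) and the per-group counts (k_a^0, k_b^0) of (14)."""
    pooled = sorted(scores0_a + scores0_b)
    k_star = np_order(len(pooled), alpha, delta)
    t0 = pooled[k_star - 1]                     # k_*-th order statistic
    k_a0 = sum(t <= t0 for t in scores0_a)
    k_b0 = sum(t <= t0 for t in scores0_b)
    return t0, k_a0, k_b0
```

With distinct scores, any pivot pair \((k_{a},k_{b})\) on the "budget line" \(k_{a}+k_{b}=k_{a}^{0}+k_{b}^{0}\) then preserves the empirical type I error.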
The next goal is to satisfy the high-probability EO constraint. Here, the steps and reasoning are similar to Algorithm 1. Let \(l_{a}(k_{a})\) and \(l_{b}(k_{b})\), functions of \(k_{a}\) and \(k_{b}\), be defined analogously to (10), with \(t_{(k_{*}^{0,a})}^{0,a}\) and \(t_{(k_{*}^{0,b})}^{0,b}\) replaced by \(t_{(k_{a})}^{0,a}\) and \(t_{(k_{b})}^{0,b}\), respectively. Denote \(\ell_{a}=\{l_{a}(1),\cdots,l_{a}(n_{a}^{0})\}\) and \(\ell_{b}=\{l_{b}(1),\cdots,l_{b}(n_{b}^{0})\}\). As long as the two thresholds \(c_{a},c_{b}\) are selected from \(\left\{t_{(j)}^{1,a}:l_{a}(k_{a})<j\leq l_{a}(k_{a}+1)\right\}\) and \(\left\{t_{(j)}^{1,b}:l_{b}(k_{b})<j\leq l_{b}(k_{b}+1)\right\}\), respectively,6 and \(k_{a}+k_{b}=k_{a}^{0}+k_{b}^{0}\), the high-probability NP constraint can be respected. Write
Footnote 6: For simplicity of narrative, \(l_{a}(n_{a}^{0}+1)\) and \(l_{b}(n_{b}^{0}+1)\) are set to \(n_{a}^{1}\) and \(n_{b}^{1}\), respectively.
\[\operatorname{I\mskip-2.0mu P}\left(\left|r_{1}^{a}(i)-r_{1}^{b}(j)\right|> \varepsilon\right)=\operatorname{I\mskip-2.0mu E}_{s_{r},\ell_{a},\ell_{b}} \operatorname{I\mskip-2.0mu P}\left(\left|r_{1}^{a}(i)-r_{1}^{b}(j)\right|> \varepsilon\mid s_{r},\ell_{a},\ell_{b}\right)\,. \tag{15}\]
In the above, \(s_{r}\) stores the vector of sensitive attributes associated with all instances in \(\mathcal{S}^{1,a}_{\text{left-out}}\cup\mathcal{S}^{1,b}_{\text{left-out}}\), and the outer expectation is taken over \(s_{r}\), \(\ell_{a}\), and \(\ell_{b}\). To approximate the conditional probability inside the expectation, we generalize the earlier motivating example to multiple thresholds. Let \(X\) and \(Y_{1},\cdots,Y_{n}\) be continuous i.i.d. random variables, and let \(c_{(1)}<c_{(2)}<\cdots<c_{(m)}\) be random variables independent of them. Define \(l_{(i)}=\sum_{j=1}^{n}\mathds{1}\{Y_{j}\leq c_{(i)}\}\) for \(i\in[m]\) and \(\ell=[l_{(1)},\cdots,l_{(m)}]^{\top}\). The goal is to approximate the distribution of \(\mathds{P}_{X}(X\leq Y_{(k)})\mid\ell\). By the property of uniform order statistics, for \(l_{(p)}<k\leq l_{(p+1)}\), this quantity has the same distribution as \(G_{c,\ell}^{(p)}+\left(G_{c,\ell}^{(p+1)}-G_{c,\ell}^{(p)}\right)B_{k-l_{(p)},l_{(p+1)}-k+1}\), with the conventions \(l_{(0)}=0\), \(G_{c,\ell}^{(0)}=0\), \(l_{(m+1)}=n\), and \(G_{c,\ell}^{(m+1)}=1\), where \(B_{k-l_{(p)},l_{(p+1)}-k+1}\sim\mathrm{Beta}(k-l_{(p)},l_{(p+1)}-k+1)\) and
\[G_{c,\ell}:=\left[G_{c,\ell}^{(1)},G_{c,\ell}^{(2)},\cdots,G_{c,\ell}^{(m)}\right]^ {\top}:=\left[\mathds{P}_{Y_{1}}(Y_{1}\leq c_{(1)}),\mathds{P}_{Y_{1}}(Y_{1}\leq c _{(2)}),\cdots,\mathds{P}_{Y_{1}}(Y_{1}\leq c_{(m)})\right]^{\top}\mid\ell\,.\]
Here, \(G_{c,\ell}\) and the Beta random variables are independent. The next step is to approximate the distribution of \(G_{c,\ell}\). With a slight abuse of notation, denote \(c_{(0)}=-\infty,c_{(m+1)}=+\infty\) and \(l_{(0)}=0,l_{(m+1)}=n\). It suffices to consider the joint distribution of the quantity
\[\Delta G_{c}\mid\Delta\ell:=\left[\mathds{P}_{Y_{1}}\left(c_{(j-1)}<Y_{1}\leq c_{(j)}\right),j\in[m+1]\right]^{\top}\mid\left[l_{(i)}-l_{(i-1)},i\in[m+1]\right]^{\top}\,.\]
For fixed \(c_{(j)}\)'s, \(\Delta G_{c}=\left[\mathds{P}_{Y_{1}}\left(c_{(j-1)}<Y_{1}\leq c_{(j)}\right),j\in[m+1]\right]^{\top}\) can be viewed as the vector of probabilities for a multinomial distribution, and \(\Delta\ell=[l_{(i)}-l_{(i-1)},i\in[m+1]]^{\top}\) is a multinomial random variable of size \(n\) generated from this distribution. Then, the maximum likelihood estimator for \(\Delta G_{c}\) is \(\frac{\Delta\ell}{n}\). Therefore, when the \(c_{(j)}\)'s are random for \(j\in[m]\), the distribution of \(\Delta G_{c}\mid\Delta\ell\) is the posterior distribution of \(\Delta G_{c}\) given \(\Delta\ell\), and thus, by invoking the Bernstein-von Mises theorem again, is "close to" Gaussian centered at \(\frac{\Delta\ell}{n}\) with covariance matrix \(\Sigma\), where
\[\Sigma_{i,j}=\begin{cases}\frac{1}{n}\mathds{P}_{Y_{1}}\left(c_{(j-1)}<Y_{1} \leq c_{(j)}\right)\left(1-\mathds{P}_{Y_{1}}\left(c_{(j-1)}<Y_{1}\leq c_{(j) }\right)\right),&i=j\,,\\ -\frac{1}{n}\mathds{P}_{Y_{1}}\left(c_{(i-1)}<Y_{1}\leq c_{(i)}\right) \mathds{P}_{Y_{1}}\left(c_{(j-1)}<Y_{1}\leq c_{(j)}\right),&i\neq j\,.\end{cases}\]
Furthermore, we can use \((l_{(j)}-l_{(j-1)})/n\) to replace \(\mathds{P}_{Y_{1}}\left(c_{(j-1)}<Y_{1}\leq c_{(j)}\right)\) and obtain an estimated covariance matrix \(\tilde{\Sigma}\). This completes the approximation of the distribution of \(\mathds{P}_{X}(X\leq Y_{(k)})\mid\ell\).
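The plug-in estimate \(\tilde{\Sigma}\) is simply the multinomial covariance with the cell probabilities replaced by observed proportions; a minimal sketch (the helper name is ours):

```python
import numpy as np

def increment_cov(dl, n):
    """Plug-in covariance of the multinomial increment probabilities:
    Sigma = (diag(p) - p p^T) / n with p = dl / n (observed cell proportions)."""
    p = np.asarray(dl, dtype=float) / n
    return (np.diag(p) - np.outer(p, p)) / n
```

Because the cell probabilities sum to one, every row of \(\tilde{\Sigma}\) sums to zero, matching the diagonal and off-diagonal formulas displayed above.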
Despite being lengthy, this example relates directly to the problem in this section. Recall that in view of (15), the goal is to approximate the distribution of \((r_{1}^{a}(i)\mid\ell_{a})\). Note that conditional on the scoring function \(T^{a}\) and \(s_{r}\), the scores \(t^{1,a},t_{1}^{1,a},t_{2}^{1,a},\cdots,t_{n_{a}^{1}}^{1,a}\) are i.i.d. random variables, and \(t_{1}^{0,a},t_{2}^{0,a},\cdots,t_{n_{a}^{0}}^{0,a}\) are also i.i.d. random variables. Furthermore, the two groups of random variables are mutually independent. Moreover, \(r_{1}^{a}(i)=\mathds{P}_{t^{1,a}}\left(t^{1,a}\leq t_{(i)}^{1,a}\right)\) and \(l_{a}(j)=\sum_{h=1}^{n_{a}^{1}}\mathds{1}\{t_{h}^{1,a}\leq t_{(j)}^{0,a}\}\) for every \(i\in[n_{a}^{1}]\) and \(j\in[n_{a}^{0}]\). Therefore, the problem setting is in line with the motivating example above, and the distribution of \(r_{1}^{a}(i)\mid\ell_{a}\) can be approximated in the same way; the same procedure applies to the \(S=b\) component. To conclude, we select \(i\) and \(j\) such that
\[\mathds{P}\left(\left|\tilde{F}^{1,a}(i)-\tilde{F}^{1,b}(j)\right|>\varepsilon \right)\leq\gamma\,, \tag{16}\]
where
\[\tilde{F}^{1,a}(i)\stackrel{{ d}}{{=}}\begin{cases}B_{i,l_{a}(1)-i+1}\tilde{G}_{1}^{1,a},&i\leq l_{a}(1)\,,\\ \tilde{G}_{p}^{1,a}+\left(\tilde{G}_{p+1}^{1,a}-\tilde{G}_{p}^{1,a}\right)B_{i-l_{a}(p),l_{a}(p+1)-i+1},&l_{a}(p)<i\leq l_{a}(p+1),\ p\in[n_{a}^{0}-1]\,,\\ \tilde{G}_{n_{a}^{0}}^{1,a}+(1-\tilde{G}_{n_{a}^{0}}^{1,a})B_{i-l_{a}(n_{a}^{0}),n_{a}^{1}-i+1},&i>l_{a}(n_{a}^{0})\,,\end{cases}\]
and \(\tilde{G}^{1,a}=\left[\tilde{G}_{1}^{1,a},\cdots,\tilde{G}_{n_{a}^{0}}^{1,a}\right]^{\top}\) is a Gaussian vector with mean \([l_{a}(1)/n_{a}^{1},\ldots,l_{a}(n_{a}^{0})/n_{a}^{1}]^{\top}\) and covariance matrix
\[\begin{bmatrix}\frac{(d_{a}(1)/n_{a}^{1})(1-d_{a}(1)/n_{a}^{1})}{n_{a}^{1}}&-\frac{(d_{a}(1)/n_{a}^{1})(d_{a}(2)/n_{a}^{1})}{n_{a}^{1}}&\cdots&-\frac{(d_{a}(1)/n_{a}^{1})(d_{a}(n_{a}^{0}+1)/n_{a}^{1})}{n_{a}^{1}}\\ -\frac{(d_{a}(2)/n_{a}^{1})(d_{a}(1)/n_{a}^{1})}{n_{a}^{1}}&\frac{(d_{a}(2)/n_{a}^{1})(1-d_{a}(2)/n_{a}^{1})}{n_{a}^{1}}&\cdots&-\frac{(d_{a}(2)/n_{a}^{1})(d_{a}(n_{a}^{0}+1)/n_{a}^{1})}{n_{a}^{1}}\\ \vdots&\vdots&\ddots&\vdots\\ -\frac{(d_{a}(n_{a}^{0}+1)/n_{a}^{1})(d_{a}(1)/n_{a}^{1})}{n_{a}^{1}}&-\frac{(d_{a}(n_{a}^{0}+1)/n_{a}^{1})(d_{a}(2)/n_{a}^{1})}{n_{a}^{1}}&\cdots&\frac{(d_{a}(n_{a}^{0}+1)/n_{a}^{1})(1-d_{a}(n_{a}^{0}+1)/n_{a}^{1})}{n_{a}^{1}}\end{bmatrix}\,.\]
Here,
\[d_{a}(k)=\begin{cases}l_{a}(1),&k=1\,,\\ l_{a}(k)-l_{a}(k-1),&k=2,3,\cdots,n_{a}^{0}\,,\\ n_{a}^{1}-l_{a}(n_{a}^{0}),&k=n_{a}^{0}+1\,.\end{cases}\]
Moreover, \(\tilde{F}^{1,b}(j)\) is defined analogously. Details of this approximation can be found in Algorithm 4 in the Appendix. Next, one pair of \(i\) and \(j\) needs to be selected among all possible pairs satisfying (16). In Algorithm 1, we traverse all feasible pairs of \(i\) and \(j\) and choose one that minimizes the empirical type II error. This was computationally feasible because only \(i,j\) such that \(t_{(i)}^{1,a}>t_{(k_{*}^{0,a})}^{0,a}\) and \(t_{(j)}^{1,b}>t_{(k_{*}^{0,b})}^{0,b}\) were considered. However, our generalized algorithm \(\text{NP-EO}_{\text{MP}}\) has multiple pairs of pivots, and it could be time-consuming to do the same. Therefore, we adopt the following heuristics:
1. Compute \(t_{(k_{a})}^{0}\) by the NP umbrella algorithm. Then, select \(k_{a}^{0}\) and \(k_{b}^{0}\) by (14) and set \(k_{a}=k_{a}^{0}\) and \(k_{b}=k_{b}^{0}\).
2. Given \(k_{a}\) and \(k_{b}\), set \(i=l_{a}(k_{a})+1\) and \(j=l_{b}(k_{b})+1\), i.e., \(i\) is such that \(t_{(i)}^{1,a}\) is the smallest element in \(\mathcal{T}^{1,a}\) larger than \(t_{(k_{a})}^{0,a}\), and \(j\) is selected analogously.
3. Apply Algorithm 4 to \(i,j^{7}\) to calculate the approximate one-sided EO violation rates \(\operatorname{\rm I\!P}(\tilde{F}^{1,a}(i)-\tilde{F}^{1,b}(j)\geq\varepsilon)\) and \(\operatorname{\rm I\!P}(\tilde{F}^{1,b}(j)-\tilde{F}^{1,a}(i)\geq\varepsilon)\). If the former approximation is larger than \(\gamma\), i.e., \(\tilde{F}^{1,a}(i)\) is too large, increase \(k_{b}\) by \(1\) and decrease \(k_{a}\) by \(1\). If the latter approximation is larger than \(\gamma\), increase \(k_{a}\) by \(1\) and decrease \(k_{b}\) by \(1\).
4. Repeat Steps (b) - (c) until the approximate value \(\operatorname{\rm I\!P}(|\tilde{F}^{1,a}(i)-\tilde{F}^{1,b}(j)|\geq\varepsilon)\) is smaller than or equal to \(\gamma\), then use \(t_{(i)}^{1,a}\) and \(t_{(j)}^{1,b}\) as thresholds.8 Footnote 8: There are exceptions where Step (d) cannot be achieved by repeating Steps (b) - (c). However, these can be handled subtly by adjusting \(i\) and \(j\). Details are included in Algorithm 5 in the Appendix.
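Steps (b)-(d) amount to shifting one unit of "pivot budget" between the two groups while keeping \(k_{a}+k_{b}\) fixed. The toy sketch below (our own helper; the one-sided violation estimates are supplied as callables rather than computed via Algorithm 4) illustrates the loop:

```python
def balance_pivots(ka, kb, viol_a, viol_b, gamma, max_iter=1000):
    """Shift (ka, kb) with ka + kb fixed until both one-sided EO violation
    estimates drop to gamma or below.

    viol_a(ka, kb): estimate of P(F^{1,a} - F^{1,b} >= eps)  (group a too weak)
    viol_b(ka, kb): estimate of P(F^{1,b} - F^{1,a} >= eps)  (group b too weak)
    """
    for _ in range(max_iter):
        va, vb = viol_a(ka, kb), viol_b(ka, kb)
        if va <= gamma and vb <= gamma:
            return ka, kb
        if va > gamma:            # group a's type II error too large: lower its pivot
            ka, kb = ka - 1, kb + 1
        else:                     # group b's type II error too large
            ka, kb = ka + 1, kb - 1
    raise RuntimeError("no feasible pair found within max_iter steps")

# Toy one-sided violation estimates: disparity grows with the pivot imbalance.
va = lambda ka, kb: max(0.0, 0.1 * (ka - kb))
vb = lambda ka, kb: max(0.0, 0.1 * (kb - ka))
print(balance_pivots(8, 2, va, vb, gamma=0.05))  # -> (5, 5)
```

In the real algorithm the adjustment additionally updates \(i\) and \(j\) after each shift and handles the boundary exceptions noted in footnote 8.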
Let us briefly discuss the above procedure. After the key quantities \(t_{(k_{*})}^{0}\), \(k_{a}^{0}\), and \(k_{b}^{0}\) are determined, \(k_{a}\) and \(k_{b}\) are set to \(k_{a}^{0}\) and \(k_{b}^{0}\), respectively, in Step (a). In Steps (b) and (c), an iterative method is used to find \(i\) and \(j\) that satisfy (16). For a pair of \(k_{a}\) and \(k_{b}\), we only look at \(i\) and \(j\) such that \(t_{(i)}^{1,a}\) and \(t_{(j)}^{1,b}\) are the smallest elements in \(\mathcal{T}^{1,a}\) and \(\mathcal{T}^{1,b}\) that are larger than \(t_{(k_{a})}^{0,a}\) and \(t_{(k_{b})}^{0,b}\), respectively. If this pair of \(i\) and \(j\) fails to satisfy (16), we adjust \(k_{a}\) and \(k_{b}\), and then update \(i\) and \(j\) accordingly. For example, if \(\operatorname{\rm I\!P}(\tilde{F}^{1,a}(i)-\tilde{F}^{1,b}(j)\geq\varepsilon)>\gamma\), i.e., \(\tilde{F}^{1,a}(i)\) is too large and \(\tilde{F}^{1,b}(j)\) is too small, we decrease \(k_{a}\) by \(1\) and increase \(k_{b}\) by \(1\), so that \(k_{a}+k_{b}=k_{a}^{0}+k_{b}^{0}\) and thus the high-probability NP constraint is respected. After \(k_{a}\) and \(k_{b}\) are updated, \(i\) and \(j\) are selected in the same way described above. This updating procedure can be done iteratively until (16) is reached. Then, the scores \(t_{(i)}^{1,a}\) and \(t_{(j)}^{1,b}\) are selected as the thresholds of the resulting classifier.
This more general version of NP-EO umbrella algorithm is summarized as Algorithm 2. Instead of using only one pair of pivots in Algorithm 1, Algorithm 2 uses multiple pairs. Concretely, the two pivots \(t_{(k_{a})}^{0,a}\) and \(t_{(k_{b})}^{0,b}\) can be increased or decreased based on their resulting one-sided
type II error disparities. Algorithm 1 controls \(R_{0}^{a}\) and \(R_{0}^{b}\) simultaneously to achieve the high-probability NP constraint. Algorithm 2, however, relieves the control on one of them and uses the empirical type I errors as a bridge to have an "approximate control" on the population-level type I error. This increases the risk of failing the exact probability target of type I error control. However, the advantage of this less conservative approach is obvious: lowering the pivot on one side allows a higher classification power. Indeed, numerical evidence from Section 5.1 suggests that Algorithm 2 has a lower type II error compared to Algorithm 1 and a higher type I error. Furthermore, both algorithms satisfy the high-probability NP and EO constraints. As in Section 4.1, in theory there could be exceptions in which no \((i,j)\) satisfies (16); however, we have not encountered such an exception in our data analyses.
Now we are ready to present the theoretical guarantee for Algorithm 2. Since the empirical type I errors are used as a bridge to link the population-level type I errors for different pairs of pivots, a concentration of empirical type I errors towards the population-level type I error is needed. Thus, in the following theoretical result, we allow an \(\eta\)-error between empirical and population-level type I errors. That is, the target level for type I error control will be set at \(\alpha-\eta\), where \(\eta\) is a small number compared with \(\alpha\). However, this is not needed in the numerical implementation of Algorithm 2.
```
Input : \(\mathcal{S}^{y,s}\): \(X\) observations whose label \(y\in\{0,1\}\) and sensitive attribute \(s\in\{a,b\}\)
        \(\alpha\): upper bound for type I error
        \(\delta\): type I error violation rate target
        \(\varepsilon\): upper bound for the type II error disparity
        \(\gamma\): type II error disparity violation rate target
1.  \(\mathcal{S}^{y,s}_{\text{train}},\mathcal{S}^{y,s}_{\text{left-out}}\leftarrow\) random split on \(\mathcal{S}^{y,s}\) for \(y\in\{0,1\}\) and \(s\in\{a,b\}\)
2.  \(\mathcal{S}_{\text{train}}\leftarrow\mathcal{S}^{0,a}_{\text{train}}\cup\mathcal{S}^{0,b}_{\text{train}}\cup\mathcal{S}^{1,a}_{\text{train}}\cup\mathcal{S}^{1,b}_{\text{train}}\)
3.  \(T\leftarrow\) base classification algorithm(\(\mathcal{S}_{\text{train}}\)) ; // \(T(\cdot,\cdot):\mathcal{X}\times\{a,b\}\mapsto\mathrm{I\!R}\)
4.  \(T^{s}(\cdot)\gets T(\cdot,s)\) for \(s\in\{a,b\}\)
5.  \(\mathcal{T}^{y,s}\gets T^{s}(\mathcal{S}^{y,s}_{\text{left-out}})\) for \(y\in\{0,1\}\) and \(s\in\{a,b\}\)
6.  \(n^{y}_{s}\leftarrow|\mathcal{T}^{y,s}|\) for \(y\in\{0,1\}\) and \(s\in\{a,b\}\)
7.  \(\mathcal{T}^{0}=\mathcal{T}^{0,a}\cup\mathcal{T}^{0,b}=\{t^{0}_{(1)},t^{0}_{(2)},\cdots,t^{0}_{(n^{0})}\}\), where \(n^{0}=n^{0}_{a}+n^{0}_{b}\)
8.  \(\mathcal{T}^{y,s}=\{t^{y,s}_{(1)},t^{y,s}_{(2)},\cdots,t^{y,s}_{(n^{y}_{s})}\}\) for \(y\in\{0,1\}\) and \(s\in\{a,b\}\)
9.  \(k_{*}\leftarrow\) the NP umbrella algorithm(\(n^{0},\alpha,\delta\))
10. \(\{l_{s}(1),\cdots,l_{s}(n^{0}_{s})\}\leftarrow\left\{\sum_{j=1}^{n^{1}_{s}}\mathds{1}\{t^{1,s}_{j}\leq t^{0,s}_{(1)}\},\cdots,\sum_{j=1}^{n^{1}_{s}}\mathds{1}\{t^{1,s}_{j}\leq t^{0,s}_{(n^{0}_{s})}\}\right\}\) for \(s\in\{a,b\}\)
11. \(k_{s}\gets k^{0}_{s}\leftarrow\sum_{j=1}^{n^{0}_{s}}\mathds{1}\{t^{0,s}_{j}\leq t^{0}_{(k_{*})}\}\) for \(s\in\{a,b\}\)
12. \((k^{*}_{a},k^{*}_{b})\leftarrow\) Order selection algorithm(\(k_{s},n^{y}_{s},l_{s}(1),\cdots,l_{s}(n^{0}_{s}),\varepsilon,\gamma\)) for \(s\in\{a,b\}\)
Output : \(\widehat{\phi}^{**}(X,S)=\mathds{1}\{T^{a}(X)>t^{1,a}_{(k^{*}_{a})}\}\cdot\mathds{1}\{S=a\}+\mathds{1}\{T^{b}(X)>t^{1,b}_{(k^{*}_{b})}\}\cdot\mathds{1}\{S=b\}\)
```
**Algorithm 2** NP-EO\({}_{\text{MP}}\) umbrella algorithm ["MP" means Multiple (Pairs of) Pivots]
**Theorem 3**: _Let \(\widehat{\phi}^{**}(\cdot,\cdot)\) be the classifier output by Algorithm 2 with parameters \((\alpha-\eta,\delta,\varepsilon,\gamma)\) for \(0<\eta\ll\alpha\). Assume that the scoring function \(T(\cdot,\cdot)\) is trained such that the same conditions
_in Theorem 2 hold and that \(n^{0}\geq\log(\delta)/\log(1-\alpha)\). Then it holds simultaneously that_
\[\text{(a)}\ \mathrm{I\!P}\left(R_{0}(\widehat{\phi}^{**})>\alpha\right)\leq\delta+2e^{-\frac{1}{32}n^{0}(p_{a|0}-\eta/8)\eta^{2}}+2e^{-\frac{1}{32}n^{0}(p_{b|0}-\eta/8)\eta^{2}}+2e^{-\frac{1}{32}n^{0}\eta^{2}}+2e^{-\frac{1}{2}n^{0}\eta^{2}}\,,\] \[\text{(b)}\ \mathrm{I\!P}\left(|R_{1}^{a}(\widehat{\phi}^{**})-R_{1}^{b}(\widehat{\phi}^{**})|>\varepsilon\right)\leq\gamma+\xi^{\prime}(n^{1})\,,\]
_in which \(\xi^{\prime}(n^{1})\) converges to \(0\) as \(n^{1}=n^{1}_{a}+n^{1}_{b}\) diverges._
The proof of this theorem is presented in the Appendix. Here, we remark that the main difference between Theorems 2 and 3 is in part (a). In Theorem 2, the type I error is controlled with probability at least \(1-\delta\), whereas in Theorem 3, \(\widehat{\phi}^{**}\) only gives an "approximate" \(1-\delta\) type I error control. This is not surprising, since we use empirical type I errors to estimate the population-level type I error of \(\widehat{\phi}^{**}\), matching empirical type I errors to ensure that the population-level type I errors are close. As such, the exponential terms in part (a) of Theorem 3 compensate for this estimation error.
## 5 Numerical results
In this section, we present simulation and real-data evidence that supports the effectiveness of the newly proposed NP-EO algorithms. In each simulation setting, all trained algorithms are evaluated on a large test set to approximate the (population-level) type I and type II errors. This procedure is repeated 1,000 times, yielding 1,000 copies of (approximate) type I and type II errors. Then, the NP violation rate is computed as the proportion of type I errors exceeding the target level defined in the NP constraint. Similarly, the EO violation rate is computed as the proportion of type II error disparities exceeding the target level defined in the EO constraint. Finally, recall that for the NP-EO\({}_{\mathrm{OP}}\) algorithm, we use \(\delta\), instead of \(\delta/2\), in Algorithm 1.
### Simulation
In all settings, for each \(y\in\{0,1\}\) and \(s\in\{a,b\}\), we generate \(n^{y,s}\) training observations and \(100n^{y,s}\) test observations. We compare the NP-EO\({}_{\mathrm{OP}}\) and NP-EO\({}_{\mathrm{MP}}\) algorithms with three existing algorithms, namely, the classical algorithm, NP umbrella algorithm, and NP umbrella algorithm mixed with random guesses. Here, the classical algorithm (e.g., logistic regression, support vector machines) is the base algorithm without any adjustment for either the NP or EO constraint. The NP umbrella algorithm adjusts base algorithms for the NP constraint and it is described in Section A.1.
The NP umbrella algorithm mixed with random guesses, inspired by Hardt et al. (2016), works as follows. We start with an NP classifier, \(\widehat{\phi}_{\mathrm{NP}}\), trained by the NP umbrella algorithm. Without loss of generality, we assume \(R_{1}^{a}(\widehat{\phi}_{\mathrm{NP}})>R_{1}^{b}(\widehat{\phi}_{\mathrm{NP}})\). A naive method to make the EO constraint satisfied is to increase type II error for \(S=b\) by adding noise via a random guess classifier \(\phi_{\mathrm{RG}}\) with \(\mathrm{I\!P}(\phi_{\mathrm{RG}}=1)=\alpha\). Then, for an observation in the testing sample with \(S=a\), we use \(\widehat{\phi}_{\mathrm{NP}}\) only; for an observation with \(S=b\), with probability \(p\), \(\widehat{\phi}_{\mathrm{NP}}\) is selected to classify this observation, and with probability \(1-p\), \(\phi_{\mathrm{RG}}\) is used. Note that \(R_{1}^{b}(\phi_{\mathrm{RG}})=1-\alpha\). Then, for
this mixed classifier \(\widehat{\phi}_{\text{mixed}}\), \(R_{1}^{a}(\widehat{\phi}_{\text{mixed}})=R_{1}^{a}(\widehat{\phi}_{\text{NP}})\) and \(R_{1}^{b}(\widehat{\phi}_{\text{mixed}})=pR_{1}^{b}(\widehat{\phi}_{\text{NP}})+ (1-p)(1-\alpha)\). As long as \(\widehat{\phi}_{\text{NP}}\) is more powerful than \(\phi_{\text{RG}}\) on group \(a\), i.e., \(R_{1}^{a}(\widehat{\phi}_{\text{NP}})\leq 1-\alpha\), \(\widehat{\phi}_{\text{mixed}}\) can achieve equality of opportunity by choosing \(p\) properly.
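Under this mixing scheme, the weight \(p\) that exactly equalizes the two type II errors solves \(pR_{1}^{b}(\widehat{\phi}_{\text{NP}})+(1-p)(1-\alpha)=R_{1}^{a}(\widehat{\phi}_{\text{NP}})\). A small sketch (the function name is ours; it assumes \(R_{1}^{b}<R_{1}^{a}\leq 1-\alpha\)):

```python
def mix_weight(r1a, r1b, alpha):
    """Mixing weight p on the NP classifier for group b such that
    p * r1b + (1 - p) * (1 - alpha) == r1a  (exact equality of opportunity).
    Assumes r1b < r1a <= 1 - alpha."""
    return (1 - alpha - r1a) / (1 - alpha - r1b)

# Example: R1^a = 0.40, R1^b = 0.20, alpha = 0.05
p = mix_weight(0.40, 0.20, 0.05)
mixed_r1b = p * 0.20 + (1 - p) * (1 - 0.05)  # equals R1^a = 0.40
```

In practice the cross-validated search described next replaces this closed form, since \(R_{1}^{a}\) and \(R_{1}^{b}\) must themselves be estimated.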
In this simulation, we choose the probability \(p\) by 20-fold cross validation: we train an NP classifier on 19 folds of the training data and compute the estimated \(R_{1}^{a}(\widehat{\phi}_{\text{NP}})\) and \(R_{1}^{b}(\widehat{\phi}_{\text{NP}})\) on the left-out fold. Since \(R_{1}^{a}(\phi_{\text{RG}})\) and \(R_{1}^{b}(\phi_{\text{RG}})\) are explicit, we can directly estimate \(R_{1}^{a}(\widehat{\phi}_{\text{mixed}})\) and \(R_{1}^{b}(\widehat{\phi}_{\text{mixed}})\), and thus the type II error disparity, for every value of \(p\) and for the option of adding random guesses to either \(S=a\) or \(S=b\). We traverse all combinations of \(p=0,0.1,0.2,\ldots,0.9\) and the options of adding random guesses to either \(S\) component. Next, for every combination, we calculate the estimated type II error disparity on every fold and thus estimate the probability of the type II error disparity exceeding \(\varepsilon\). Finally, we select the combination for which this estimated probability is smaller than or equal to \(\gamma\). If there are multiple such combinations, we select the one with the largest \(p\). The resulting \(\widehat{\phi}_{\text{mixed}}\) then satisfies the high-probability NP and EO constraints.
**Simulation 1**: _Let \(X^{y,s}\) be multidimensional Gaussian distributed with mean \(\mu_{y,s}\) and covariance matrix \(\Sigma_{y,s}\) for each \(y\in\{0,1\}\) and \(s\in\{a,b\}\). Here, \(\mu_{0,a}=(1,2,1)^{\top}\), \(\mu_{1,a}=(0,0,0)^{\top}\), \(\mu_{0,b}=(0,0,2)^{\top}\) and \(\mu_{1,b}=(1,0,-1)^{\top}\). Moreover_
\[\Sigma_{y,a}=\begin{pmatrix}2&-1&0\\ -1&2&-1\\ 0&-1&2\end{pmatrix}\qquad\text{and}\qquad\Sigma_{y,b}=\begin{pmatrix}1&0&0\\ 0&2&0\\ 0&0&1\end{pmatrix}\,,\]
_for every \(y\in\{0,1\}\). Furthermore, \(n^{0,a}=800\), \(n^{1,a}=400\), \(n^{0,b}=1200\) and \(n^{1,b}=1600\). We set \(\alpha=0.05\), \(\delta=0.05\), \(\varepsilon=0.2\) and \(\gamma=0.05\). The base algorithm used is logistic regression. The numerical results associated with this simulation are reported in Table 1._
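A minimal sketch of this data-generating process (Python/NumPy; the random seed is arbitrary):

```python
import numpy as np

# Sketch of the Simulation 1 design: four Gaussian components indexed by
# label y in {0, 1} and protected attribute s in {a, b}.
rng = np.random.default_rng(0)

mu = {(0, 'a'): [1, 2, 1], (1, 'a'): [0, 0, 0],
      (0, 'b'): [0, 0, 2], (1, 'b'): [1, 0, -1]}
sigma_a = np.array([[2, -1, 0], [-1, 2, -1], [0, -1, 2]])
sigma_b = np.diag([1.0, 2.0, 1.0])
n = {(0, 'a'): 800, (1, 'a'): 400, (0, 'b'): 1200, (1, 'b'): 1600}

def sample(y, s):
    cov = sigma_a if s == 'a' else sigma_b
    return rng.multivariate_normal(mu[(y, s)], cov, size=n[(y, s)])

data = {k: sample(*k) for k in n}
```

Each `data[(y, s)]` holds one component's training observations; the test set is drawn the same way with one hundred times the sample sizes.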
**Simulation 2**: _Let \(X^{y,s}\) be uniformly distributed in a three dimensional ball \(B_{y,s}\) with radius \(1\) and centered at \(O_{y,s}\), where \(O_{0,a}=(0,0,0)^{\top}\), \(O_{1,a}=(1,0,-1)^{\top}\), \(O_{0,b}=(1,1,1)^{\top}\) and \(O_{1,b}=(-1,1,0)^{\top}\). Furthermore, \(n^{0,a}=800\), \(n^{1,a}=400\), \(n^{0,b}=1200\) and \(n^{1,b}=1600\). We also set
\begin{table}
\begin{tabular}{|c c c c c|}
\hline
 & average of type I errors & average of type II errors & NP violation rate & EO violation rate \\
\hline
NP-EO\({}_{\text{OP}}\) & .012(1.13) & .480(12.75) & 0(0) & .046(66.28) \\
NP-EO\({}_{\text{MP}}\) & .039(1.92) & .387(14.74) & .033(56.52) & .029(53.09) \\
NP mixed with random guess & .035(1.61) & .657(23.25) & .010(31.48) & .037(59.72) \\
NP & .039(1.97) & .163(3.78) & .047(66.96) & 1(0) \\
classical & .094(1.79) & .096(1.35) & 1(0) & 1(0) \\
\hline
\end{tabular}
\caption{Averages of type I and II errors, along with violation rates of the NP and EO constraints over \(1{,}000\) repetitions for Simulation 1. Standard error of the means (\(\times 10^{-4}\)) in parentheses.}
\end{table}
\(\alpha=0.05\), \(\delta=0.05\), \(\varepsilon=0.2\) and \(\gamma=0.05\). The base algorithm used is logistic regression. The numerical results associated with this simulation are reported in Table 2._
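A small sketch of drawing points uniformly from such a ball, as in the Simulation 2 design (uniform direction times radius \(U^{1/3}\); the seed is arbitrary):

```python
import numpy as np

# Sketch: draw `size` points uniformly from a 3D ball of radius 1
# centered at `center`. A uniformly random direction scaled by U**(1/3)
# (U uniform on [0, 1]) gives a point uniform in the ball.
def uniform_ball(center, size, rng):
    d = len(center)
    direction = rng.standard_normal((size, d))
    direction /= np.linalg.norm(direction, axis=1, keepdims=True)
    radius = rng.random(size) ** (1.0 / d)
    return np.asarray(center) + radius[:, None] * direction

rng = np.random.default_rng(0)
x_0a = uniform_ball([0, 0, 0], 800, rng)  # e.g. the (y=0, s=a) component
```

The other three components are drawn the same way with their centers \(O_{y,s}\) and sample sizes \(n^{y,s}\).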
In both simulations, the classical classifier admits the lowest type II error; the NP classifier comes in second. This is not surprising, as the NP paradigm controls the type I error to a low level with high probability, thereby resulting in a higher type II error. The NP and EO violation rates are both higher than the target levels for the classical classifier, whereas the NP classifier fails to keep the EO violation rate under the target level. These two classifiers make no adjustment for EO; thus, it is expected that the EO requirement would fail.
The remaining three algorithms, \(\text{NP-EO}_{\text{OP}}\), \(\text{NP-EO}_{\text{MP}}\) and NP mixed with random guesses, are built to achieve the high-probability NP and EO constraints. All three algorithms produce an overall type II error larger than that of the NP paradigm. This is the price paid for equality in our classification algorithms. For reference, we remark that the "nearly trivial" NP-EO classifier, a random guess that returns 1 with probability 0.05 and 0 otherwise, has an overall type II error as high as 0.95. Benchmarked against this result, the classifiers listed in both Tables 1 and 2 have much smaller type II errors. Moreover, in terms of the overall type II error, it is clear that \(\text{NP-EO}_{\text{OP}}\) and \(\text{NP-EO}_{\text{MP}}\) outperform NP mixed with random guesses, suggesting the effectiveness of our proposed algorithms. Between the two proposed algorithms, \(\text{NP-EO}_{\text{MP}}\) yields a larger average overall type I error and type I error violation rate, and a smaller overall type II error than \(\text{NP-EO}_{\text{OP}}\), which agrees with the argument in Section 4.2 that \(\text{NP-EO}_{\text{MP}}\) uses multiple pivots to select thresholds more effectively. In conclusion, the two simulation studies illustrate that our proposed algorithms under the NP-EO paradigm are able to achieve the goals of regulating equality of opportunities and controlling type I error while only paying a modest price in terms of the less consequential type II error.
### Real data analysis
In many countries, lenders' discrimination against a certain social group other than creditworthiness is either illegal or socially unacceptable. Most notably, the Equal Credit Opportunity Act in the US explicitly makes it unlawful for any creditor to discriminate against any applicant on the basis of race, color, sex, and other non-credit related social factors. Nevertheless, ample
\begin{table}
\begin{tabular}{|c c c c c|}
\hline
 & average of type I errors & average of type II errors & NP violation rate & EO violation rate \\
\hline
\(\text{NP-EO}_{\text{OP}}\) & .012(1.12) & .478(12.67) & 0(0) & .053(70.88) \\
\(\text{NP-EO}_{\text{MP}}\) & .038(2.01) & .346(14.11) & .030(53.97) & .070(80.72) \\
NP mixed with random guess & .035(1.55) & .588(21.11) & .006(24.43) & .009(29.88) \\
NP & .034(2.45) & .191(6.43) & .029(53.09) & 1(0) \\
classical & .094(1.88) & .094(1.39) & 1(0) & 1(0) \\
\hline
\end{tabular}
\caption{Averages of type I and II errors, along with violation rates of NP and EO constraints over \(1{,}000\) repetitions for Simulation 2. Standard error of the means (\(\times 10^{-4}\)) in parentheses.}
\end{table}
evidence shows that Hispanic and Black borrowers have less access to credits or pay a higher price for mortgage loans in the US (Munnell et al., 1996; Charles et al., 2008; Hanson et al., 2016; Bayer et al., 2018).
With the emergence of the FinTech market, statistical and machine learning techniques have gained increasing popularity in lending decisions by both traditional financial institutions and peer-to-peer lending and crowd-sourcing platforms. An important regulatory concern in this development is whether algorithmic decision-making promotes or impedes impermissible discrimination. Recently, Bartlett et al. (2022) show that algorithmic lending reduces rate disparities between Latinx/African-American borrowers and other borrowers in consumer-lending markets but cannot eliminate the bias. Fuster et al. (2022) find that, in the US mortgage market, Black and Hispanic borrowers are disproportionately less likely to gain from the introduction of machine learning in lending decisions. Central in the welfare judgement of algorithmic lending is the tradeoff between efficiency (controlling default risk) and equality (non-disparate treatment). In this section, we illustrate how our proposed algorithms can help address this question with an example of potential gender bias in credit card consumption in Taiwan.
We focus on this case for two reasons. First, gender discrimination is a significant phenomenon in credit lending markets worldwide. Alesina et al. (2013) find that Italian women pay more for overdraft facilities than men. Bellucci et al. (2010) and Andres et al. (2021) show that female entrepreneurs face tighter credit availability in Italy and Spain. Ongena and Popov (2016) document a strong correlation between gender bias and credit access across developing countries. Second, practically, the Taiwanese credit card dataset is simple, transparent, and has clear labelling of payment status that enables an analysis of financial risk.
The dataset is from Yeh and Lien (2009), which has been widely used to evaluate various data mining techniques. This dataset depicts the given credit, demographic features, and payment history of 30,000 individuals from April 2005 to September 2005. Importantly, it includes a binary status of the payment: either default, encoded by 0, or non-default, encoded by 1. Among all 30,000 records, 6,636 of them are labelled as 0, i.e., default. In this dataset, a person is in default if they fail to repay their credit card bill in October 2005. The payment status defines the type I/II errors in the classification problem, and the protected attribute is gender. In this dataset, 11,888 people are labelled as male and 18,112 are labelled as female. Fitting such a typical credit-lending problem into the NP-EO classification framework, banks primarily want to control the risk of misclassifying someone who will default as non-default (type I error), although they also want to minimize the chance of turning away non-defaulters (type II error). Furthermore, by regulation or as a social norm, fairness requires banks not to discriminate against qualified applicants on the basis of gender. Therefore, to attain the dual goals of risk control and fairness, our classification problem needs to satisfy the NP constraint and the EO constraint. We also note that since we already illustrated in Section 5.1 that the NP classifier mixed with random guesses performs worse than our proposed algorithms in all simulation settings, we do not include this classifier in this real data section.
We use 1/3 of the data for training and the other 2/3 for test, with stratification in both
protected attribute and label. As an illustrative example, we set \(\alpha=0.1\), \(\delta=0.1\), \(\varepsilon=0.05\) and \(\gamma=0.1\). The base algorithm used is random forest. The process is repeated 1000 times, and the numerical results are presented in Table 3. Using the classical classifier, the high-probability EO constraint is satisfied. Indeed, the EO violation rate in Table 3 is 0, indicating that the random forest under the classic paradigm is "fair" and "equal" in terms of gender. This is not entirely surprising given that gender bias in modern Taiwan is not a significant concern. The problem with this classifier is that it produces a type I error of 0.633, which is prohibitively high for nearly any financial institution. Benchmarked against the modest NP constraint (\(\alpha=0.1\)), the violation rate is 1, imposing too much risk on the banks.
When the NP paradigm alone is employed, the EO violation rate surges to 0.482, demonstrating a conflict between the banks' private gain of improving risk control and the society's loss of achieving fairness. When the NP-EO\({}_{\text{OP}}\) and NP-EO\({}_{\text{MP}}\) algorithms are employed, both the NP and EO constraints are satisfied with very small violation rates, and the classifiers simultaneously achieve the goals of risk control and fairness. The cost that the banks have to bear is missing some potential business opportunities from non-defaulters, which is reflected in the higher overall type II error committed by either NP-EO algorithm. Consistent with the simulation results in Section 5.1, compared to NP-EO\({}_{\text{OP}}\), NP-EO\({}_{\text{MP}}\) produces a smaller overall type II error while maintaining satisfactory (yet larger) violation rates.
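As an illustrative sketch (with synthetic placeholder arrays, not the actual experimental outputs), the violation rates reported in the tables are the fractions of repetitions whose type I error exceeds \(\alpha\) or whose type II error disparity exceeds \(\varepsilon\):

```python
import numpy as np

# Sketch: estimate NP and EO violation rates from per-repetition errors.
# The arrays here are synthetic stand-ins for the 1000 experiment runs.
rng = np.random.default_rng(0)
alpha, eps = 0.1, 0.05
type1 = rng.normal(0.08, 0.01, size=1000)        # overall type I error per run
gap = np.abs(rng.normal(0.0, 0.03, size=1000))   # |R1^a - R1^b| per run

np_violation_rate = np.mean(type1 > alpha)       # fraction violating NP
eo_violation_rate = np.mean(gap > eps)           # fraction violating EO
```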
## 6 Discussion
This paper is motivated by two practical needs in algorithmic design: a private user's need to internalize social consideration and a social planner's need to facilitate private users' compliance with regulation. The challenge in fulfilling these needs stems from the conflict between the private and social goals. Notably, the social planner's promotion of fairness and equality may constrain private users' pursuit of profits and efficiency. In an ideal world without measurement and sampling problems, such a private-public conflict can be best resolved by maximizing a social welfare function with well-defined private and public components. Statistical tools hardly play any role in this process. However, when knowledge about the social welfare function is partial, measurement of each component in the objective is imperfect, and consequences of predictive errors are uncertain, statistical innovation is called for to step into the endeavor of resolving the
\begin{table}
\begin{tabular}{|c c c c c|}
\hline
 & average of type I errors & average of type II errors & NP violation rate & EO violation rate \\
\hline
NP-EO\({}_{\text{OP}}\) & .081(3.11) & .720(6.65) & .033(56.52) & .034(57.34) \\
NP-EO\({}_{\text{MP}}\) & .089(2.99) & .701(6.23) & .114(100.55) & .054(71.51) \\
NP & .088(3.02) & .700(6.26) & .111(99.39) & .482(158.10) \\
classical & .633(4.02) & .059(1.31) & 1(0) & 0(0) \\
\hline
\end{tabular}
\caption{Averages of type I and II errors, along with violation rates of NP and EO constraints over 1,000 repetitions for the credit card dataset. Standard error of the means (\(\times 10^{-4}\)) in parentheses.}
\end{table}
private-public conflict. Our work is a response to this challenge.
In a classification setting, we propose the NP-EO paradigm, in which we incorporate a social consideration into a constrained optimization problem with the less-important private goal (type II error) being the objective while the social goal (equal opportunity) and the more-important private goal (type I error) as constraints. Algorithmic decisions with such restrictions provide safeguards against deviations from the social goal and avoid significant damage to the private goal, leaving the private-social conflict mostly absorbed by the less-consequential private consideration. We believe that our approach can be applied to a wide range of settings beyond the problem we are handling in this paper.
We do not claim that our proposed NP-EO paradigm is superior to other classification paradigms. Rather, we are proposing an alternative framework to handle private-social conflicts in algorithmic design. Central in our analysis is a perspective of gaining security through statistical control when multiple objectives have to be compromised. Key to our methodological innovation is a principled way to redistribute specific errors so that the resulting classifiers have high-probability statistical guarantees.
Possible future research directions include but are not limited to: (i) extending the solutions to multiple constraints with respect to social norms, which can involve multiple attributes such as race and gender, or multiple levels within an attribute such as race, (ii) working with parametric models, such as the linear discriminant analysis (LDA) model, to derive model-specific NP-EO classifiers that address the small sample size problem and satisfy oracle-type inequalities, (iii) replacing the type I error constraint by other efficiency constraints, and replacing the EO constraint by other fairness criteria, and (iv) studying fairness under other asymmetric efficiency frameworks such as isotonic subgroup selection in Muller et al. (2023).
---

2306.15942 | Enhanced Neural Beamformer with Spatial Information for Target Speech Extraction | Aoqi Guo, Junnan Wu, Peng Gao, Wenbo Zhu, Qinwen Guo, Dazhi Gao, Yujun Wang | 2023-06-28T06:03:10Z | http://arxiv.org/abs/2306.15942v1

# Enhanced Neural Beamformer with Spatial Information for Target Speech Extraction
###### Abstract
Recently, deep learning-based beamforming algorithms have shown promising performance in target speech extraction tasks. However, most systems do not fully utilize spatial information. In this paper, we propose a target speech extraction network that utilizes spatial information to enhance the performance of neural beamformer. To achieve this, we first use the UNet-TCN structure to model input features and improve the estimation accuracy of the speech pre-separation module by avoiding information loss caused by direct dimensionality reduction in other models. Furthermore, we introduce a multi-head cross-attention mechanism that enhances the neural beamformer's perception of spatial information by making full use of the spatial information received by the array. Experimental results demonstrate that our approach, which incorporates a more reasonable target mask estimation network and a spatial information-based cross-attention mechanism into the neural beamformer, effectively improves speech separation performance.
## I Introduction
The target speech extraction task is derived from the speech separation task, which aims to extract the speech of a specific target speaker from a mixed signal. Unlike conventional speech separation, which deals with the permutation order of signals, target speech extraction only focuses on extracting the target signal, thus avoiding the permutation problem [1]. In recent years, multi-channel beamforming algorithms have shown great potential in the task of target speech extraction. Beamforming algorithms achieve spatial domain filtering of the observed signal by enhancing the signal in the desired direction and suppressing the signal in other directions, which naturally meets the task goal of target speech extraction. Classical beamforming algorithms require accurate estimation of the wave direction and determination of the speech or noise part in the mixed signal [2], which is difficult to obtain in noisy and reverberant environments using traditional algorithms. Therefore, obtaining accurate target orientation information in real scenes, improving the estimation accuracy of the covariance matrix, and solving the numerical instability problem in matrix operations are the key to improving beamforming performance.
In recent years, the development of deep learning technology has provided new research ideas for solving the above problems. First, using neural networks for wave direction estimation has achieved good experimental results [3], providing more accurate target orientation information for beamforming algorithms. At the same time, combining neural networks with beamforming algorithms improves the estimation accuracy of the covariance matrix and effectively improves the performance upper limit of beamforming algorithms. Among them, Mask Based Beamforming, as a classic representative of the combination of deep learning and beamforming algorithms, uses a neural network to predict the mask of speech and noise to calculate the spatial covariance matrix of speech and noise, respectively, and then inputs it into traditional beamforming algorithms such as GEV or MVDR for numerical calculation [4, 5]. For the numerical instability problem that traditional beamforming algorithms still have difficulty in solving even after diagonal loading [6] in matrix numerical operations, ADL-MVDR [7] used an RNN to replace the matrix inversion and eigendecomposition processes in the MVDR beamforming algorithm, avoided the numerical instability caused by singular matrices in matrix operations, and achieved significant performance improvement for the neural beamformer. In recent experiments, the combination of deep learning and beamforming algorithms has been further deepened. GRNNBF [8] used an RNN to model the covariance matrices of speech and noise, replaced all matrix numerical operations in traditional beamforming algorithms, and directly predicted beamforming weights using neural networks, achieved good separation results. SARNN [9] added self-attention mechanism to the beamforming module, enhanced the neural network's modeling ability for the spatial and temporal information contained in the covariance matrix. 
EABNet [10] adopted a more aggressive approach, directly modelled the array observation signals to obtain higher-dimensional information representations than the covariance matrix. The neural network implicitly includes all the calculation steps of beamforming and directly outputs beamforming weights. The above methods combined with deep learning technology have all achieved good performance improvements for beamforming algorithms.
In scenarios such as in-car and remote conferences, the position of the target speaker relative to the microphone array is often fixed, and its angle information is easily obtained through the device [1]. To leverage the angle information of the target sound source, Chen et al. proposed the wave direction feature calculated using the angle information of the target sound source, which enhanced the directionality of the speech separation system [11]. ADL-MVDR [7], GRNNBF [8], SARNN [9], UFE [12], etc. inputted the angle feature
into the neural network, which improved the separation module's perception of spatial information and thereby the separation performance of the system. Gu et al. extended the angle feature from a two-dimensional plane to a three-dimensional space, improving the separation performance of the model when the azimuth angle between the target source and the interference source is small [13].
In this paper, we designed a model that uses target spatial orientation information to enhance the performance of neural beamforming. The system consists of a front-end pre-separation module and a back-end beamforming module. Compared with other models, our improvements mainly focus on the following two points:
\(\bullet\) To improve the estimation accuracy of the spatial covariance matrices of speech and noise, we stack the input features in the channel dimension and use the UNet-TCN structure to model the input features. This avoids the loss of feature information caused by directly reducing the input features after connecting them in the frequency dimension.
\(\bullet\) To better utilize the spatial information obtained by the array, we transform the spatial features of the input signal and the covariance matrix calculated by the pre-separation module into a feature space with the same dimensionality. We then use a cross-attention mechanism to enhance the beamforming network's perception of spatial orientation, thereby improving the prediction accuracy of the beamforming weights.
Experimental results show that with the above improvements to the pre-separation module and the beamforming module, the target speech separation performance of the neural beamformer can be effectively improved.
The remainder of this paper is organized as follows. Section II presents the signal model for the target speech extraction task and the beamforming algorithm. Section III details our proposed neural beamforming algorithm. Section IV provides an overview of the experimental setup and presents an analysis of the experimental results. Finally, Section V concludes the paper.
## II Signal Modeling and Beamforming
Consider the far-field time-domain signal model of the real scene, described as:
\[y(t)=x(t)*a_{1}(t)+n(t)*a_{2}(t)+s(t) \tag{1}\] \[=x^{{}^{\prime}}(t)+n^{{}^{\prime}}(t)+s(t) \tag{2}\]
where \(y(t)=[y^{(0)}(t),y^{(1)}(t),...,y^{(M-1)}(t)]^{T}\) denotes the time-domain signal received by the M-channel microphone array. \(x(t),n(t)\) represent the reverberation-free speech signals from the target speaker and the interfering speaker, respectively. \(a_{1}(t),a_{2}(t)\) are the room impulse responses (RIRs) between the speakers and the array elements of the microphone. \(*\) denotes the convolution operation. \(x^{\prime}(t),n^{\prime}(t)\) represent the speech signals of the target speaker and the interfering speaker with reverberation, respectively. \(s(t)\) is the background noise. When not focusing on the dereverberation task, the goal of target speech extraction is to extract the target speaker's speech \(x^{\prime}(t)\) from the noisy signal \(y(t)\).
After the signal model is transformed to the time-frequency domain by the short-time Fourier transform (STFT), it is expressed as:
\[Y(t,f)=X(t,f)+N(t,f)+S(t,f) \tag{3}\]
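As an illustration, a multichannel spectrogram \(Y(t,f)\) can be computed with a windowed FFT per channel; the frame length, hop size, and Hann window below are assumptions, not the paper's settings:

```python
import numpy as np

# Sketch of a multichannel STFT: window each channel, FFT each frame.
# Frame/hop lengths and the Hann window are illustrative assumptions.
def stft(x, frame=512, hop=256):
    """x: (M, T) multichannel signal -> (M, F, n_frames) complex spectrogram."""
    window = np.hanning(frame)
    n_frames = 1 + (x.shape[-1] - frame) // hop
    frames = np.stack([x[..., i * hop:i * hop + frame]
                       for i in range(n_frames)], axis=-2)   # (M, n_frames, frame)
    return np.fft.rfft(frames * window, axis=-1).swapaxes(-1, -2)

y = np.random.default_rng(0).standard_normal((4, 16000))  # toy 4-channel mixture
Y = stft(y)  # (4 channels, 257 bins, 61 frames)
```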
The purpose of the beamforming algorithm is to obtain the filter weight \(w\) for the array observation signal, and extract the desired signal from the array observation signal by performing spatial filtering on the observation signal, that is:
\[X(t,f)=w^{H}Y(t,f) \tag{4}\]
where \((\cdot)^{H}\) denotes the conjugate transpose operation.
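Eq. (4) vectorizes naturally over all bins and frames; a minimal NumPy sketch with random data standing in for a real mixture and real weights:

```python
import numpy as np

# Sketch of Eq. (4): spatial filtering X(t, f) = w(f)^H Y(t, f),
# applied at every time frame and frequency bin at once.
rng = np.random.default_rng(0)
M, F, T = 4, 257, 61
Y = rng.standard_normal((M, F, T)) + 1j * rng.standard_normal((M, F, T))
w = rng.standard_normal((M, F)) + 1j * rng.standard_normal((M, F))

X = np.einsum('mf,mft->ft', w.conj(), Y)  # (F, T) beamformed spectrogram
```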
Taking the traditional MVDR beamforming algorithm as an example, the purpose is to minimize the output noise power of the beamforming algorithm without distorting the signal, that is:
\[\min_{w}\;w^{H}\Phi_{NN}w,\quad s.t.\quad w^{H}\alpha(\theta)=1 \tag{5}\]
For the above formula, the desired filter weight \(w\) is obtained by the Lagrange multiplier method:
\[w=\frac{\Phi_{NN}^{-1}\alpha(\theta)}{\alpha^{H}(\theta)\Phi_{NN}^{-1}\alpha (\theta)} \tag{6}\]
where \(\Phi_{NN}=N(t,f)N^{H}(t,f),\Phi_{NN}\in M\times M\) is the covariance matrix of the noise signal, \(\alpha(\theta)=[e^{-j\omega\tau_{m}}],m=[0,1,...,M-1]\) is the steering vector that calculated using target sound source's DOA. When the steering vector is difficult to obtain due to the interference of environmental factors, PCA can be performed on the covariance matrix of the speech signal to obtain the estimated value of the steering vector [14].
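A sketch of Eq. (6) for a single frequency bin, with diagonal loading added for numerical stability (the loading level and the toy steering vector are assumptions):

```python
import numpy as np

# Sketch of Eq. (6): MVDR weights for one frequency bin.
def mvdr_weights(phi_nn, steering, loading=1e-6):
    # Diagonal loading (an assumed regularization level) before inversion.
    m = len(phi_nn)
    phi = phi_nn + loading * np.trace(phi_nn).real / m * np.eye(m)
    num = np.linalg.solve(phi, steering)          # Phi_NN^{-1} a(theta)
    return num / (steering.conj() @ num)          # normalize by a^H Phi^{-1} a

rng = np.random.default_rng(0)
M = 4
A = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))
phi_nn = A @ A.conj().T + np.eye(M)               # Hermitian positive definite
a = np.exp(-1j * 2 * np.pi * np.arange(M) * 0.1)  # toy steering vector
w = mvdr_weights(phi_nn, a)
```

The distortionless constraint of Eq. (5) is satisfied by construction: \(w^{H}\alpha(\theta)=1\).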
Therefore, it can be seen that the key to the beamforming algorithm lies in the estimation of the covariance matrix. Traditional beamforming methods often rely on certain algorithms to determine the time periods of speech and noise in the signal, but this approach is not optimal. With the development of deep learning technology, using deep neural networks to determine the speech and noise components in the signal has achieved good experimental results.
Fig. 1: General structure of a neural beamformer.

Fig. 1 shows the general framework of the combination of neural network and beamforming. It first predicts a set of time-frequency masks representing the corresponding relationship between the desired signal and the original mixed signal through the neural network, such as IBM [4], IRM [5], CRM [15], CRF [16], etc. Taking the IRM as an example, it defines the energy ratio relationship between the desired signal and the mixed signal:
\[IRM_{x}=\frac{|X(t,f)|}{|Y(t,f)|} \tag{7}\]
Then the speech and noise components in the mixed signal can be recovered through the mask, so that the spatial covariance matrices of speech and noise can be calculated respectively [4]:
\[\Phi=\frac{\sum_{t=1}^{T}M^{2}(t,f)Y(t,f)Y^{H}(t,f)}{\sum_{t=1}^{T}M^{2}(t,f)} \tag{8}\]
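Eq. (8) can be sketched per frequency bin as a mask-weighted average of rank-one outer products:

```python
import numpy as np

# Sketch of Eq. (8): mask-weighted spatial covariance matrix per bin.
def masked_scm(Y, mask):
    """Y: (M, F, T) spectrogram, mask: (F, T) real mask -> (F, M, M)."""
    w = mask ** 2                                        # M^2(t, f)
    phi = np.einsum('ft,mft,nft->fmn', w, Y, Y.conj())   # sum_t w * Y Y^H
    return phi / w.sum(axis=-1)[:, None, None]           # normalize by sum_t w

rng = np.random.default_rng(0)
Y = rng.standard_normal((4, 257, 61)) + 1j * rng.standard_normal((4, 257, 61))
mask = rng.random((257, 61))  # stand-in for a predicted IRM
phi = masked_scm(Y, mask)
```

Each \(\Phi(f)\) is Hermitian by construction, as required for the subsequent GEV/MVDR computation.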
Once the covariance matrix of the target signal and noise is calculated, it can be substituted into traditional beamforming algorithms such as GEV and MVDR to obtain the beamforming weight \(w\). Recent experiments have demonstrated that utilizing neural networks to model covariance matrix information and predict beam weights can lead to improved target speech separation compared to traditional algorithms.
## III Enhanced Neural Beamformer with Spatial Information
Fig. 2 shows the overall architecture of our proposed model. First, the observation signal of the array is transformed into the time-frequency domain by STFT to extract the spatial features of the signal. The pre-separation network then estimates the mask of the target speech and noise based on the input features and calculates the covariance matrix. Following this, the beamforming network leverages the spatial features to predict the beamforming weight, which is subsequently used to beamform the observed signal. Finally, the original time domain signal is restored through ISTFT. Specifically as follows:
### _Feature Extraction_
The first microphone from the multi-channel mixed signal is selected as the reference microphone, and its received signal's amplitude spectrum is calculated as the input feature for the pre-separation network.
\[Y_{1}^{mag}(t,f)=|Y_{1}(t,f)| \tag{9}\]
To obtain the spatial information of the microphone array, we select P pairs of microphones and calculate the phase difference between each pair as the spatial feature for the input of our model. This approach has been widely used in speech enhancement and source separation tasks, as it provides a concise and effective way to capture the spatial information of the microphone array.
\[IPD_{i,j}(t,f)=\angle Y_{i}(t,f)-\angle Y_{j}(t,f),\qquad i,j\in\{0,1,...,M-1\},\ i\neq j \tag{10}\]

Fig. 4: Pre-separator based on the UNet-TCN structure.

Fig. 5: A cross-attention module with spatial information as Query and covariance matrix information as Key and Value. All features are at the time-frame level.

Fig. 3: The combination of input features.

Fig. 2: The overall structure of our proposed model. Both the input features and the covariance matrix are stacked along the channel dimension when input to the network.
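Computing the cosine of the inter-channel phase difference of Eq. (10) for a few microphone pairs might look like (the pair selection is an assumption):

```python
import numpy as np

# Sketch of Eq. (10): cosIPD features for selected microphone pairs,
# computed from the complex multichannel spectrogram.
def cos_ipd(Y, pairs):
    """Y: (M, F, T) complex STFT; pairs: list of (i, j) -> (P, F, T)."""
    phase = np.angle(Y)
    return np.stack([np.cos(phase[i] - phase[j]) for i, j in pairs])

rng = np.random.default_rng(0)
Y = rng.standard_normal((4, 257, 61)) + 1j * rng.standard_normal((4, 257, 61))
feat = cos_ipd(Y, pairs=[(0, 2), (1, 3)])  # two illustrative pairs
```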
To enhance the spatial information available to the model, the cosine distance between the phase delay \(\varphi_{1}\) of the signal arriving from the desired direction and the phase difference \(\varphi_{2}\) observed by a given pair of microphones is calculated and then summed over all P microphone pairs. That is, the angle feature of the target signal serves as a supplementary input.
\[AF(\theta,f)=\sum_{p=1}^{P}\cos\!\left(IPD_{i,j}(t,f)-\frac{2\pi f\,d\cos\theta}{c}\right) \tag{11}\]
When the phase angle of the mixed signal is closer to that of the desired signal, the value of the angle feature approaches 1 [12]. All these features are stacked along the channel dimension to serve as the network's input features.
\[Feature=[Y_{1}^{mag}(t,f),cosIPD_{i,j}(t,f),AF(\theta,f)]\]
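A sketch of the angle feature of Eq. (11); the microphone spacing \(d\), sound speed \(c\), sampling rate, and the use of a single spacing for all pairs are illustrative assumptions:

```python
import numpy as np

# Sketch of Eq. (11): angle feature for a target DOA theta.
# d, c, fs, and the pair list are assumed values, not the paper's setup.
def angle_feature(Y, pairs, theta, d=0.05, c=343.0, fs=16000):
    M, F, T = Y.shape
    freqs = np.linspace(0, fs / 2, F)                     # bin frequencies (Hz)
    tpd = 2 * np.pi * freqs * d * np.cos(theta) / c       # target phase delay
    ipd = np.stack([np.angle(Y[i]) - np.angle(Y[j]) for i, j in pairs])
    return np.cos(ipd - tpd[None, :, None]).sum(axis=0)   # sum over P pairs

rng = np.random.default_rng(0)
Y = rng.standard_normal((4, 257, 61)) + 1j * rng.standard_normal((4, 257, 61))
af = angle_feature(Y, pairs=[(0, 1), (1, 2), (2, 3)], theta=np.pi / 3)
```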
### _Pre-separation Module_
The pre-separation network adopts an Encoder-TCN-Decoder structure, which includes 2D convolutional layers, linear layers, and a variant of TCN [17]. To process the input features, it's a common practice to concatenate them on the frequency dimension, followed by 1D convolution to reduce dimensionality before feeding them into the TCN or RNN module [7, 8, 9]. However, to avoid the loss of feature information due to direct dimensionality reduction of the input features, we employ the UNet-TCN structure to model them. By stacking the input features along the channel dimension, we obtain \(Y_{input}\in R^{N\times F\times T}(N=P+1+1)\), which is then fed into the UNet-TCN layer to model the nonlinear mapping of the time dimension from the source signal domain to the separated signal domain and noise domain. Finally, the predicted cRM is restored through the linear layer to recover the target speech.
\[X_{cRM}(t,f)=(cRM_{r}+jcRM_{i})(Y_{r}+jY_{i}) \tag{12}\]
The noise signal is estimated in the same way. All activation functions in the pre-separation network are PReLU, which accelerates convergence. Unlike ReLU, PReLU allows negative values, which is consistent with the fact that both the input phase differences and the output mask can take negative values.
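Eq. (12) is just a per-bin complex multiplication; the sketch below also shows why a cRM, unlike a magnitude-only mask, can correct phase (an illustrative toy using an oracle mask \(cRM=S/Y\), which is our example, not the paper's training setup):

```python
def apply_crm(crm, y):
    """Eq. (12): X = (cRM_r + j*cRM_i)(Y_r + j*Y_i), i.e. the predicted
    complex ratio mask is applied by complex multiplication per (t, f) bin."""
    return complex(crm.real * y.real - crm.imag * y.imag,
                   crm.real * y.imag + crm.imag * y.real)
```

With the oracle mask the clean bin is recovered exactly, and the mask components are generally signed, matching the PReLU remark above.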
### _Neural Beamforming Module_
The beamforming network is composed of a recurrent neural network and a cross-attention module. First, the speech and noise signals produced by the pre-separation front-end are used to compute the frame-level spatial covariance matrices \(\Phi_{SS}(t,f)\in C^{M\times M}\) and \(\Phi_{NN}(t,f)\in C^{M\times M}\), respectively.
\[\Phi_{SS}(t,f)=X_{cRM}(t,f)X_{cRM}^{H}(t,f) \tag{13}\]
\[\Phi_{NN}(t,f)=N_{cRM}(t,f)N_{cRM}^{H}(t,f) \tag{14}\]
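Eqs. (13)-(14) amount to a rank-one outer product per time-frequency bin; a minimal sketch (ours, for illustration):

```python
def spatial_covariance(x):
    """Frame-level rank-one spatial covariance, eqs. (13)-(14):
    Phi = x x^H for the stacked M-channel vector x at one (t, f) bin."""
    return [[xi * xj.conjugate() for xj in x] for xi in x]
```

The result is Hermitian with a real non-negative diagonal, which is why the text splits real and imaginary parts across channels before the real-valued network.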
Since the spatial covariance matrix is a complex-valued Hermitian matrix and the network is real-valued, the real and imaginary parts of the speech and noise covariance matrices are concatenated along the channel dimension and standardized with LayerNorm. To strengthen the spatial constraints on the beamformer, we use the spatial information received by the array, namely the inter-channel phase differences and the angle feature of the target source, in a cross-attention mechanism over the speech and noise covariance matrices. First, linear layers and a GRU transform the concatenated covariance matrices as well as cosIPD and the angle feature (AF) into embeddings of the same dimension, modeling the inter-channel information at the time-frame level. The modeled spatial information then serves as the query of a multi-head attention module, with the covariance-matrix embedding as key and value, so that the spatial orientation information strengthens the attention module's modeling of inter-channel information at the time-frame level.
\[Query=\mathrm{RNN\text{-}DNN}(\cos\mathrm{IPD},AF) \tag{15}\]
\[Key,Value=\mathrm{RNN\text{-}DNN}(\Phi_{SS},\Phi_{NN}) \tag{16}\]
\[Attention(Q,K,V)=\mathrm{softmax}\left(\frac{QK^{T}}{\sqrt{d_{k}}}\right)V \tag{17}\]
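Eq. (17) is standard scaled dot-product attention; a minimal real-valued single-head sketch (in the model this runs multi-head over learned embeddings):

```python
import math

def softmax(row):
    # Numerically stable softmax over one row of scores.
    m = max(row)
    exps = [math.exp(v - m) for v in row]
    s = sum(exps)
    return [v / s for v in exps]

def attention(Q, K, V):
    """Scaled dot-product attention, eq. (17): softmax(Q K^T / sqrt(d_k)) V.
    Q is the spatial-feature embedding, K and V the covariance-matrix
    embedding; all are lists of row vectors."""
    d_k = len(K[0])
    scores = [[sum(q * k for q, k in zip(q_row, k_row)) / math.sqrt(d_k)
               for k_row in K] for q_row in Q]
    return [[sum(w * v for w, v in zip(weights, col)) for col in zip(*V)]
            for weights in (softmax(row) for row in scores)]
```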
Then the observed microphone signals are beamformed with the weights output by the final linear layer to obtain the predicted spectrogram, and the desired time-domain speech signal is recovered via ISTFT.
## IV Experiment setup and result analysis
### _Dataset settings_
The source speech signals for the target and interfering speakers were obtained from the Aishell2 [18] dataset, while the background noise data was sourced from DNS2023 [19]. For the indoor reverberant scene, the room size was drawn randomly between [3, 3, 1.5] and [8, 8, 2.5] meters in length, width, and height, and the reverberation time (RT60) was set to 0.1-0.6 seconds. Room impulse responses for the target and interference signals were generated with the image source method [20]. The microphone array is a four-element linear array with 3 cm element spacing. To ensure a degree of spatial separation between the sources, the minimum angle between the two sources relative to the array was set to 5\({}^{\circ}\). The signal-to-interference ratio between the target and interfering speech was set to [-6, 6] dB. To enhance the model's robustness, background noise at [-5, 20] dB was added to the simulated data. In total, the simulation produced 120,000 training, 14,000 validation, and 7,000 test utterances. All audio was downsampled to 16 kHz, and each utterance is 4 seconds long.
### _Experiment settings_
During training, the STFT window length and FFT size are set to 32 ms and the frame shift to 16 ms. For the array, we select the three microphone pairs (0,1), (0,2), (0,3) to compute cosIPD, so the channel dimension of the input feature is 5, i.e. \(Feature\in R^{5\times F\times T}\). For the pre-separation module, the Conv2D channel counts in the UNet structure are (5, 32, 64, 128), and the TCN is a 3\(\times\)8 stack of TCN blocks with 128 channels. For the beamforming module, the two-layer GRU has 256 hidden units, the cross-attention module has dimension 128, and the final linear layer outputs complex-valued beamforming weights, so its output dimension is 8 (channels \(\times\) 2). The proposed model is trained end-to-end.
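For concreteness, the resulting input shape can be checked with a few lines of arithmetic (this assumes an STFT without center padding, a convention the text does not state):

```python
def feature_shape(num_pairs=3, sr=16000, dur_s=4.0, win_ms=32, hop_ms=16):
    """Input tensor shape (N, F, T) for one utterance.
    Assumes an STFT without center padding (our assumption)."""
    win = int(sr * win_ms / 1000)      # 512-point window / FFT at 16 kHz
    hop = int(sr * hop_ms / 1000)      # 256-sample frame shift
    n_samples = int(sr * dur_s)        # 64000 samples for 4 s
    F = win // 2 + 1                   # one-sided frequency bins
    T = 1 + (n_samples - win) // hop   # number of frames
    N = num_pairs + 1 + 1              # cosIPD pairs + magnitude + AF
    return N, F, T
```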
Training runs for 60 epochs on two A100 GPUs with a batch size of 24. We optimize with Adam; the initial learning rate is 2e-3 and decays with the epoch number with a decay coefficient of 0.98. Gradients are clipped to a maximum of 10 to speed up convergence.
As baselines, we use an MVDR beamformer driven by an estimated IRM, and the GRNNBF [8] and SARNN [9] models trained in the MISO configuration. For IRM-MVDR, a TCN predicts the IRM, which is substituted into the MVDR algorithm for beamforming. For GRNNBF and SARNN, a 4\(\times\)8 stack of TCN blocks predicts the cRF mask, which after LayerNorm is fed to a GRU or self-attention module with 500 hidden units to output beamforming weights. All models are trained with a joint SI-SDR and MSE loss with equal weights.
### _Results_
Table 1 shows the results of our proposed model and the baselines on the test set. In terms of parameters, our model is larger than IRM-MVDR, mainly because of the different mask prediction and the replacement of the MVDR algorithm with a neural beamforming module. Compared with GRNNBF and SARNN, the parameter reduction is substantial: with the cross-attention mechanism based on spatial information, the model does not need a large number of hidden units to achieve good separation. The results show that our model significantly improves target-speech separation over the baselines while reducing the parameter count.
In Table 2, we present ablation experiments on the proposed model. For the pre-separation module, encoding the input features with the UNet structure before the TCN blocks yields better pre-separation than feeding the features to the TCN directly through a Conv1D. For the neural beamformer, the UNet-TCN structure lets the model exploit the input features more fully, reduces the information loss of dimensionality reduction, and makes the covariance-matrix estimate more accurate. Regarding mask prediction, with either the TCN or the UNet-TCN structure, recovering the speech signal with a predicted cRM is slightly better than with an IRM, but the gap is small: for frequency-domain separation with a 32 ms frame length, the benefit of phase estimation is limited [21].

Fig. 6: Spectrograms of a test-set utterance after processing by each model. SIR = 2.2 dB, SNR = -4.8 dB, and the angle between the speakers is 9\({}^{\circ}\).
For the neural beamforming module, we take multi-head self-attention (MHSA) modules with 128 and 256 hidden units as examples. Feeding the embeddings modeled from the covariance matrices and the spatial information into the multi-head cross-attention (MHCA) module yields better separation than an MHSA module with an equivalent number of hidden units. These results demonstrate the effectiveness of introducing spatial information into the beamforming module via the cross-attention mechanism.
### _Spectrogram analysis_
We compare the spectrograms of the target speech produced by the different separation methods. To probe robustness, we select a test utterance with low SNR and a small angle between the target and interfering speakers, shown in Fig. 6. Owing to numerical instability during training and to limitations of the MVDR algorithm itself, IRM-MVDR suppresses interference poorly, and the separated spectrum is severely corrupted by noise. The neural beamformers GRNNBF and SARNN improve separation to some extent and suppress background noise well, but some residual interference remains in the separated signal. With the UNet-TCN pre-separation network and the spatial information added to the beamforming module, the proposed model suppresses background and interference noise further and exhibits less spectral distortion than the baselines, achieving the best results.
### _Beam pattern_
To further analyze the spatial filtering performance, Fig. 7 visualizes the beam patterns of the different neural beamformers: GRNNBF, SARNN, and our proposed model. The speech is divided into four segments: in segment (a) only background noise is present, in (b) both speakers are active, in (c) only the interfering speaker is active, and in (d) only the target speaker is active. All beam patterns are averaged over the frequency dimension.
All three models strongly suppress directions near 90\({}^{\circ}\) when no source is active; this is somewhat unexpected, and may be caused by the background noise. When the two sources speak simultaneously, all three models strongly suppress the interference direction. Our model suppresses the interferer more when only the interferer is active. When only the target source is active, the directivity of our model is better than that of GRNNBF and SARNN, which reduces the distortion of the predicted spectrum.
## V Conclusion
In this study, we propose a UNet-TCN structure to model the input features, improving the estimation accuracy of the covariance matrices. Furthermore, we introduce spatial information into the neural beamforming module through a cross-attention mechanism, enabling better noise rejection and lower spectral distortion. Objective and subjective evaluations show significant improvements in separation performance. In future work, we plan to achieve better target-speech extraction while further reducing model complexity and output-spectrum distortion.
Fig. 7: Example beam patterns derived by GRNNBF, SARNN, and our proposed model. The DOAs of the target speaker and the interference speaker are 79\({}^{\circ}\) and 164\({}^{\circ}\), marked in red and green respectively. (a) silent segment; (b) both speakers active; (c) only the interfering speaker active; (d) only the target speaker active.
# Efficient Quantum State Preparation with Walsh Series

Julien Zylberman, Fabrice Debbasch. 2023-07-17. http://arxiv.org/abs/2307.08384v3

**Abstract (arXiv).** A new approximate Quantum State Preparation (QSP) method is introduced, called the Walsh Series Loader (WSL). The WSL approximates quantum states defined by real-valued functions of single real variables with a depth independent of the number $n$ of qubits. Two approaches are presented: the first one approximates the target quantum state by a Walsh series truncated at order $O(1/\sqrt{\epsilon})$, where $\epsilon$ is the precision of the approximation in terms of infidelity. The circuit depth is also $O(1/\sqrt{\epsilon})$, the size is $O(n+1/\sqrt{\epsilon})$ and only one ancilla qubit is needed. The second method accurately represents quantum states with sparse Walsh series. The WSL loads $s$-sparse Walsh series into $n$ qubits with a depth doubly sparse in $s$ and $k$, the maximum number of bits with value $1$ in the binary decomposition of the Walsh function indices. The associated quantum circuit approximates the sparse Walsh series up to an error $\epsilon$ with a depth $O(sk)$, a size $O(n+sk)$ and one ancilla qubit. In both cases, the protocol is a Repeat-Until-Success (RUS) procedure with a probability of success $P=\Theta(\epsilon)$, giving an averaged total time of $O(1/\epsilon^{3/2})$ for the WSL (resp. $O(sk/\epsilon)$ for the sparse WSL). Amplitude amplification can be used to reduce the total-time dependency on $\epsilon$ by a factor $O(1/\sqrt{\epsilon})$, but it increases the size and depth of the associated quantum circuits, making them linearly dependent on $n$. These protocols give overall efficient algorithms with no exponential scaling in any parameter. They can be generalized to any complex-valued, multi-variate, almost-everywhere-differentiable function. The Repeat-Until-Success Walsh Series Loader is so far the only method which prepares a quantum state with a circuit depth and an averaged total time independent of the number of qubits.

# Efficient Quantum State Preparation with Walsh Series
###### Abstract
In this Letter, a new approximate Quantum State Preparation (QSP) method is introduced, called the Walsh Series Loader (WSL). The WSL approximates quantum states defined by real-valued functions of single real variables with a depth independent of the number \(n\) of qubits. The circuit depth is \(O(1/\sqrt{\epsilon})\), where \(\epsilon\) is the precision of the approximation. The size is \(O(n+1/\sqrt{\epsilon})\) and only one ancilla qubit is needed, giving an overall efficient algorithm with no exponential scaling. The protocol can be generalized to any complex-valued, multi-variate differentiable function. The Walsh Series Loader is so far the only method which prepares a quantum state with a circuit depth independent of the number of qubits.
The second quantum revolution relies on the manipulation of individual quantum systems. One of the key technologies promised by this revolution is quantum computing, which is made possible by the manipulation of individual quantum bits (qubits). Because they can use quantum superposition and entanglement, Quantum Computers (QCs) will perform some computations faster than classical computers and mapping computationally demanding problems into a form tractable by a QC has become an active area of research.
In particular, the problem of solving Partial Differential Equations (PDEs) on QCs has recently attracted a lot of attention, with publications discussing digital quantum algorithms [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14], hybrid and variational quantum-classical methods [15; 16; 17; 18; 19; 20; 21], and adiabatic and annealing quantum algorithms [22; 23; 24; 25; 26; 27; 28]. To solve a Cauchy problem for a differential equation on any digital computer, be it classical or quantum, one needs (i) to discretize space and time and (ii) to load the initial condition onto the computer. An initial condition for a PDE can always be represented by a function \(f\) of a certain variable \(x\), where both \(x\) and \(f\) are possibly multi-dimensional.
On a classical computer, the cost of loading the initial condition is negligible when compared to the cost of the integration steps. This is not so on a QC. Indeed, encoding classical data into an \(n\)-qubit state may cost an exponential amount of primitive operations because the space of all \(n\)-qubit states has dimension \(2^{n}\). Thus, exact methods for Quantum State Preparation (QSP) have an exponential scaling with \(n\) either in depth, size or number of ancilla qubits [29; 30; 31; 32; 33; 34; 35]. It has been suggested that these issues can be overcome by using quantum generative adversarial networks and variational methods with low depth and size trained-quantum circuits [36; 37]. However, these methods suffer from usual optimization problems such as Barren plateaus, local minima and scalability [38; 39]. This has prompted the introduction of approximate methods with efficient complexities scaling at most as \(O(\text{poly}(n,1/\epsilon))\) and, in particular, no exponential scaling [40; 41; 42].
In this Letter, we present a new simple quantum algorithm for QSP based on Walsh functions : the Walsh Series Loader (WSL). Consider for example initial conditions corresponding to real valued functions of a single real variable. Given an error \(\epsilon>0\), one can then implement a quantum state \(\epsilon\)-close to the target quantum state using only one ancilla qubit, with a quantum circuit of depth \(O(1/\sqrt{\epsilon})\) independent of the number of qubits \(n\) and of size \(O(n+1/\sqrt{\epsilon})\). The efficiency of the algorithm is guaranteed for any function with bounded first derivative. More generally, the algorithm also applies to complex-valued functions and/or functions of \(d\) real variables loaded into \(nd\) qubits.
Assume for the time being that \(f\) is a real-valued function of the single real variable \(x\in[0,1]\). To solve the PDE numerically, be it on a classical or a quantum computer, the variable \(x\) of the function \(f\) must be discretised and we take that step for granted in what follows. Initialising a quantum algorithm solving the PDE means loading onto a digital quantum computer the state \(\left|f\right\rangle=\frac{1}{||f||_{2}}\sum_{x}f(x)\left|x\right\rangle\), where the kets \(\left|x\right\rangle\) are eigenstates of the operator representing the classical variable \(x\). Consider now, for any given \(f\), the operator \(\hat{f}=\sum_{x}f(x)\left|x\right\rangle\left\langle x\right|\) and the unitary operator \(\hat{U}_{f,\epsilon_{0}}=e^{-i\hat{f}\epsilon_{0}}\), where \(\epsilon_{0}\) is an arbitrary strictly positive real number. Both operators are diagonal in the \(x\)-basis. At given \(\epsilon_{0}\), the operator \(\hat{U}_{f,\epsilon_{0}}\) contains all the information present in the operator \(\hat{f}\). So, encoding \(\hat{U}_{f,\epsilon_{0}}\) in an efficient way is tantamount to encoding the information present in \(\left|f\right\rangle\) in an efficient way.
The new quantum algorithm for QSP that we propose in this Letter is thus based on two key ingredients. The first one is an efficient implementation of diagonal unitary operators through their actions on the set of orthogonal functions called Walsh functions \(w_{j}:[0,1]\rightarrow\{-1,1\}\) [43]. This set of functions was first introduced by Walsh in 1923 [44], who showed that every continuous function of bounded variation defined on \([0,1]\) can be expanded into a series of Walsh functions. 1
Footnote 1: In other words, Walsh functions can be used to perform spectral analysis.
Walsh functions are ideal in the general context of binary logic and binary arithmetic and, in particular, in quantum information. Indeed, the operator \(\hat{w}_{j}\) associated to the Walsh function \(w_{j}\) can be written as a tensor product of \(Z\)-Pauli gates \(\hat{w}_{j}=(Z_{1})^{j_{1}}\otimes...\otimes(Z_{n})^{j_{n}}\), where \(j_{i}\) is the \(i\)-th coefficient in the binary expansion of \(j=\sum_{i=1}^{n}j_{i}2^{i-1}\). Given a function \(f\) of the variable \(x\in[0,1]\), one can expand it in terms of Walsh functions, \(f=\sum_{j=0}^{\infty}a_{j}w_{j}\). On a finite set of \(M\) points, one can expand the restricted function \(f\) as a series of \(M\) Walsh functions which approximates the function \(f\) on \([0,1]\) up to an error \(\epsilon_{1}\) (see lemma 1.1 in appendix (B)).
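To make the expansion concrete, here is a minimal pure-Python discrete Walsh transform on the dyadic grid \(x_{m}=m/2^{n}\); the Paley ordering and the digit pairing below are conventions we assume, since the text does not fix them:

```python
def walsh(j, m, n):
    """Paley-ordered Walsh function w_j at x = m / 2**n.
    With j = sum_k j_k 2^(k-1) and x_k the k-th dyadic digit of x,
    w_j(x) = (-1)^(sum_k j_k x_k)."""
    s = sum(((j >> (k - 1)) & 1) * ((m >> (n - k)) & 1) for k in range(1, n + 1))
    return -1 if s % 2 else 1

def walsh_coefficients(f_vals, n):
    """a_j = 2^-n * sum_m f(x_m) w_j(x_m).  On the dyadic grid the full
    series sum_j a_j w_j reproduces f exactly at every grid point."""
    N = 1 << n
    return [sum(f_vals[m] * walsh(j, m, n) for m in range(N)) / N
            for j in range(N)]
```

Because the \(2^{n}\) Walsh functions form an orthogonal basis on the grid, the full transform is exact; truncation to \(M<2^{n}\) terms introduces the error \(\epsilon_{1}\) discussed below.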
Since all Walsh operators \(\hat{w}_{j}\) commute with each other, implementing (approximately) \(\hat{U}_{f,\epsilon_{0}}\) comes down to implementing the \(M\) operators \(\hat{W}_{j,\epsilon_{0}}=e^{-ia_{j}\hat{w}_{j}\epsilon_{0}}\), \(j=1,...,M\), and the efficiency of the method is ensured by the simplicity of the quantum circuits implementing each \(\hat{W}_{j,\epsilon_{0}}\)[43].
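Because all the factors commute, the action of \(\hat{U}_{f,\epsilon_{0}}\) on a statevector can be simulated classically by multiplying the Walsh phase factors one at a time. A self-contained sketch (the Walsh-function helper and its Paley ordering are our own conventions):

```python
import cmath

def walsh(j, m, n):
    # Paley-ordered Walsh function w_j at x = m / 2**n (our convention).
    s = sum(((j >> (k - 1)) & 1) * ((m >> (n - k)) & 1) for k in range(1, n + 1))
    return -1 if s % 2 else 1

def apply_walsh_phase(state, a_j, j, n, eps0):
    """One factor W_j = exp(-i a_j w_j eps0): a diagonal phase per basis state."""
    return [amp * cmath.exp(-1j * a_j * walsh(j, m, n) * eps0)
            for m, amp in enumerate(state)]

def u_f(f_vals, n, eps0):
    """Compose all 2**n factors on the uniform superposition; since the w_j
    commute, the product equals exp(-i f(x) eps0) on each basis state |x>."""
    N = 1 << n
    coeffs = [sum(f_vals[m] * walsh(j, m, n) for m in range(N)) / N
              for j in range(N)]
    state = [1.0 / N ** 0.5] * N  # uniform superposition from Hadamards
    for j, a_j in enumerate(coeffs):
        state = apply_walsh_phase(state, a_j, j, n, eps0)
    return state
```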
The second ingredient in the algorithm is a repeat-until-success method which transforms the unitary \(\hat{U}_{f,\epsilon_{0}}=e^{-if\epsilon_{0}}\) into an operator proportional to \(\hat{f}\) and, ultimately, into the desired quantum state \(\left|f\right\rangle\). This is achieved by an interference scheme where an ancilla qubit is manipulated to generate the operator \(\hat{I}-e^{-if\epsilon_{0}}\) which, for small enough \(\epsilon_{0}\), coincides with \(i\hat{f}\epsilon_{0}\). It turns out that measuring the ancilla qubit delivers, at leading order in \(\epsilon_{0}\), the desired state \(\left|f\right\rangle\). This is so because measurement introduces an extra normalisation factor \(\mathcal{N}\simeq||\hat{f}\epsilon_{0}||_{2}\) which, at leading order in \(\epsilon_{0}\), cancels the \(\epsilon_{0}\) dependence present in \(\hat{I}-e^{-if\epsilon_{0}}\).
Let us now give some details about the way the ancilla qubit is used.
Suppose that the \(n\)-qubit register for the position \(x\) is initially in the state \(\left|0,...,0\right\rangle\). We apply to the register a Hadamard tower to get from that state the uniform superposition:
\[\left|s\right\rangle=\hat{H}^{\otimes n}\left|0,...,0\right\rangle=\frac{1}{ \sqrt{N}}\sum_{x}\left|x\right\rangle. \tag{1}\]
We then add an ancillary qubit in state \(\left|q_{A}\right\rangle=\hat{H}\left|0\right\rangle=\frac{1}{\sqrt{2}}( \left|0\right\rangle+\left|1\right\rangle)\), so the state of the total system is \(\left|\psi_{1}\right\rangle=\frac{1}{\sqrt{2}}(\left|s\right\rangle\left|0 \right\rangle+\left|s\right\rangle\left|1\right\rangle)\). We now let the ancilla control the action of \(\hat{U}_{f,\epsilon_{0}}\) by introducing a new operator controlled\(-\hat{U}_{f,\epsilon_{0}}\) whose action on \(\left|\psi_{1}\right\rangle\) gives
\[\left|\psi_{2}\right\rangle=\frac{1}{\sqrt{2}}(\left|s\right\rangle\left|0 \right\rangle+e^{-i\hat{f}\epsilon_{0}}\left|s\right\rangle\left|1\right\rangle). \tag{2}\]
Technically, a quantum circuit for controlled\(-\hat{U}_{f,\epsilon_{0}}\) can be obtained from a quantum circuit for \(\hat{U}_{f,\epsilon_{0}}\) by letting every gate be controlled by the ancilla qubit, changing CNOT gates into Toffoli gates and single-qubits-rotations into controlled-rotations.
The Hadamard gate \(\hat{H}\) and the gate \(\hat{P}=\begin{pmatrix}1&0\\ 0&-i\end{pmatrix}\) can then be used to mix components and get the state
\[\left|\psi_{3}\right\rangle=\frac{\hat{I}+e^{-if\epsilon_{0}}}{2}\left|s \right\rangle\left|0\right\rangle-i\frac{\hat{I}-e^{-if\epsilon_{0}}}{2}\left| s\right\rangle\left|1\right\rangle. \tag{3}\]
One then measures the ancilla qubit (in the computational basis). If \(\left|q_{A}\right\rangle=\left|0\right\rangle\), the protocol starts again. If \(\left|q_{A}\right\rangle=\left|1\right\rangle\), the output state is
\[\left|\psi_{4}\right\rangle=-i\frac{\hat{I}-e^{-i\hat{f}\epsilon_{0}}}{2|| \frac{\hat{I}-e^{-if\epsilon_{0}}}{2}\left|s\right\rangle||_{2}}\left|s\right \rangle\simeq\left|f\right\rangle+O(\epsilon_{0}), \tag{4}\]
which, at leading order in \(\epsilon_{0}\), is identical to the desired state. Note the very act of measuring the ancilla introduces the correct renormalization which makes it possible to obtain the desired state.
The part of the algorithm that we have just described, which involves the ancilla qubit, is represented schematically by the quantum circuit of Fig. 1.
The probability of success of the protocol, i.e, the probability \(P(1)\) to measure \(\left|q_{A}\right\rangle=\left|1\right\rangle\), scales as
\[P(1)=||\frac{\hat{I}-e^{-i\hat{f}\epsilon_{0}}}{2}\left|s\right\rangle||_{2}^ {2}\simeq\frac{\epsilon_{0}^{2}}{4}||f||_{2}^{2}. \tag{5}\]
The repeat-until-success procedure does not increase the size, nor the depth, of the quantum circuit. Once success is reached, the initialization has been performed and one can implement the evolution of the initial quantum state without initializing again. Furthermore, one could perform amplitude amplification to reach \(P(1)=O(1)\) and reduce the total time by a factor \(1/\epsilon_{0}\), but at the cost of increasing the size and the depth of the WSL (more details in appendix (C)).
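The whole interference step can be checked numerically in a few lines: apply \((\hat{I}-e^{-i\hat{f}\epsilon_{0}})/2\) to the uniform superposition, read off \(P(1)\) from the squared norm, and renormalize. This is a classical simulation sketch of eqs. (4)-(5), not part of the protocol itself:

```python
import cmath
import math

def wsl_output(f_vals, eps0):
    """Post-measurement state of eq. (4) and success probability of eq. (5)."""
    N = len(f_vals)
    amps = [(1 - cmath.exp(-1j * fx * eps0)) / (2 * math.sqrt(N)) for fx in f_vals]
    p1 = sum(abs(a) ** 2 for a in amps)       # probability of measuring |1>
    norm = math.sqrt(p1)
    return [a / norm for a in amps], p1

def fidelity(psi, f_vals):
    """|<f|psi>|^2 with |f> the normalized target state."""
    nf = math.sqrt(sum(fx * fx for fx in f_vals))
    overlap = sum(a.conjugate() * (fx / nf) for a, fx in zip(psi, f_vals))
    return abs(overlap) ** 2
```

Shrinking \(\epsilon_{0}\) drives the fidelity to 1 while \(P(1)\) shrinks as \(\epsilon_{0}^{2}||f||_{2}^{2}/4\): the trade-off behind the RUS repetition count.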
This procedure works for real-valued functions \(f\), but it obviously fails if \(f\) is complex-valued, because a complex-valued \(f\) makes the operator \(\hat{U}_{f,\epsilon_{0}}\) non-unitary. The way to handle complex-valued functions is to add a layer to the algorithm. One introduces the modulus \(|f|\) and the phase \(\phi_{f}\). One carries out the above procedure for \(|f|\) (instead of \(f\)) and then implements efficiently the unitary operator \(\exp(i\phi_{f})\) separately, using again Walsh functions as developed in [43], adding an additional \(O(1/\sqrt{\epsilon})\) in terms of size and depth (more details in appendix (A.2)).
Figure 1: Quantum circuit for the preparation of an initial quantum state \(\left|f\right\rangle=\frac{1}{||f||_{2}}\sum_{x}f(x)\left|x\right\rangle\) associated to a real-valued function \(f\). At each red line, the quantum state corresponds to equations (1), (2), (3) and (4), respectively.
_Error analysis._ The discrepancy between the target quantum state and the implemented quantum state has two distinct origins. The first one is the error \(\epsilon_{1}\) introduced by computing the finite Walsh series of \(f\) on a set of \(M(\epsilon_{1})\) points. The second one is the error \(\epsilon_{0}\) introduced by the interference scheme.
Let us be a bit more specific about the first source of error. The diagonal unitary operator \(\hat{U}_{f}\) is implemented efficiently using the scheme introduced by Welch et al. in [43]. The differentiable real function \(f\) defined on \([0,1]\) is expanded into a Walsh series \(f^{\epsilon_{1}}\). The Walsh series of \(f\) corresponds to a piecewise-constant function which coincides with \(f\) on a finite number of points, and the error associated to the Walsh series can be bounded by the maximum value of the first derivative of \(f\) on \([0,1]\): \(||f(x)-f^{\epsilon_{1}}(x)||_{\infty}\leq\epsilon_{1}||f^{\prime}||_{\infty}\), where \(||f||_{\infty}=\sup_{x\in[0,1]}|f(x)|\). These two errors result in an infidelity \(1-F=O((\epsilon_{0}+\epsilon_{1}||f^{\prime}||_{\infty})^{2})\), emphasizing the fact that the method is efficient for slowly varying functions, i.e., when the space-step of the discretization of the continuous problem is small compared to the characteristic length of variations of the PDE problem.
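The first error source can be illustrated numerically: truncating the Paley-ordered Walsh series at \(M=2^{m}\) terms yields a piecewise-constant approximation on dyadic intervals of width \(1/M\), whose sup-norm error stays below \(\epsilon_{1}||f^{\prime}||_{\infty}\) with \(\epsilon_{1}=1/M\). A self-contained sketch (ordering conventions are ours):

```python
def walsh(j, m, n):
    # Paley-ordered Walsh function on the 2**n-point dyadic grid (our convention).
    s = sum(((j >> (k - 1)) & 1) * ((m >> (n - k)) & 1) for k in range(1, n + 1))
    return -1 if s % 2 else 1

def truncated_walsh_series(f_vals, n, m_bits):
    """Keep only the first M = 2**m_bits Paley coefficients; the partial sum
    is piecewise constant on dyadic intervals of width 1/M."""
    N = 1 << n
    M = 1 << m_bits
    coeffs = [sum(f_vals[x] * walsh(j, x, n) for x in range(N)) / N
              for j in range(M)]
    return [sum(coeffs[j] * walsh(j, x, n) for j in range(M)) for x in range(N)]
```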
The results of this Letter can be summarized in a theorem. Consider a state defined by a real valued function defined on \([0,1]^{d}\) and suppose one wants to load that state unto \(n=\sum_{i=1}^{d}n_{i}\) qubits with errors \(\vec{\epsilon}=(\epsilon_{1},...,\epsilon_{d})\). Then:
_Theorem._ There is an efficient quantum circuit of size \(O(n_{1}+...+n_{d}+1/(\epsilon_{1}...\epsilon_{d}))\) and depth \(O(1/(\epsilon_{1}...\epsilon_{d}))\), which, using one ancillary qubit, implements the quantum state \(|f\rangle\) with a probability of success \(P(1)=\Theta(\epsilon_{0}^{2})\) and infidelity \(1-F=O((\epsilon_{0}+\sum_{i=1}^{d}\epsilon_{i}||\partial_{i}f||_{\infty,[0,1 ]^{d}})^{2})\).
The proof of this theorem can be found in appendix (B) to this Letter. A direct corollary of this theorem in the one-dimensional case is: there is a quantum circuit of size \(O(n+1/\sqrt{\epsilon})\) and depth \(O(1/\sqrt{\epsilon})\) which uses only one ancillary qubit and implements the quantum state \(|f\rangle\) with a probability of success \(P(1)=\Theta(\epsilon)\) and infidelity \(1-F\leq\epsilon\). Also, note that the size is affine in \(n_{1}+...+n_{d}\) (or \(n\)) because of the Hadamard gates applied to each qubit in the first step of the QSP algorithm.
_Numerical results._ The scaling laws stated in this theorem can be illustrated by numerical examples. Fig. 2 displays how the infidelity \(1-F\) scales with \(\epsilon=\epsilon_{0}^{2}=\epsilon_{1}^{2}\) (Fig. 2.a) and with \(n\) (Fig. 2b) for various functions. Fig. 2a confirms the linear scaling with \(\epsilon\) while Fig. 2b clearly illustrates the fact that, for a given target state, the infidelity admits an \(n\)-independent (but \(\epsilon\)-dependent) upper-bound.
Furthermore, the WSL offers two ways of arranging the Walsh operators. The first one is to use a Gray code which cancels a maximum number of CNOT gates: out of two CNOT stairs, only one CNOT remains, reaching optimality in terms of size [43; 45]. The second method consists in listing the \(M\) Walsh coefficients of \(f\) in decreasing order and keeping only the first, dominant coefficients. One can thus obtain surprisingly accurate approximations of the targeted state with a very small number of Walsh operators. Numerical results show that, at given infidelity, the second method has a smaller depth than the first (see Fig. 3). The dominant Walsh coefficients actually do not depend on the total number of qubits \(n\). The procedure thus delivers another QSP method with depth independent of \(n\). The number of classical computations needed to implement the Gray code or the decreasing-order method depends only on \(M\), and not on \(n\). More details on the WSL for complex and non-differentiable functions can be found in Appendix (A.2) and (A.3).
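The decreasing-order variant is easy to prototype classically: compute all coefficients, keep the \(s\) of largest magnitude, and reconstruct. A sketch (conventions ours; by Parseval, the sum of the squared discarded coefficients equals the mean squared error):

```python
def walsh(j, m, n):
    # Paley-ordered Walsh function on the 2**n-point dyadic grid (our convention).
    s = sum(((j >> (k - 1)) & 1) * ((m >> (n - k)) & 1) for k in range(1, n + 1))
    return -1 if s % 2 else 1

def dominant_walsh_approx(f_vals, n, s):
    """Keep the s Walsh coefficients of largest magnitude and reconstruct."""
    N = 1 << n
    coeffs = [(j, sum(f_vals[x] * walsh(j, x, n) for x in range(N)) / N)
              for j in range(N)]
    kept = sorted(coeffs, key=lambda t: abs(t[1]), reverse=True)[:s]
    return [sum(a * walsh(j, x, n) for j, a in kept) for x in range(N)]
```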
_Discussion/Comparison with other methods._ Our method needs \((n+1)\) Hadamard gates to initialise the state into a full superposition of all possible ket vectors. The control-diagonal unitary which is applied afterwards can be implemented with \(M\) controlled-Z-rotations and \(M\) Toffoli gates, where \(M\) depends on \(\epsilon_{1}\), which is the error made in representing the function \(f\) by its Walsh series \(f^{\epsilon_{1}}\). To be precise, \(M=2^{m}\) with \(m=\lfloor\log_{2}(1/\epsilon_{1})\rfloor+1\). This leads to a size scaling as
\(O(n+1/\epsilon_{1})\) and a depth \(O(1/\epsilon_{1})\). Now, each Toffoli gate can be decomposed into 6 CNOT gates and 9 single qubit gates, without using ancilla qubits [46], giving a total count of \(7M\) two-qubit gates and \(n+9M+3\) single-qubit gates. These results can be compared to other approximate QSP algorithms preparing quantum states associated to continuous functions.
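These counts follow directly from the formulas above; the helper below merely evaluates them (the function name is ours):

```python
import math

def wsl_gate_counts(n, eps1):
    """Gate counts quoted in the text: M = 2**m Walsh terms with
    m = floor(log2(1/eps1)) + 1, giving 7M two-qubit gates and
    n + 9M + 3 single-qubit gates."""
    m = math.floor(math.log2(1.0 / eps1)) + 1
    M = 2 ** m
    return M, 7 * M, n + 9 * M + 3
```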
The recent Fourier Series Loader (FSL) [41] makes it possible to prepare continuous functions with a depth linear in the number of Fourier components and in the number of qubits. The idea behind this method is to first load the \(2^{m}\) Fourier components of the target \(f\) on the quantum computer, and then to apply an inverse Quantum Fourier Transform to get the function \(f\) in 'real space'. This result can be compared to ours since the number of Fourier components in the Fourier series of a function can be directly related to the error one makes in the truncation, leading to a gate complexity scaling at most as \(O(1/\epsilon^{1/p})\) for \(p\)-differentiable functions. Nevertheless, the inverse QFT leads to a final quantum circuit of size \(O(n^{2}+2^{m})\) and depth \(O(n+2^{m})\), while the Walsh Series Loader has only size \(O(n+2^{m})\) and depth \(O(2^{m})\). This difference mainly comes from the fact that Walsh series can be loaded directly in real space.
In [42], quantum state preparation for a continuous real function \(f_{1}\) is achieved by going adiabatically from the Hamiltonian \(H_{0}=\left|f\right\rangle\left\langle f\right|\), with \(\left|f\right\rangle=H^{\otimes n}\left|0\right\rangle\), to the target Hamiltonian \(H_{1}=\left|f_{1}\right\rangle\left\langle f_{1}\right|\). The adiabatic evolution is implemented via 'small' Trotterization steps. To prepare the target quantum state with error \(\epsilon\), the query complexity is \(O(\mathcal{F}^{p}/\epsilon^{2})\), where \(\mathcal{F}\) is a constant depending on \(f_{1}\), and the number of necessary ancilla qubits scales as \(O(n+d)\), where \(d\) is the number of digits used in the discretised encoding of \(f_{1}\). Even though the WSL is a repeat-until-success procedure, it offers a quadratic advantage in terms of size and depth, since its complexity scales with the L2 error as \(1/\epsilon\) instead of \(1/\epsilon^{2}\), and it necessitates only one ancilla qubit.
Another method [47] suggests approximating quantum states associated with smooth, differentiable, real-valued (SDR) functions using Matrix Product State (MPS) methods. Approximating SDR functions as polynomials admitting an MPS representation, one can use MPS compression and mappings from MPS representations to quantum circuits. The resulting quantum circuits are linear in \(n\) (in depth and size) and are obtained with a linear number of classical computations. However, [47] offers only empirical arguments in favour of the method's efficiency and does not produce analytically proven scaling laws involving the error \(\epsilon\).
Another approximate QSP method [40] makes use of a modified version of the Grover-Rudolph algorithm [29]. To load a real-valued, positive and twice-differentiable function on \(n\) qubits with infidelity less than \(\epsilon\), Sanchez et al. implement only \(2^{k(\epsilon,n)}-1\) multi-controlled rotations (instead of \(2^{n}\)), with \(k(\epsilon,n)\) asymptotically independent of \(n\). For other functions, Sanchez et al. use a variational generalisation of the original algorithm. Even though the Walsh Series Loader presented above is a repeat-until-success procedure, it does not involve variational steps and it can be used for any once-differentiable (as opposed to twice-differentiable) function, including real-valued but non-positive functions, complex functions, and even multivariate ones. Also, the depth of the WSL is exactly, and not only asymptotically, independent of the number of qubits \(n\).
Conclusion. The WSL is the first in a new family of quantum algorithms that approximate quantum states efficiently with a depth independent of the number of qubits. This remarkable property brings us one step closer to quantum supremacy for all algorithms needing a QSP step. This work could be extended by investigating alternative methods to compute finite Walsh series approximations. Possible candidates include threshold sampling, data compression [48] and efficient estimation of the number \(M\) of best Walsh coefficients [49].
###### Acknowledgements.
The authors thank N. F. Loureiro, T. Fredon, U. Remond and U. Nzongani for their useful feedback on our research. The quantum circuit diagrams in this manuscript were prepared using the quantikz package [50] and the plots were prepared using the Matplotlib library [51].
Figure 3: Infidelity \(1-F\) as a function of the depth of the quantum circuits associated to the Gray code order (full lines) and the decreasing order (dashed lines) for the functions defined in Fig. 2 and parameters \(n=16\), \(\epsilon_{0}=10^{-3}\). Each dot corresponds to a number of Walsh operators \(2^{m}\) going from \(2^{1}\) to \(2^{10}\). For the Gray code order, the \(2^{m}\)-Walsh Series is computed for each point. For the decreasing order method, \(2^{10}\) Walsh coefficients are computed and only the \(2^{m}\) largest are implemented. |
2306.01937 | LIC-GAN: Language Information Conditioned Graph Generative GAN Model | Deep generative models for Natural Language data offer a new angle on the
problem of graph synthesis: by optimizing differentiable models that directly
generate graphs, it is possible to side-step expensive search procedures in the
discrete and vast space of possible graphs. We introduce LIC-GAN, an implicit,
likelihood-free generative model for small graphs that circumvents the need for
expensive graph matching procedures. Our method takes as input a natural
language query and using a combination of language modelling and Generative
Adversarial Networks (GANs) and returns a graph that closely matches the
description of the query. We combine our approach with a reward network to
further enhance the graph generation with desired properties. Our experiments
show that LIC-GAN does well on metrics such as PropMatch and Closeness getting
scores of 0.36 and 0.48. We also show that LIC-GAN performs as well as ChatGPT,
with ChatGPT getting scores of 0.40 and 0.42. We also conduct a few experiments
to demonstrate the robustness of our method, while also highlighting a few
interesting caveats of the model. | Robert Lo, Arnhav Datar, Abishek Sridhar | 2023-06-02T22:39:14Z | http://arxiv.org/abs/2306.01937v1 | # LIC-GAN: Language Information Conditioned Graph Generative GAN Model
###### Abstract
Deep generative models for Natural Language data offer a new angle on the problem of graph synthesis: by optimizing differentiable models that directly generate graphs, it is possible to side-step expensive search procedures in the discrete and vast space of possible graphs. We introduce LIC-GAN, an implicit, likelihood-free generative model for small graphs that circumvents the need for expensive graph matching procedures. Our method takes as input a natural language query and, using a combination of language modelling and Generative Adversarial Networks (GANs), returns a graph that closely matches the description of the query. We combine our approach with a reward network to further enhance the graph generation with desired properties. Our experiments show that LIC-GAN does well on metrics such as PropMatch and Closeness, getting scores of \(0.36\) and \(0.48\). We also show that LIC-GAN performs as well as GPT-3.5, with GPT-3.5 getting scores of \(0.40\) and \(0.42\). We also conduct a few experiments to demonstrate the robustness of our method, while also highlighting a few interesting caveats of the model.
## 1 Introduction
We work on building natural language conditional graph generative models. Current graph generation literature mostly focuses on unconditional generation of molecules and proteins, while conditional generation is limited to deterministic graph generation and simple scene graphs. Although there are currently limited applications where a graph needs to be sampled from a distribution on a small scale, at a bigger scale there are applications like city planning, latent sub-graph identification, task assignments, and approximate shortest path identification. If such a model can be successfully built and trained, it should also be able to approximate a deterministic output (scene graph, for instance) or categorical distribution (instance segmentation for an image) over the space of graphs. As a first step towards addressing this problem, we propose LIC-GAN: a language conditioned GAN model inspired by the MolGAN architecture. We plan to evaluate the effectiveness of our proposed method on a random graph dataset we create. A potential application of the setting we adopt could be the generation of graphical test cases given natural language input.
### Related Works
With the advent of graph neural networks, there has recently been a lot of work in graph representation learning (Bronstein et al., 2017; Hamilton et al., 2017; Khoshraftar and An, 2022), however not as much in the field of graph generation. We can categorize prior works into two categories, unconditional and conditional generation, which are discussed below.
Unconditional graph generation is the task of generating graphs without any pre-specified constraints or conditions. These methods can be deterministic or can instead learn to sample from a distribution. This task has numerous applications in various fields such as social network analysis, chemical structure design, and image processing. In recent years, deep learning-based models have shown promising results in generating unconditional graphs (Garg et al., 2021; Wang et al., 2019; Kar et al., 2019). Figure 1 shows the schematics for an unconditional scene graph generation model.
Discovering chemical compounds that possess specific characteristics can be a demanding endeavor with significant practical uses, such as creating novel pharmaceuticals from scratch. Likelihood-based methods (Li et al., 2018; Simonovsky and Komodakis, 2018) for generating molecular graphs necessitate either a predetermined (or randomly selected) sequential depiction of the graph or a costly graph matching process to determine the probability of a generated molecule. This is because assessing the likelihood of all feasible node orderings is impractical, even for small-sized graphs. One of the most significant recent works in this area is MolGAN (De Cao and Kipf, 2018), an implicit, likelihood-free generative model for small molecular graphs that circumvents the need for expensive graph matching procedures or the node ordering heuristics of previous likelihood-based methods. The schematic of MolGAN can be found in Figure 2. Some other notable works on chemical compound discovery are Gomez-Bombarelli et al. (2018); Kusner et al. (2017); Dai et al. (2018).
On the contrary, most methods in conditional generation produce a deterministic output. Scene graph generation with image and/or caption supervision is a particularly interesting research topic in this domain (Zhong et al., 2021). There have also been works on road network extraction with image conditioning (Belli and Kipf, 2019). An open-source project called GraphGPT, which generates knowledge graphs from a given text by appropriately prompting GPT models, has also become popular.
Language models such as BERT (Devlin et al., 2018) and RoBERTa (Liu et al., 2019) have become some of the most widely used models in the field of NLP, and have been applied to a wide range of tasks including question answering (Widad et al., 2022), sentiment analysis (Hoang et al., 2019; Li et al., 2019), and language translation (Zhu et al., 2020; Yang et al., 2020). GPT-3.5, a language model developed by OpenAI, has proven to be a valuable tool for multiple applications (Biswas, 2023; Sallam et al., 2023). However, its applicability to processing or generating graphical data is yet to be fully tested. In this report, we provide the first test of GPT-3.5's skills in this regard. Rewards have been widely used in reinforcement learning. Recently, they have also been incorporated in GANs (Zheng et al., 2021; Chen et al., 2022; Xia et al., 2020) for stronger generalization and robustness.
## 2 Proposed Methods
In this section, we briefly describe the two methods we employed for graph generation. We first discuss the GPT-3.5 baseline, where we prompt the GPT-3.5 model to generate graphs based on a description. We then present the main contribution of this report: LIC-GAN. We briefly describe its architecture and provide some details on how it was trained.
### GPT-3.5 Baseline
Since there is no prior work that has specifically tried to solve the problem of natural-language-conditioned graph generation, we propose a baseline method using GPT-3.5.
Within the prompt given to GPT-3.5, we define what each of the 5 additional proposed properties means in the graph-generation context. Furthermore, we also give it some example inputs and outputs.
GPT-3.5 is a chatbot provided by OpenAI, which supports dialogue-like text completion. We designed a simple prompt that asks GPT-3.5 to generate the adjacency matrix. Since the adjacency matrices it generates are sometimes not perfect, we apply a simple post-processing step that pads the matrix and makes it symmetric. The prompt, configuration and post-processing method are given in Appendix A.1.
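The paper's exact post-processing is in its Appendix A.1; as a hedged illustration, one plausible version of such a step (all names and details below are our assumptions) pads the returned matrix to a fixed size, symmetrizes it, and clears the diagonal:

```python
import numpy as np

def postprocess_adjacency(raw_rows, n_max):
    """Pad/truncate a possibly ragged 0-1 matrix to n_max x n_max,
    make it symmetric, and remove self-loops."""
    A = np.zeros((n_max, n_max), dtype=int)
    for i, row in enumerate(raw_rows[:n_max]):
        row = row[:n_max]
        A[i, :len(row)] = row
    A = ((A + A.T) > 0).astype(int)  # symmetrize by OR-ing with the transpose
    np.fill_diagonal(A, 0)           # simple undirected graphs have no loops
    return A
```

Any cleanup of this kind guarantees a valid simple undirected graph regardless of how imperfect the model's raw output is.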
### Lic-Gan
#### Model Architecture
The architecture of the final proposed model is shown in Figure 3. The initial models we used for preliminary analysis and architectural decisions predicted only the adjacency matrix, inferring the node count from the all-zero rows of the matrix. This was later replaced with two separate prediction heads for nodes and edges, which led to better performance and supported isolated vertices. As opposed to MolGAN's choice of a graph convolutional network (GCN) for the discriminator, we adopt a weight-shared fully connected network (FCN) to process each row of the adjacency matrix, to enforce the symmetry of the node processing (considering the nature of the task).
#### Training Details
WGANs are known to minimize an approximation of the Wasserstein-1 distance. The Wasserstein distance between two distributions \(p\) and \(q\) can be written as
\[D_{W}[p||q]=\frac{1}{K}\sup_{||f||_{L}\leq K}\left[\mathbb{E}_{x\sim p(x)}[f(x )]-\mathbb{E}_{x\sim q(x)}[f(x)]\right]\]
where the supremum is taken over the set of all \(K\)-Lipschitz functions. We use the WGAN-GP loss formulation of MolGAN (De Cao and Kipf, 2018) for training our model as well, to improve stability. Similar to the MolGAN model, we also retain the gradient penalty from Gulrajani et al. (2017), scaled by a factor \(\lambda_{\text{gp}}=5\). For incorporating additional signals to train the GAN, we provide a reward in the form of the negative of the mean squared loss on the node- and edge-count match. This loss is included in the overall loss after scaling by a factor \(\lambda_{\text{rew}}\).
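For intuition about the objective being approximated: the Wasserstein-1 distance has a closed form for one-dimensional empirical distributions with equally many samples — sort both samples and average the absolute differences. A small sketch (ours, purely illustrative of the objective, not the training code):

```python
import numpy as np

def wasserstein1_empirical(x, y):
    """W1 between two equal-size, equal-weight empirical distributions on the line.

    For sorted samples x_(1) <= ... <= x_(n) and y_(1) <= ... <= y_(n),
    W1 = (1/n) * sum_i |x_(i) - y_(i)|.
    """
    x, y = np.sort(x), np.sort(y)
    return float(np.mean(np.abs(x - y)))

print(wasserstein1_empirical([0.0, 1.0, 2.0], [1.0, 2.0, 3.0]))  # 1.0
```

In the GAN setting this quantity cannot be computed in closed form for high-dimensional graph distributions, which is why the critic approximates the supremum over \(K\)-Lipschitz functions instead.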
Figure 3: Proposed architecture for natural language conditioned GAN for graph generation
For the preliminary analysis, we use an FCN with two hidden layers to predict the number of nodes from the adjacency matrix in a differentiable manner. For the final model, this becomes unnecessary due to the separate dedicated heads.
## 3 Results
In this section, we discuss the results of our model. We first describe the synthetic datasets used in our experiments in Section 3.1 and how we evaluate the results of the generative methods in Section 3.2. The results of the methods mentioned in the previous section are shown in Sections 3.3 and 3.5.
### Dataset
To better analyze and benchmark our model, we create a synthetic graph dataset with natural language description of the properties. We utilized NetworkX (Hagberg et al., 2008) to generate four different types of undirected simple graphs (which corresponds to four different graph generation methods).
We consider two datasets: a **simple** dataset and a **complex** dataset, for better understanding of how our models are performing. The simple dataset only consists of the number of nodes and the number of edges in the graph in the textual description. Whereas the complex dataset consists of the number of nodes and the number of edges as well as a random subset of the 5 properties listed here: (a) Diam: the diameter (b) Cycle: whether there exists a cycle or not (c) MaxDeg: maximum degree (d) MinDeg: minimum degree (e) CCNum: the number of connected components.
To make the dataset more versatile, we shuffle the properties in the description (so that the model does not skip the language inputs and learn simpler correlations) in an online fashion for training all our final baselines and methods. The summary statistics for both datasets can be found in Appendix B. For a fair comparison between the different methods, we generated separate train, dev and test datasets, where only the test dataset is used to evaluate the performance of all the methods presented in this report. The sizes of the train, dev and test sets are \(100,000\), \(10,000\) and \(500\).
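The five properties can all be computed with elementary graph routines. A self-contained sketch (ours; the paper generates graphs with NetworkX, and the exact labelling conventions here are our assumptions):

```python
from collections import deque

def graph_properties(n, edges):
    """Compute the five description properties for a simple undirected graph."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    deg = [len(adj[v]) for v in range(n)]

    # Connected components via BFS.
    seen, comps = set(), 0
    for s in range(n):
        if s in seen:
            continue
        comps += 1
        queue = deque([s])
        seen.add(s)
        while queue:
            u = queue.popleft()
            for w in adj[u] - seen:
                seen.add(w)
                queue.append(w)

    # A simple graph is a forest iff |E| = n - #components.
    has_cycle = len(edges) > n - comps

    def eccentricity(s):
        dist = {s: 0}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    queue.append(w)
        return max(dist.values())

    # Diameter is only defined for connected graphs.
    diam = max(eccentricity(s) for s in range(n)) if comps == 1 else None
    return {"MaxDeg": max(deg), "MinDeg": min(deg),
            "CCNum": comps, "Cycle": has_cycle, "Diam": diam}

print(graph_properties(3, [(0, 1), (1, 2)]))
```

For the path on 3 nodes above this returns MaxDeg 2, MinDeg 1, one component, no cycle, diameter 2.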
### Evaluation
We primarily consider two metrics closeness and property match. They are defined as follows
\[\mathsf{PropMatch} =\frac{1}{N}\sum_{i=1}^{N}\left(\frac{\sum_{p\in\mathsf{P}_{i}} \mathbbm{1}\left[p(G_{\mathsf{predicted}})=p(G_{\mathsf{true}})\right]}{|P_{i}|}\right)\] \[\mathsf{Closeness} =\frac{1}{N}\sum_{i=1}^{N}\left(\frac{\sum_{p\in\mathsf{P}_{i}} \exp\left(-\left(p(G_{\mathsf{predicted}})-p(G_{\mathsf{true}})\right)^{2} \right)}{|P_{i}|}\right)\]
Here \(N\) is the number of data points that we are evaluating and \(P_{i}\) is the set of properties that are present in the description of the \(i^{\text{th}}\) datapoint. It is clear that \(\mathsf{PropMatch}\leq\mathsf{Closeness}\): \(\mathsf{PropMatch}\) is only high if there is a perfect match, whereas \(\mathsf{Closeness}\) is somewhat forgiving towards graphs whose properties are close enough to the description. We also note that both metrics lie in the \([0,1]\) range.
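In code, with each datapoint's properties held in a dict, the two formulas read as follows (our own minimal rendering of the definitions above):

```python
import math

def prop_match(pred_props, true_props):
    """Average exact-match rate over the properties of each datapoint."""
    return sum(
        sum(p[k] == t[k] for k in t) / len(t)
        for p, t in zip(pred_props, true_props)
    ) / len(true_props)

def closeness(pred_props, true_props):
    """Like PropMatch, but giving Gaussian credit for near misses."""
    return sum(
        sum(math.exp(-(p[k] - t[k]) ** 2) for k in t) / len(t)
        for p, t in zip(pred_props, true_props)
    ) / len(true_props)

# Predicting 2 edges when 3 were asked: exact match fails, closeness gives e^{-1}.
print(prop_match([{"Node": 3, "Edge": 2}], [{"Node": 3, "Edge": 3}]))  # 0.5
```

An exact match on every property yields 1.0 on both metrics, and a near miss is penalized smoothly by Closeness rather than scored zero.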
### GPT-3.5 Results
A brief summary of the results of the method described in Section 2.1 can be found in Table 1. The detailed results can be found in Tables 9 and 10 in the Appendix. As can be seen from the results, as we make the input more complex (increase the value of \(n\)), both the property match and the closeness start falling for the simple and the complex dataset. As expected, we can also observe that the complex dataset has significantly lower values of node \(\mathsf{PropMatch}\) and edge \(\mathsf{PropMatch}\), as can be seen from Tables 9 and 10. In particular, we note that the node \(\mathsf{PropMatch}\) went from \(0.969\) to \(0.178\) for the simple dataset as we transitioned from \(n\in[11,25]\) to \(n\in[26,50]\). Similarly, it dipped from \(0.9\) to \(0.139\) for the complex dataset. This seems to imply that this method is unlikely to scale to larger graphs.
### LIC-GAN Preliminary Analysis
We initially trained our model for \(50\) epochs on the simple dataset (without shuffling the properties) to validate our design choices and hyperparameters. We describe some of the important design choices based on the results found in Table 2.
* _Hard Gumbel Sigmoid_ **vs _Sigmoid_ as output-activation**: We found that using _Sigmoid_ to convert logits into adjacency entries makes the convergence of training harder. Therefore, we decided to use _Hard Gumbel Sigmoid_, an extension of the Gumbel Softmax [10].
* _Multi-Head Attention_ **vs _[CLS] token embedding_ in generator**: _Multi-Head Attention_ allows the generator to access the whole embedding of the description, whereas the _[CLS] token embedding_ only allows the generator to access the first token's embedding. While it is common to represent the whole input with the _[CLS] token embedding_, we found that in our case using _Multi-Head Attention_ gives a more balanced PropMatch performance.
* **RoBERTa vs BERT in generator**: We found that the performance of BERT and RoBERTa is similar on our dataset. However, we decided to use RoBERTa because it has better pretraining performance, which might be helpful when the dataset contains more complex natural language descriptions (the text descriptions in our dataset contain only simple words such as "graph", "node" and "edge").
* _FCN (Fully Connected Network)_ **vs _GCN (Graph Convolution Network)_ in discriminator**: We decide to use FCN instead of GCN because the former gives us a better performance.
* _Adjacency matrix only_ **vs Adjacency matrix and node numbers**: We found that generating only the adjacency matrix and using a heuristic to get the number of nodes gives poor performance compared to GPT-3.5. To mitigate this and to support isolated vertices (e.g. vertices with degree 0), our generator has two output heads: an adjacency-matrix prediction head and a node prediction head (to indicate the active nodes in the output graph).
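Of these choices, the output activation is the easiest to sketch. A minimal numpy rendering of the hard Gumbel-sigmoid sampling step (ours; the trained model would additionally route gradients through the soft value via the straight-through trick, which numpy does not do):

```python
import numpy as np

def hard_gumbel_sigmoid(logits, tau=1.0, rng=None):
    """Sample hard 0/1 adjacency entries via a relaxed Bernoulli."""
    rng = rng or np.random.default_rng(0)
    # The difference of two independent Gumbel(0,1) samples is Logistic(0,1),
    # so sigmoid((logits + g) / tau) is a Gumbel-sigmoid (relaxed Bernoulli) sample.
    g = rng.gumbel(size=np.shape(logits)) - rng.gumbel(size=np.shape(logits))
    soft = 1.0 / (1.0 + np.exp(-(np.asarray(logits) + g) / tau))
    return (soft > 0.5).astype(float)  # hard 0/1 entries for the adjacency matrix
```

Large positive logits give entries that are almost surely 1, large negative logits give 0, and logits near zero are sampled stochastically, with the temperature `tau` controlling the sharpness of the relaxation.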
We can see that even when we shuffle the positions of the nodes and edges in the description, our model gives a similar performance. Similarly, when we use text in the description as opposed to numbers (e.g. "two" instead of "2"), we are still able to perform at a similar level. This seems to indicate that the language model is appropriately embedding the information from the description into the feature vector.
### LIC-GAN Final Results
For our final results, we train the models for \(100\) epochs with a constant \(2\times 10^{-4}\) learning rate for the generator and discriminator and an Adam optimizer. The results can be found in Table 3, with a more detailed version in Tables 11 and 12 in the Appendix. While the reward helps for the complex dataset (especially for the node and edge match), it does not aid training on the simple dataset as seen in Figures 4 and 5.
It is observed that, while there is certainly a drop in performance in PropMatch and Closeness as we increase \(n\) for both the simple and the complex dataset, the drop is significantly smaller than the one for GPT-3.5. This seems to imply that our LIC-GAN model is likelier to scale to larger graphs, unlike the GPT-3.5 model.
\begin{table}
\begin{tabular}{l|c|c c c c c} \hline \hline Dataset & Metric & Overall & \(n\in[1,5]\) & \(n\in[6,10]\) & \(n\in[11,25]\) & \(n\in[26,50]\) \\ \hline \hline \multirow{2}{*}{Simple} & PropMatch & 0.423 & 0.933 & 0.635 & 0.522 & 0.097 \\ & Closeness & 0.468 & 0.958 & 0.697 & 0.542 & 0.154 \\ \hline \multirow{2}{*}{Complex} & PropMatch & 0.397 & 0.528 & 0.514 & 0.462 & 0.233 \\ & Closeness & 0.415 & 0.610 & 0.564 & 0.458 & 0.237 \\ \hline \hline \end{tabular}
\end{table}
Table 1: GPT-3.5 Performance on the Simple and Complex Dataset. We have split the dataset into 4 buckets based on the graph complexity and show the results across these buckets in the last 4 columns
\begin{table}
\begin{tabular}{l|l|l|c|c|c|c} \hline \hline \(\lambda_{\text{rew}}\) & LM & Other Notes & PropMatch & Closeness & Node PM & Edge PM \\ \hline \hline
0 & BERT & & 0.2444 & 0.3287 & 0.3607 & 0.1281 \\
0.5 & BERT & & 0.2397 & 0.3251 & 0.3578 & 0.1215 \\
0.5 & BERT & Text input & 0.2231 & 0.3235 & 0.3211 & 0.1251 \\ \hline
0 & RoBERTa & & 0.2437 & 0.3276 & 0.3159 & 0.1235 \\
0 & RoBERTa & _[CLS]_ tokens & 0.2668 & 0.3391 & 0.4587 & 0.0751 \\
0.5 & RoBERTa & _[CLS]_ tokens & 0.2405 & 0.3220 & 0.3614 & 0.1196 \\
0.5 & RoBERTa & gcns & 0.1928 & 0.2898 & 0.2676 & 0.1181 \\ \hline
0.5 & BERT & Invert \(n\) and \(m\) & 0.2228 & 0.314 & 0.3397 & 0.1058 \\
0.5 & BERT & Shuffle \(n\) and \(m\) & 0.2139 & 0.3135 & 0.3007 & 0.1271 \\
0.5 & RoBERTa & _[CLS]_ + Invert \(n\) and \(m\) & 0.2241 & 0.3135 & 0.3595 & 0.0887 \\
0.5 & RoBERTa & _[CLS]_ + Shuffle \(n\) and \(m\) & 0.2244 & 0.3155 & 0.3371 & 0.1117 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Initial results of LIC-GAN on the simple dataset. LM represents the Language model used on the text description. Node PM and Edge PM represent the property match obtained when we just consider \(P_{i}=\{\)Node\(\}\) and \(P_{i}=\{\)Edge\(\}\) respectively for all \(i=1\) to \(N\).
## 4 Discussion and Analysis
In this section, we run two experiments as an ablation study of different parts of our generator, e.g., the language module and the graph-generation module. The _shuffled numbers_ experiment tries to demonstrate that the language model is extracting crucial information from the textual description. The _no nodes and edges_ experiment tries to show that the graph-generation module can generate valid graphs even without explicit constraints on the number of nodes and edges.
### The no nodes and edges experiments
We verified that the LIC-GAN model is not simply generating many candidate graphs with matching \(n\) and \(m\) and randomly suggesting graphs whose properties happen to be close enough. To test this hypothesis, we zero-shot tested a variant of the complex dataset where we randomly choose \(2-7\) properties (this time, however, we do not compulsorily include nodes and edges in the description). As can be seen from Figure 6 and from Table 13, we perform slightly worse on each metric. Nevertheless, we are able to maintain a similar performance, which seems to imply that our language model has truly learned the meaning of each of those properties. In fact, this variation of the complex dataset is what one might want to achieve in practice (without having to compulsorily provide any particular property).
However, when we directly train the model on this version of the dataset, we achieve a worse performance compared to the zero-shot evaluation after training on the complex dataset. This validates the importance of the implicit (and explicit) reward of preserving one or more easier properties in the description, which guides the training to a better solution.

Figure 6: Plots of the performance on some metrics. The 4 experiments are 1) **Random**: we zero-shot test our trained model on descriptions which do not necessarily contain nodes/edges, 2) **AnyProp**: we train a new model on descriptions that do not necessarily contain nodes/edges during train/test, 3) **Shuffled**: we zero-shot test descriptions containing only shuffled numeric values, 4) **Complex**: the LIC-GAN model's performance on the test set

Figure 7: The values of the various metrics as we increase the number of properties in the description during zero-shot evaluation
We also performed another experiment where we zero-shot test a randomly chosen set of properties and progressively increase the number of properties that we choose. The results can be found in Figure 7. These results seem counter-intuitive: for a person, coming up with a graph that satisfies a single given property is much simpler than satisfying a description with a larger number of properties. However, this likely signals that our model can scale fairly well as we keep increasing the number of properties. The detailed table of results can be found in the Appendix in Table 14.
### The shuffled numbers experiment
We first show that our language model is able to extract information from the textual description. We performed the following experiment: for every textual description such as "10 nodes, 20 edges and minimum degree 1", we instead gave it the description "20 1 10", i.e. we removed the property names and shuffled the order of the input sequence. While we expected a significant decrease in performance, as we made the task significantly harder, there is not a significant difference in performance, as can be seen from Figure 6. However, we do notice that the biggest performance drop is in Node's PropMatch. The detailed results can be found in Table 13 in the Appendix.
We hypothesize that this might be caused by the fact that Node and Edge are always present in the textual description (as opposed to the other properties) and that we have Edge \(\geq\) Node in most cases (with the other properties all being less than Node). This allows the model to infer that the maximum number in the property vector is the required number of Edges, and the second largest number is the required number of Nodes. This explains why the sharpest drop in PropMatch is observed for Node. The heatmap showing the probabilities of orderings of properties can be found in the Appendix in Figure 8. While we observe a similar confusion between MaxDeg and Diam, we believe that there is no significant performance drop there because both of these properties being present in the same description is an unlikely event in our dataset (the probability is around \(30\%\)). We also note that the most common ordering (Edge \(\geq\) Node \(\geq\) MaxDeg \(\geq\) Diam \(\geq\) CCNum \(\geq\) MinDeg) appears around \(\sim 60\%\) of the time in our dataset.
## 5 Conclusion
We studied the generation of natural-language-conditioned graphs using GANs; however, our work raises several open questions. While we have primarily worked on generating the graph given the description, both models (LIC-GAN and GPT-3.5) knew that such a graph always existed. An interesting direction would be to have another model predict whether there exists even a single graph that satisfies all of the properties in the description. An example could be a description of the form "A simple undirected graph with 3 nodes and 8 edges". In conjunction with our model, it could potentially yield a better language-conditioned graph generator.
Another possible extension of our work is to make the dataset harder, such as generating directed graphs and related properties, or generating weighted graphs that satisfy some complex property like the value of a min-cut. We can also make the undirected graph generation task harder by adding more advanced graph-theoretic properties (such as planarity, connectivity, etc.).
#### Access to Code
All of the code for the project can be found here. |
2305.18936 | The Isomorphism Problem of Power Graphs and a Question of Cameron | The isomorphism problem for graphs (GI) and the isomorphism problem for
groups (GrISO) have been studied extensively by researchers. The current best
algorithms for both these problems run in quasipolynomial time. In this paper,
we study the isomorphism problem of graphs that are defined in terms of groups,
namely power graphs, directed power graphs, and enhanced power graphs. It is
not enough to check the isomorphism of the underlying groups to solve the
isomorphism problem of such graphs as the power graphs (or the directed power
graphs or the enhanced power graphs) of two nonisomorphic groups can be
isomorphic. Nevertheless, it is interesting to ask if the underlying group
structure can be exploited to design better isomorphism algorithms for these
graphs. We design polynomial time algorithms for the isomorphism problems for
the power graphs, the directed power graphs and the enhanced power graphs
arising from finite nilpotent groups. In contrast, no polynomial time algorithm
is known for the group isomorphism problem, even for nilpotent groups of class
2.
We note that our algorithm does not require the underlying groups of the
input graphs to be given. The isomorphism problems of power graphs and enhanced
power graphs are solved by first computing the directed power graphs from the
input graphs. The problem of efficiently computing the directed power graph
from the power graph or the enhanced power graph is due to Cameron [IJGT'22].
Therefore, we give a solution to Cameron's question. | Bireswar Das, Jinia Ghosh, Anant Kumar | 2023-05-30T10:54:47Z | http://arxiv.org/abs/2305.18936v2 | # The Isomorphism Problem of Power Graphs and a Question of Cameron
###### Abstract
The isomorphism problem for graphs (GI) and the isomorphism problem for groups (GrISO) have been studied extensively by researchers. The current best algorithms for both these problems run in quasipolynomial time. In this paper, we study the isomorphism problem of graphs that are defined in terms of groups, namely power graphs, directed power graphs, and enhanced power graphs. It is not enough to check the isomorphism of the underlying groups to solve the isomorphism problem of such graphs as the power graphs (or the directed power graphs or the enhanced power graphs) of two nonisomorphic groups can be isomorphic. Nevertheless, it is interesting to ask if the underlying group structure can be exploited to design better isomorphism algorithms for these graphs. We design polynomial time algorithms for the isomorphism problems for the power graphs, the directed power graphs and the enhanced power graphs arising from finite nilpotent groups. In contrast, no polynomial time algorithm is known for the group isomorphism problem, even for nilpotent groups of class 2.
We note that our algorithm does not require the underlying groups of the input graphs to be given. The isomorphism problems of power graphs and enhanced power graphs are solved by first computing the directed power graphs from the input graphs. The problem of efficiently computing the directed power graph from the power graph or the enhanced power graph is due to Cameron [14]. Therefore, we give a solution to Cameron's question.
Keywords and phrases: Graph Isomorphism, Graphs defined on Groups, Power Graph, Enhanced Power Graph, Directed Power Graph, Nilpotent Groups
## 1 Introduction
Given two graphs as input, the graph isomorphism problem (GI) is to check if the graphs are isomorphic. Despite extensive research, the complexity status of the graph isomorphism problem is still open. The best-known algorithm for GI is due to Babai, and the runtime of the algorithm is quasipolynomial [3]. The graph isomorphism problem is in NP but it is very unlikely to be NP-hard as the problem is also in coAM [6].
Efficient algorithms are known for several restricted graph classes, for example, graphs with bounded degree [21, 15], graphs with bounded eigenvalue multiplicity [4], graphs with bounded tree-width [5, 14, 18, 20], graphs with bounded rank-width and clique-width [13, 24].
In this paper, we study the isomorphism problem of graphs defined on finite groups. More precisely, we study the class of power graphs, directed power graphs, and enhanced power graphs. For two elements \(x\) and \(y\) in a finite group \(G\), we say that \(y\) is a power of \(x\) if \(y=x^{i}\) for some integer \(i\). For a group \(G\), the vertex set of the _power graph_\(\operatorname{\mathsf{Pow}}(G)\) of \(G\) consists of the elements of \(G\). Two vertices \(x\) and \(y\) are adjacent in \(\operatorname{\mathsf{Pow}}(G)\) if \(x\) is a power of \(y\) or \(y\) is a power of \(x\). We call \(G\) the _underlying group_ of \(\operatorname{\mathsf{Pow}}(G)\). The definitions of directed power graphs and enhanced power graphs can be found in Section 2.
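For a concrete instance (our illustration, not from the paper), consider the cyclic group \(\mathbb{Z}_{n}\) written additively: \(j\) is a power of \(i\) exactly when \(j\) is a multiple of \(i\) modulo \(n\), which gives a direct construction of \(\operatorname{\mathsf{Pow}}(\mathbb{Z}_{n})\):

```python
def power_graph_edges(n):
    """Edge set of Pow(Z_n) for the cyclic group Z_n written additively."""
    powers = {i: {(i * k) % n for k in range(n)} for i in range(n)}
    return {(i, j) for i in range(n) for j in range(i + 1, n)
            if j in powers[i] or i in powers[j]}

# For a prime p, every nonzero element generates Z_p, so Pow(Z_p) is complete:
print(len(power_graph_edges(5)))  # 10 = C(5, 2)
```

For \(n=6\), vertices 2 and 3 are not adjacent (neither is a multiple of the other modulo 6), so the power graph is no longer complete.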
Kelarev and Quinn defined the concept of directed power graphs of semigroups [17]. Power graphs were defined by Chakrabarty et al. [10], again for semigroups. The paper by Cameron [8] discusses several graph classes defined in terms of groups and surveys many interesting results on these graphs. Kumar et al. [19] gave a survey on the power graphs of finite groups. Questions related to the isomorphism of graphs defined on groups have also been studied [2, 11].
Our motivation for studying the isomorphism of graphs defined in terms of groups is to explore if the group structure can be exploited to give efficient algorithms for the isomorphism problems of these graphs. There are two versions of the isomorphism problem for each class of graphs defined on groups. For example, let us consider the case for the class of power graphs. In the first version of the problem, two groups \(G_{1}\) and \(G_{2}\) are given by their Cayley tables, and the task is to check if \(\operatorname{\mathsf{Pow}}(G_{1})\) is isomorphic to \(\operatorname{\mathsf{Pow}}(G_{2})\). In the second version, two power graphs \(\Gamma_{1}\) and \(\Gamma_{2}\) are given and we need to check if \(\Gamma_{1}\) is isomorphic to \(\Gamma_{2}\).
In the first version of the problem, it is tempting to use the isomorphism of the underlying groups in the hope that it might yield an easier1 quasipolynomial-time algorithm, because, unlike for graphs, the quasipolynomial-time isomorphism algorithm for groups attributed to Tarjan by Miller [22] is much simpler. However, we note that it is not enough to check the isomorphism of the underlying groups, as two nonisomorphic groups can have isomorphic power graphs. To see this, we can take the elementary abelian group of order 27 and the non-abelian group of order 27 with exponent 3 ([9]). In general, consider the power graphs of two nonisomorphic groups of order \(p^{i}\) for any \(i\geqslant 2\) and exponent \(p\) for some prime \(p\). One can check that the power graphs are isomorphic while the groups are not.
Footnote 1: compared to Babai’s quasipolynomial time isomorphism algorithm.
The second version looks more challenging since we do not have the underlying groups. In this paper, we show that the isomorphism problem of power graphs of nilpotent groups can be tested in polynomial time even in the second version of the isomorphism problem (see Section 6). Thus, we do not need the underlying groups to be given. In contrast, the group isomorphism problem for nilpotent groups, even for class 2, is still unresolved and is considered a hard instance for the group isomorphism problem [12].
Our algorithm for solving the isomorphism problem of power graphs works by first computing the directed power graphs of the input power graphs. Next, we use the algorithm for the isomorphism problem of directed power graphs for nilpotent groups that we design in Section 6.
The question of efficiently computing the direct power graph from the power graph (or the enhanced power graph) was asked by Cameron [8]: "Question 2: Is there a simple algorithm for constructing the directed power graph or the enhanced power graph from the
power graph, or the directed power graph from the enhanced power graph?" In this paper, we solve Cameron's question positively (see Section 7).
Zahirović et al. [28] and Cameron [7] proved that for two groups, the power graphs are isomorphic if and only if the directed power graphs of the groups are isomorphic, if and only if the enhanced power graphs of the groups are isomorphic. Our solution to Cameron's question provides an algorithmic proof of this result.
One of the main contributions of the paper is the study of minimal cyclic covers of a group (Section 3) and of the structure of closed-twins in certain subgraphs of power graphs (Section 4). These results are at the heart of the solution to Cameron's question and may also be of independent interest. We also note that the solution to Cameron's question is crucial for solving the isomorphism problem of power graphs in our paper.
The other main contribution of the paper is a collection of reduction rules that can be used on directed power graphs to simplify its structure while retaining the properties that are isomorphism invariant (see Section 6).
## 2 Preliminaries
Let \(X\) be a simple graph, where \(V(X)\) denotes the vertex set of \(X\) and \(E(X)\) denotes the edge set of \(X\). We refer the reader to the textbook by Douglas West [27] for basic definitions and notations from graph theory. A _subgraph_ of \(X\) is a graph \(Y\), where \(V(Y)\subseteq V(X)\) and \(E(Y)\subseteq E(X)\). Let \(S\subseteq V(X)\). Then the subgraph with the vertex set \(S\) and all edges in \(E(X)\) such that both endpoints are in \(S\) is called the _induced subgraph_ of \(X\) on \(S\) and it is denoted by \(X[S]\).
The set of vertices adjacent to a vertex \(u\) in an undirected graph \(X\) is called the open neighborhood of \(u\) in \(X\) and is denoted by \(N_{X}(u)\). The cardinality of \(N_{X}(u)\) is called the _degree_ of \(u\) in \(X\), denoted by \(deg_{X}(u)\). The _closed neighborhood_ of a vertex \(u\) in \(X\) is denoted by \(N_{X}[u]\) and defined by \(N_{X}[u]=N_{X}(u)\cup\{u\}\). Two vertices in \(X\) are called the _closed-twins_ in \(X\) if their closed neighborhoods in \(X\) are the same.
For a directed graph \(X\) (with no multiple edges), the _out-neighborhood_ of a vertex \(u\) in \(X\) is the set \(\{v\in V(X)\ :\ (u,v)\ \in E(X)\}\) and _out-degree_ of \(u\) in \(X\), denoted by \(\text{out-deg}_{X}(u)\), is the size of the out-neighborhood of \(u\) in \(X\). Similarly, the _in-neighborhood_ of a vertex \(u\) in \(X\) is the set \(\{v\in V(X)\ :\ (v,u)\ \in E(X)\}\) and _in-degree_ of \(u\) in \(X\), denoted by \(\text{in-deg}_{X}(u)\), is the size of the in-neighborhood of \(u\) in \(X\).2 Two vertices in a directed graph \(X\) are called the _closed-twins_ in \(X\) if their closed-out-neighborhoods in \(X\) are the same and also the closed-in-neighborhoods in \(X\) are the same. An edge of the form \((u,u)\) in a directed graph is called a _self-loop_.
Footnote 2: When the graph is clear from the context, we drop the suffixes.
In any graph \(X\), the _closed-twin-class_ of a vertex \(u\) in \(X\) is the set of all closed-twins of \(u\) in \(X\).
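Closed-twin-classes are computed repeatedly later in the paper. As a minimal illustrative sketch (ours, not from the paper), the partition can be obtained by bucketing vertices on their closed neighborhoods:

```python
def closed_twin_classes(adj):
    """Partition the vertices of a simple graph, given as a dict
    vertex -> set of neighbours, into closed-twin-classes.

    Two vertices are closed-twins exactly when their closed
    neighbourhoods N[u] = N(u) ∪ {u} coincide, so N[u] serves as a key.
    """
    buckets = {}
    for u, nbrs in adj.items():
        key = frozenset(nbrs) | frozenset({u})  # hashable N[u]
        buckets.setdefault(key, []).append(u)
    return list(buckets.values())
```

In a triangle all three vertices form a single class; on a path of length two every vertex is in its own class.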
Two graphs \(X\) and \(Y\) are called _isomorphic_ if and only if there exists a bijection \(f\) from \(V(X)\) to \(V(Y)\) such that \(\{u,v\}\in E(X)\) if and only if \(\{f(u),f(v)\}\in E(Y)\). Moreover, if \(X\) and \(Y\) are vertex-colored, then an isomorphism \(f\) is called a _color preserving isomorphism_ if for all \(u\in V(X)\), the color of \(u\) and the color of \(f(u)\) are the same.
In this paper, if the underlying graphs are colored, then by isomorphism we mean color preserving isomorphism only.
**Definition 2** (see for example [16]). _Let \(X\) and \(Y\) be two graphs. The_ strong product _\((X\boxtimes Y)\) of \(X\) and \(Y\) is the graph with the vertex set \(V(X)\times V(Y)\), where distinct vertices \((u,u^{\prime})\) and \((v,v^{\prime})\) are adjacent in \(X\boxtimes Y\) if and only if one of the following holds:_
1. \(u=v\) and \(u^{\prime}\) is adjacent to \(v^{\prime}\).
2. \(u^{\prime}=v^{\prime}\) and \(u\) is adjacent to \(v\).
3. \(u\) is adjacent to \(v\) and \(u^{\prime}\) is adjacent to \(v^{\prime}\).
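The three adjacency cases translate directly into code. Here is a small sketch of ours (not part of the paper) that builds the strong product of two graphs given as adjacency dictionaries:

```python
def strong_product(adj_x, adj_y):
    """Strong product of two simple graphs given as dicts vertex -> neighbour set."""
    verts = [(u, up) for u in adj_x for up in adj_y]
    prod = {w: set() for w in verts}
    for (u, up) in verts:
        for (v, vp) in verts:
            if (u, up) == (v, vp):
                continue
            adj_first = v in adj_x[u]        # u adjacent to v in X
            adj_second = vp in adj_y[up]     # u' adjacent to v' in Y
            # the three cases of the definition
            if (u == v and adj_second) or (up == vp and adj_first) \
                    or (adj_first and adj_second):
                prod[(u, up)].add((v, vp))
    return prod
```

For example, the strong product of two single edges \(K_{2}\boxtimes K_{2}\) is the complete graph \(K_{4}\).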
**Definition 3** (see for example [25]). _Vertex identification of a pair of vertices \(v_{1}\) and \(v_{2}\) of a graph is the operation that produces a graph in which the two vertices \(v_{1}\) and \(v_{2}\) are replaced with a new vertex \(v\) such that \(v\) is adjacent to the union of the vertices to which \(v_{1}\) and \(v_{2}\) were originally adjacent. In vertex identification, it does not matter whether \(v_{1}\) and \(v_{2}\) are connected by an edge or not._
The basic definitions and properties from group theory can be found in any standard book (see, for example, [26]). All the groups considered in this paper are finite. A subset \(H\) of a group \(G\) is called a _subgroup_ of \(G\) if \(H\) forms a group under the binary operation of \(G\); it is denoted by \(H\leqslant G\).
The number of elements in \(G\) is called the _order_ of the group and it is denoted by \(|G|\). The _order_ of an element \(g\) in \(G\), denoted by \(o(g)\), is the smallest positive integer \(m\) such that \(g^{m}=e\), where \(e\) is the identity element. The set \(\{g,g^{2},g^{3},\ldots,g^{m-1},e\}\) is the set of all group elements that are _generated_ by \(g\), where \(m=o(g)\). Moreover, this set forms a subgroup of \(G\) and is called the _cyclic subgroup_ generated by \(g\) and denoted by \(\langle g\rangle\). The number of generators of a cyclic subgroup \(\langle g\rangle\) is \(\phi(o(g))\). A group \(G\) is called _cyclic_ if \(G=\langle g\rangle\), for some \(g\in G\). In a finite cyclic group \(G\), for any factor \(m\) of \(|G|\), \(G\) has a unique subgroup of order \(m\). This is known as the converse of Lagrange's theorem for finite cyclic groups.
A group \(G\) is called a \(p\)-_group_ if the order of each element is some power of \(p\), where \(p\) is a prime. For a prime \(p\), if \(p^{m}\) is the highest power of \(p\) such that \(p^{m}\) divides \(|G|\), then a subgroup \(H\leqslant G\) with the property \(|H|=p^{m}\) is called a _Sylow \(p\)-subgroup_ of \(G\). The _direct product_ of two groups \(G\) and \(H\), denoted by \(G\times H\), is the group with elements \((g,h)\) where \(g\in G\) and \(h\in H\). The group operation of \(G\times H\) is given by \((g_{1},h_{1})(g_{2},h_{2})=(g_{1}g_{2},h_{1}h_{2})\), where the co-ordinate wise operations are the group operations of \(G\) and \(H\) respectively. A finite group is called a _nilpotent group_ if it is a direct product of its Sylow-\(p\)-subgroups.
We now give the definitions of graphs defined on groups that we discuss in this paper (see [8]).
**Definition 4**. _The directed power graph of a group \(G\) is a directed graph with vertex set \(G\), where \((x,y)\) is an edge if \(y=x^{m}\) for some integer \(m\). We denote it by \(\operatorname{\textsc{DPow}}(G)\)._ If \((x,y)\) is an edge in \(\operatorname{\textsc{DPow}}(G)\), then \(o(y)\) divides \(o(x)\).
Let \(\operatorname{\mathcal{DPow}}=\{\operatorname{\textsc{DPow}}(G)\ :\ G\text{ is a finite group }\}\).
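When the group is given abstractly (element list, multiplication, identity), the directed power graph can be built by enumerating the powers of each element. A minimal sketch of ours, assuming nothing beyond the definition:

```python
def directed_power_graph(elems, mul, e):
    """Edge set of DPow(G): (x, y) whenever y is a power of x.

    The group is given by its element list `elems`, a multiplication
    function `mul`, and the identity `e`; self-loops (x, x) are included
    since x = x^1.
    """
    edges = set()
    for x in elems:
        powers, y = {e}, x
        while y != e:        # walk through x, x^2, ... until we hit e
            powers.add(y)
            y = mul(y, x)
        edges |= {(x, p) for p in powers}
    return edges
```

On \(\mathbb{Z}_{4}\) (written additively), the generator 1 has an edge to every element, while 2 only reaches 0 and itself.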
**Definition 5**. _The power graph of a group \(G\), denoted by \(\operatorname{\textsc{Pow}}(G)\), is a graph with vertex set \(G\) and with edges of the form \(\{x,y\}\) such that \(x=y^{m}\) or \(y=x^{m}\) for some integer \(m\)._ If \(\{x,y\}\) is an edge in \(\operatorname{\textsc{Pow}}(G)\), then \(o(x)|o(y)\) or \(o(y)|o(x)\).
Let \(\operatorname{\mathcal{Pow}}=\{\operatorname{\textsc{Pow}}(G)\ :\ G\text{ is a finite group }\}\).
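For a cyclic group the power graph has a particularly simple description: since \(\mathbb{Z}_{n}\) has a unique subgroup of each order dividing \(n\), the element \(x\) is a power of \(y\) exactly when \(o(x)\) divides \(o(y)\). The following sketch (ours, for illustration only) builds \(\mathsf{Pow}(\mathbb{Z}_{n})\) from this observation:

```python
from math import gcd

def cyclic_power_graph(n):
    """Adjacency sets of Pow(Z_n).

    In Z_n the element x has order n/gcd(n, x), and x is a power of y
    exactly when o(x) divides o(y) (unique subgroup per order).
    """
    order = lambda x: n // gcd(n, x) if x else 1
    adj = {x: set() for x in range(n)}
    for x in range(n):
        for y in range(x + 1, n):
            if order(x) % order(y) == 0 or order(y) % order(x) == 0:
                adj[x].add(y)
                adj[y].add(x)
    return adj
```

For a prime power \(n\) the resulting graph is complete, while in \(\mathsf{Pow}(\mathbb{Z}_{6})\) the elements of orders 2 and 3 are non-adjacent.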
**Definition 6**.: _The enhanced power graph of a group \(G\), denoted \(EPow(G)\), is a graph with vertex set \(G\), in which two vertices \(x\) and \(y\) are adjacent if they are in the same cyclic subgroup of \(G\), i.e., there exists \(z\) in \(G\) such that \(x,y\in\langle z\rangle\)._
Let \(\mathcal{EPow}=\{\mathsf{EPow}(G)\ :\ G\text{ is a finite group }\}\).
## 3 Cyclic cover of a group and its properties
**Definition 7**.: _We say that a proper cyclic subgroup \(C\) of \(G\) is a maximal cyclic subgroup if for all cyclic subgroups \(C^{\prime}\), \(C\leqslant C^{\prime}\) implies \(C=C^{\prime}\) or \(C^{\prime}=G\)._
**Definition 8**.: _Let \(G\) be a finite group. Let \(C_{1},C_{2},\ldots,C_{m}\) be a set of the cyclic subgroups of \(G\). We say that \(C_{1},C_{2},\ldots,C_{m}\) is a minimal cyclic cover if \(G=\cup_{i=1}^{m}C_{i}\) and \(\cup_{i\neq j}C_{i}\neq G\) for all \(j=1,\ldots,m\)._
**Lemma 9**.: _If \(G\) is a cyclic group, then \(C=G\) is the only minimal cyclic cover; otherwise, the set of all maximal cyclic subgroups of \(G\) forms the unique minimal cyclic cover._
Proof. The case when \(G\) is a cyclic group is easy. Assume that \(G\) is not cyclic. Then the set of all maximal cyclic subgroups \(\{C_{1},\ldots,C_{m}\}\) of \(G\) is non-empty. Since for all \(g\) there is a maximal cyclic subgroup containing \(g\), we must have \(G=C_{1}\cup\ldots\cup C_{m}\). If \(G=\cup_{i\neq j}C_{i}\) for some \(j\), then any generator of \(C_{j}\) is in one of the cyclic groups \(C_{1},C_{2},\ldots,C_{j-1},C_{j+1},\ldots,C_{m}\). However, this is not possible. So, \(\{C_{1},\ldots,C_{m}\}\) is a minimal cyclic cover.
Suppose \(\{D_{1},\ldots,D_{k}\}\) is a minimal cyclic cover. Now a generator of a maximal cyclic subgroup \(C_{i}\) is in one of the \(D_{j}\)'s, say \(D_{j}\). This forces \(D_{j}=C_{i}\). Thus, each \(D_{j}\) is a maximal cyclic subgroup. However, as we have seen, a proper subset of \(\{C_{1},\ldots,C_{m}\}\) cannot cover \(G\). Therefore, \(\{D_{1},\ldots,D_{k}\}=\{C_{1},\ldots,C_{m}\}\).
Following the above lemma, we can see that the set of all maximal cyclic subgroups forms the minimum cyclic cover of a non-cyclic group.
**Definition 10**.: _Let \(\{C_{1},\ldots,C_{m}\}\) be the minimum cyclic cover of \(G\). A cyclic group \(C_{i}\) in the minimum cyclic cover is called a covering cycle. For a cyclic group \(C\), let \(gen(C)\) be the set of generators of \(C\). An element in \(\cup_{i=1}^{m}gen(C_{i})\) is called a covering cycle generator or CC-generator. We call a set \(\{g_{1},g_{2},\ldots,g_{m}\}\) a covering cycle generating set (CCG-set) if \(\{\langle g_{1}\rangle,\langle g_{2}\rangle,\ldots,\langle g_{m}\rangle\}=\{C_ {1},C_{2},\ldots,C_{m}\}\)._
The above definition includes the case when \(m=1\), i.e., \(G\) is cyclic.
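By Lemma 9, the minimal cyclic cover of a group is the set of its maximal cyclic subgroups (or the group itself when it is cyclic). A minimal sketch of ours, assuming the group is given by elements, multiplication, and identity:

```python
def minimal_cyclic_cover(elems, mul, e):
    """Maximal cyclic subgroups of a finite group, i.e. its minimal cyclic cover.

    Enumerates the cyclic subgroup generated by each element, then keeps
    only those not strictly contained in another cyclic subgroup.  For a
    cyclic group this returns the whole group, matching Lemma 9.
    """
    cyclics = set()
    for g in elems:
        sub, x = {e}, g
        while x != e:            # collect the powers of g
            sub.add(x)
            x = mul(x, g)
        cyclics.add(frozenset(sub))
    return [c for c in cyclics if not any(c < d for d in cyclics)]
```

For the Klein four-group the cover consists of its three subgroups of order 2, whose union is the whole group; for \(\mathbb{Z}_{4}\) the cover is the single cyclic subgroup \(\mathbb{Z}_{4}\) itself.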
## 4 Structure of closed-twins in a power graph
In this section, we explore the structure of closed-twins in the subgraph of a power graph induced by the closed neighborhood of a vertex. As we show in Section 5.1, these structures can be used to find a CCG-set of a group from the corresponding power graph, even when the group is not given.
First, we note an easy fact about the closed-twins in any graph.
**Lemma 11**.: _Let \(X\) be a graph and let \(v\in V(X)\). Suppose \(x\) and \(y\) are closed-twins in \(X\). If \(x\in N[v]\), then \(y\in N[v]\). Moreover, \(x\) and \(y\) are closed-twins in \(X[N[v]]\)._
Let \(G\) be a group. It is easy to see that an element \(x\in G\) and any generator of \(\langle x\rangle\) are closed-twins in \(\Gamma=Pow(G)\). Therefore applying Lemma 11, we have the following corollary:
**Corollary 12**. _Let \(v\in V(\Gamma)\). If \(x\in N[v]\), then all the generators of \(\langle x\rangle\) are in \(N[v]\). Moreover, they are closed-twins in \(\Gamma_{v}\), where \(\Gamma_{v}=\Gamma[N[v]]\)._
Now consider a vertex \(v\in V(\Gamma)\), where \(\Gamma\in\mathcal{Pow}\) and the subgraph \(\Gamma_{v}=\Gamma[N[v]]\) induced on the closed neighborhood of \(v\). For any vertex \(x\) in \(\Gamma_{v}\), \(o(x)|o(v)\) or \(o(v)|o(x)\). We partition \(V(\Gamma_{v})\) according to the order of the vertices in the following way:
\[U_{v} =\{x\in V(\Gamma_{v}):o(x)>o(v)\}\] \[E_{v} =\{x\in V(\Gamma_{v}):o(x)=o(v)\}\] \[L_{v} =\{x\in V(\Gamma_{v}):o(x)<o(v)\}\]
For a vertex \(x\in U_{v}\), we have \(o(v)|o(x)\) and for a vertex \(x\in L_{v}\), we have \(o(x)|o(v)\).
**Definition 13**. _For a prime \(p\), we say that an element \(x\) in a group is a \(p\)-power element if \(o(x)=p^{i}\) for some \(i\geq 0\). We say that \(x\) is a nontrivial \(p\)-power element if \(o(x)=p^{i}\) for some \(i>0\)._
**Lemma 14**. _Suppose \(v\in V(\Gamma)\) is not a \(p\)-power element and \(x\in U_{v}\) is a closed-twin of \(v\) in \(\Gamma_{v}\). Then, \(\deg_{\Gamma}(x)>\deg_{\Gamma}(v)\)._
Proof. In this case, there exist a prime \(q\) and a positive integer \(s\) such that \(q^{s}|o(x)\) but \(q^{s}\nmid o(v)\). Then \(x\) has a neighbor \(z=x^{o(x)/q^{s}}\) of order \(q^{s}\) (by the converse of Lagrange's theorem for finite cyclic groups). Note that \(z\) is not a neighbor of \(v\), as \(o(z)\nmid o(v)\) and also \(o(v)\nmid o(z)\); the latter holds because \(o(v)\) is divisible by at least two distinct primes.
**Lemma 15**. _Let \(v\in V(\Gamma)\) be a CC-generator such that \(o(v)\) is not a prime power. Let \(u\in V(\Gamma_{v})\). If \(u=e\) or \(u\) is a generator of \(\langle v\rangle\), then the closed-twins of \(u\) in \(\Gamma_{v}\) are exactly the generators of \(\langle v\rangle\) and \(e\); otherwise, the closed-twins of \(u\) in \(\Gamma_{v}\) are exactly the generators of \(\langle u\rangle\)._
Proof. Let \(o(v)=p_{1}^{r_{1}}p_{2}^{r_{2}}\dots p_{k}^{r_{k}}\), where \(k\geq 2\). The case when \(u=e\) or \(u\) is a generator of \(\langle v\rangle\) is easy, as \(N[u]=V(\Gamma_{v})\) for any such element. Otherwise, since \(v\) is a CC-generator, \(\langle u\rangle\leq\langle v\rangle\). For \(u\) and \(z\) to be closed-twins, we must have \(u\in\langle z\rangle\) or \(z\in\langle u\rangle\). We show that for \(z\) to be a closed-twin of \(u\), its order must be the same as that of \(u\). We consider the case when \(z\in\langle u\rangle\); the other case can be handled similarly. In this case, we have \(o(z)|o(u)\).
Suppose both \(u\) and \(z\) are \(p\)-power elements for some prime \(p\in\{p_{1},p_{2},\dots,p_{k}\}\). Without loss of generality, assume that \(o(u)=p_{1}^{s_{1}}\) and \(o(z)=p_{1}^{s_{1}^{\prime}}\), where \(s_{1}>s_{1}^{\prime}\). Note that \(r_{1}\geq s_{1}\). In this case, there is an element in \(V(\Gamma_{v})\) of order \(p_{1}^{s_{1}^{\prime}}p_{2}\) which is adjacent to \(z\) but not to \(u\); more precisely, this is the element of \(\langle v\rangle\) of order \(p_{1}^{s_{1}^{\prime}}p_{2}\). So, in this case, \(u\) and \(z\) are not closed-twins in \(\Gamma_{v}\).
Now suppose \(o(u)\) is not a prime power. We first take \(z\) to be non-identity. Let \(o(u)=p_{1}^{s_{1}}\dots p_{k}^{s_{k}}\), where \(k\geq 2\), and let \(o(z)=p_{1}^{s_{1}^{\prime}}\dots p_{k}^{s_{k}^{\prime}}\), where \(s_{j}\geq s_{j}^{\prime}\) for all \(j\). Assume without loss of generality that \(s_{1}>s_{1}^{\prime}\). As \(o(u)\) is not a prime power, we may take \(s_{2}\neq 0\). Now if \(s_{2}^{\prime}=0\), consider an element \(x\) of order \(p_{2}\) in \(\Gamma_{v}\). Then \(x\) is a neighbor of \(u\), but not of \(z\). On the other hand, if \(s_{2}^{\prime}\neq 0\), we take an element \(y\) of order \(p_{1}^{s_{1}}\). Again, \(y\) is a neighbor of \(u\) but not of \(z\). So, in this case also, \(u\) and \(z\) are not closed-twins in \(\Gamma_{v}\).
Now suppose \(o(u)\) is not a prime power, i.e., \(o(u)=p_{1}^{s_{1}}p_{2}^{s_{2}}\cdots p_{k}^{s_{k}}\) where \(k\geq 2\), and \(z\) is the identity. We recall that since \(u\) is not a generator of \(\langle v\rangle\), there exists \(i\) such that \(r_{i}>s_{i}\). We take an element \(x\) of order \(p_{i}^{r_{i}}\) in \(\Gamma_{v}\). One can check that \(x\) is adjacent to \(z\) but not to \(u\). So, here also \(u\) and \(z\) are not closed-twins in \(\Gamma_{v}\).
Note that if \(o(u)=o(z)\), then they are closed-twins in \(\Gamma_{v}\).
**Remark 16**. _If \(a|b\), then (1) \(\phi(a)|\phi(b)\); (2) \(\phi(a)\leq\phi(b)\), and the equality holds only when \(a=b\) or \(b=2a\) for an odd natural number \(a\)._
If \(v\in V(\Gamma)\) is a CC-generator, it is easy to see that \(o(v)=\deg(v)+1=|\Gamma_{v}|\). Suppose \(o(v)\) is not a prime power. Then, by Lemma 15, the set of dominating vertices in \(\Gamma_{v}\) is the set of generators of \(\langle v\rangle\) together with the identity. Thus, the size of the closed-twin-class of \(v\) in \(\Gamma_{v}\) is \(\phi(o(v))+1\), i.e., \(\phi(|\Gamma_{v}|)+1\). Also, for every divisor \(k\) of \(o(v)\) with \(1<k<o(v)\), there is a closed-twin-class of size \(\phi(k)\) in \(\Gamma_{v}\). Therefore, using Lemma 15 and Remark 16, we have the following corollary:
**Corollary 17**. _Let \(v\in V(\Gamma)\) be a CC-generator such that \(o(v)\) is not a prime power. Then the following hold:_
1. The size of the closed-twin-class of \(v\) in \(\Gamma_{v}\) is \(\phi(o(v))+1\) (the set of dominating vertices).
2. For each divisor \(k\) of \(o(v)\), \(1<k<o(v)\), there is a closed-twin-class of size \(\phi(k)\) in \(\Gamma_{v}\). Moreover, \(\phi(k)\) divides \(\phi(o(v))\).
3. There are at most two closed-twin-classes of size greater than or equal to \(\phi(o(v))\).
Proof.
1. Using Lemma 15, the closed-twins of \(v\) in \(\Gamma_{v}\) are the generators of \(\langle v\rangle\) and the identity. This proves our claim. Also, note that these vertices form the set of dominating vertices in \(\Gamma_{v}\).
2. By the converse of Lagrange's theorem for finite cyclic groups, we know that for each divisor \(k\) of \(o(v)\) there exists a unique cyclic subgroup of order \(k\). Also, for each such \(k\) there is a closed-twin-class of size \(\phi(k)\), due to Lemma 15. Moreover, using Remark 16, we can conclude \(\phi(k)|\phi(o(v))\).
3. From (1) and (2), we know that the size of a closed-twin-class in \(\Gamma_{v}\) is either \(\phi(o(v))+1\) or \(\phi(k)\), where \(k\), \(1<k<o(v)\), is a divisor of \(o(v)\). Now using Remark 16, we can see that \(\phi(o(v))=\phi(k)\) if and only if \(o(v)=2\cdot k\) where \(k\) is odd. Thus, there can be at most two closed-twin-classes of size greater than or equal to \(\phi(o(v))\).
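The closed-twin-class sizes described in the corollary above can be checked numerically. The following sketch (ours, not from the paper) computes the class sizes in \(\mathsf{Pow}(\mathbb{Z}_{n})\), using the order-divisibility description of adjacency in a cyclic group:

```python
from math import gcd

def phi(n):
    """Euler's totient, by direct count."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

def twin_class_sizes_cyclic(n):
    """Sorted closed-twin-class sizes in Pow(Z_n).

    In Z_n, x and y are adjacent iff one order divides the other, and
    the closed neighbourhood of x (which contains x itself) is the
    bucketing key for closed-twins.
    """
    order = lambda x: n // gcd(n, x) if x else 1
    classes = {}
    for x in range(n):
        closed_nbhd = frozenset(
            y for y in range(n)
            if order(x) % order(y) == 0 or order(y) % order(x) == 0)
        classes.setdefault(closed_nbhd, []).append(x)
    return sorted(len(c) for c in classes.values())
```

For \(n=12\) the sizes are \(\phi(2),\phi(3),\phi(4),\phi(6)\) and \(\phi(12)+1\), i.e. \(1,2,2,2,5\); for a prime power such as \(n=8\) the whole graph is one class, since it is complete.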
The following theorem is a well-known result [10]. We give a proof for the sake of completeness.
**Theorem 18** ([10]). _Let \(G\) be a finite group. Then, \(\Gamma=\mathsf{Pow}(G)\) is complete if and only if \(G\) is cyclic of prime power order._
Proof. If \(G=\mathbb{Z}_{p^{m}}\), then \(\Gamma\) is complete.
For the other direction, if \(G\) is not cyclic then the cyclic cover of \(G\) has at least two maximal cyclic subgroups \(\langle g_{1}\rangle\) and \(\langle g_{2}\rangle\). If \(x\) and \(y\) are generators of \(\langle g_{1}\rangle\) and \(\langle g_{2}\rangle\) respectively, then they are not adjacent. Therefore, we can assume that \(G=\mathbb{Z}_{m}\) for some \(m\) such that \(m\) is
not a prime power. Now, let \(v\) be a generator of \(\mathbb{Z}_{m}\). Let \(u_{1}\) and \(u_{2}\) be two non-generator elements of \(G\) with different orders. Then, by Lemma 15, \(u_{1}\) and \(u_{2}\) are not closed-twins in \(\Gamma_{v}=\Gamma\). Therefore, \(\Gamma\) is not complete.
From the above theorem, the following corollary is immediate.
**Corollary 19**.: _Let \(v\in V(\Gamma)\) be a \(p\)-power element for some prime \(p\). Then, \(\Gamma[E_{v}\cup L_{v}]\) is a complete graph. Moreover, the elements of \(E_{v}\cup L_{v}\) are closed-twins of \(v\) in \(\Gamma_{v}\)._
**Lemma 20**. _Let \(v\in V(\Gamma)\) be a nontrivial \(p\)-power element that is not a CC-generator. Suppose that for every \(u\in U_{v}\) that is a closed-twin of \(v\) in \(\Gamma_{v}\), \(\deg_{\Gamma}(u)\) is at most \(\deg_{\Gamma}(v)\). Let \(y\) be a closed-twin of \(v\) in \(\Gamma_{v}\) of maximum order, and let \(S\) denote the set \(\{x\in V(\Gamma_{v}):o(y)|o(x)\text{ and }o(x)\neq o(y)\}\). Then,_
1. _The closed-twins of_ \(v\) _are exactly the elements in_ \(\langle y\rangle\)_._
2. \(V(\Gamma_{v})=\langle y\rangle\sqcup S\)_, where_ \(\sqcup\) _denotes the disjoint union._
3. _Moreover, if_ \(o(y)=p^{j}\) _where_ \(j\geqslant 2\)_, then_ \(p\) _divides_ \(|V(\Gamma_{v})|\)_._
Proof. Suppose \(o(v)=p^{i}\). From Corollary 19, we know that the elements in \(E_{v}\) and \(L_{v}\) are closed-twins of \(v\). Observe that these elements have order \(p^{r}\) for some \(r\leqslant i\). Next, we show that all closed-twins of \(v\) in \(U_{v}\) have order \(p^{l}\) for some \(l>i\). Suppose not, and let \(u\) be a closed-twin of \(v\) in \(U_{v}\) whose order is not a power of \(p\). As \(u\in U_{v}\), \(p^{i}\) divides \(o(u)\), so \(o(u)=k\cdot p^{i^{\prime}}\), where \(i^{\prime}\geqslant i\), \(k>1\) and \(\gcd(k,p)=1\). Since \(u\) is a closed-twin of \(v\), \(|\Gamma_{v}|-1=\deg_{\Gamma_{v}}(u)=\deg_{\Gamma_{v}}(v)=\deg_{\Gamma}(v)\). Now, \(\langle u\rangle\) has an element of order \(k\), and this element cannot be a neighbor of \(v\). So, \(\deg_{\Gamma}(u)>\deg_{\Gamma_{v}}(u)=\deg_{\Gamma}(v)\), contradicting the hypothesis of the lemma. Hence, \(o(u)=p^{l}\) for some \(l>i\).
Given that \(y\) is the closed-twin of \(v\) in \(\Gamma_{v}\) with maximum order, say \(p^{j}\). Suppose \(z\in\langle y\rangle\). If \(y\in E_{v}\cup L_{v}\), then clearly \(\langle y\rangle=\langle v\rangle\) (because \(y\) cannot be in \(L_{v}\)). If \(y\in U_{v}\), then noting that \(\deg_{\Gamma}(y)\leqslant\deg_{\Gamma}(v)\) and \(y\) is a closed-twin of \(v\) in \(\Gamma_{v}\), we can say that \(z\) is in \(\Gamma_{v}\). In both the cases \(\langle y\rangle\subseteq V(\Gamma_{v})\). We show that every vertex \(w\in V(\Gamma_{v})\) is adjacent to \(z\). Since \(w\in V(\Gamma_{v})\) and \(y\) is a closed-twin of \(v\), there is an edge between \(w\) and \(y\). So, \(o(y)|o(w)\) or \(o(w)|o(y)\). In the first case, \(z\in\langle y\rangle\subseteq\langle w\rangle\). On the other hand, if \(o(w)|o(y)\), then \(w\in\langle y\rangle\). So either \(z\in\langle w\rangle\) or \(w\in\langle z\rangle\) as \(\langle y\rangle\) is a cyclic group of prime power order. In any case, \(\{w,z\}\) is an edge. So, any element \(z\) in \(\langle y\rangle\) is a closed-twin of \(v\).
Let \(z\) be a closed-twin of \(v\). If \(z\in\langle v\rangle\), then \(z\in\langle y\rangle\). On the other hand, if \(z\notin\langle v\rangle\), then \(z\in U_{v}\), and therefore \(o(z)\) is a power of \(p\). As \(y\) is a closed-twin of \(v\), there is an edge between \(y\) and \(z\), so \(z\in\langle y\rangle\) or \(y\in\langle z\rangle\). If \(y\in\langle z\rangle\), then since \(o(z)\leqslant o(y)\) (both are \(p\)-power-order closed-twins of \(v\), and \(y\) has maximum order among them), we must have \(\langle y\rangle=\langle z\rangle\). Thus, \(z\in\langle y\rangle\) in both cases.
This completes the proof of part (1). Now, we prove part (2).
Let \(x\in V(\Gamma_{v})\setminus\langle y\rangle\). In this case, \(x\notin E_{v}\cup L_{v}\) by Corollary 19. Since \(y\) is a closed-twin of \(v\), \(\{x,y\}\) is an edge. Therefore, \(x\in\langle y\rangle\) or \(y\in\langle x\rangle\). However, by assumption, \(x\notin\langle y\rangle\). So, \(y\in\langle x\rangle\). Therefore, \(o(y)|o(x)\). So, \(o(x)=p^{j}\cdot k\) for some \(k>1\).
Therefore, \(V(\Gamma_{v})=\langle y\rangle\sqcup\{x\in V(\Gamma_{v}):p^{j}|o(x)\text{ and }o(x)\neq p^{j}\}\), i.e., \(V(\Gamma_{v})=\langle y\rangle\sqcup S\).
To prove part (3), we define an equivalence relation \(\equiv\) on \(S\) as follows: \(x_{1}\equiv x_{2}\), if and only if \(\langle x_{1}\rangle=\langle x_{2}\rangle\). Note that the generators of \(\langle x_{1}\rangle\) are in \(S\) by Corollary 12. Therefore, the
equivalence class of any vertex \(x\in S\) has size \(\phi(o(x))\). Recall that \(o(x)=o(y)\cdot k=p^{j}\cdot k\) for some \(k>1\). Since \(j\geq 2\), \(p\) divides \(\phi(o(x))\), so \(p\) divides \(|S|\). As \(p\) also divides \(|\langle y\rangle|=p^{j}\), it follows that \(p\) divides \(|V(\Gamma_{v})|\), as claimed.
## 5 Finding a CCG-set of a group from its power graph and enhanced power graph
Given a directed power graph, the set of vertices corresponding to a CCG-set \(\{g_{1},\ldots,g_{m}\}\) of the underlying group \(G\) can be readily found in the graph. The scenario changes when the input graph is a power graph or an enhanced power graph and the underlying group is not given directly. Then, it is not possible to recognise these vertices exactly in the input graph, as we cannot distinguish two closed-twins \(g_{i}\) and \(g_{i}^{\prime}\) in \(\mathsf{Pow}(G)\) (or \(\mathsf{EPow}(G)\)). For example, if we take \(\mathbb{Z}_{p^{m}}\) for some prime \(p\) and integer \(m\), then \(\mathsf{Pow}(\mathbb{Z}_{p^{m}})\) is a clique (Theorem 18). If the vertices of \(\mathsf{Pow}(\mathbb{Z}_{p^{m}})\) are named arbitrarily, then it is not possible to distinguish a generator of \(\mathbb{Z}_{p^{m}}\) from any other vertex. Fortunately, the fact that the underlying group is \(\mathbb{Z}_{p^{m}}\) can be concluded just from the graph by Theorem 18.
Therefore, we aim to do the following: Given a power graph (or an enhanced power graph) \(\Gamma\), mark a set \(\{g_{1},g_{2},\ldots,g_{m}\}\) of vertices such that (1) each \(g_{i}\) is a CC-generator or \(g_{i}\) is a closed-twin of a CC-generator \(g_{i}^{\prime}\) in the graph \(\Gamma\), and (2) \(\{h_{1},h_{2},\ldots,h_{m}\}\) is a CCG-set where \(h_{i}=g_{i}\), if \(g_{i}\) is a CC-generator; otherwise, \(h_{i}=g_{i}^{\prime}\).
In Section 5.1, we find the vertices corresponding to a CCG-set of the underlying group of the power graph. The process of finding a CCG-set for the underlying group of an enhanced power graph is discussed in Section 5.2.
### Finding a CCG-set of a group from its power graph
**Theorem 21**. _There is a polynomial-time algorithm that, on input a power graph3 \(\Gamma\in\mathcal{Pow}\), outputs a set \(\{g_{1},g_{2},\ldots,g_{m}\}\) where each \(g_{i}\) is a CC-generator or a closed-twin of a CC-generator \(g_{i}^{\prime}\) in the graph \(\Gamma\), such that \(\{h_{1},h_{2},\ldots,h_{m}\}\) is a CCG-set, where \(h_{i}=g_{i}\) if \(g_{i}\) is a CC-generator, and \(h_{i}=g_{i}^{\prime}\) otherwise._
Footnote 3: Recall that the underlying group is not given.
Hence, without loss of generality, we call the set \(\{g_{1},g_{2},\ldots,g_{m}\}\) a CCG-set and the \(g_{i}\)'s CC-generators.
Before we give the proof of the above theorem, we need the following definition, which is used in the algorithm.
**Definition 22**. _Let \(d\) be a positive integer, let \(\Gamma\in\mathcal{Pow}\), and let \(v\) be a vertex in \(\Gamma\). Let \(T_{1},T_{2},\ldots,T_{r}\) be the closed-twin partition of the vertices of \(\Gamma_{v}\), and let \(S_{1},S_{2},\ldots,S_{r^{\prime}}\) be the closed-twin partition of \(\mathsf{Pow}(\mathbb{Z}_{d})\). We say that \(\Gamma_{v}\) closed-twin-partition-wise matches with \(\mathsf{Pow}(\mathbb{Z}_{d})\) if (1) the closed-twin classes containing the dominating vertices of the two graphs have the same size, and (2) \(r=r^{\prime}\) and there is a permutation \(\pi\in\mathsf{Sym}(r)\) such that \(|T_{i}|=|S_{\pi(i)}|\) for all \(i\)._
If \(v\) is a CC-generator and \(o(v)=d\) is not a prime power, then \(\Gamma_{v}\) closed-twin-partition-wise matches with \(\mathsf{Pow}(\mathbb{Z}_{d})\), by Corollary 17.
It is not hard to see that testing whether \(\Gamma_{v}\) closed-twin-partition-wise matches with \(\mathsf{Pow}(\mathbb{Z}_{d})\) can be done in polynomial time. Also, when \(d\) is not a prime power, the closed-twin class containing \(v\) has size \(\phi(d)+1\).
Proof. The process of finding a CCG-set of the underlying group of a given power graph is described in Algorithm 1.
```
Input: \(\Gamma\in\mathcal{Pow}\)
1. First, isolate the case when the power graph \(\Gamma\) is a clique using Theorem 18.
2. If \(\Gamma\) is not a clique, then mark any of the universal vertices as the identity.
3. Next, all vertices except the identity are stored in a list \(L\) in decreasing order of their degrees.
4. During the algorithm, we use the labels: \(U\) (undecided), CC (a CC-generator) and NC (not a CC-generator).
5. To start with, mark all the vertices \(U\) in the list. Note that identity is not marked with any label.
6. The algorithm marks the vertices further in phases. In each phase, pick the first \(U\)-marked vertex, say \(v\), in the list \(L\) and do the following:
[Rule 1a] If \(\deg(v)+1\) is a prime power and \(\Gamma_{v}=\Gamma[N[v]]\) is complete, then mark \(v\) as CC and mark all its neighbors NC.
[Rule 1b] Else if \(\deg(v)+1\) is a prime power and \(\Gamma_{v}\) is not complete, then mark \(v\) as NC.
[Rule 2a] Else if \(\deg(v)+1\) is not a prime power and \(v\) has a closed-twin \(w\) in \(\Gamma_{v}\) such that \(w\) has been marked NC, then mark \(v\) as NC.
[Rule 2b] Else (i.e., \(\deg(v)+1\) is not a prime power and \(v\) does not have an NC-marked closed-twin in \(\Gamma_{v}\)): if \(\Gamma_{v}\) closed-twin-partition-wise matches with \(\mathsf{Pow}(\mathbb{Z}_{d})\), where \(d=\deg(v)+1\), then mark \(v\) as CC and all its neighbors NC; otherwise, mark \(v\) as NC.
```
**Algorithm 1** Algorithm to mark a CCG-set in a finite power graph
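The phase structure of Algorithm 1 can be sketched in a few lines of Python. The sketch below is ours and simplifies two points: Rule 2b compares only the multisets of closed-twin-class sizes (dropping the dominating-class condition of the matching definition), and Rule 2a is handled implicitly, since closed-twins of a CC-marked vertex are among its neighbours and are therefore already marked NC.

```python
from math import gcd

def is_prime_power(n):
    if n < 2:
        return False
    p = next(d for d in range(2, n + 1) if n % d == 0)  # smallest prime factor
    while n % p == 0:
        n //= p
    return n == 1

def cyclic_pow(d):
    """Pow(Z_d) as a dict vertex -> neighbour set (order divisibility)."""
    order = lambda x: d // gcd(d, x) if x else 1
    return {x: {y for y in range(d) if y != x and
                (order(x) % order(y) == 0 or order(y) % order(x) == 0)}
            for x in range(d)}

def twin_signature(adj, verts):
    """Sorted closed-twin-class sizes of the subgraph induced on verts."""
    classes = {}
    for u in verts:
        classes.setdefault(frozenset(adj[u] & verts) | {u}, []).append(u)
    return sorted(len(c) for c in classes.values())

def mark_cc_generators(adj, identity):
    """Sketch of the phase loop of Algorithm 1 on a power graph."""
    label = {v: 'U' for v in adj if v != identity}
    for v in sorted(label, key=lambda u: -len(adj[u])):  # decreasing degree
        if label[v] != 'U':
            continue
        nbhd = adj[v] | {v}
        d = len(nbhd)
        if is_prime_power(d):                            # Rules 1a / 1b
            is_cc = all(adj[u] >= nbhd - {u} for u in nbhd)
        else:                                            # Rule 2b (simplified)
            is_cc = twin_signature(adj, nbhd) == twin_signature(
                cyclic_pow(d), set(range(d)))
        if is_cc:
            label[v] = 'CC'
            for u in adj[v]:
                if label.get(u) == 'U':
                    label[u] = 'NC'
        else:
            label[v] = 'NC'
    return sorted(v for v, l in label.items() if l == 'CC')
```

On \(\mathsf{Pow}(\mathbb{Z}_{6})\) the sketch marks a single generator, and on the power graph of the Klein four-group it marks one generator per covering cycle.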
Now we prove that the above algorithm is correct. The proof is by induction on the number of phases. In any phase, one of the four rules is applied to relabel a set of vertices. Our goal is to prove that this labelling is done correctly. In phase \(i\), we assume that up to phase \((i-1)\), all the labellings were done correctly. For the base case, this means that all the vertices are still labelled \(U\).
_If Rule 1a is applied:_ If \(v\) is not a CC-generator, then \(v\) is contained in at least one covering cycle. If \(v\) is contained in two covering cycles, say \(\langle g_{1}\rangle\) and \(\langle g_{2}\rangle\), then \(\Gamma_{v}\) is not complete, as the CC-generators \(g_{1}\) and \(g_{2}\) are not adjacent to each other. Now consider the case when \(v\) is contained in exactly one covering cycle, say \(\langle x\rangle\). Then \(N_{\Gamma_{v}}(v)\subseteq N_{\Gamma_{v}}(x)\). So, if \(\deg_{\Gamma}(x)>\deg_{\Gamma}(v)\), then \(x\) or one of its closed-twins has already been marked as CC in some previous phase, and then \(v\) would have been marked as NC. Now if \(\deg_{\Gamma}(x)=\deg_{\Gamma}(v)\), then \(v\) and \(x\) are closed-twins and therefore \(v\) is also a CC-generator. This is a contradiction.
_If Rule 1b is applied:_ If \(v\) is a CC-generator, then \(\Gamma_{v}\) is a complete graph. Thus, this step works correctly.
_If Rule 2a is applied:_ If \(v\) is a CC-generator, then by Lemma 3.1 its closed-twins in \(\Gamma_{v}\) are exactly \(e\) (identity) and generators of \(\langle v\rangle\). So, if any of the closed-twins is marked NC, it must have been because some other closed-twin is already marked CC in some previous
phase \(t\leq i-1\) of the algorithm. In phase \(t\), the algorithm would have also marked \(v\) as NC.
_If Rule 2b is applied:_ If \(v\) is a CC-generator, then \(\Gamma_{v}\) closed-twin-partition-wise matches with \(\mathsf{Pow}(\mathbb{Z}_{d})\). Now if none of \(v\)'s closed-twins in \(\Gamma_{v}\) are already marked CC, then \(v\) can be marked CC.
On the other hand, suppose that \(v\) is not a CC-generator. We first consider the case when \(v\) is contained in only one covering cycle, say generated by \(x\).
\(\rhd\) Claim 23. \(\deg_{\Gamma}(x)>\deg_{\Gamma}(v)\).
Proof.: As \(v\) is contained in only one covering cycle, we have \(N_{\Gamma}(v)\subseteq N_{\Gamma}(x)\). This implies \(\deg_{\Gamma}(x)\geq\deg_{\Gamma}(v)\). Moreover, in \(\Gamma_{v}\) the vertices \(x\) and \(v\) are closed-twins. If \(o(x)=p^{i}\), then \(\deg(x)+1=p^{i}\) and the graph \(\Gamma_{x}=\Gamma[N[x]]\) is complete, so \(\deg(v)+1=p^{i}\); this contradicts the premise of Rule 2b that \(\deg(v)+1\) is not a prime power, so this case cannot arise. On the other hand, if \(o(x)\) is not a prime power, we can apply Lemma 15 (see Footnote 4), and since \(v\neq e\) and \(v\) is not a CC-generator, we see that \(v\) and \(x\) are not closed-twins in \(\Gamma_{x}\). Hence, we have \(\deg_{\Gamma}(x)>\deg_{\Gamma}(v)\).
Footnote 4: Here \(x\) and \(v\) are to be treated as the variables \(v\) and \(u\) in Lemma 15.
By the above claim, the algorithm considers \(x\) and other generators of \(\langle x\rangle\) before \(v\). Then, by the induction hypothesis, one of these generators would be marked CC, and \(v\) would not be labelled \(U\).
Now we consider the case when \(v\) is contained in at least two covering cycles, say \(\langle g_{1}\rangle\) and \(\langle g_{2}\rangle\). We prove that if \(v\) is not a CC-generator, then \(\Gamma_{v}\) cannot closed-twin-partition-wise match with \(\mathsf{Pow}(\mathbb{Z}_{d})\). This case is divided into two subcases.
In the \(1^{st}\) subcase, we assume that \(o(v)\) is not a prime power. Now we count the closed-twins of \(v\) in \(\Gamma_{v}\) present in each of the sets \(U_{v}\), \(E_{v}\) and \(L_{v}\).
If \(x\in U_{v}\) is a closed-twin of \(v\) in \(\Gamma_{v}\), then by Lemma14, \(\deg_{\Gamma}(x)>\deg_{\Gamma}(v)\). So, \(x\) must have been considered by the algorithm before \(v\). At that phase, the algorithm either marked \(x\) as NC or CC. If \(x\) was marked as NC, \(v\) would not satisfy the condition of Rule 2b (i.e., no closed-twin of \(v\) in \(\Gamma_{v}\) is marked NC). On the other hand, if \(x\) was marked CC, the algorithm would have marked \(v\) as NC. So, there are no closed-twins of \(v\) in \(U_{v}\).
The number of closed-twins of \(v\) in \(\Gamma_{v}\) that are present in \(E_{v}\) is \(\phi(o(v))\). By noting that \(\Gamma_{v}[E_{v}\sqcup L_{v}]=\mathsf{Pow}(\langle v\rangle)\) and using Lemma 15 on \(\mathsf{Pow}(\langle v\rangle)\), we see that the only closed-twin of \(v\) in \(L_{v}\) is the identity. Therefore, the total number of closed-twins of \(v\) in \(\Gamma_{v}\) is \(\phi(o(v))+1\).
Now the CC-generators \(g_{1}\) and \(g_{2}\) have distinct closed-twin-classes (see Footnote 5) of size at least \(\phi(o(g_{1}))\) and \(\phi(o(g_{2}))\), respectively. But \(\phi(o(g_{i}))\geq\phi(o(v))\) for \(i=1,2\) by Remark 16. This is a contradiction, since \(\mathsf{Pow}(\mathbb{Z}_{d})\) can have at most two closed-twin-classes of size greater than or equal to \(\phi(o(v))\), by (3) of Corollary 17.
Footnote 5: \(g_{1}\) and \(g_{2}\) are not adjacent.
In the \(2^{nd}\) subcase, we assume that \(o(v)\) is a prime power, say \(o(v)=p^{i}\) for some prime \(p\) and some integer \(i>0\). Consider \(y\) and \(S\) as in Lemma 20. Note that \(\deg_{\Gamma}(y)\leq\deg_{\Gamma}(v)\), since otherwise the algorithm would already have marked \(y\) as NC or CC, and in either case the conditions of Rule 2b would not be satisfied.
The proof of correctness of Algorithm 2 is by induction on the number of iterations. In any iteration, the first unmarked vertex is marked as CC and its neighbors in the graph are marked as NC. Our goal is to prove that this marking process is correct.
For the base case, \(x=v_{1}\). By Lemma 24, \(v_{1}\) is either a CC-generator or \(v_{1}\in\langle g_{1}\rangle\), where \(g_{1}\) is a CC-generator and \(v_{1}\) is a closed-twin of \(g_{1}\) in \(\Gamma\). Since \(N[v_{1}]\) corresponds to \(\langle g_{1}\rangle\) by Lemma 24, we can safely mark the vertices adjacent to \(v_{1}\) as NC.
In phase \(i\), we assume that up to iteration \((i-1)\), all the markings were done correctly. Let us pick the first unmarked vertex, say \(x\), in \(A\). It is easy to see that \(x\) does not belong to any covering cycle marked till the \((i-1)^{th}\) iteration, i.e., \(x\) does not belong to the neighborhood of any CC marked vertex till the \((i-1)^{th}\) iteration. So, again using the same argument given in the base case, it can be seen that the markings done in the \(i^{th}\) iteration are correct.
## 6 Isomorphism of directed power graphs, power graphs and enhanced power graphs
The isomorphism problems of power graphs, directed power graphs, and enhanced power graphs are equivalent (see [8, 7, 28]). Thus, an algorithm for the isomorphism problem of directed power graphs automatically gives an isomorphism algorithm for power graphs (or enhanced power graphs), provided we can obtain the directed power graph from the power graph (respectively, the enhanced power graph). This is done in Section 7. In the current section, we focus on the isomorphism problem of directed power graphs. In the last part of this section, we discuss a necessary result that is used in Section 7 for obtaining the directed power graph of an input power graph (or an enhanced power graph).
We perform several reductions on a directed power graph that are isomorphism-invariant. The out-degree of a vertex in \(\operatorname{\mathsf{DPow}}(G)\) is the order of the element in the group \(G\), i.e., for a vertex \(u\), out-\(\deg(u)=o(u)\). Therefore, we can color the vertices by their out-degrees. We call the colored graph \(\operatorname{\mathsf{CDPow}}(G)\). We emphasise that here the colors are numbers, and hence we can perform arithmetic operations on these colors and use the natural ordering of integers inherited by these colors. We recall that by isomorphism we mean color-preserving isomorphism when the graphs are colored.
Two vertices \(u\) and \(v\) are closed-twins in \(\operatorname{\mathsf{CDPow}}(G)\) (and also in \(\operatorname{\mathsf{DPow}}(G)\)) if and only if \(\langle u\rangle=\langle v\rangle\) in \(G\), i.e., \(u\) and \(v\) are two generators of the same cyclic subgroup of \(G\). There are \(\phi(o(u))\) generators of \(\langle u\rangle\) in \(G\). So, for each vertex \(u\in\operatorname{\mathsf{CDPow}}(G)\), there are exactly \(\phi(col(u))\) closed-twins of \(u\) in \(\operatorname{\mathsf{CDPow}}(G)\). By the converse of Lagrange's theorem, in each cyclic subgroup of order \(n\), for each divisor \(k\) of \(n\), there are exactly \(\phi(k)\) generators of the subgroup of order \(k\); hence each closed-twin-class of color \(k\) has exactly \(\phi(k)\) vertices. Observe that \(u\) and \(v\) are closed-twins in \(\operatorname{\mathsf{CDPow}}(G)\) if and only if \((u,v)\in E(\operatorname{\mathsf{CDPow}}(G))\) and \(col(u)=col(v)\).
_Reduction rule 1:_**Closed-twin Reduction**: If there are two closed-twins \(u\) and \(v\) in \(\operatorname{\mathsf{CDPow}}(G)\), then do a vertex identification of \(u\) and \(v\) and color the identified vertex with \(col(u)=col(v)\). Let \(R_{1}(G)\) denote the reduced graph obtained by applying Reduction rule 1 exhaustively to \(\operatorname{\mathsf{CDPow}}(G)\).
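The closed-twin reduction can be sketched as follows. This is a Python sketch under our own encoding of a colored digraph as a vertex set, an edge set and a color map; two vertices are grouped exactly when they share a color and have identical closed in- and out-neighborhoods.

```python
from collections import defaultdict
from math import gcd

def twin_reduce(vertices, edges, col):
    """Reduction rule 1: collapse each class of closed-twins of CDPow(G).
    Closed-twins share a color and have equal closed in-/out-neighborhoods."""
    out, inc = defaultdict(set), defaultdict(set)
    for u, v in edges:
        out[u].add(v); inc[v].add(u)
    cls, rep = {}, {}
    for u in sorted(vertices):
        k = (col[u], frozenset(out[u] | {u}), frozenset(inc[u] | {u}))
        rep[u] = cls.setdefault(k, u)   # first member represents the class
    return set(rep.values()), {(rep[u], rep[v]) for u, v in edges}, rep

# Demo on CDPow(Z_4): colors are element orders; 1 and 3 generate the same
# cyclic group and therefore merge into one vertex.
edges = {(u, v) for u in range(4) for v in range(4) if v % gcd(u, 4) == 0}
col = {u: 4 // gcd(u, 4) for u in range(4)}
verts, red_edges, rep = twin_reduce(range(4), edges, col)
```

In the demo, \(\mathbb{Z}_{4}\) has the twin-class \(\{1,3\}\) of color \(4\), so the reduced graph has three vertices.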
From the discussion above, the next lemma follows easily.
**Lemma 26**.: \(\operatorname{\mathsf{CDPow}}(G)\cong\operatorname{\mathsf{CDPow}}(H)\) _if and only if \(R_{1}(G)\cong R_{1}(H)\)._
**Remark 27**.: It is easy to see that we can get back an isomorphic copy of \(\operatorname{\mathsf{CDPow}}(G)\) from \(R_{1}(G)\) by expanding each vertex \(u\) of \(R_{1}(G)\) into a class of \(\phi(col(u))\) mutually closed-twin vertices of color \(col(u)\).
Since we know that each vertex has a self-loop, for the purpose of isomorphism we can delete these self-loops. It is also easy to check that \(R_{1}(G)\) is a transitively closed directed graph.
_Reduction rule 2:_**Edge-deletion**: Let us consider \(R_{1}(G)\). Do the following steps:
(1) Delete all self-loops (\((a,a)\)).
(2) For all \(a,b,c\), if \((a,b)\), \((b,c)\) and \((a,c)\) are edges, then mark \((a,c)\) as a transitive edge. Then, delete all edges that are marked as transitive edges. Let \(R_{2}(G)\) denote the resulting graph. Since \(R_{1}(G)\) is the reflexive and transitive closure of \(R_{2}(G)\), we have the following lemma:
**Lemma 28**.: \(R_{1}(G)\cong R_{1}(H)\) _if and only if \(R_{2}(G)\cong R_{2}(H)\)._
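The two steps of Reduction rule 2 can be sketched as follows (a Python sketch; the representation and names are ours):

```python
def edge_delete(edges):
    """Reduction rule 2 on R_1(G): (1) drop self-loops, then (2) drop every
    edge (a, c) witnessed as transitive by some b with edges (a, b), (b, c)."""
    e = {(a, b) for a, b in edges if a != b}          # (1) delete self-loops
    transitive = {(a, c) for a, b in e for b2, c in e
                  if b == b2 and (a, c) in e}          # (2) two-step witnesses
    return e - transitive

# Demo: R_1 of Z_4 after twin reduction (vertices 0 = identity, 2, 1).
r1 = {(0, 0), (1, 1), (2, 2), (1, 0), (1, 2), (2, 0)}
```

Here \(edge\_delete(r1)\) keeps only the covering edges \((1,2)\) and \((2,0)\), since \((1,0)\) is implied through \(2\).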
**Lemma 29**.: _The reduced graph \(R_{2}(G)\) satisfies the following properties:_
1. _Vertices with in-degree zero in_ \(R_{2}(G)\) _form a CCG-set of_ \(G\)_._
2. _If_ \((u,v)\) _is an edge in_ \(R_{2}(G)\)_, then_ \(col(u)>col(v)\)_. Moreover,_ \(col(u)=col(v)\cdot p\) _for some prime_ \(p\)_._
3. \(R_{2}(G)\) _is a directed acyclic graph._
Proof.:
1. Let us assume that \(u\in V(R_{2}(G))\) is a vertex of in-degree zero, i.e., there is no incoming edge \((v,u)\) to \(u\) for any \(v\in V(R_{2}(G))\). This implies that in \(DPow(G)\), the only incoming edges to \(u\) are from its closed-twins. So, \(u\notin\langle v\rangle\) for any \(v\). Thus, \(\langle u\rangle\) is a maximal covering cycle. Hence \(u\) is a covering cycle generator (CC-generator).
2. Since \((u,v)\) is an edge in \(R_{2}(G)\), \(v\) is generated by \(u\) in \(G\). So, \(col(v)\mid col(u)\) and hence \(col(u)\geq col(v)\). Now, if \(col(u)=col(v)\), then \(\langle u\rangle=\langle v\rangle\). This means that \(u\) and \(v\) are closed-twins, which must have taken part in the vertex identification process in Reduction rule 1 itself. So, we can discard this case and the only possibility we are left with is \(col(u)>col(v)\). Since \(\langle v\rangle\subsetneq\langle u\rangle\), we have \(\langle u\rangle/\langle v\rangle\cong\mathbb{Z}_{m}\) for some \(m\). If \(m\) is not a prime, then \(\mathbb{Z}_{m}\) has a proper nontrivial subgroup \(H\) in it. So, \(\langle u\rangle/\langle v\rangle\) also has a subgroup \(\langle w\rangle/\langle v\rangle\) which is isomorphic to \(H\). Therefore, \(\langle v\rangle\subsetneq\langle w\rangle\subsetneq\langle u\rangle\). Now, in \(R_{2}(G)\) there is a closed-twin of \(w\), say \(w^{\prime}\). This means \((u,w^{\prime})\) and \((w^{\prime},v)\) are two edges in \(R_{2}(G)\). But in that case, we would have marked \((u,v)\) as a transitive edge during the reduction from \(R_{1}(G)\) to \(R_{2}(G)\) and hence \((u,v)\notin E(R_{2}(G))\). This is a contradiction. So, \(m\) is some prime \(p\) and \(|\langle u\rangle|/|\langle v\rangle|=p\). Hence, \(col(u)=col(v)\cdot p\).
3. We prove this by contradiction. Let us assume the graph has a cycle, say \(a_{1}a_{2}...a_{k}a_{1}\). Then, by (2) of Lemma 29, we have \(col(a_{1})>col(a_{2})>\cdots>col(a_{k})>col(a_{1})\), which is not possible. So, our assumption is wrong. That means \(R_{2}(G)\) is acyclic.
Note that using (1) of Lemma 29, we can easily find a set of vertices, say \(\{g_{1},g_{2},\ldots,g_{m}\}\), that form a covering cycle generating set (CCG-set) of \(G\).
_Reduction rule 3: Removing the direction_: Remove the direction of the edges in \(R_{2}(G)\) to obtain an undirected colored graph \(R_{3}(G)\).
Note that the CCG-set of \(G\) can still be identified easily in \(R_{3}(G)\): A vertex \(g\) is a CC-generator if and only if all its neighbours have smaller orders (or colors).
The following result is an easy consequence of (2) of Lemma 29.
**Lemma 30**.: \(R_{2}(G)\cong R_{2}(H)\) _if and only if \(R_{3}(G)\cong R_{3}(H)\)._
**Definition 31**.: _A path \(u_{1}u_{2}\ldots u_{k}\) in \(R_{3}(G)\) is said to be a descendant path if \(col(u_{i})>col(u_{i+1})\) for all \(1\leqslant i<k\). The vertices in the graph reachable from \(u\) using a descendant path are called descendant reachable vertices from \(u\). We denote the set of descendant reachable vertices from \(u\) in \(R_{3}(G)\) by \(Des(u)\)._
Observe that \(Des(u)\) in \(R_{3}(G)\) is same as the closed out-neighborhood of \(u\) in \(R_{1}(G)\). The colors of the vertices of \(Des(u)\) in \(R_{3}(G)\) form the set of all divisors of \(col(u)\). Also, no two vertices of \(Des(u)\) in \(R_{3}\) have the same color.
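Computing \(Des(u)\) amounts to a graph search that only follows edges towards strictly smaller colors; a sketch (names ours):

```python
def des(adj, col, u):
    """Des(u): vertices reachable from u along strictly color-decreasing
    paths, with u itself included."""
    seen, stack = {u}, [u]
    while stack:
        x = stack.pop()
        for y in adj[x]:
            if col[y] < col[x] and y not in seen:
                seen.add(y); stack.append(y)
    return seen

# Demo on R_3(Z_12), i.e. the Hasse diagram of the divisors of 12,
# where each vertex's color is the divisor itself.
adj = {1: [2, 3], 2: [1, 4, 6], 3: [1, 6], 4: [2, 12], 6: [2, 3, 12], 12: [4, 6]}
col = {d: d for d in adj}
```

For example, \(des(adj, col, 6)\) returns the divisors \(\{1,2,3,6\}\) of \(6\), matching the observation that the colors of \(Des(u)\) are exactly the divisors of \(col(u)\).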
**Lemma 32**.: _If \(G\) is a finite \(p\)-group, then \(R_{3}(G)\) is a colored tree._
Proof.: Let \(|G|=p^{\alpha}\). Any edge in \(R_{3}(G)\) is of the form \(\{u,v\}\) where by using (2) of Lemma 29 we can assume without loss of generality that \(col(u)=p^{t}\) and \(col(v)=p^{t-1}\), for some \(t\in\{1,2,\ldots,\alpha\}\). Suppose the graph contains a cycle \(u_{0}u_{1}u_{2}\ldots u_{n}u_{0}\). By (2) of Lemma 29, we can assume without loss of generality that the colors of the vertices form the following sequence: \(p^{t}p^{t-1}p^{t-2}\ldots p^{t-(i-1)}p^{t-i}p^{t-(i-1)}\ldots p^{t-1}p^{t}\) for some \(i\). Now, \(u_{1},u_{n}\in Des(u_{0})\) such that \(col(u_{1})=col(u_{n})=p^{t-1}\). This is a contradiction since no two vertices of \(Des(u)\) for any vertex \(u\) have the same color. Hence, our assumption is wrong and \(R_{3}(G)\) has no cycle.
Since the isomorphism of trees can be tested in linear time (see, for example, [1]), the isomorphism of the directed power graphs arising from \(p\)-groups can also be tested in linear time.
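Once \(R_{3}(G)\) is a colored tree, its isomorphism class can be decided by canonical encoding in the style of the classical tree-isomorphism algorithm. A compact sketch (names ours; this string-based variant is quadratic in the worst case, unlike the linear-time algorithm cited):

```python
def canon(adj, col, v, parent=None):
    """Canonical form of a colored rooted tree: a vertex's code is its
    color followed by the sorted codes of its subtrees."""
    kids = sorted(canon(adj, col, c, v) for c in adj[v] if c != parent)
    return "(" + str(col[v]) + ":" + "".join(kids) + ")"

# Two colored trees are isomorphic iff their canonical forms at the root
# agree; for R_3 of a p-group, the natural root is the identity (color 1).
t = {0: [1, 2], 1: [0], 2: [0]}
a = canon(t, {0: 1, 1: 2, 2: 4}, 0)
b = canon(t, {0: 1, 1: 4, 2: 2}, 0)   # same tree, leaf colors swapped
c = canon(t, {0: 1, 1: 2, 2: 2}, 0)   # genuinely different coloring
```

Here \(a=b\) while \(a\neq c\), as expected.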
Now we extend our algorithm to check the isomorphism of directed power graphs of finite nilpotent groups. For that, we use the following result from [23].
**Lemma 33**.: _[23] Let \(G_{1}\) and \(G_{2}\) be two finite groups such that \(|G_{1}|\) and \(|G_{2}|\) are co-prime to each other. Then, \(DPow(G_{1}\times G_{2})=DPow(G_{1})\boxtimes DPow(G_{2})\), where \(\boxtimes\) denotes the strong product of two graphs._
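Lemma 33 can be verified directly on small cyclic groups. The sketch below (names ours) uses two facts: in \(\mathbb{Z}_{n}\), \(\langle u\rangle\) is exactly the set of multiples of \(\gcd(u,n)\), and since every vertex of a directed power graph carries a self-loop, the strong product edge set collapses to the componentwise product.

```python
from math import gcd

def dpow_edges(n):
    """Directed power graph of Z_n, self-loops included: u -> v iff v in <u>,
    i.e. iff v is a multiple of gcd(u, n)."""
    return {(u, v) for u in range(n) for v in range(n) if v % gcd(u, n) == 0}

def strong(e1, e2):
    """Strong product of two digraphs; with all self-loops present this is
    simply the componentwise edge product."""
    return {((u1, u2), (v1, v2)) for u1, v1 in e1 for u2, v2 in e2}

# The CRT map x -> (x mod 2, x mod 3) identifies DPow(Z_6) with the product.
image = {((u % 2, u % 3), (v % 2, v % 3)) for u, v in dpow_edges(6)}
```

One can check that `image` coincides with `strong(dpow_edges(2), dpow_edges(3))`, a small instance of the lemma with \(G_{1}=\mathbb{Z}_{2}\), \(G_{2}=\mathbb{Z}_{3}\).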
**Lemma 34**.: _Let \(G=G_{1}\times G_{2}\times\cdots\times G_{k}\) and \(H=H_{1}\times H_{2}\times\cdots\times H_{k}\) where \(|G_{i}|=|H_{i}|\) for all \(1\leq i\leq k\). Suppose that \(gcd(|G_{i}|,|G_{j}|)=gcd(|H_{i}|,|H_{j}|)=1\), for all \(1\leq i<j\leq k\). Then, \(DPow(G)\cong DPow(H)\) if and only if \(DPow(G_{i})\cong DPow(H_{i})\), for all \(1\leq i\leq k\)._
Proof.: It is enough to prove the lemma for \(k=2\). Let \(f_{1}:V(DPow(G_{1}))\to V(DPow(H_{1}))\) and \(f_{2}:V(DPow(G_{2}))\to V(DPow(H_{2}))\) be two isomorphisms from \(DPow(G_{1})\) to \(DPow(H_{1})\) and from \(DPow(G_{2})\) to \(DPow(H_{2})\) respectively. Let us define \(f:V(DPow(G))\to V(DPow(H))\) as \(f((u_{1},u_{2}))=(f_{1}(u_{1}),f_{2}(u_{2}))\). Since \(f_{1}\) and \(f_{2}\) are bijections, so is \(f\). We show that \(f\) preserves the edge relations between \(DPow(G)\) and \(DPow(H)\). Let us consider an edge \(((u_{1},u_{2}),(v_{1},v_{2}))\) from \(E(DPow(G))=E(DPow(G_{1})\boxtimes DPow(G_{2}))\) (This equality follows from Lemma 33.). Now from Definition 2 and the facts that \(f_{1}\) and \(f_{2}\) are isomorphisms from \(DPow(G_{1})\) to \(DPow(H_{1})\) and from \(DPow(G_{2})\) to \(DPow(H_{2})\) respectively, we have the following three scenarios:
1. \(u_{1}=v_{1}\) and \((u_{2},v_{2})\in E(DPow(G_{2}))\). In this case, \(f_{1}(u_{1})=f_{1}(v_{1})\) and \((f_{2}(u_{2}),f_{2}(v_{2}))\in E(DPow(H_{2}))\).
2. \(u_{2}=v_{2}\) and \((u_{1},v_{1})\in E(DPow(G_{1}))\). In this case, \(f_{2}(u_{2})=f_{2}(v_{2})\) and \((f_{1}(u_{1}),f_{1}(v_{1}))\in E(DPow(H_{1}))\).
3. \((u_{1},v_{1})\in E(DPow(G_{1}))\) and \((u_{2},v_{2})\in E(DPow(G_{2}))\). In this case, \((f_{1}(u_{1}),f_{1}(v_{1}))\in E(DPow(H_{1}))\) and \((f_{2}(u_{2}),f_{2}(v_{2}))\in E(DPow(H_{2}))\).

In all three scenarios, by Definition 2, we have \(((f_{1}(u_{1}),f_{2}(u_{2})),(f_{1}(v_{1}),f_{2}(v_{2})))\in E(DPow(H_{1})\boxtimes DPow(H_{2}))\). Therefore, by Lemma 33, \((f((u_{1},u_{2})),f((v_{1},v_{2})))\in E(DPow(H))\).

For the other direction, let \(f:V(DPow(G))\to V(DPow(H))\) be an isomorphism between \(DPow(G)\) and \(DPow(H)\). Consider the sets \(A_{i}=\{(u,v)\in V(DPow(G)):\text{out-}\deg((u,v))\text{ divides }|G_{i}|\}\) and \(B_{i}=\{(u^{\prime},v^{\prime})\in V(DPow(H)):\text{out-}\deg((u^{\prime},v^{\prime}))\text{ divides }|H_{i}|\}\) for \(i=1,2\). Recall that here the out-degree of a vertex is the order of the element and \(o((u,v))=o(u)\cdot o(v)\). Since \(|G_{1}|\times|G_{2}|=|G|\) and \(\gcd(|G_{1}|,|G_{2}|)=1\), it is easy to see that \(A_{i}\) indeed corresponds to \(V(DPow(G_{i}))\) for \(i=1,2\). Also, the subgraph of \(DPow(G_{1}\times G_{2})\) induced by \(A_{i}\) corresponds to \(DPow(G_{i})\) for \(i=1,2\). Similarly, we can see that \(B_{i}\) corresponds to \(V(DPow(H_{i}))\) and the subgraph induced by \(B_{i}\) corresponds to \(DPow(H_{i})\) for \(i=1,2\). Now the isomorphism \(f\) preserves the out-degrees of the vertices. We denote the restriction of \(f\) on \(A_{i}\) by \(f_{i}\). Then it is easy to see that \(f_{i}\) is a bijection from \(A_{i}\) to \(B_{i}\).
Also, there is only one element, namely the identity element, of out-degree \(1\) (self-loop) common to both \(A_{1}\) and \(A_{2}\), and that element is unique in \(DPow(G)\). One can see that \(f_{i}:V(DPow(G_{i}))\to V(DPow(H_{i}))\) is an isomorphism between \(DPow(G_{i})\) and \(DPow(H_{i})\), for \(i=1,2\).
We are now ready to present one of the main results of the paper. Namely, we show that the isomorphism of directed power graphs of nilpotent groups can be tested in polynomial time. Let \(\mathcal{DPow}_{\mathrm{nil}}=\{DPow(G)\ :\ G\text{ is a finite nilpotent group}\}\).
**Theorem 35**.: _There is an efficient polynomial time algorithm that, on inputs \(\Gamma_{1},\Gamma_{2}\in\mathcal{DPow}_{\mathrm{nil}}\), checks if \(\Gamma_{1}\) and \(\Gamma_{2}\) are isomorphic._
Proof.: We know that a finite nilpotent group is the direct product of its Sylow subgroups. Since the orders of the Sylow subgroups are coprime with each other, by Lemma 34\(\Gamma_{1}\) and \(\Gamma_{2}\) are isomorphic if and only if for each prime \(p\) dividing \(|V(\Gamma_{1})|\) (which is same as the order of the underlying group), the directed power graphs of the Sylow-\(p\) subgroups of the underlying groups of \(\Gamma_{1}\) and \(\Gamma_{2}\) are isomorphic. Therefore, if we can find the directed power graphs of the Sylow subgroups associated with each prime divisor, we can test the isomorphism of \(\Gamma_{1}\) and \(\Gamma_{2}\). Note that the underlying groups are not given as input. However, we can still compute the directed power graph of a Sylow-\(p\) subgroup of an input graph by marking all the vertices \(V_{p}\) whose order in the underlying group is \(p^{i}\) for some \(i\geq 0\). More precisely, the subgraph induced by the set \(V_{p}\) of marked vertices is the directed power graph associated with the Sylow-\(p\) subgroup. Note that the order of a vertex (which is also an element in the underlying group) is just the out-degree of the vertex in the directed power graph.
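The extraction of the Sylow-\(p\) part used in the proof, i.e., keeping the vertices whose out-degree (= element order) is a power of \(p\), can be sketched as follows (names ours):

```python
from math import gcd

def sylow_part(edges, p):
    """Induced subgraph of a directed power graph on the vertices whose
    out-degree (= element order) is a power of the prime p; since
    p^0 = 1, the identity vertex is kept."""
    outdeg = {}
    for u, _ in edges:
        outdeg[u] = outdeg.get(u, 0) + 1
    def is_p_power(n):
        while n % p == 0:
            n //= p
        return n == 1
    keep = {u for u, d in outdeg.items() if is_p_power(d)}
    return {(u, v) for u, v in edges if u in keep and v in keep}

# Demo: DPow(Z_6); the Sylow-2 part is induced on the elements 0 and 3.
edges6 = {(u, v) for u in range(6) for v in range(6) if v % gcd(u, 6) == 0}
```

For \(p=2\) the demo keeps \(\{0,3\}\) (orders \(1\) and \(2\)), recovering \(DPow\) of the Sylow-2 subgroup of \(\mathbb{Z}_{6}\).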
We show that all the isomorphism-invariant information of \(R_{3}(G)\) is captured by a) the CCG-set of \(G\) in \(R_{3}(G)\) along with their colors, and b) elements corresponding to their pairwise common neighborhood along with their colors. For this, we do a further reduction. The results in the rest of this section are required in Section 7.
We define a new simple undirected colored graph \(\operatorname{\mathsf{HD}}[n]=(V,E)\) for any natural number \(n\), where \(V=\{d:d|n\}\). The name of each vertex is treated as its color, i.e., here \(\operatorname{col}(v)=v\). The edge set is \(E=\{[u,v]:v=u\cdot p\text{ or }u=v\cdot p\text{ for some prime }p\}\). One can see that \(\operatorname{\mathsf{HD}}[n]\)
is the Hasse diagram of the poset defined over the set of all divisors of \(n\) with respect to the divisibility relation. Moreover, \(HD[n]\) is also isomorphic to \(R_{3}(\mathbb{Z}_{n})\) (as a consequence of (2) of Lemma 29).

**Remark 36**.: (1) It is easy to see that \(R_{3}(G)[Des(g_{i})]\) is isomorphic to \(HD[col(g_{i})]\) for all \(1\leqslant i\leqslant m\). We can see that the isomorphism is unique, as in each of these graphs there is only one vertex with a particular color.
(2) Note that \(\{y,y^{\prime}\}\in E(R_{3}(G))\) if and only if (a) \(y\), \(y^{\prime}\in Des(g_{i})\) for some \(1\leqslant i\leqslant m\) and (b) \(col(y)=p\cdot col(y^{\prime})\) or \(col(y^{\prime})=p\cdot col(y)\) for some prime \(p\).
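Constructing \(HD[n]\) is straightforward; a sketch (function name ours): vertices are the divisors of \(n\), each its own color, and two divisors are adjacent exactly when their ratio is a prime.

```python
def hd(n):
    """HD[n]: vertices are the divisors of n; {u, v} is an edge iff one
    divisor is the other multiplied by a prime."""
    divs = [d for d in range(1, n + 1) if n % d == 0]
    is_prime = lambda m: m > 1 and all(m % k for k in range(2, m))
    edges = {frozenset((u, v)) for u in divs for v in divs
             if v % u == 0 and is_prime(v // u)}
    return divs, edges
```

For \(n=12\) this yields six vertices and the seven covering edges of the divisor lattice.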
Let \(\bar{I}(i,j)\) denote the vertex in \(R_{3}(G)\) that is of maximum color among the common descendant reachable vertices from both \(g_{i}\) and \(g_{j}\). It is not hard to see that in the group \(G\), \(col(\bar{I}(i,j))=|\langle g_{i}\rangle\cap\langle g_{j}\rangle|\). Note that for two distinct pairs \((i,j)\) and \((i^{\prime},j^{\prime})\), \(\bar{I}(i,j)\) and \(\bar{I}(i^{\prime},j^{\prime})\) can be the same vertex in \(R_{3}(G)\).
\(\rhd\) Claim 37. In \(R_{3}(G)\), \(gcd(col(\bar{I}(i,j)),col(\bar{I}(s,j)))\) divides \(col(\bar{I}(i,s))\).
Proof.: Let \(d=gcd(col(\bar{I}(i,j)),col(\bar{I}(s,j)))\). Since \(d\) is a factor of \(col(\bar{I}(i,j))\), there exists \(y_{1}\in Des(\bar{I}(i,j))=Des(g_{i})\cap Des(g_{j})\) such that \(col(y_{1})=d\). Similarly, there exists \(y_{2}\in Des(\bar{I}(s,j))=Des(g_{s})\cap Des(g_{j})\) such that \(col(y_{2})=d\).
Since both \(y_{1}\) and \(y_{2}\) are descendants of \(g_{j}\) with the same color, and no two vertices of \(Des(g_{j})\) have the same color, they must be the same vertex. Therefore, \(y_{1}\) is a common descendant of \(g_{i}\) and \(g_{s}\), i.e., \(y_{1}\in Des(g_{i})\cap Des(g_{s})=Des(\bar{I}(i,s))\). Hence, \(d=col(y_{1})\mid col(\bar{I}(i,s))\).
_Reduction rule 4_: Consider \(R_{3}(G)\). Recall that, in \(R_{3}(G)\) a CCG-set \(\{g_{1},g_{2},...,g_{m}\}\) of \(G\) can be readily found. We make a new graph \(R_{4}(G)\) as follows:
(1) Introduce the vertices \(g_{1},g_{2},\ldots,g_{m}\) with their colors.
(2) For each pair \((i,j),\ 1\leqslant i<j\leqslant m\), do the following:
Find the vertex \(\bar{I}(i,j)\) that is of maximum color among the descendant reachable vertices from both \(g_{i}\) and \(g_{j}\). We add a vertex \(I(i,j)\) in \(R_{4}(G)\) and color it with \(col(\bar{I}(i,j))\). Add edges \(\{g_{i},I(i,j)\}\) and \(\{g_{j},I(i,j)\}\).
We can see that \(R_{4}(G)\) is a bipartite graph where one part is a CCG-set and another part contains vertices marked as \(I(i,j)\) for all \((i,j)\). In \(R_{4}(G)\), for distinct pairs \((i,j)\) and \((i^{\prime},j^{\prime})\), \(I(i,j)\) and \(I(i^{\prime},j^{\prime})\) are distinct vertices, while in \(R_{3}(G)\), \(\bar{I}(i,j)\) and \(\bar{I}(i^{\prime},j^{\prime})\) may be the same vertex. In other words, \(R_{4}(G)\) may have several copies of vertex \(\bar{I}(i,j)\).
We now present an algorithm to get back an isomorphic copy of \(R_{3}(G)\) from \(R_{4}(G)\).
Idea of the algorithm: In \(R_{4}(G)\), we have a set of colored CC-generators. Also, there exist vertices \(I(i,j)\) corresponding to each pairwise intersection of maximal cyclic subgroups \(\langle g_{i}\rangle\) and \(\langle g_{j}\rangle\) in \(G\). \(I(i,j)\) is the only common neighbor of \(g_{i}\) and \(g_{j}\) in \(R_{4}(G)\). Using this information, we construct \(R_{3}(G)\) in an iterative manner. First, we describe a sketch of the idea behind the process. There are \(m\) iterations in the process. In the \(1^{st}\) iteration, we introduce \(HD[col(g_{1})]\). One can easily verify that \(R_{3}(G)[Des(g_{1})]\) is isomorphic to \(HD[col(g_{1})]\) (by (1) of Remark 36). In the \(2^{nd}\) iteration, we introduce \(HD[col(g_{2})]\). As we know the color of \(I(1,2)\), we have information about the set of vertices common to both \(HD[col(g_{1})]\) and \(HD[col(g_{2})]\). Let \(u\) and \(v\) be the vertices with color \(col(I(1,2))\) in \(HD[col(g_{1})]\) and \(HD[col(g_{2})]\) respectively. We identify (via vertex-identification) the vertices with the same colors in \(Des(u)\) (which is in \(HD[col(g_{1})]\)) and \(Des(v)\) (which is in \(HD[col(g_{2})]\))7. One can see that the resulting graph is isomorphic to the induced subgraph of \(R_{3}(G)\) on \(Des(g_{1})\cup Des(g_{2})\). Inductively, the algorithm introduces \(Des(g_{1})\cup Des(g_{2})\cup\ldots\cup Des(g_{j-1})\) at the end of the \((j-1)^{th}\) iteration. In the \(j^{th}\) iteration, we introduce \(HD[col(g_{j})]\). It is easy to note that the set of vertices in \(HD[col(g_{j})]\) that are contained in \(Des(g_{j})\cap Des(g_{s})\) for all \(s\leqslant j-1\) has already been introduced. So, we need to identify the vertices introduced by the algorithm earlier with the corresponding subset of vertices in \(HD[col(g_{j})]\). We get the information of such vertices using the color of \(I(s,j)\) for \(s\leqslant j-1\). The details of the algorithm are given below (Algorithm 3).
Footnote 7: Since \(HD[col(g_{i})]\) is isomorphic to \(R_{3}(G)[Des(g_{i})]\), we can use the concept of \(Des\) in the graph \(HD[col(g_{i})]\) for all \(i\).
```
1:\(X_{1}\leftarrow\operatorname{HD}[\operatorname{col}(g_{1})]\)
2:\(j\gets 2\)
3:while\(j\leqslant m\)do
4: Introduce \(Y_{j}=\operatorname{HD}[\operatorname{col}(g_{j})]\)
5:\(s\gets 1\)
6:\(h_{j,0}\leftarrow\emptyset\)\(\triangleright\) Mapping for vertex identification
7:while\(s\leqslant j-1\)do
8: Consider \(\operatorname{I}(s,j)\).
9:\(h_{j,s}\gets h_{j,s-1}\cup\{(u,v):col(u)=col(v)\text{ where }u\in Des(g_{s})\subseteq V(X_{j-1})\text{ s.t. }col(u)\big{|}col(I(s,j))\text{ and }v\in V(Y_{j})\}\)
10:\(s\gets s+1\)
11:endwhile
12: For all \((u,v)\in h_{j,j-1}\) do vertex identification of \(u\) and \(v\) and color the new vertex with \(\operatorname{col}(u)\).
13:\(X_{j}\leftarrow\) The new graph obtained after the above vertex identification process of \(X_{j-1}\) and \(Y_{j}\).
14:\(j\gets j+1\)
15:endwhile
16: Return \(X_{m}\)
```
**Algorithm 3** To construct an isomorphic copy of \(R_{3}(G)\) from \(R_{4}(G)\)
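The gluing process of Algorithm 3 can be sketched with a union-find over pairs \((i,d)\), where \(d\) ranges over the divisors of \(col(g_{i})\), merging the common descendants dictated by the colors \(col(I(i,j))\). This is a sketch under our own input encoding, not the paper's pseudocode: `gen_cols[i]` plays the role of \(col(g_{i})\) and `int_cols[(i, j)]` the role of \(col(I(i,j))\).

```python
def rebuild_r3(gen_cols, int_cols):
    """Reassemble R_3(G) from R_4(G) data: vertices of Des(g_i) are keyed
    (i, d) for each divisor d of col(g_i), and keys that must coincide
    (same color, common descendant of g_i and g_j) are merged."""
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    def union(x, y):
        parent[find(x)] = find(y)
    def divisors(n):
        return [d for d in range(1, n + 1) if n % d == 0]
    # Identify common descendants: every color dividing col(I(i, j)).
    for (i, j), c in int_cols.items():
        for d in divisors(c):
            union((i, d), (j, d))
    # Edges come from each HD[col(g_i)]: divisor pairs with a prime ratio.
    is_prime = lambda m: m > 1 and all(m % k for k in range(2, m))
    verts, edges = set(), set()
    for i, n in enumerate(gen_cols):
        ds = divisors(n)
        verts |= {find((i, d)) for d in ds}
        edges |= {frozenset((find((i, u)), find((i, v))))
                  for u in ds for v in ds if v % u == 0 and is_prime(v // u)}
    return verts, edges
```

For the Klein four-group (three CC-generators of color \(2\), all pairwise intersections of color \(1\)) this yields the star on four vertices, and for \(\mathbb{Z}_{6}\) (a single generator of color \(6\)) it yields \(HD[6]\).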
As indicated above and in Line 12 of Algorithm 3, vertices in the old graph and \(\operatorname{HD}[\operatorname{col}(g_{j})]\) are identified. In Claim 40, we show that the identification of these vertices can be performed without any conflict.
**Lemma 38**.: _The graph \(X_{m}\) returned by Algorithm 3 is isomorphic to \(R_{3}(G)\)._
Proof.: We show by induction on \(j\) that the constructed graph up to the \(j^{\operatorname{th}}\) step is isomorphic to the subgraph of \(R_{3}(G)\) induced on \(\operatorname{Des}(g_{1})\cup\operatorname{Des}(g_{2})\cup\ldots\cup \operatorname{Des}(g_{j})\). This shows that after the \(m^{\operatorname{th}}\) iteration, we can get an isomorphic copy of \(R_{3}(G)\).
\(\rhd\) Claim 39. For all \(1\leqslant j\leqslant m\), the graph \(X_{j}\) constructed after the \(j^{th}\) iteration is isomorphic to \(R_{3}(G)[Des(g_{1})\cup Des(g_{2})\cup\cdots\cup Des(g_{j})]\).
**Proof of claim:** For simplicity of writing, we denote \(R_{3}(G)[Des(g_{1})\cup Des(g_{2})\cup\cdots\cup Des(g_{j})]\) by \(R_{3}(j)\) in the remaining part of the proof. With this, \(R_{3}(1)\) denotes \(R_{3}(G)[Des(g_{1})]\).
By Remark 36, \(X_{1}=HD[col(g_{1})]\) is isomorphic to \(R_{3}(1)\) by a unique isomorphism, say \(f_{1}\). If we take \(f_{0}\) to be the empty map, then \(f_{1}\) extends \(f_{0}\).
We prove by induction on \(j\) that \(X_{j}\) is isomorphic to \(R_{3}(j)=R_{3}[Des(g_{1})\cup Des(g_{2})\cup\cdots\cup Des(g_{j})]\) via a map \(f_{j}\) that extends the isomorphism \(f_{j-1}\).
By induction hypothesis, let us assume that \(X_{j-1}\cong R_{3}(G)[Des(g_{1})\cup\cdots\cup Des(g_{j-1})]\) and \(f_{j-1}\) is an isomorphism between \(X_{j-1}\) and \(R_{3}(j-1)\) derived by extending \(f_{j-2}\). We show that \(f_{j}\) is an extension of \(f_{j-1}\) and \(f_{j}\) is an isomorphism between \(X_{j}\) and \(R_{3}(j)\).
However, before we go into the details of the inductive case, we address the following important issue.
In the \(j^{th}\) iteration of the outer while loop and just after the execution of Line 4 of Algorithm 3, the current graph is the disjoint union of \(X_{j-1}\) and \(Y_{j}\). Now to get \(X_{j}\), some vertices of \(X_{j-1}\) and \(Y_{j}\) are vertex-identified using the tuples stored in \(h_{j,j-1}\) as described in Line 12 of Algorithm 3. Observe that two vertices in \(Y_{j}\) cannot be identified with the same vertex in \(X_{j-1}\), because in \(Y_{j}=HD[col(g_{j})]\), no two vertices have the same color. However, there is a possibility that two or more vertices of \(X_{j-1}\) are assigned to be identified with the same vertex of \(Y_{j}\). We show that this case does not arise. To do this, we first define the following sets:
\[Y_{j,1}=\{v\in V(Y_{j})\ :\ col(v)\big{|}col(I(1,j))\}\]
\[Y_{j,s}=Y_{j,s-1}\cup\{v\in V(Y_{j})\ :\ col(v)\big{|}col(I(s,j))\},\ \ s=2,\ldots,j-1\]
\[X_{j-1,1}=\{u\in V(X_{j-1})\ :\ u\in Des(g_{1})\text{ and }col(u)\big{|}col(I(1,j))\}\]
\[X_{j-1,s}=X_{j-1,s-1}\cup\{u\in V(X_{j-1})\ :\ u\in Des(g_{s})\text{ and }col(u)\big{|}col(I(s,j))\},\ \ s=2,\ldots,j-1\]
Now \(h_{j,j-1}\) is updated from \(h_{j,0}=\emptyset\) by the following rule: \(h_{j,s}=h_{j,s-1}\cup\{(u,v)\mid col(u)=col(v)\text{ where }u\in X_{j-1,s}\text{ and }v\in Y_{j,s}\}\) (as described in Line 9 of Algorithm 3; see Footnote 8). Since there is a unique vertex of any particular color in \(Y_{j}\), we can see \(h_{j,s}\) as a well-defined function from \(X_{j-1,s}\) to \(Y_{j,s}\). Now to show that \(h_{j,j-1}\) gives a conflict-free vertex identification process, we show that \(h_{j,s}\) is a bijection and an extension of \(h_{j,s-1}\). Since \(h_{j,s-1}\subseteq h_{j,s}\), it is enough to prove the following claim:
Footnote 8: Note that when \(u\in X_{j-1,j-1}\) is identified with \(v\in Y_{j,j-1}\), we color it with \(col(u)\) and for simplicity we name the new vertex as \(u\).
The map \(h_{j,s}:X_{j-1,s}\to Y_{j,s}\) is a bijection, for all \(1\leqslant s\leqslant j-1\).
**Proof of claim:** First, we show that \(h_{j,s}\) is onto for all \(s=1,\ldots,j-1\). For this, take a vertex \(v\) from \(Y_{j,s}\). Then \(col(v)|col(I(i,j))\) for some \(i\leqslant s\). So,9 there exists a vertex \(u\in Des(g_{i})\) in \(X_{j-1,s}\) such that \(col(u)=col(v)\) and \(h_{j,s}(u)=v\).
Footnote 9: Since \(X_{j-1}\cong R_{3}(j-1)\), the concept of descendant reachability can also be defined in \(X_{j-1}\). Therefore, it makes sense to use \(Des(g)\) in \(X_{j-1}\) for any vertex \(g\).
Now we prove that \(h_{j,s}\) is one-to-one using induction on \(s\). For the base case, it is easy to see that \(h_{j,1}:X_{j-1,1}\to Y_{j,1}\) is a bijection since \(X_{j-1,1}\) and \(Y_{j,1}\) contains colored vertices
corresponding to each divisor of \(\operatorname{col}(I(1,j))\) and the color of each vertex is distinct. By the induction hypothesis, we assume that \(h_{j,s-1}:X_{j-1,s-1}\to Y_{j,s-1}\) is a bijection. Now for the inductive case, we consider \(h_{j,s}:X_{j-1,s}\to Y_{j,s}\). We need to prove that \(h_{j,s}\) is one-to-one. Suppose that \(u\in X_{j-1,s}\) is paired with \(v\in Y_{j,s}\) to be stored in \(h_{j,s}\) in the \(s^{th}\) iteration of the inner while loop (Line 9 of Algorithm 3). We need to argue that the pairing does not violate the one-to-one condition. We do this in two cases.

Case 1: The vertex \(v\) was not encountered in any of the previous iterations, i.e., \(v\not\in Y_{j,s-1}\). So by definition of \(X_{j-1,s-1}\), there is no vertex of color \(\operatorname{col}(v)\) in \(X_{j-1,s-1}\). Since \(\operatorname{col}(u)=\operatorname{col}(v)\), we have \(u\in X_{j-1,s}\setminus X_{j-1,s-1}\). So, \((u,v)\) is added to \(h_{j,s}\) in the \(s^{th}\) iteration only, where \(v\) is in \(Y_{j,s}\). Therefore, \(X_{j-1,s}\) contains exactly one vertex of color \(\operatorname{col}(u)\). This implies that \(v\) cannot be paired with any vertex except \(u\).

Case 2: The vertex \(v\) was encountered before the \(s^{th}\) iteration, and \(i\leqslant(s-1)\) is the most recent such iteration. This means that there exists \(u^{\prime}\) in the old graph (i.e., \(X_{j-1,s-1}\)) such that \(h_{j,s-1}(u^{\prime})=v\). Since \(h_{j,s-1}\) is a bijection by the induction hypothesis, \(u^{\prime}\) is the only preimage of \(v\) under \(h_{j,s-1}\). We show that \(u=u^{\prime}\). Observe that there is a vertex \(w\in\operatorname{Des}(g_{s})\) in \(X_{j-1}\) such that \(\operatorname{col}(w)=\operatorname{col}(I(s,j))\). By the algorithm, \(\operatorname{col}(u)|\operatorname{col}(I(s,j))\). So, \(u\in\operatorname{Des}(w)\).
Similarly, there is a vertex \(w^{\prime}\in\operatorname{Des}(g_{i})\) in \(X_{j-1}\) such that \(\operatorname{col}(w^{\prime})=\operatorname{col}(I(i,j))\) and by the algorithm \(\operatorname{col}(u^{\prime})|\operatorname{col}(I(i,j))\). So \(u^{\prime}\in\operatorname{Des}(w^{\prime})\). Since \(\operatorname{col}(u)=\operatorname{col}(u^{\prime})\), \(\operatorname{col}(u^{\prime})|\operatorname{col}(I(i,j))\) and \(\operatorname{col}(u)|\operatorname{col}(I(s,j))\), we conclude that \(\operatorname{col}(u)\) divides \(\operatorname{gcd}(\operatorname{col}(I(i,j)),\operatorname{col}(I(s,j)))\). So, by Claim 37, \(\operatorname{col}(u)|\operatorname{col}(I(i,s))\). Now we consider the subgraph of \(X_{j-1}\) induced by \(\operatorname{Des}(g_{i})\cap\operatorname{Des}(g_{s})\). If \(x\in\operatorname{Des}(g_{i})\cap\operatorname{Des}(g_{s})\) is the vertex with color \(\operatorname{col}(I(i,s))\), then this subgraph is formed by the descendants of \(x\). Since the descendants of \(x\) are exactly the vertices in \(\operatorname{Des}(g_{i})\) and \(\operatorname{Des}(g_{s})\) with colors as factors of \(\operatorname{col}(I(i,s))\), both \(u\) and \(u^{\prime}\) are in \(\operatorname{Des}(x)\). Now, \(\operatorname{Des}(x)\) has a unique vertex of a particular color. Therefore, as \(u\) and \(u^{\prime}\) have the same color, \(u=u^{\prime}\). Hence, we have proved that \(h_{j,s}\) is one-to-one in both cases. Therefore, we can conclude that \(h_{j,s}:X_{j-1,s}\to Y_{j,s}\) is a bijection for all \(1\leqslant s\leqslant j-1\). \(\Box\)

From the above claim, we can conclude that in the \(j^{th}\) iteration of the outer while loop, the identification process done in Line 12 in Algorithm 3 via the mapping \(h_{j,j-1}\) is correct. Next, we show that the graph \(X_{j}\) (output in Line 13), derived after the identification process on \(X_{j-1}\) and \(Y_{j}\), is indeed isomorphic to \(R_{3}(j)\).
For \(j\geqslant 2\), we define \(f_{j}:V(X_{j})\to V(R_{3}(j))\) in the following manner: \[f_{j}(x)=\begin{cases}f_{j-1}(x)&\text{if $x\in V(X_{j-1})$}\\ y&\text{otherwise}\end{cases}\] (1) To show that \(f_{j}\) is well defined, it is enough to argue that for each \(x\in V(X_{j})\setminus V(X_{j-1})\), there exists a unique \(y\in V(R_{3}(j))\setminus V(R_{3}(j-1))\) such that \(\operatorname{col}(y)=\operatorname{col}(x)\). Observe that \(V(X_{j})\setminus V(X_{j-1})\) is the set of vertices of \(Y_{j}=HD[\operatorname{col}(g_{j})]\) that have not been identified in the \(j^{th}\) iteration. So, for any vertex \(x\in V(X_{j})\setminus V(X_{j-1})\), \(\operatorname{col}(x)\) divides \(\operatorname{col}(g_{j})\) but \(\operatorname{col}(x)\) does not divide \(\operatorname{col}(I(i,j))\) for any \(i<j\). This means, for each such \(x\), there exists \(y\) in \(V(R_{3}(j))\setminus V(R_{3}(j-1))\) with color \(\operatorname{col}(x)\) and this \(y\) is unique since \(V(R_{3}(j))\setminus V(R_{3}(j-1))\) contains the vertices of \(\operatorname{Des}(g_{j})\) that are not descendant reachable from any \(g_{i}\) where \(i<j\). The uniqueness of colors in \(Y_{j}=HD[\operatorname{col}(g_{j})]\) also implies that \(f_{j}\) is a bijection.
Now to show that \(f_{j}\) is an isomorphism between \(X_{j}\) and \(R_{3}(j)\), it remains to show that \(f_{j}\) preserves edge relations between \(X_{j}\) and \(R_{3}(j)\).
Here, we want to emphasize that it might happen that two vertices \(x,x^{\prime}\) in \(X_{j-1}\) are not adjacent to each other, but after the vertex identification process in the \(j^{th}\) iteration, there is an edge between \(x\) and \(x^{\prime}\) in \(X_{j}\). Moreover, through the following claim, we want to show that this incident has a correspondence in \(R_{3}(j)\).
\(\rhd\) Claim 41. Let \(x,x^{\prime}\) be two vertices in the old graph (i.e., \(X_{j-1}\)) that take part in the vertex identification process in the \(j^{th}\) iteration, i.e., \(x,x^{\prime}\in X_{j-1,j-1}\). Then, \(\{x,x^{\prime}\}\notin E(X_{j-1})\), but \(\{x,x^{\prime}\}\in E(X_{j})\) if and only if \(\{f_{j-1}(x),f_{j-1}(x^{\prime})\}\notin E(R_{3}(j-1))\), but \(\{f_{j}(x),f_{j}(x^{\prime})\}\in E(R_{3}(j))\).
**Proof of claim:** As \(f_{j-1}\) is an isomorphism between \(X_{j-1}\) and \(R_{3}(j-1)\), we have \(\{x,x^{\prime}\}\notin E(X_{j-1})\) if and only if \(\{f_{j-1}(x),f_{j-1}(x^{\prime})\}\notin E(R_{3}(j-1))\).
Now, assume that \(\{x,x^{\prime}\}\notin E(X_{j-1})\) but \(\{x,x^{\prime}\}\in E(X_{j})\). Since \(x,x^{\prime}\in X_{j-1,j-1}\), the vertices \(x,x^{\prime}\) get identified with some elements \(z,z^{\prime}\) respectively in \(Y_{j}\) such that \(\{z,z^{\prime}\}\in E(Y_{j})\). Also, \(col(x)=col(z)=col(f_{j}(x))\) and \(col(x^{\prime})=col(z^{\prime})=col(f_{j}(x^{\prime}))\). Since \(Y_{j}=HD[col(g_{j})]\) and \(\{z,z^{\prime}\}\in E(Y_{j})\), by definition either \(col(z)=col(z^{\prime})\cdot p\) or \(col(z^{\prime})=col(z)\cdot p\) for some prime \(p\). Therefore, either \(col(f_{j}(x))=col(f_{j}(x^{\prime}))\cdot p\) or \(col(f_{j}(x^{\prime}))=col(f_{j}(x))\cdot p\) for some prime \(p\). Moreover, \(f_{j}(x),f_{j}(x^{\prime})\in Des(g_{j})\). Hence, by (2) of Remark 36, \(\{f_{j}(x),f_{j}(x^{\prime})\}\in E(R_{3}(j))\).
Conversely, assume that \(\{f_{j}(x),f_{j}(x^{\prime})\}\in E(R_{3}(j))\). Since \(x,x^{\prime}\in X_{j-1,j-1}\), \(x\) and \(x^{\prime}\) must have been identified with some vertices \(z\) and \(z^{\prime}\) in \(Y_{j}\) respectively such that \(col(x)=col(z)\) and \(col(x^{\prime})=col(z^{\prime})\). Now, because of (2) of Remark 36, \(\{f_{j}(x),f_{j}(x^{\prime})\}\in E(R_{3}(j))\) implies either \(col(f_{j}(x))=col(f_{j}(x^{\prime}))\cdot p\) or \(col(f_{j}(x^{\prime}))=col(f_{j}(x))\cdot p\) for some prime \(p\). Therefore, either \(col(z)=col(z^{\prime})\cdot p\) or \(col(z^{\prime})=col(z)\cdot p\). So, \(\{z,z^{\prime}\}\in E(Y_{j})\). Hence, after the vertex identification process, \(\{x,x^{\prime}\}\in E(X_{j})\). \(\Box\)
Now to show the preservation of edge relations, we consider the following cases, not necessarily disjoint:
(a) Let \(x,x^{\prime}\in V(X_{j-1})\), i.e., both the vertices are from the graph obtained in the previous iteration of the outer while loop. Then, by the definition of \(f_{j}\) in (1), \(f_{j}(x)=f_{j-1}(x)\) and \(f_{j}(x^{\prime})=f_{j-1}(x^{\prime})\). Since by the induction hypothesis \(f_{j-1}\) is an isomorphism between \(X_{j-1}\) and \(R_{3}(j-1)\), \(\{x,x^{\prime}\}\in E(X_{j-1})\)\(\iff\{f_{j-1}(x),f_{j-1}(x^{\prime})\}\in E(R_{3}(j-1))\). The remaining case is covered by Claim 41.
(b) Let \(x,x^{\prime}\) be two vertices in \(X_{j}\) that appear in the '\(Y_{j}\)-part' of \(X_{j}\). More precisely, \(x,x^{\prime}\) belong to the disjoint union of \(V(X_{j})\setminus V(X_{j-1})\) (which is the set of vertices that are newly introduced in the \(j^{th}\) iteration of the outer while loop but not identified in the same) and \(X_{j-1,j-1}\) (which corresponds to the set of vertices that are the result of vertex identification of \(X_{j-1,j-1}\) and \(Y_{j,j-1}\) in the \(j^{th}\) iteration). Since \(Y_{j}=HD[col(g_{j})]\cong R_{3}(G)[Des(g_{j})]\) by Remark 36, \(\{x,x^{\prime}\}\in E(X_{j})\iff\{f_{j}(x),f_{j}(x^{\prime})\}\in E(R_{3}(j))\).
(c) Let \(x\) be a vertex from the old graph \(X_{j-1}\) which has not been identified in the \(j^{th}\) iteration, i.e., \(x\in V(X_{j-1})\setminus X_{j-1,j-1}\). Let \(x^{\prime}\) be a newly added vertex which has not been identified in the \(j^{th}\) iteration, i.e., \(x^{\prime}\in V(X_{j})\setminus V(X_{j-1})\). It is not hard to see that \(\{x,x^{\prime}\}\) is not an edge of the disjoint union of \(X_{j-1}\) and \(Y_{j}\) (before the identification process). Since neither \(x\) nor \(x^{\prime}\) has taken part in the identification process in this iteration, we have \(\{x,x^{\prime}\}\notin E(X_{j})\). Now, as \(f_{j}\) is a bijection, we also have the following: \(f_{j}(x)\in V(R_{3}(j-1))\setminus Des(g_{j})\) and \(f_{j}(x^{\prime})\in V(R_{3}(j))\setminus V(R_{3}(j-1))\). Since \(f_{j}(x)\) and \(f_{j}(x^{\prime})\) are not in the same \(Des(u)\) for any vertex \(u\) in \(R_{3}(j)\), \(\{f_{j}(x),f_{j}(x^{\prime})\}\) is not an edge in \(R_{3}(j)\).

Thus, it is proved that \(f_{j}\) is an isomorphism between \(X_{j}\) and \(R_{3}(j)\). So, we can conclude that \(X_{m}\cong R_{3}(m)\). It is easy to see that \(R_{3}(m)\) is \(R_{3}(G)\). This concludes the proof of Claim 39.
Hence, the algorithm is correct and we can return an isomorphic copy of \(R_{3}(G)\) from \(R_{4}(G)\).
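To make the identification step concrete, the following toy sketch merges two colored graphs by identifying equal-colored vertices, mimicking the role of \(h_{j,j-1}\). The encoding (edge sets plus color maps) and the helper name are our own simplification, not the paper's Algorithm 3, which additionally tracks divisibility of colors to decide eligibility.

```python
def identify_by_color(X_edges, X_col, Y_edges, Y_col, eligible):
    """Merge graph Y into graph X by identifying each eligible vertex of X
    with the unique same-colored vertex of Y (the role of h_{j,j-1})."""
    # Y has at most one vertex per color, so color -> vertex is well defined.
    color_to_y = {c: v for v, c in Y_col.items()}
    h = {u: color_to_y[X_col[u]] for u in eligible if X_col[u] in color_to_y}
    rename = {v: u for u, v in h.items()}  # identified Y-vertices keep X names
    merged_edges = set(X_edges)
    for a, b in Y_edges:
        merged_edges.add((rename.get(a, a), rename.get(b, b)))
    merged_col = dict(X_col)
    merged_col.update({v: c for v, c in Y_col.items() if v not in rename})
    return merged_edges, merged_col, h

# Toy run: colors play the role of element orders, pairing x1<->y1, x2<->y2.
X_edges = {("x2", "x1")}
X_col = {"x1": 1, "x2": 2}
Y_edges = {("y2", "y1"), ("y4", "y2")}
Y_col = {"y1": 1, "y2": 2, "y4": 4}
merged_edges, merged_col, h = identify_by_color(
    X_edges, X_col, Y_edges, Y_col, eligible={"x1", "x2"})
```

In the toy run, the vertex `y4` survives as a genuinely new vertex, while the edge `(y4, y2)` is rewritten to `(y4, x2)`, exactly as edges incident to identified vertices are inherited by \(X_{j}\).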
## 7 Solution to Cameron's Question
Cameron asked the following question: "Question 2 [8]: Is there a simple algorithm for constructing the directed power graph or the enhanced power graph from the power graph, or the directed power graph from the enhanced power graph?"
Suppose we are given a power graph (or an enhanced power graph) of some finite group \(G\) as input, i.e., \(\Gamma=\mathsf{Pow}(G)\) (or, \(\Gamma=\mathsf{EPow}(G)\)). However, the group \(G\) is not given. As discussed in Section 5, we can find a CCG-set for \(G\) from the input graph. Next, we describe how to obtain a graph isomorphic to \(R_{4}(G)\) from the CCG-set. From the vertices corresponding to a CCG-set of \(G\), say \(\{g_{1},g_{2},\ldots,g_{m}\}\), we get the information about their degree in \(\Gamma\) and the pairwise common neighborhood of \(g_{i}\) and \(g_{j}\) in the respective graph. This immediately gives us \(R_{4}(G)\). From \(R_{4}(G)\), we know how to get back an isomorphic copy of \(\mathsf{DPow}(G)\) using the results in Section 6. All the steps in the process can be performed in polynomial time. Therefore, we get a solution to Cameron's question.
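For contrast with the reconstruction problem above, computing \(\mathsf{DPow}(G)\) when the group itself is available is straightforward. A minimal sketch (ours) for the cyclic group \(\mathbb{Z}_{n}\) under addition, where \(v\) is a power of \(u\) exactly when \(v\) is a multiple of \(u\) modulo \(n\):

```python
def directed_power_graph_Zn(n):
    """Arcs u -> v (u != v) of the directed power graph of the cyclic group
    (Z_n, +): v is a 'power' of u exactly when v lies in the subgroup <u>,
    i.e. v is a multiple of u modulo n."""
    arcs = set()
    for u in range(n):
        subgroup = {(k * u) % n for k in range(n)}  # the cyclic subgroup <u>
        arcs |= {(u, v) for v in subgroup if v != u}
    return arcs

arcs = directed_power_graph_Zn(4)
```

The point of Cameron's question, of course, is to recover this structure when only the undirected (enhanced) power graph is given and the group is not.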
|
2303.15144 | Joint Multi-Echo/Respiratory Motion-Resolved Compressed Sensing
Reconstruction of Free-Breathing Non-Cartesian Abdominal MRI | We propose a novel respiratory motion-resolved MR image reconstruction method
that jointly treats multi-echo k-space raw data. Continuously acquired
non-Cartesian multi-echo/multi-coil k-space data with free breathing are
sorted/binned into the motion states from end-expiratory to end-inspiratory
phases based on a respiratory motion signal. Temporal total variation applied
to the motion state dimension of each echo is then coupled in the $\ell_2$
sense for joint reconstruction of the multiple echoes. Reconstructed source
images of the proposed method are compared with conventional echo-by-echo
motion-resolved reconstruction, and R2* of the proposed and echo-by-echo
methods are compared with respect to a clinical reference. We demonstrate that
inconsistency between echoes is successfully suppressed in the proposed joint
reconstruction method, producing high-quality source images and R2*
measurements compared to clinical reference. | Youngwook Kee, MungSoo Kang, Seongho Jeong, Gerald Behr | 2023-03-27T12:27:26Z | http://arxiv.org/abs/2303.15144v2 | Joint Multi-Echo/Respiratory Motion-Resolved Compressed Sensing Reconstruction of Free-Breathing Non-Cartesian Abdominal MRI
###### Abstract
We propose a novel respiratory motion-resolved MR image reconstruction method that jointly treats multi-echo k-space raw data. Continuously acquired non-Cartesian multi-echo/multi-coil k-space data with free breathing are sorted/binned into the motion states from end-expiratory to end-inspiratory phases based on a respiratory motion signal. Temporal total variation applied to the motion state dimension of each echo is then coupled in the \(\ell_{2}\) sense for joint reconstruction of the multiple echoes. Reconstructed source images of the proposed method are compared with conventional echo-by-echo motion-resolved reconstruction, and R2* of the proposed and echo-by-echo methods are compared with respect to a clinical reference. We demonstrate that inconsistency between echoes is successfully suppressed in the proposed joint reconstruction method, producing high-quality source images and R2* measurements compared to clinical reference.
Keywords: Multi-Echo Non-Cartesian MRI · Motion-Resolved Image Reconstruction · Collaborative/Vectorial Total Variation.
## 1 Introduction
In multi-echo gradient-echo (mGRE) MRI, k-space raw data are acquired at multiple time points (echo times or TEs). Reconstructed images at different TEs are utilized in quantitative tissue parameter mapping, such as proton density fat fraction (PDFF) [3, 18], R2* relaxation rate [8], or quantitative susceptibility mapping (QSM) [13, 20]. Clinical applications include fatty liver disease [15] and hepatic iron overload assessment [9]. Recent advances in non-Cartesian k-space sampling trajectories have facilitated free-breathing mGRE MRI [1, 14], obviating the need for patients to hold their breath, which is challenging for children and some adults. In addition, respiratory motion-resolved compressed sensing (CS) reconstruction, such as XD-GRASP [5], has been demonstrated to enable motion-resolved PDFF/R2*/QSM [10, 11, 19], mitigating the confounding factor of respiratory motion.
Despite these advances, applying respiratory motion-resolved CS reconstruction to highly-undersampled mGRE data is not straightforward. The question
arises as to what mathematical machinery needs to be devised/incorporated for joint reconstruction of multiple echoes to prevent potential "echo inconsistency" produced by echo-by-echo CS reconstruction that necessarily requires iterative optimization. Little attention has been paid to the optimal treatment of highly-undersampled mGRE imaging data, which is crucial in motion-tolerant quantitative body MRI. In this paper, inspired by vectorial/collaborative total variation (TV) in color image denoising/deblurring/inpainting [4, 6], we propose a novel treatment for mGRE image reconstruction where all echoes are jointly reconstructed.
The remainder of this paper is organized as follows: Section 2 describes the details of the proposed joint multi-echo/multi-coil/respiratory motion-resolved image reconstruction method with numerical implementation, Section 3 describes the methods and materials used to demonstrate the effectiveness of the proposed method, Section 4 reports the results and compares them with existing methods, and finally, Section 5 provides a discussion of our findings and a conclusion.
## 2 Theory
### Problem Formulation
Let \(N\) be the image size, \(C\) be the number of coils, \(E\) be the number of echoes, \(T\) be the number of motion states, and \(M\) be the number of measurements in k-space. We are interested in reconstructing \(u_{k}\in\mathbb{C}^{N\times T}\) in image space from \(y_{j,k}\in\mathbb{C}^{\lfloor M/T\rfloor\times T}\) in k-space for \(k=1,\ldots,E\) and \(j=1,\ldots,C\). Notice that the total number of k-space measurements \(M\) is assumed to be sorted/binned into \(\lfloor M/T\rfloor\times T\) from end-expiratory to end-inspiratory motion state according to a motion signal [5, 10, 11]. Therefore, each motion state is highly undersampled by a factor of \(T\), and the following model-based MRI reconstruction is considered:
\[\underset{u_{1},\ldots,u_{E}}{\text{minimize}}\,\frac{1}{2}\sum_{j=1}^{C}\sum _{k=1}^{E}||\sqrt{D}(FS_{j}u_{k}-y_{j,k})||_{F}^{2}+\lambda\cdot\mathcal{R}(u_ {1},\ldots,u_{E}), \tag{1}\]
where \(S_{j}:\mathbb{C}^{N\times T}\rightarrow\mathbb{C}^{N\times T}\) is the \(j\)-th coil sensitivity map, \(F:\mathbb{C}^{N\times T}\rightarrow\mathbb{C}^{\lfloor M/T\rfloor\times T}\) is the nonuniform Fourier transform, and \(D:\mathbb{C}^{\lfloor M/T\rfloor\times T}\rightarrow\mathbb{C}^{\lfloor M/ T\rfloor\times T}\) is the density compensation factor [16, 17]. \(||\cdot||_{F}\) is the Frobenius norm in complex space and \(\mathcal{R}(u_{1},\ldots,u_{E})\) is the regularization term that can be defined via \(\ell_{2}\) coupling along the echo as follows.
\[\mathcal{R}(u_{1},\ldots,u_{E}) =\sum_{\mathbf{x}}\left(|\partial_{t}u_{1}(\mathbf{x})|^{2}+| \partial_{t}u_{2}(\mathbf{x})|^{2}+\cdots+|\partial_{t}u_{E}(\mathbf{x})|^{2 }\right)^{1/2}\] \[=||\partial_{t}\mathbf{u}||_{2,1}, \tag{2}\]
where \(\mathbf{u}=(u_{1},\ldots,u_{E})\) and \(\partial_{t}\) is the partial derivative or finite difference along the motion dimension (time). This is known as collaborative/vectorial TV [4, 6], initially considered in RGB color image processing. _Here, we make a key observation that multiple echoes in mGRE data can be combined in the \(\ell_{2}\) sense
for joint multi-echo/respiratory motion-resolved CS reconstruction (henceforth, joint TE/MR recon). This approach takes into account the echo signal evolution, which is currently absent in the conventional echo-by-echo motion-resolved reconstruction (henceforth, echo-by-echo MR recon)._
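A toy numerical check of this coupling (our own sketch, not the authors' implementation; array shapes follow (2) with \(E\) echoes, \(N\) voxels, and \(T\) motion states), alongside the \(\ell_{1}\)-coupled alternative discussed in Remark II:

```python
import numpy as np

def collaborative_tv(u):
    """l_{2,1} penalty of (2) for u of shape (E, N, T): forward difference
    along the motion-state axis, l2 across echoes, sum over voxels/states."""
    dt = np.diff(u, axis=-1)                          # shape (E, N, T-1)
    return np.sum(np.sqrt(np.sum(np.abs(dt) ** 2, axis=0)))

def decoupled_tv(u):
    """l_{1,1} alternative of (9): echoes penalized independently."""
    return np.sum(np.abs(np.diff(u, axis=-1)))

# Two echoes sharing the same motion edge: the l2 coupling charges sqrt(2)
# for the shared edge, while the l1 version charges 2 -- shared edges are
# cheaper under joint coupling, which encourages consistent edge locations
# across echoes.
u = np.zeros((2, 1, 3))
u[:, 0, 2] = 1.0
```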
### Numerical Optimization
The collaborative/vectorial TV with \(\ell_{2}\) coupling is convex in \(\mathbf{u}\), and we rewrite (1) with the dualization of the data consistency term [12, 2] as follows.
\[\min_{u_{1},\ldots,u_{E}}\max_{\begin{subarray}{c}(\xi_{1},\ldots,\xi_{E})\in K \\ \zeta_{1},1,\ldots,\zeta_{C,E}\end{subarray}}\left(\sum_{j=1}^{C}\sum_{k=1}^{E} \langle\sqrt{D}(FS_{j}u_{k}-y_{j,k}),\zeta_{j,k}\rangle-\frac{1}{2}||\zeta_{j, k}||_{2}^{2}\right)+\sum_{k=1}^{E}\langle\partial_{t}u_{k},\xi_{k}\rangle, \tag{3}\]
where the convex set \(K:=\{(\xi_{1},\ldots,\xi_{E})\in\mathbb{C}^{N\times T}\times\cdots\times \mathbb{C}^{N\times T}:(|\xi_{1}|^{2}+\cdots+|\xi_{E}|^{2})^{1/2}\leq\lambda\}\). The primal-dual hybrid gradient (PDHG) algorithm [2] is suitable to solve (3). By applying gradient ascent for the dual variables and gradient descent for the primal variable, we obtain the following update equations:
\[\xi_{k}^{n+1} \leftarrow\mathrm{proj}_{K}(\xi_{k}^{n}+\sigma\partial_{t}\bar{ u}_{k}^{n}), \tag{4}\] \[\zeta_{j,k}^{n+1} \leftarrow\mathrm{prox}(\zeta_{j,k}^{n}+\sigma(\sqrt{D}(FS_{j} \bar{u}_{k}^{n}-y_{j,k}))),\] (5) \[u_{k}^{n+1} \leftarrow u_{k}^{n}-\tau\left(\sum_{j=1}^{C}(\sqrt{D}FS_{j})^{H}\zeta_{j, k}^{n}+\partial_{t}^{H}\xi_{k}^{n}\right),\] (6) \[\bar{u}_{k}^{n+1} \gets 2u_{k}^{n+1}-u_{k}^{n}, \tag{7}\]
for all \(k\) and \(j\). The projection onto \(K\) and proximal operator are performed by
\[\xi_{k}\leftarrow\frac{\tilde{\xi}_{k}}{\max(1,(|\tilde{\xi}_{1}|^{2}+\cdots+|\tilde{\xi}_{E}|^{2})^{1/2}/\lambda)},\quad\zeta_{j,k}\leftarrow\frac{\tilde{\zeta}_{j,k}}{1+\sigma}. \tag{8}\]
The step sizes \(\sigma\) and \(\tau\) are both set to \(1/8\). The additional update step for the primal variable in (7) is an extragradient step for convergence [2].
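The two pointwise operations in (8) are simple to implement. Below is an illustrative NumPy sketch (ours, with array shapes simplified), treating the echo dimension as axis 0 and following the rescaling in (8):

```python
import numpy as np

def proj_K(xi_tilde, lam):
    """Pointwise projection of (8): l2-norm across the echo axis (axis 0),
    rescaled whenever the echo-stacked dual leaves the ball of radius lam."""
    norm = np.sqrt(np.sum(np.abs(xi_tilde) ** 2, axis=0, keepdims=True))
    return xi_tilde / np.maximum(1.0, norm / lam)

def prox_zeta(zeta_tilde, sigma):
    """Proximal step of (8) for the dualized data-consistency term."""
    return zeta_tilde / (1.0 + sigma)

xi = proj_K(np.array([[3.0], [4.0]]), lam=1.0)   # echo-norm 5 -> rescaled to 1
inside_pt = np.array([[0.1], [0.2]])             # already inside the ball
```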
Remark I: The calculation of \((\sqrt{D}FS_{j})^{H}\zeta_{j,k}^{n}\) for \(j=1,\ldots,C\), which is the most computationally intensive step, can be distributed across multiple GPUs using the Message Passing Interface (MPI) to achieve speedup.
Remark II: The \(\ell_{1}\) coupling between echoes for \(\mathcal{R}(u_{1},\ldots,u_{E})\) can be used instead, which is expressed as
\[||\partial_{t}\mathbf{u}||_{1,1}=\sum_{\mathbf{x}}|\partial_{t}u_{1}(\mathbf{ x})|+|\partial_{t}u_{2}(\mathbf{x})|+\cdots+|\partial_{t}u_{E}(\mathbf{x})|. \tag{9}\]
Notice that there is _no effective coupling_ between the echoes, as the gradient of the objective function with respect to each echo is independent of the others, possibly causing "echo inconsistency" in the final image. The only change in the PDHG algorithm is the convex set \(K\) on which the dual variable \(\xi_{k}\) is projected, which is performed as \(\xi_{k}\leftarrow\tilde{\xi}_{k}/\max(1,|\tilde{\xi}_{k}|/\lambda)\).
## 3 Methods
### Synthetic Image (Toy Example)
To illustrate the concept of collaborative/vectorial TV in the idea of joint TE/MR recon, we designed a synthetic image reconstruction experiment as follows: 2D radial and variable density spiral k-space sampling trajectories were designed using the following parameters: Gmax = 80 mT/m, SR = 200 T/m/s, FOV = 25 cm, in-plane resolution = 1\(\times\)1 mm\({}^{2}\), R = 6 (acceleration). The generated trajectories in Fig. 1(A) were used to sample the k-space (Fourier space) data of the input RGB image shown in Fig. 1(B) using the nonuniform Fourier transform (NUFFT). Notice that there is misalignment between the RGB color channels along the vertical axis, which was introduced by shifting the G and B channels. This is illustrated in Fig. 1(I-L), where transparent line profiles are drawn along the vertical dashed line in Fig. 1(B).

Figure 1: Synthetic image experiment (see Sections 3.1 and 4 for description).
The R, G, and B channels can be considered as multi-echo images, where each channel represents the images at TE1, TE2, and TE3, respectively. The intensity profiles of each color channel, or equivalently each echo, can be viewed as representing different motion states. We formulate the following reconstruction problem: Let \(m_{j}:(\Omega_{\mathcal{F}}\subset\mathbb{R}^{2})\rightarrow\mathbb{C}\) be the \(j\)-th channel of the vector-valued function in Fourier space (k-space) \(m:(\Omega_{\mathcal{F}}\subset\mathbb{R}^{2})\rightarrow\mathbb{C}^{3},(k_{x},k_{y})\mapsto(m_{1}(k_{x},k_{y}),m_{2}(k_{x},k_{y}),\,m_{3}(k_{x},k_{y}))\). The goal is to reconstruct \(\mathbf{u}:(\Omega\subset\mathbb{R}^{2})\rightarrow\mathbb{C}^{3},(x,y) \mapsto(u_{1}(x,y),u_{2}(x,y),u_{3}(x,y))\) from the undersampled input Fourier samples \(m\) by solving
\[\underset{u_{1},u_{2},u_{3}}{\text{minimize}}\,\frac{1}{2}\sum_{k=1}^{3}\int_ {\Omega_{\mathcal{F}}}\|\sqrt{d}(\mathcal{F}u_{k}-m_{k})\|_{2}^{2}+\lambda \int_{\Omega}||\partial_{y}\mathbf{u}||_{1}, \tag{10}\]
where \(\mathcal{F}:\mathbb{C}\rightarrow\mathbb{C}\) is the nonuniform Fourier transform operator, and \(d\) is the density compensation factor. The channels (or echoes) can be coupled with the \(\ell_{2}\) norm as follows.
\[\int_{\Omega}||\partial_{y}\mathbf{u}||_{2}=\int_{\Omega}\left(|\partial_{y}u_ {1}|^{2}+|\partial_{y}u_{2}|^{2}+|\partial_{y}u_{3}|^{2}\right)^{1/2}. \tag{11}\]
Note that the above variational formulation of the generic image reconstruction problem, (10) and (11), coincides with (1) and (2) for MRI reconstruction when \(C=1\) (single coil). To evaluate the effectiveness of the proposed approach, we used PDHG to solve (10) and compared the results with gridding and channel-by-channel (or echo-by-echo) CS reconstructions.
### Human Subjects and Data Acquisition
With IRB approval and informed consent, 7 subjects were recruited, including 4 adult patients with known and suspected iron overload and 3 healthy adult volunteers. The subjects underwent imaging on a 3T clinical MRI scanner (Signa Premier, GE Healthcare, Waukesha, WI) using a 3D multi-echo cones MRI method with free breathing (FB Cones) based on the implementation described in [7, 14]. Subsequently, a commercially available Cartesian-based mGRE sequence (IDEAL-IQ, GE Healthcare, Waukesha, WI) was performed with a single breath-hold as a clinical reference (BH Cartesian).
One of the healthy subjects underwent two FB Cones acquisitions, one with regular breathing and the other with deep breathing. The imaging parameters for FB Cones were as follows: Initial TE/\(\Delta\)TE/TR = 0.032/1.4-1.5/11.4-11.5 ms; #TEs = 6; FA = 3\({}^{\circ}\); resolution = 2\(\times\)2\(\times\)2 mm\({}^{3}\); rBW = 1106-1250 Hz/Px; readout duration = \(\sim\)1 ms. The imaging parameters for BH Cartesian were set
to the clinical standard for iron quantification, which can be completed within a single breath-hold. These parameters include 4X parallel imaging acceleration along the phase and slice encoding directions and a slice thickness of 6 mm.
### Image Quality Assessment and ROI-based R2* Measurements
To assess image quality, organ interfaces, such as the liver dome, pulmonary vasculature/bronchi, and liver/kidney interfaces, were closely examined visually and/or qualitatively. To facilitate this process, areas of interest were magnified, and the sharpness of the root sum of squares of the reconstructed multi-echo images (T2*w) was visually inspected using the 'imgradient3' function in MATLAB. R2* maps were computed using magnitude-based exponential fitting from reconstructed images from FB Cones as well as BH Cartesian. To quantify the R2* values in the liver parenchyma, we placed a 6-voxel radius region of interest (ROI) in 3 consecutive coronal slices of the liver, avoiding large vessels that could affect the R2* measurements. Then, the mean and standard deviation were calculated.
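As an illustration of the fitting step, the magnitude-based exponential fit can be sketched as a log-linear least-squares problem. This is our own simplification (the paper does not specify the solver), and the echo times below are hypothetical values patterned on the acquisition parameters in Section 3.2:

```python
import numpy as np

def fit_r2star(te, mag):
    """Fit |S(TE)| = S0 * exp(-R2* * TE) by least squares on log-magnitude.
    te in seconds, mag > 0; returns (S0, R2* in 1/s)."""
    slope, intercept = np.polyfit(te, np.log(mag), 1)
    return np.exp(intercept), -slope

te = 0.032e-3 + 1.45e-3 * np.arange(6)   # 6 echoes, ~1.45 ms spacing
mag = 100.0 * np.exp(-230.0 * te)        # noiseless signal with R2* = 230 1/s
s0, r2s = fit_r2star(te, mag)
```

In practice a nonlinear (e.g. Levenberg-Marquardt) fit is often preferred at low SNR, since the log transform amplifies noise at late echoes.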
## 4 Results
Fig. 1(C-H) shows reconstructed RGB color images from the Fourier samples "acquired" with the radial and variable density spiral trajectories in Fig. 1(A). The color noise introduced by misalignment in the R, G, and B channels of the input image (Fig. 1(B)) was effectively suppressed in the joint reconstructions (Fig. 1(G, H)) regardless of the trajectory type. In contrast, gridding (Fig. 1(C, D)) and channel-wise reconstructions (Fig. 1(E, F)) preserved the color noise/misalignment. This difference is better appreciated in the line profiles (Fig. 1(I-L)) extracted from a vertical line passing through the center of the images, which is identical to the white dotted line in Fig. 1(B). The misaligned edges between the R, G, and B channels in the input image were reconstructed to have common edges in the joint reconstruction (Fig. 1(J, L)), as opposed to the channel-wise reconstruction (Fig. 1(I, K)).

Figure 2: Motion-averaged, echo-by-echo MR (A, C, E, and G) and joint TE/MR (B, D, F, and H) reconstructions of end-expiration/inspiration of a healthy volunteer with deep breathing. The areas indicated by the color arrows are magnified in separate images and labeled with the same color bounding box.
Fig. 2 shows reconstructed images of a healthy volunteer who performed deep breathing during the acquisition. Both echo-by-echo motion-resolved (echo-by-echo MR) and joint multi-echo/motion-resolved (joint TE/MR) methods demonstrated the capability to resolve end-expiratory/-inspiratory motion phases, in contrast to motion-averaged reconstruction. To facilitate a visual comparison between echo-by-echo MR and joint TE/MR reconstructions, areas of interest are indicated by colored arrows and magnified for closer examination. The joint TE/MR reconstruction provided a clearer visualization of the liver dome (Fig. 2(A, B, E, and F)) and pulmonary vasculature/bronchi (Fig. 2(G, H)) when compared to the echo-by-echo reconstruction. In addition, the spurious noise present in the echo-by-echo MR reconstruction (Fig. 2(C)) was successfully suppressed in the joint TE/MR reconstruction (Fig. 2(D)).
Figure 3: T2*w, magnitude of gradients, and R2* maps for motion-averaged, echo-by-echo MR, and joint TE/MR reconstructions of a patient with iron overload are shown. The second and third columns show end-expiration, and the fourth and fifth columns show end-inspiration. ROI-based R2* measures are shown at the bottom of the R2* maps.
Fig. 3 displays T2*w, magnitude of gradients, and R2* maps of a patient with iron overload, and compares motion-averaged, echo-by-echo MR, and joint TE/MR reconstructions. The echo-by-echo MR and joint TE/MR reconstructions exhibit sharper image quality than the motion-averaged reconstruction, which shows a blurred liver dome. In the area of interest indicated by the red arrows, both end-expiratory and end-inspiratory phases of the joint TE/MR reconstruction exhibit visually sharper image quality than the echo-by-echo MR reconstruction. ROI-based R2* value obtained from the joint TE/MR reconstruction was closest to that obtained from BH Cartesian (230 s\({}^{-1}\); not shown).
The ROI-based R2* measurements of all the human subjects imaged for this study are reported in Table 1. As shown, the joint TE/MR reconstruction (second column from the right; shown in bold) produced the lowest R2* values compared to the other reconstruction methods and was closest to those obtained from the clinical standard BH Cartesian method.
## 5 Conclusion
In this paper, we have proposed a novel approach for handling multi-echo k-space raw data in respiratory motion-resolved CS reconstruction of highly-undersampled free-breathing 3D cones acquisition for liver R2* MRI. By making a key observation that multi-echo images in MRI are somewhat similar to the R, G, and B channels in a color image and demonstrating its effectiveness in a synthetic image reconstruction experiment, we have utilized vectorial/collaborative TV to suppress inconsistencies between echoes in _in vivo_ experiments. The proposed joint TE/MR reconstruction has been shown to successfully suppress "echo inconsistency" compared to echo-by-echo reconstructions, enabling 5D (3D space + 1D echo + 1D motion state) image reconstruction from continuously acquired non-Cartesian multi-echo k-space data. Future work will include exploring other convex/non-convex combinations of the echoes.
In conclusion, the proposed joint TE/MR reconstruction method successfully suppressed "echo inconsistency" observed in echo-by-echo motion-resolved CS reconstruction of free-breathing 3D multi-echo cones liver MRI.
\begin{table}
\begin{tabular}{|l|r|r|r|r|} \hline & FB Cones & FB Cones & FB Cones & BH Cartesian \\ & motion-averaged & echo-by-echo MR & joint TE/MR & reference \\ \hline \hline HV\#1 & 106.72 \(\pm\) 19.05 & 70.18 \(\pm\) 15.46 & **65.51 \(\pm\) 12.41** & 41.89 \(\pm\) 14.81 \\ HV\#2 & 94.58 \(\pm\) 21.98 & 76.19 \(\pm\) 23.45 & **71.73 \(\pm\) 22.51** & 58.37 \(\pm\) 17.34 \\ HV\#3 (regular) & 117.36 \(\pm\) 25.89 & 87.59 \(\pm\) 14.34 & **80.79 \(\pm\) 13.89** & 62.59 \(\pm\) 18.93 \\ HV\#3 (deep) & 246.08 \(\pm\) 38.41 & 96.13 \(\pm\) 25.53 & **80.75 \(\pm\) 17.68** & 39.35 \(\pm\) 16.79 \\ Patient\#1 & 261.37 \(\pm\) 28.54 & 258.94 \(\pm\) 34.15 & **252.25 \(\pm\) 31.83** & 226.18 \(\pm\) 32.07 \\ Patient\#2 & 311.98 \(\pm\) 49.22 & 280.19 \(\pm\) 49.81 & **264.52 \(\pm\) 47.50** & 224.97 \(\pm\) 30.48 \\ Patient\#3 & 397.15 \(\pm\) 51.95 & 393.47 \(\pm\) 66.39 & **378.08 \(\pm\) 64.77** & 304.43 \(\pm\) 58.74 \\ Patient\#4 & 489.12 \(\pm\) 44.01 & 468.00 \(\pm\) 52.00 & **445.23 \(\pm\) 56.12** & 400.32 \(\pm\) 43.79 \\ \hline \end{tabular}
\end{table}
Table 1: ROI-based R2* measurements of healthy volunteers (HVs) and patients.
2309.00274 | Mathematical models of Plasmodium vivax transmission: a scoping review | Plasmodium vivax is one of the most geographically widespread malaria
parasites in the world due to its ability to remain dormant in the human liver
as hypnozoites and subsequently reactivate after the initial infection (i.e.
relapse infections). More than 80% of P. vivax infections are due to hypnozoite
reactivation. Mathematical modelling approaches have been widely applied to
understand P. vivax dynamics and predict the impact of intervention outcomes.
In this article, we provide a scoping review of mathematical models that
capture P. vivax transmission dynamics published between January 1988 and May
2023 to provide a comprehensive summary of the mathematical models and
techniques used to model P. vivax dynamics. We aim to assist researchers
working on P. vivax transmission and other aspects of P. vivax malaria by
highlighting best practices in currently published models and highlighting
where future model development is required. We provide an overview of the
different strategies used to incorporate the parasite's biology, use of
multiple scales (within-host and population-level), superinfection, immunity,
and treatment interventions. In most of the published literature, the rationale
for different modelling approaches was driven by the research question at hand.
Some models focus on the parasites' complicated biology, while others
incorporate simplified assumptions to avoid model complexity. Overall, the
existing literature on mathematical models for P. vivax encompasses various
aspects of the parasite's dynamics. We recommend that future research should
focus on refining how key aspects of P. vivax dynamics are modelled, including
spatial heterogeneity in exposure risk, the accumulation of hypnozoite
variation, the interaction between P. falciparum and P. vivax, acquisition of
immunity, and recovery under superinfection. | Md Nurul Anwar, Lauren Smith, Angela Devine, Somya Mehra, Camelia R. Walker, Elizabeth Ivory, Eamon Conway, Ivo Mueller, James M. McCaw, Jennifer A. Flegg, Roslyn I. Hickson | 2023-09-01T06:10:33Z | http://arxiv.org/abs/2309.00274v3 | # Mathematical models of _Plasmodium vivax_ transmission: a scoping review
###### Abstract
_Plasmodium vivax_ is one of the most geographically widespread malaria parasites in the world, primarily found across South-East Asia, Latin America, and parts of Africa. _P. vivax_ is unique compared to most other _Plasmodium_ parasites due to its ability to remain dormant in the human liver as hypnozoites and subsequently reactivate after the initial infection (i.e. relapse infections). Mathematical modelling approaches have been widely applied to understand _P. vivax_ dynamics and predict the impact of intervention outcomes. Models that capture _P. vivax_ dynamics differ from those that capture _P. falciparum_ dynamics, as they must account for relapses caused by the activation of hypnozoites. In this article, we provide a scoping review of mathematical models that capture _P. vivax_ transmission dynamics published between January 1988 and May 2023. The primary objective of this work is to provide a comprehensive summary of the mathematical models and techniques used to model _P. vivax_ dynamics. In doing so, we aim to assist researchers working on mathematical epidemiology, disease transmission, and other aspects of _P. vivax_ malaria by highlighting best practices in currently published models and highlighting where further model development is required. We categorise _P. vivax_ models according to whether a deterministic or agent-based approach was used. We provide an overview of the different strategies used to incorporate the parasite's biology, use of multiple scales (within-host and population-level), superinfection, immunity, and treatment interventions. In most of the published literature, the rationale for different modelling approaches was driven by the research question at hand. Some models focus on the parasites' complicated biology, while others incorporate simplified assumptions to avoid model complexity. Overall, the existing literature on mathematical models for _P. vivax_ encompasses various aspects of
the parasite's dynamics. We recommend that future research should focus on refining how key aspects of _P. vivax_ dynamics are modelled, including spatial heterogeneity in exposure risk, the accumulation of hypnozoite variation, the interaction between _P. falciparum_ and _P. vivax_, acquisition of immunity, and recovery under superinfection.
**Keywords:**_P. vivax_ malaria, mathematical model, hypnozoites, relapse, malaria model, scoping review
## 1 Introduction
Malaria remains a significant public health problem, with an estimated 247 million cases and 619,000 deaths reported worldwide in 2021 alone [143]. Malaria is most prevalent in the World Health Organisation (WHO) African Region, while the South-East Asia Region has the second-highest estimated malaria burden globally. _Plasmodium vivax_ is currently the most geographically widespread of the malaria parasites, resulting in significant associated global morbidity and mortality [8, 10, 19, 111]. _P. vivax_ has been responsible for approximately 45% of malaria cases in the WHO South-East Asia Region since 2000 and is widely prevalent in countries across Asia, Latin America, and the Pacific Islands [19, 111, 143]. _P. vivax_ has often been overlooked and mistakenly considered as "benign" in the past [111, 112]. More recent research has produced evidence that, in addition to causing severe illness, _P. vivax_ infection can cause long-term health consequences such as anaemia, impaired cognitive development, and chronic kidney disease [12, 23, 72, 131]. The economic impact of _P. vivax_ malaria is also significant, as the disease can lead to decreased productivity, increased healthcare costs, and reduced economic growth in endemic areas [39].
Mathematical modelling is an important tool that allows us to understand dynamic systems in various fields ranging from physics and engineering to social sciences and biology [66]. Mathematical modelling can provide valuable insight into infectious disease dynamics and plays an important role in informing public health policy and decision-making [14, 57]. Infectious disease modelling has been widely used to understand the transmission of malaria, particularly _Plasmodium falciparum_, and the impact of interventions to control and eliminate malaria [82, 124]. Modelling of _P. vivax_ transmission differs from _P. falciparum_ modelling, due to the need to account for recurrent infections caused by the activation of hypnozoites, a dormant liver stage of the parasite.
_P. vivax_ parasites are introduced into the human body through infectious _Anopheles_ mosquito bites. _P. vivax_ parasites then travel to the liver, where they undergo a series of developmental and replication stages [61, 65] before the liver-stage parasites are released into the blood, causing blood-stage infections. Individuals experiencing a blood-stage infection may become symptomatic, with symptoms such as fever and fatigue, or be asymptomatic. One of the significant characteristics of _P. vivax_ infection is that, as part of the parasites' life-cycle, they can remain dormant in the liver for weeks or months [58] as hypnozoites that can cause further blood-stage infections (called relapses) upon reactivation. Importantly, between 79 and 96% of _P. vivax_ cases are due to relapses [2, 31, 54, 116]. It can be challenging to distinguish a relapse from other types of recurrent malaria, such as a reinfection (i.e. malaria due to a new infectious bite) or a recrudescence (i.e. recurrence of malaria due to incomplete elimination of blood-stage infections, often associated with treatment failure) [46]. Relapse dynamics typically follow temperate or tropical phenotypes, relating to the period between primary infection and hypnozoite activation [78]. In tropical regions, relapses occur frequently within a few weeks to a few months, whereas in temperate regions, relapses typically
occur six to 12 months after the initial infection. This variation in relapse frequency relates to vector dynamics and the transmission potential of _P. vivax_. In temperate regions, slower-relapsing hypnozoites may allow the parasites to survive colder months when mosquitoes are less prevalent, whereas, in tropical regions, a faster relapsing frequency may allow the parasite to maximise its transmission potential [18, 55]. As relapses contribute to the majority of blood-stage infections, it is important to capture these relapse dynamics when modelling _P. vivax_ disease transmission.
The methods of incorporating hypnozoites and their associated relapse dynamics vary across the _P. vivax_ modelling literature. Modellers have often adopted the approach of assuming a binary state (presence or absence) for hypnozoites harboured within an individual [28, 42, 59, 60, 116]. The _P. vivax_ hypnozoite reservoir (i.e. the number of hypnozoites) is known to be non-binary [138, 139]. Due to this, more recent _P. vivax_ models have attempted to incorporate the complex hypnozoite dynamics and capture the impact of the hypnozoite reservoir on transmission dynamics [9, 10, 84, 87, 138].
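The binary presence/absence approach can be sketched as a three-compartment ODE model. The sketch below is our own illustration with arbitrary parameter values, not a reproduction of any cited model: hosts are susceptible (S), blood-stage infected (I), or recovered hypnozoite carriers (L), and relapse returns carriers to the infected class:

```python
from scipy.integrate import solve_ivp

def vivax_sil(t, y, lam=0.02, gamma=0.05, f=0.01, kappa=0.002):
    """Toy S-I-L model with a binary hypnozoite state (rates per day).

    lam   -- force of infection from new infectious bites (S -> I)
    gamma -- blood-stage recovery into the carrier state (I -> L)
    f     -- relapse rate, i.e. hypnozoite activation (L -> I)
    kappa -- hypnozoite clearance (L -> S)
    """
    S, I, L = y
    return [-lam * S + kappa * L,
            lam * S + f * L - gamma * I,
            gamma * I - (f + kappa) * L]

# integrate to (approximate) endemic equilibrium
sol = solve_ivp(vivax_sil, (0, 3000), [0.99, 0.01, 0.0], rtol=1e-8, atol=1e-10)
S, I, L = sol.y[:, -1]
# share of incident blood-stage infections attributable to relapse (f*L)
relapse_fraction = 0.01 * L / (0.02 * S + 0.01 * L)
```

Even this toy version reproduces the qualitative point above: at its endemic equilibrium roughly 80% of incident blood-stage infections arise from the relapse term \(fL\) rather than from new bites — although the exact share is an artefact of the arbitrary rates chosen here.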
The methods used to capture _P. vivax_ immunity also vary across the modelling literature. When individuals are first infected with malaria, they naturally develop some level of immunity. This immunity can be defined as the body's state of resistance to the infection, and, with each subsequent infection, this acquired immunity is enhanced [24]. Modellers may consider different types of immunity when modelling _P. vivax_ transmission. This includes immunity against new infections, protection against severe malaria, anti-parasite immunity (i.e. the ability to control parasite density upon infection), clinical immunity (i.e. protection against clinical disease), and transmission-blocking immunity (i.e. immunity that reduces the probability of parasite transmission to mosquitoes) [41, 145, 146].
One of the primary reasons for modelling infectious disease transmission is to understand the potential impact of treatment strategies on incidence. In terms of _P. vivax_, a combination therapy, known as radical cure, is needed to target both the acute infection and the dormant hypnozoite reservoir [103, 130, 135]. The two drugs are: (i) a drug that clears parasites from the blood (such as chloroquine or artemisinin-based combination therapy); and (ii) an 8-aminoquinoline drug that clears hypnozoites from the liver (such as primaquine or tafenoquine). Targeting the hypnozoite reservoir is crucial in controlling or eliminating _P. vivax_, as transmission can be re-established from the reactivation of hypnozoites [138]. Glucose-6-phosphate dehydrogenase (G6PD) deficiency testing is recommended before administering primaquine or tafenoquine, as these drugs can cause life-threatening haemolysis in individuals with G6PD deficiency, an enzymopathy affecting up to 30% of individuals in malaria-endemic regions [114].
Other interventions that have been modelled include vector control, mass drug administration (MDA), mass screening and testing (MSaT), and _P. vivax_ serological testing and treatment (_Pv_SeroTAT). Vector control measures are recommended by the WHO in order to achieve elimination [147]. MDA is an effective intervention for controlling malaria and was advocated by the WHO in the 1950s to control malaria transmission [53]. MDA involves treating the entire population, or a well-defined sub-population, in a geographic location regardless of their infection status [53, 96], such that both individuals who are infected and non-infected are treated. In a radical cure MDA intervention, individuals are given artemisinin-based combination therapy to clear blood-stage parasites and primaquine (or tafenoquine) to clear hypnozoites. Due to the risks associated with radical cure treatment in G6PD-deficient individuals, mass administration of radical cure is not recommended by the WHO without first screening for G6PD deficiency [52, 134, 141]. Another strategy
for reducing and eliminating malaria is MSaT. This involves identifying and treating infected individuals within a specific geographical location by mass testing of all individuals regardless of their symptom status [121]. MSaT is effective in reducing malaria transmission in areas with low to moderate malaria prevalence. However, its success depends on the availability of accurate diagnostic tools, effective antimalarial drugs, and strong community participation [142, 70]. _Pv_SeroTAT is a method for identifying individuals with recent blood-stage infections who are potential hypnozoite carriers by measuring antibodies and providing treatment with radical cure [98]. This method can identify individuals likely harbouring a hypnozoite reservoir, therefore allowing targeted treatment. Mathematical modelling has been used to understand how these different intervention strategies may impact _P. vivax_ transmission [3, 60, 137].
In this article, we synthesise the findings of a scoping review of existing mathematical models for population-level _P. vivax_ transmission to provide a comprehensive overview of the modelling frameworks and methods used to characterise _P. vivax_ dynamics. In Section 2, we provide the search and inclusion criteria. We discuss the search results in Section 3 as per the categorical structure in Figure 2 before concluding remarks and open problems are presented in Section 4.
## 2 Methods
We conducted a literature search on the 21st of May 2023, using the databases PubMed and Google Scholar to capture all relevant studies, using the search terms "hypnozoite", "malaria", "vivax", and "mathematical model" with Boolean operators. We screened the titles, abstracts, and full texts of articles for the following inclusion criteria:
* the paper either applied or described a mathematical model of population-level _P. vivax_ transmission dynamics, and;
* the mathematical model of _P. vivax_ incorporated hypnozoite dynamics, as this is a distinguishing feature of _P. vivax_ parasites compared to other _Plasmodium_ spp.
We excluded papers that:
* were only concerned with the within-host dynamics of _P. vivax_. Although within-host models of _P. vivax_ dynamics are important for understanding _P. vivax_ transmission, they were not directly relevant to the aim of our study (i.e. to identify and compare mathematical models of population-level _P. vivax_ transmission). Papers that modelled dynamics at both the within-host and population level (i.e. multi-scale models) were included.
* only used or described mathematical models of _Plasmodium_ species other than _P. vivax_ (e.g. a mathematical model of _P. falciparum_ infectious disease dynamics). Models that accounted for both _P. vivax_ and another _Plasmodium_ species were included.
* were currently only available as a preprint.
The search was conducted in English only, and only literature published in English was considered. No limitations regarding study location, publication status (accepted manuscripts only; no preprints), publication type, or publication year were applied. To enhance the probability of finding all relevant literature, we screened all references within the articles that met our inclusion and exclusion criteria. Articles were then downloaded to identify key components, which are discussed
in Section 3.
We categorised models depending on whether they used a stochastic or deterministic approach, and whether they were compartmental or agent-based. Deterministic models have no random variation and typically utilise a compartmental structure within a population to form differential equations to track the rate of flows between compartments. Stochastic models incorporate random variation and are useful for questions and scenarios where small population numbers or extinction are involved. In terms of _P. vivax_ infectious disease modelling, agent-based models explicitly model _P. vivax_ transmission dynamics at an individual-level, for example, modelling the interaction between humans and vectors and associating respective state variables and parameters to each individual and vector. In our review, we found that almost all stochastic models were also agent-based, so even though these features are not mutually exclusive, we categorised models as (i) deterministic compartmental models or (ii) stochastic agent-based models.
## 3 Search results
The initial search yielded 2289 articles, which was reduced to 1005 unique articles after removing duplicates between the two databases. After screening at the title level, a further 901 studies were excluded as they did not fulfil the selection criteria in Section 2. After screening the abstracts, a further 63 studies were excluded due to either (i) no underlying mathematical model being described or (ii) the model was for _P. falciparum_ parasites only. Five additional studies were included from the selected studies' references that were not initially identified. A total of 47 studies were finally selected for review (see Figure 1 for a summary of the selection process).
### Model frameworks
In infectious disease dynamics, modelling frameworks typically involve a combination of mathematical models, statistical analyses, and computer simulations that aim to capture the complex dynamics of disease transmission. The Ross-Macdonald model [81], a compartmental model initially developed to describe malaria transmission dynamics, has been widely used as a modelling framework for _P. vivax_ transmission. This modelling approach has been adapted to investigate a range of vector-borne infectious diseases, and has helped inform public health policies and intervention strategies. The first mathematical model describing _P. vivax_ transmission was introduced -- to the best of our knowledge -- by Zoysa _et al._ (1988) [146] in a Ross-Macdonald style modelling approach. Following this, many models have now been developed.
Out of the 47 studies identified that incorporated a _P. vivax_ transmission model, 37 (79%) utilised a deterministic and differential equation (compartmental) framework [1, 3, 4, 5, 9, 10, 11, 27, 35, 42, 44, 46, 56, 59, 60, 63, 64, 68, 71, 87, 92, 97, 100, 101, 104, 105, 106, 117, 120, 128, 138, 139, 145, 146] and nine (19%) used a stochastic and agent-based framework [45, 54, 94, 95, 98, 102, 133, 136, 137] (Figure 2). Only one study (2%) used both deterministic and stochastic frameworks [116] to model _P. vivax_ transmission. Robinson _et al._ (2015) [116] developed the model in a deterministic framework but implemented a stochastic version of the model as a continuous-time Markov chain. For simplicity, we categorise this model as deterministic in Figure 2. Deterministic models are often the first choice amongst modellers due to their relative simplicity, and are useful for understanding disease dynamics in large populations. Stochastic models provide more realistic and accurate representations of complex systems when dealing with
small population sizes or low disease prevalence, as they can account for the randomness and variability observed in real life [6, 20].
In contrast to the compartmental differential equation framework, agent-based models represent a system as a collection of individual agents that interact with each other based on a set of rules or behaviours [22, 132]. The main difference between compartmental and agent-based modelling frameworks is that a compartmental model uses aggregate variables or compartments to represent the system, while agent-based models use individuals (agents) [132]. Out of the nine studies that used an agent-based model to capture the dynamics of _P. vivax_ transmission, only two studies modelled both the human and mosquito populations as agents [45, 102]. The other agent-based models modelled the mosquito populations as a deterministic compartmental process, such that they combined ordinary differential equations for mosquitoes with an agent-based model for humans [54, 94, 95, 98, 133, 136, 137]. Modelling mosquito dynamics as a deterministic process is a reasonable approximation if the size of the mosquito population is very large and _P. vivax_ is not near elimination. In this case, the average behaviour of the stochastic dynamics agrees with that of a
Figure 1: _Summary of the article selection process, illustrating papers included and excluded at each stage of the review process._
Figure 2: _A summary of the 47 P. vivax transmission models currently available in the literature (published as of May 21, 2023). Related models (either modified or motivated by) are connected with a dashed line. Similar/same models are connected with a solid line. The coloured boxes represent key features incorporated in the models (see legend). The hexagonal boxes with the same name represent that the model was also implemented in other frameworks. The timescale (non-linear) is shown on the left._
deterministic process [22, 89, 144]. The actual behaviour of the system depends on the interactions between individuals and mosquitoes, instead of averages. Treating the mosquito compartment as a deterministic process means that modelling elimination is impossible, as there will always be some non-zero number of infectious mosquitoes remaining that can trigger an infection in humans again [133].
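The elimination point can be made concrete with a toy birth-death chain — our own construction, not taken from any reviewed model. The stochastic process hits exactly zero infections in finite time, whereas its deterministic mean-field counterpart only decays towards zero asymptotically:

```python
import numpy as np

rng = np.random.default_rng(1)

def time_to_extinction(n=50, beta=0.09, gamma=0.1, i0=10, t_max=1e5):
    """Gillespie simulation of a subcritical SIS chain (R0 = beta/gamma < 1).

    Returns the (finite) time at which the infected count hits exactly zero.
    The matching ODE, dI/dt = beta*I*(1 - I/n) - gamma*I, only approaches
    zero asymptotically and never reaches it.
    """
    i, t = i0, 0.0
    while i > 0 and t < t_max:
        rate_up = beta * i * (1 - i / n)    # new infection event
        rate_down = gamma * i               # recovery event
        total = rate_up + rate_down
        t += rng.exponential(1.0 / total)   # waiting time to next event
        i += 1 if rng.random() < rate_up / total else -1
    return t

times = [time_to_extinction() for _ in range(20)]
```

Every run of the chain goes extinct in finite time; in a hybrid model whose mosquito compartment is deterministic, the analogous extinction event can never occur, which is why such models cannot represent elimination directly.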
Environmental features, ecology, and mosquito habitat locations were explicitly included when modelling malaria spread in the agent-based models that modelled both the human and mosquito population as agents [45, 102]. The most recent agent-based models modelling _P. vivax_ dynamics [54, 94, 95, 98, 136] have evolved from a model introduced by White _et al._ (2018) [137]. The White _et al._ (2018) [137] model has been adapted to capture disease epidemiology in particular geographical settings [95], and to study the impact of different interventions (drugs or vaccination) [54, 94, 136].
While agent-based models have many advantages, their use poses several challenges. One of the main challenges is the difficulty in parameterising and calibrating the model, given the large number of agents and their interactions [30, 37, 75]. For example, in several models parameterisation is performed using an ordinary differential equation system that describes the underlying process, despite the models themselves being agent-based [54, 95, 98, 137]. Despite these challenges, agent-based models often offer a more intuitive representation of epidemiological processes. The computational demands of agent-based models can be challenging [118], although with improving computer technology, this has become less of a concern [38].
### Population-level multiscale models
Multiscale disease modelling incorporates at least two interacting scales and provides insights into disease dynamics across these scales that cannot be obtained from a single scale alone [43]. Here we only consider coupled within-host and population-level models as 'multiscale models'. For _P. vivax_, multiscale modelling approaches can incorporate the complex hypnozoite dynamics and their relapse effects on onward disease transmission. Most models in the existing literature only capture the population-level impact of _P. vivax_ (boxes with a light lime green border in Figure 2). Few models capture both within-host and population-level impacts (boxes with a strong blue border in Figure 2) [9, 54, 95, 136, 137, 138]. The very first multiscale model for _P. vivax_ transmission was developed by White _et al._ (2018) [137], and modelled the within-host hypnozoite dynamics using an agent-based model that considered heterogeneity in exposure to mosquito bites. This built on White _et al._ (2014) [138], which was the first to develop a within-host model that captured the dynamics of _P. vivax_ hypnozoites. This multiscale model considered the variability in the size of hypnozoite inoculum across each mosquito bite and was subsequently used to parameterise a separate transmission model that captured the entire structure of the hypnozoite reservoir [137]. The White _et al._ (2014) [138] within-host model for temperate settings assumed collective dormancy. This means that the hypnozoites established by each mosquito bite progress through the dormancy states as a group or batch. This assumption may be biologically unrealistic due to the independence of individual hypnozoite activation and clearance dynamics within liver cells [85]. The other within-host models that were adapted from White _et al._ (2018) [137] applied the same assumption regarding batch hypnozoite behaviour [54, 95, 98, 136].
Recent work by Mehra _et al._ (2020) [85] relaxed the collective dormancy assumption. This enabled them to characterise, in analytical form, the long-latency hypnozoite dynamics (a period of latency prior to hypnozoite activation) modelled in White _et al._ (2014) [138] (box with a light purple border in Figure 2). Later work by Mehra and colleagues embedded the activation-clearance model governing a single hypnozoite in an epidemiological framework [87]. This framework accounts for successive mosquito bites, where each bite can simultaneously establish multiple hypnozoites [86, 87], and explores the epidemiological consequences of radical cure treatment for a single individual. Anwar _et al._ (2022) [9] have since developed a multiscale model motivated by White _et al._ (2014) [138] by embedding the framework of Mehra _et al._ (2022) [87] for short-latency hypnozoites (deriving the relapse rate by averaging over the distribution of hypnozoite burden, which depends on the force of reinfection) into a simple population-level model that provides key insights into both within-host and population-level dynamics. The within-host and population models were coupled at each time step (thus producing a multiscale model) to incorporate key parameters that describe the hypnozoite dynamics. This multiscale model can provide the hypnozoite distributions within the population and, more importantly, reduces the infinite compartmental structure of White _et al._ (2014) [138] to three compartments and relaxes the artificial truncation needed in White _et al._ (2014) [138] for numerical simulation. Mehra _et al._ (2022) [84] proposed an alternative approach, constructing a Markov population process to couple host and vector dynamics whilst accounting for (short-latency) hypnozoite accrual and superinfection as per the within-host framework proposed in Mehra _et al._ (2022) [87]. In the infinite population size limit, Mehra _et al._ (2022) [84] recovered a functional law of large numbers for this Markov population process, comprising an infinite compartment deterministic model.
This infinite compartment model was then reduced into a system of integrodifferential equations based on the expected prevalence of blood-stage infection derived at the within-host scale [87]. This construction yielded population-level distributions of superinfection and hypnozoite burden, and has been generalised to allow for additional complexity, such as long-latency hypnozoites and immunity [84].
### Hypnozoite dynamics and variation
The eradication of _P. vivax_ is challenging due to the presence of the hypnozoite reservoir, which is undetectable and causes new infections long after the initial infection. In developing the first mathematical model for _P. vivax_, Zoysa _et al._ (1991) were also the first to model the effect of hypnozoite relapse on _P. vivax_ transmission [145]. Since most _P. vivax_ blood-stage infections are due to the reactivation of hypnozoites rather than new primary infections, it is crucial that mathematical models incorporate the size of the hypnozoite reservoir [13, 21, 32, 34, 79]. Zoysa _et al._ (1991) [145] assumed that the transmission dynamics could be accounted for by modelling a hypnozoite reservoir of size two (to account for up to two relapses). This assumption was later followed by Fujita _et al._ (2006) [42]. In reality, the average size of the hypnozoite reservoir is likely to be more than two in endemic settings, particularly those with high transmission intensity [139]. Although relapse is the characteristic that makes _P. vivax_ parasites unique, Aldila _et al._ (2021) [5] did not incorporate relapses in their _P. vivax_ transmission model. In their model, individuals did not harbour hypnozoites when infected with _P. vivax_ and hence did not experience relapse after recovery from blood-stage infection.
Most _P. vivax_ transmission models consider the hypnozoite reservoir as a single compartment, rather than explicitly accounting for a variable number of hypnozoites in the reservoir [1, 3, 4, 11, 27, 35, 44, 45, 56, 59, 60, 63, 64, 68, 71, 92, 97, 100, 102, 104, 105, 113, 116, 117, 120, 128, 139]. Only a handful of models account for the variability in hypnozoite inoculation across mosquito bites (boxes with a bright pink border in Figure 2) [9, 10, 87, 138]. If the size of the hypnozoite
reservoir is modelled explicitly, the number of compartments in the model increases substantially. The very first model that accounted for the variation in hypnozoites across mosquito bites was introduced by -- to the best of our knowledge -- White _et al._ (2014) [138] for a short-latency strain (where hypnozoites can activate immediately after establishment). To account for the variation of hypnozoites across bites, White _et al._ (2014) modelled a system with an infinite number of compartments to represent individuals with different numbers of hypnozoites. In practice, this is truncated at \(2(L_{\max}+1)\) ordinary differential equations (for human population only), where \(L_{\max}\) is the maximum number of hypnozoites considered. In their model, the hypnozoite reservoir within individuals increases with new infectious bites and decreases with both activation and death of hypnozoites. This infinite compartmental system makes the model very complex, particularly when other important structures must also be incorporated, such as individual heterogeneity in bite exposure. An agent-based model later developed by White _et al._ (2018) [137], and other models that utilise this agent-based model, consider variation in hypnozoites within individuals, but do not account for the variability in hypnozoites across mosquito bites [54, 98, 136]. Furthermore, instead of explicitly modelling hypnozoites independently, they impose the batch hypnozoite model. This assumption means that hypnozoites from a mosquito bite act as a batch, where they all reactivate simultaneously, causing relapse or dying at the same time. This reduces one batch of hypnozoites to a single set of dynamics, which is still truncated at a maximum of \(k\) batches.
The multiscale model developed by Anwar _et al._ (2022) [9] accounted for the variation of hypnozoite dynamics across bites. Unlike the White _et al._ (2014) [138] model, Anwar _et al._ (2022) only utilised three compartments at the population level by embedding the within-host model (short-latency) developed by Mehra _et al._ (2022) [87] as a system of integrodifferential equations. This relaxes the artificial truncation for the maximum number of hypnozoites used within the White _et al._ (2014) [138] model. Under a constant force of reinfection, Anwar _et al._ (2022) [9] analytically proved that the multiscale model [9] exhibits a steady-state hypnozoite distribution identical to that of the infinite ordinary differential equation structure in White _et al._ (2014) [138]. The advantage of the multiscale model by Anwar _et al._ (2022) [9] is that the population-level component is considerably simpler than the \(2(L_{\max}+1)\) ordinary differential equations of White _et al._ (2014) [138]. The transmission models proposed by Mehra _et al._ (2022) [84] likewise account for variation in hypnozoite batch sizes, additionally accommodating long-latency hypnozoite dynamics. The models of Mehra _et al._ [84] are formulated as systems of integrodifferential equations, informed by the within-host framework of Mehra _et al._ (2022) [87]. The analyses of Anwar and Mehra _et al._ [9, 84] provided insights into hypnozoite dynamics (e.g. the average size of a hypnozoite reservoir within the population and the average relapse rate), in addition to disease dynamics.
### Superinfection
Superinfection with malaria is a common phenomenon and can be defined as an individual carrying more than one blood-stage infection with the same malaria-causing parasite species at a given time [122]. For _P. falciparum_ malaria, when an infected individual (primary infection) receives a second infectious mosquito bite, they can become infected with two different parasite broods. For _P. vivax_ malaria, individuals can harbour hypnozoites in the liver even after they recover from a primary infection. Therefore, relapsing hypnozoites can provide another pathway to superinfection for individuals infected with _P. vivax_ [110, 122].
When modelling _P. vivax_ dynamics, it is important to consider the impact of superinfection
on recovery and transmission, as the abundance of mosquitoes and the contribution of hypnozoite activation can frequently trigger superinfection. Superinfection can potentially delay recovery from infection [40, 123]. Most of the literature that incorporates superinfection in _P. vivax_ transmission models (boxes with a brown border in Figure 2) [42, 54, 59, 60, 95, 100, 120, 136, 137, 138] does so via the recovery rate [42, 59, 60, 138]. The superinfection phenomenon was first introduced into malaria models by Macdonald (1950) [80], who assumed "_The existence of infection is no barrier to superinfection, so that two or more broods or organisms may flourish side by side_". In the malaria modelling literature, it has been assumed that each brood could be cleared independently at a constant rate. Following this assumption, Dietz _et al._ (1974) [40] proposed a recovery rate under superinfection for _P. falciparum_ malaria, derived at equilibrium under a constant force of reinfection. This form of the recovery rate was adopted in most studies that included superinfection via the recovery rate. This approach is straightforward when hypnozoites are integrated into the model as a binary state (i.e. an individual either has or does not have hypnozoites) [42, 59, 60]. Since White _et al._ (2014) [138] accounts for the variation of hypnozoites, they modified the recovery rate proposed by Dietz _et al._ (1974) [40] to account for the additional burden of hypnozoites; however, Mehra _et al._ (2022) [84] argued that this modified recovery rate does not hold in the presence of hypnozoite accrual.
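The Dietz _et al._ (1974) recovery rate mentioned above has a simple closed form that is worth making explicit. If new broods arrive at force of reinfection \(h\) and each brood clears independently at rate \(r\), the effective per-capita recovery rate at equilibrium is \(h/(e^{h/r}-1)\). The sketch below (parameter values are illustrative, not taken from any of the cited studies) shows its two limits: it approaches \(r\) when reinfection is rare and falls toward zero as superinfection accumulates.

```python
import math

def dietz_recovery_rate(h, r):
    """Effective recovery rate under superinfection (Dietz et al., 1974):
    broods arrive at force of reinfection h and each clears independently
    at rate r; at equilibrium the population-level rate is h/(exp(h/r)-1)."""
    if h == 0.0:
        return r  # no reinfection: ordinary single-brood clearance
    return h / math.expm1(h / r)  # expm1(x) = exp(x) - 1, accurate for small x

r = 1.0 / 180.0  # illustrative: mean single-brood duration of ~180 days
for h in (1e-6, 0.005, 0.05):  # per-day forces of reinfection
    print(f"h = {h:g}/day -> effective recovery {dietz_recovery_rate(h, r):.6f}/day")
# As h -> 0 the rate tends to r; as h grows, broods pile up faster than
# they clear, so the time to clear *all* of them grows and the effective
# recovery rate collapses toward zero.
```

This is exactly why superinfection matters for transmission: a high force of reinfection can make the apparent recovery rate far smaller than the clearance rate of any single brood.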
Generally, there are two approaches when incorporating superinfection: (i) using a corrected recovery rate that explicitly accounts for the history of past infections in the population and hypnozoite accrual dynamics [10, 84, 93] and (ii) coupling the prevalence of blood-stage infection (derived under a within-host model that accounts for superinfection) directly to the proportion of infected mosquitoes [84]. The within-host model of Mehra _et al._ (2022) [87] included superinfection, with each blood-stage infection (whether primary or relapse) being cleared independently, while the population-level model developed by Anwar _et al._ (2022) [9], which built on the work of Mehra _et al._ (2022) [87], did not incorporate superinfection. A correction to account for superinfection, based on the recovery rate formulated by Nasell _et al._ (2013) [93], was proposed in Mehra _et al._ (2023) [84] and incorporated in later work by Anwar _et al._ (2023) [10].
Superinfection was incorporated in later work, where it was assumed that different batches of hypnozoites originated from different mosquito bites [54, 98, 136, 137]. Silal _et al._ (2019) [120] assumed that superinfection increased the severity of the disease. That is, individuals will transition from lower to higher severity classes with a certain probability due to multiple infections. The only other study incorporating a superinfection-like phenomenon was Aldila _et al._ (2021), who modelled _P. vivax_ and _P. falciparum_ co-infection and assumed that _P. vivax_ dominates _P. falciparum_ [5], which does not closely resemble the definition of superinfection. This study assumed that if an individual was currently infected with _P. falciparum_, they would become infected with _P. vivax_ if they received an infectious bite from a mosquito that was infected with _P. vivax_. The assumption that _P. vivax_ parasites dominate _P. falciparum_ results in the individual being infected with only _P. vivax_, which is not supported by the empirical biological evidence that shows that the parasitaemic load is much higher for _P. falciparum_ [17]. Accordingly, it may not be reasonable to consider this to be a valid model of superinfection.
### _P. vivax_ and _P. falciparum_ co-infection
Within the Asia-Pacific region, the Horn of Africa, and South America, both _P. vivax_ and _P. falciparum_ parasites are common [120, 133]. For example, in 2019 in Cambodia, co-infection with both _P. vivax_ and _P. falciparum_ accounted for about 17% of malaria cases [29]. In co-endemic
regions, _P. falciparum_ infections are often followed by _P. vivax_ infection, giving rise to the hypothesis that _P. falciparum_ infections trigger _P. vivax_ hypnozoite activation [76, 120, 125, 140]. The high risk of _P. vivax_ parasitaemia after _P. falciparum_ infection is possibly related to reactivation of hypnozoites [33, 51, 129]. Hypnozoites may be activated when _P. falciparum_ parasites have been introduced into the body [119] or when the individual is exposed to _Anopheles_-specific proteins [55]. This increased risk of _P. vivax_ blood-stage infection following a _P. falciparum_ infection could alternatively be explained by spatial or demographic heterogeneity in exposure and thus infection risk. Individuals either living in areas where both _P. vivax_ and _P. falciparum_ are highly prevalent or those that engage in an activity bringing them into frequent contact with infected mosquitoes (e.g. forest work) are more likely to be exposed to both parasites than the average person. Having a _P. falciparum_ episode indicates the person has recently been exposed to infectious mosquito bites and is thus likely to have hypnozoites from previous exposure events (that may be triggered or activated spontaneously) or acquire a new primary _P. vivax_ infection following recovery from _P. falciparum_ infection [7, 49, 50]. The lack of diagnostics to differentiate primary infections and relapses further complicates determining when an individual is infected with _P. vivax_ hypnozoites. This makes it challenging to disentangle whether _P. falciparum_ infections cause relapses through the reactivation of hypnozoites.
It is also not yet clearly understood how _P. vivax_ and _P. falciparum_ interact, whether they compete within the host, or whether one species confers any protection against the other [36, 91]. A systematic review and meta-analysis showed that mixed infections (_P. falciparum_ and _P. vivax_) can often cause a high rate of severe infection regardless of infection order [73]. This evidence was in contrast to a previous study which suggested that severe mixed infections were more likely to happen when _P. vivax_ infection occurred on top of an existing _P. falciparum_ infection (i.e. superinfection), whereas the reverse scenario, _P. falciparum_ infection on top of an existing _P. vivax_ infection, was more likely to result in a lower risk of severe malaria [88]. Furthermore, there is likely ascertainment bias associated with mixed infections in areas with co-circulating parasite strains, as efforts might be biased towards _P. falciparum_ detection [127]. This is likely to be particularly common during episodes of clinical malaria when parasitaemia of one species greatly exceeds the other, and the innate host immune response may suppress both infections. Gaining a better understanding of these cross-species interactions, and accounting for their co-existence in co-endemic regions, will require multi-species transmission models. Only a handful of mathematical models included both these _Plasmodium_ species [3, 5, 102, 105, 106, 120]. While both _P. vivax_ and _P. falciparum_ species are included in a single model by Aldila _et al._ (2021) [5], this model did not account for _P. vivax_ relapses. Five studies included both species but used two independent models for each species, which did not allow for interactions between species [3, 5, 102, 105, 106].
Whether it is important to model species interaction depends on the particular geographical setting. If both parasites are co-endemic in a setting, and the research question being considered relates to both species, then it may be important to use a model that can capture the interactions between the parasite species [120, 125, 133]. To the best of our knowledge, the first model that accounts for the interaction between both species was developed by Silal _et al._ (2019) [120]. In this study, a separate model (deterministic, meta-population) was proposed for each species, and these two models were coupled at each time step to incorporate interactions between the species, including treatment, triggering, and masking (non-_P. falciparum_ infections are misdiagnosed as _P. falciparum_). Following this work, the first agent-based transmission model accounting for both _P. vivax_ and _P. falciparum_ infections and treatment was developed by Walker _et al._ (2023) [133]. This model had reduced complexity compared with Silal _et al._'s (2019) co-infection model,
but used many of the same parameter values [120] (co-infection models are shown with a vivid orange border in Figure 2).
### Immunity
Immunity against disease acquired through infection is usually referred to as adaptive immunity, and the primary function of adaptive immunity is to destroy foreign pathogens [26, 83]. Naturally acquired immunity to malaria is characterised by relatively rapid acquisition of immunity against severe disease and a more gradual establishment of immunity against uncomplicated malaria, while sterile immunity against infections is never achieved [15, 48, 77, 90]. In co-endemic areas, clinical immunity to _P. vivax_ is more rapidly acquired than that due to _P. falciparum_ [90].
How immunity is accounted for in mathematical models of malaria varies, since different models consider different types of immunity: for example, immunity against new infections, immunity against severe malaria, anti-parasite immunity (i.e. the ability to control parasite density upon infection), clinical immunity (i.e. protection against clinical disease), and transmission-blocking immunity (i.e. reducing the probability of parasite transmission to mosquitoes). Immunity against new infections and severe malaria is assumed to be acquired through infection. This reduces the probability of reinfection from an infectious mosquito bite and has been modelled using up to two immunity levels [145, 146]. This type of immunity is assumed to be boosted by infection [15]. Acquiring some partial immunity (i.e. some degree of protection against malaria) following infection, which wanes over time, is most common among published models [3, 44, 45, 46, 63, 104, 105, 106, 107, 128]. Some assumptions regarding immunity include that, if treated, individuals acquire some level of immunity that reduces the probability of reinfection (i.e. gain immunity against new infection) and that this wanes over time [117, 138]. The assumption of permanent immunity against malaria is not considered valid, as immunity often wanes rapidly when immune adults leave malaria-endemic regions [74]. Despite this, one model assumed that recovered individuals become permanently immune to _P. vivax_ [5]. Another study assumed that only a fixed proportion of individuals are immune against _P. vivax_ rather than explicitly incorporating immunity into the model [102]. Strategies for incorporating immunity into _P. vivax_ transmission models thus vary widely, with some assumptions being more realistic and appropriate than others.
Individuals who have not previously experienced malaria infection almost invariably become infected when first exposed to infectious mosquito bites, as immunity against malaria has not yet developed [74]. Repeated exposure to infectious bites will still likely result in infection, though these individuals may be protected against severe malaria or death [74]. Silal _et al._ (2019) [120] applied the opposite assumption and assumed that repeated exposure to infectious bites would likely result in severe infection. With increasing exposure, naturally acquired immunity will also give some level of protection against symptomatic malaria. Adults living in endemic areas are more likely to have developed protective immunity compared to children due to repeated exposure over their lifetime: they are likely to have experienced substantially more infectious mosquito bites due to age (and therefore a longer opportunity to acquire infectious bites), greater skin surface area, and more time spent outside in environments with a higher prevalence of mosquitoes [25, 109, 137].
Immunity should be considered when using mathematical models to capture underlying disease dynamics. The assumption regarding immunity varies among models (boxes with a purple border in Figure 2). The only model that explicitly accounts for the acquisition of immunity that increases
with new bites was developed by White _et al._ (2018) [137]. Both anti-parasite immunity (the ability to reduce parasite density upon infection) and clinical immunity (protection against clinical disease) are assumed to depend on age and exposure to mosquito bites, and are modelled using partial differential equations [137]. They also assumed that children acquired immunity through their birth parent's immunity, which then decayed exponentially from birth. Models that were adapted from White _et al._ (2018) [137] also allow for the acquisition of immunity [54, 95, 98, 136]. However, the immunity acquired from a primary infection may protect more strongly against relapses (which are genetically related to the primary infection) than against a new, genetically distinct primary infection. That is, hypnozoites established from an infectious bite, when reactivated, may be less likely to cause clinical infection. This is because the parasites could be genetically identical or related, which could elicit a more protective immune response due to familiarity with the primary infection [62, 140]. Thus, relapses from the same batch of hypnozoites may only cause asymptomatic infections. Despite this, no models to date have fully accounted for the relationship between relapse and immunity. Model assumptions regarding the acquisition of immunity may be too simple to capture the true underlying biology and dynamics.
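A minimal sketch of exposure-driven immunity of this kind is below. This is a toy single-ODE caricature, not the age- and exposure-structured PDEs of White _et al._ (2018): each infectious bite is assumed to add one unit to an immunity index that decays exponentially, maternal immunity is assumed to decay from birth with a mean of half a year, and all parameter values are illustrative.

```python
import math

def immunity_trajectory(eir_per_year, decay_years, maternal0,
                        years=20.0, dt=0.01):
    """Toy exposure-driven immunity index: dA/dt = EIR - A / decay_years
    (forward Euler), plus maternally derived immunity decaying
    exponentially from birth with a mean of 0.5 years."""
    A, out = 0.0, []
    for s in range(int(years / dt)):
        t = s * dt
        out.append(A + maternal0 * math.exp(-t / 0.5))  # total index at time t
        A += dt * (eir_per_year - A / decay_years)      # acquired immunity
    return out

# Assumed values: 5 infectious bites/year, 3-year immunity memory,
# maternal immunity worth 3 units at birth.
traj = immunity_trajectory(eir_per_year=5.0, decay_years=3.0, maternal0=3.0)
print("index at birth:", round(traj[0], 2))      # maternal immunity only
print("index at 20 years:", round(traj[-1], 2))  # saturates near EIR * decay = 15
```

Even this caricature reproduces two qualitative behaviours the structured models capture in far more detail: a brief early dip while maternal immunity wanes faster than acquired immunity accrues, followed by slow saturation with cumulative exposure.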
### Effect of interventions for malaria control
In most of the models included in this review, it was assumed that treatment would be targeted towards infected individuals [27, 42, 44, 54, 56, 69, 95, 100, 102, 117, 120, 128, 138], but a range of other interventions can contribute to malaria control. For example, ten studies (21%) evaluated the effect of MDA on disease transmission, despite few national control programs considering MDA for _P. vivax_ control [3, 10, 59, 60, 87, 94, 98, 116, 133, 137] (boxes with a dark green border in Figure 2). Since MDA is recommended as an important tool to reduce asymptomatic _P. falciparum_ infection, it is also likely to be of great importance for _P. vivax_ elimination [47, 99, 116]. One study examined the effect of multiple MDAs and MSaTs (up to two rounds) with different drug combinations (blood-stage drug only, blood-stage drug and primaquine, or blood-stage drug and tafenoquine), finding that MDA with tafenoquine following G6PD screening could significantly reduce transmission compared to MSaT, given that no tools were available at the time to identify individuals with hypnozoites [116]. The effect of long-lasting insecticide nets along with MDA was studied using an agent-based model in Papua New Guinea, where the model predicted that MDA could reduce _P. vivax_ transmission by between 58% and 86% [137]. The same agent-based model was later used to investigate the effect of multiple treatment strategies, including MDA, MSaT with light microscopy detection of blood-stage parasitemia, and _P. vivax_ serological test and treatment (PvSeroTAT) [94, 98], as well as the effect of chloroquine and primaquine with vector control [95], and the potential effect of three different types of vaccines that target different stages of the _P. vivax_ life cycle [136] in different geographical settings. 
The mixed-species agent-based model [133] was used to investigate different treatment scenarios, including current practice, accelerated radical cure, and unified radical cure provided with and without MDA (radical cure was with 14 days of primaquine and a G6PD test while the MDA was with blood-stage treatments only).
The only within-host model that accounted for the effect of MDA on individual hypnozoites and infections was proposed by Mehra _et al._ (2022) [87]. This model provided analytical expressions for the effect of multiple rounds of MDA on hypnozoite dynamics and quantified the epidemiological impact of one round of MDA on a single individual. Anwar _et al._ (2023) recently embedded Mehra _et al._'s work [87] and extended the model to study the effect of multiple MDA rounds (up to \(N\) rounds) at both the within-host and population levels [10]. To the best of our knowledge, no other
multiscale model has been developed that explicitly accounts for the effect of multiple rounds of MDA. Anwar _et al._ (2023) [10] also provided optimal intervals if multiple MDA rounds were under consideration.
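The trade-off that makes inter-round intervals worth optimising can be seen in a deliberately simple caricature. This is not the Mehra/Anwar model: hypnozoite death and new accrual are collapsed into a linear ODE for the expected reservoir size, each MDA round is assumed to instantly clear a fixed fraction of surviving hypnozoites, and all rates are assumed, illustrative values.

```python
import math

def reservoir_after_mda(H0, mu, accrual, rounds, interval, efficacy):
    """Expected hypnozoite reservoir after repeated MDA rounds.

    Between rounds the expected reservoir follows dH/dt = accrual - mu * H
    (new hypnozoites from ongoing bites minus death/activation), which has
    the closed-form relaxation used below. Each round instantly clears a
    fraction `efficacy` of survivors; the value returned is the reservoir
    one inter-round interval after the final round.
    """
    H_eq = accrual / mu  # equilibrium reservoir size without any MDA
    H = H0
    for _ in range(rounds):
        H *= (1.0 - efficacy)                             # MDA round
        H = H_eq + (H - H_eq) * math.exp(-mu * interval)  # relax toward H_eq
    return H

# Assumed values: mean reservoir of 8 hypnozoites, ~330-day mean hypnozoite
# lifetime, modest ongoing accrual, 90%-effective radical cure, 60-day gaps.
for rounds in (1, 2, 3):
    H = reservoir_after_mda(8.0, 1.0 / 330.0, 0.01, rounds, 60.0, 0.9)
    print(f"{rounds} round(s): expected reservoir {H:.3f}")
```

Each extra round helps, but ongoing accrual pulls the reservoir back toward its equilibrium between rounds, so the marginal benefit of later rounds shrinks. This diminishing return is why the number of rounds and the intervals between them form an optimisation problem.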
## 4 Open questions and conclusion
Mathematical modelling is a powerful tool for understanding, analyzing, and predicting complex real-world phenomena, as well as simulating different scenarios, testing hypotheses, and making informed decisions based on the results. Mathematical models have proven useful to characterise _P. vivax_ transmission in different parts of the world and provide insights into the effect of different strategies to achieve elimination, including treatment, vaccination, and vector control. In this work, we provided a review of the existing mathematical models that capture _P. vivax_ disease progression and transmission. _P. vivax_ transmission dynamics are particularly challenging to model given the difficulties discerning relapses from reinfections and recrudescences. The choice of transmission model framework comes down to the research question at hand.
While mathematical models can provide key insights without the expense of large trials or epidemiological studies, it is important to recognize that mathematical models are not perfect representations of reality and are always subject to limitations, uncertainties, and assumptions. Therefore, using mathematical models in conjunction with empirical data, expert knowledge, and critical thinking is essential to obtain meaningful and reliable results.
Across the different approaches to mathematical modelling of _P. vivax_, there were varying assumptions regarding parasite dynamics and acquisition of immunity. Some models were motivated to capture realistic biological aspects of the parasite [9, 138, 146], or epidemiological and public health aspects [3, 10, 27, 35, 42, 45, 54, 59, 60, 87, 95, 98, 102, 116, 117, 120, 128, 133, 136, 137, 139, 145], whereas some models were motivated to construct a novel or extended mathematical model of _P. vivax_ dynamics, i.e., focusing on the mathematical aspects of _P. vivax_ dynamics [1, 5, 44, 46, 56, 63, 64, 68, 71, 92, 97, 100, 104, 105, 106, 107, 113]. As the dynamics of these types of models are well established, we argue that more importance should be placed on using these models to address the current hurdles and setbacks in achieving _P. vivax_ elimination: for example, the effect of new drugs, emerging drug resistance, and the potential effect of vaccination (when it becomes available). Modelling different scenarios with the available tools under the current recommendations is crucial to inform decision-making regarding malaria elimination. Furthermore, given that some of the biological aspects of _P. vivax_ are well understood, we argue that researchers should shift their focus to modelling these important aspects.
The spatial distribution of _P. vivax_ transmission is heterogeneous, and the number of hypnozoites that an individual harbours might vary significantly; this contributes directly to the risk of hypnozoite reactivation and _P. vivax_ relapse [108, 126]. This heterogeneity can be partially captured by modelling individuals' movement using metapopulations and including parasite movement between different sub-populations [95, 137]. However, none of the current models explicitly consider this spatial heterogeneity. Given the high degree of heterogeneity of _P. vivax_ risk in almost all populations, future model development should address this.
As more than 80% of _P. vivax_ infections may be due to relapse, and multiple hypnozoites can be established from each infectious bite, modelling the dynamics of hypnozoite variation and activation is crucial [9, 87, 116]. Another important aspect that requires more detailed attention is the
interaction between multiple species of _Plasmodium_, particularly in areas where _P. falciparum_ and _P. vivax_ are co-endemic (Asia, the Horn of Africa, and the Americas). Studies show that there is a high risk of _P. vivax_ parasitaemia after _P. falciparum_ infection that is possibly related to reactivation of hypnozoites [33, 51, 129]. This is in line with the hypothesis that _P. falciparum_ infection might trigger underlying _P. vivax_ infection [76, 120, 125, 140]. Hence, we argue that this hypothetical triggering phenomenon should be investigated when modelling _P. falciparum_ and _P. vivax_ interactions.
Future _P. vivax_ modelling efforts should also account for superinfections. Where mosquito abundance is high, transmission intensity is also likely to be high if malaria parasites are present [16, 67, 110, 115]. In these scenarios, infected individuals are likely to experience multiple episodes of infection at once (i.e. superinfection). Superinfection can significantly delay recovery time, leaving ample opportunity for onward transmission from the infected individual to susceptible mosquitoes. _P. vivax_ models should hence account for the transmission dynamics associated with superinfection. Immunity against _P. vivax_ strongly correlates to past exposure; therefore, focus should also be placed on modelling the acquisition (and waning) of immunity related to superinfection, as multiple concurrent exposures may boost immunity more than singular exposures [84, 137]. Furthermore, as parasites from relapse are either genetically identical or related to a previous primary infection, they are more efficiently targeted by naturally acquired immune responses previously developed from the primary infection than parasites from further, genetically unrelated primary infections. As a consequence, relapses are less likely to be associated with (severe) clinical symptoms [62, 140]. This interplay between immunity and relapse has not been fully addressed in any models developed to date. Given these important biological aspects, we suggest that future modelling should focus on developing the above-mentioned key areas: (i) spatial heterogeneity in exposure risk, (ii) hypnozoite accrual and variation, (iii) _P. falciparum_ and _P. vivax_ interactions, (iv) acquisition of immunity, and (v) recovery under superinfection. Different modelling communities have recently started focusing on these areas, for example, modelling hypnozoite dynamics [85, 87], multispecies interactions (_P. falciparum_ and _P. vivax_) [120, 133], bite exposure immunity [137] and superinfection [10, 84, 87].
No model currently includes all of the above factors that play a role in _P. vivax_ transmission due to the complexity the resulting model would have, and not all of the factors may need to be modelled to answer the research questions at hand. Therefore, when developing models to explore _P. vivax_ disease progression with a focus on answering specific research questions, mathematical epidemiologists and modellers should consider relevant aspects within the context of existing recommendations.
To address the outstanding research questions identified here, a suitably skilled interdisciplinary team is required. We hope that this review can contribute to developing the common language needed for communication between different scientists by highlighting the progress of _P. vivax_ transmission models to date.
## Funding
L. Smith is supported by the National Health and Medical Research Council (NHMRC) (GNT2016726) and the Department of Foreign Affairs and Trade Australia through the project Strengthening Preparedness in the Asia-Pacific Region through Knowledge (SPARK). A. Devine's research is supported through the NHMRC (2019152). E. Conway's and I. Mueller's research is supported by the NHMRC (GNT2016726) and the Department of Foreign Affairs and Trade Australia through the
project Strengthening Preparedness in the Asia-Pacific Region through Knowledge (SPARK). J.M. McCaw's research is supported by the Australian Research Council (DP210101920) and the NHMRC Australian Centre of Research Excellence in Malaria Elimination (ACREME) (APP1134989). J.A. Flegg's research is supported by the Australian Research Council (DP200100747, FT210100034) and the NHMRC (APP2019093). The contents of the published material are solely the responsibility of the individual authors and do not reflect the views of NHMRC.
|
2301.04630 | ShadowNav: Crater-Based Localization for Nighttime and Permanently
Shadowed Region Lunar Navigation | There has been an increase in interest in missions that drive significantly
longer distances per day than what has currently been performed. Further, some
of these proposed missions require autonomous driving and absolute localization
in darkness. For example, the Endurance A mission proposes to drive 1200km of
its total traverse at night. The lack of natural light available during such
missions limits what can be used as visual landmarks and the range at which
landmarks can be observed. In order for planetary rovers to traverse long
ranges, onboard absolute localization is critical to the ability of the rover
to maintain its planned trajectory and avoid known hazardous regions.
Currently, to accomplish absolute localization, a ground in the loop (GITL)
operation is performed wherein a human operator matches local maps or images
from onboard with orbital images and maps. This GITL operation limits the
distance that can be driven in a day to a few hundred meters, which is the
distance that the rover can maintain acceptable localization error via relative
methods. Previous work has shown that using craters as landmarks is a promising
approach for performing absolute localization on the moon during the day. In
this work we present a method of absolute localization that utilizes craters as
landmarks and matches detected crater edges on the surface with known craters
in orbital maps. We focus on a localization method based on a perception system
which has an external illuminator and a stereo camera. We evaluate (1) both
monocular and stereo based surface crater edge detection techniques, (2)
methods of scoring the crater edge matches for optimal localization, and (3)
localization performance on simulated Lunar surface imagery at night. We
demonstrate that this technique shows promise for maintaining absolute
localization error of less than 10m required for most planetary rover missions. | Abhishek Cauligi, R. Michael Swan, Masahiro Ono, Shreyansh Daftry, John Elliott, Larry Matthies, Deegan Atha | 2023-01-11T18:35:31Z | http://arxiv.org/abs/2301.04630v1 | # ShadowNav: Crater-Based Localization for Nighttime and Permanently Shadowed Region Lunar Navigation
###### Abstract
There has been an increase in interest in missions that drive significantly longer distances per day than what has currently been performed. For example, Endurance-A proposes driving several kilometers a day in order to reach its target traverse of 2000 km in 4 years. Additionally, some of these proposed missions, including Endurance-A and rovers for Permanently Shadowed Regions (PSRs) of the moon, require autonomous driving and absolute localization in darkness. Endurance-A proposes to drive 1200 km of its total traverse at night. The lack of natural light available during these missions limits what can be used as visual landmarks and the range at which landmarks can be observed. In order for planetary rovers to traverse long ranges, onboard absolute localization is critical to the rover's ability to maintain its planned trajectory and avoid known hazardous regions. Currently, the localization performed onboard rovers is relative to the rover's frame of reference and is performed through the integration of wheel and visual odometry and inertial measurements. To accomplish absolute localization, a "ground-in-the-loop" (GITL) operation is performed wherein a human operator matches local maps or images from onboard with orbital images and maps. This GITL operation limits the distance that can be driven in a day to a few hundred meters, the distance over which the rover can maintain acceptable localization error via relative methods. Previous work has shown that using craters as landmarks is a promising approach for performing absolute localization on the moon during the day. In this work we present a method of absolute localization that utilizes craters as landmarks and matches detected crater edges on the surface with known craters in orbital maps. We focus on a localization method based on a perception system which has an external illuminator and a stereo camera. 
While other methods based on lidar exist, lidar is not currently planned for deployment on the currently proposed nighttime and PSR missions. In this paper, we evaluate (1) both monocular and stereo-based surface crater edge detection techniques, (2) methods of scoring the crater edge matches for optimal localization, and (3) localization performance on simulated Lunar surface imagery at night. We demonstrate that this technique shows promise for maintaining the absolute localization error of less than 10 m required for most planetary rover missions.
## 1 Introduction
Long-range Lunar navigation, and specifically navigating in darkness, has gained significant traction recently. For example, missions to Permanently Shadowed Regions (PSRs) of the Moon have been proposed, such as the VIPER mission [1], [2] and the Lunar Polar Volatiles Explorer mission concept. Furthermore, missions have been proposed that drive during the Lunar night in order to traverse longer distances. For example, the new Decadal Survey [3] recommends that the Endurance-A Lunar rover mission be implemented as a strategic medium-class mission, as the highest priority of the Lunar Discovery and Exploration Program. The Endurance-A rover proposal plans to drive \(2000\,\mathrm{km}\) in the South Pole-Aitken (SPA) Basin to collect \(100\,\mathrm{kg}\) of samples, which would be delivered to Artemis astronauts. This mission concept study [4] identified several key capabilities required to complete this mission: (1) Endurance will need to drive 70% of its total distance during the night to enable daytime hours dedicated to science and sampling. (2) The mission will require on-board autonomy for the majority of its operations, while the ground only handles contingencies. (3) Global localization is necessary to maintain an error of \(<\)\(10\,\mathrm{m}\) relative to orbital maps.

Figure 1: The ShadowNav localization algorithm performs absolute localization for a Lunar rover mission located at the red position in the left image by matching known craters from (_left_) an orbital map against (_right_) detected craters from the rover stereo cameras.
At present, existing rovers perform onboard localization relative to their own reference frame. This is accomplished by using wheel and visual odometry and inertial measurements. Absolute localization is performed periodically with a "ground-in-the-loop" (GITL) operation. This is acceptable for current driving distances of a few hundred meters a day. Existing relative localization has around 2% drift, so the rover can drive at most \(500\,\mathrm{m}\) before its error exceeds \(10\,\mathrm{m}\). In order to traverse longer distances, on the order of the several kilometers a day proposed by missions such as Endurance-A, autonomous absolute localization becomes critical. At present, the Lunar surface does not have continuous communication with Earth, so having to perform several GITL operations for absolute localization in a day would significantly reduce the distance that can be driven. The lack of frequent absolute localization would lead to errors greater than the maximum allowable \(10\,\mathrm{m}\), presenting significant risks to the mission through deviations from the desired trajectory and exposure to unidentified obstacles.
Craters as landmarks have been shown to be promising for absolute localization on the Moon [5], [6]. However, the lack of natural light available while driving within a PSR or during the Lunar night limits what can be used as a landmark and the range at which the landmarks can be observed. Using craters is still promising, as the average distance between craters of \(\geq\)\(10\,\mathrm{m}\) in diameter is \(\sim\)\(100\,\mathrm{m}\) on terrain with relatively fresh craters and \(\sim\)\(10\,\mathrm{m}\) on terrain with old craters [7]. Additionally, the Lunar Reconnaissance Orbiter Camera (LROC) provides digital elevation models (DEMs) with a resolution between \(0.5\,\mathrm{m}\)-\(5\,\mathrm{m}\) per pixel [8], and some DEMs exist within PSRs [9].
In this work, we propose using a stereo camera with an illuminator positioned below the stereo camera in order to detect crater rims in darkness. The use of such an illuminator is motivated by the Endurance-A mission concept study [4], which proposes a stereo camera with an illumination source as the perception system for a rover operating in darkness. Global localization is then accomplished by matching the detected crater rims against known craters from an orbital image, as shown in Figure 2. To handle the uncertainty and nonlinearity of the crater rim detection model, we utilize a particle filter with a novel _Q-Score_ metric for ranking potential crater matches in order to estimate the absolute position of the rover within an orbital map. This paper presents initial results for both crater detection in darkness and absolute localization in simulation, obtained during the first two years of a planned three-year effort to validate this approach. Work is ongoing to collect data and validate this approach at a real-world Lunar analogue test location.
_Statement of Contributions:_ This paper presents an approach to absolute localization on the Moon that can be performed while a rover is in darkness, such as within a PSR or during the Lunar night. The main contributions of the work are summarized below:
1. We developed a simulator based on Blender [10] which renders simulated surface stereo imagery of the Lunar surface in darkness located within a known orbital position. The rendering process utilizes the Hapke lighting model for more accurate surface reflectance as well as DEMs captured by LROC for realistic crater distributions.
2. We evaluated different crater-edge detection techniques and demonstrate a method which captures 80% of the leading crater arc at \(10\,\mathrm{m}\) and can detect crater arcs out to \(20\,\mathrm{m}\).
3. We present a method to localize a rover within an orbital map using surface crater-edge detections and known orbital craters based on a particle filter and a metric we call the Q-Score which is detailed in Section 3.
4. We demonstrate our absolute localization technique can achieve less than \(2\,\mathrm{m}\) absolute error with an assumed odometry drift of 2% and an initial 3-sigma uncertainty of \(3\,\mathrm{m}\).
## 2 Related Works
Absolute localization on planetary surfaces is critical for expanding the range rovers can travel in a day and over the course of a mission, and many previous works have investigated this problem. Several techniques have been proposed for the Martian surface. Works such as [11], [12] consider far-range and horizon features, which lie at ranges beyond what is expected to be visible in the dark.
Figure 2: Schematic of the ShadowNav algorithm proposed to perform absolute localization on the Moon. A particle filter is used to match craters detected by the rover stereo cameras with known craters from an orbital map.
[13] proposes a technique for absolute localization on the Martian surface that uses rocks and DEM surface features.
In our work, we focus on the problem of global localization in darkness which is relevant for permanently shadowed regions of the moon, for which there has been a surge of interest in conducting scientific measurements and activities [14]. Our solution approach is inspired by a host of recent works that seek to leverage orbital maps for global rover localization in these shadowed regions. In [13], the authors propose a localization procedure that matches an observed rover image with an orbital map, but this approach neglects the rover motion model and yields a deterministic estimate of the robot belief. A purely data-driven approach is presented in [15], wherein a convolutional neural network is trained on synthetic data to match the rover observations with orbital imagery. Closest to our approach, [16] presents a particle filtering technique to compare rover monocular camera imagery with orbital imagery and uses a Siamese neural network approach to assign each particle a likelihood weight. The authors in [6] propose a similar approach for Lunar absolute localization known as LunarNav. However, LunarNav focuses on the daytime localization problem and therefore considers different methods of crater matching that rely on greater knowledge of the surface geometry than available in the nighttime case.
## 3 Approach
In this work, we propose an absolute localization approach which utilizes a crater's leading edge as landmarks for localization. The end result of this approach will be an estimated position and uncertainty within the orbital frame. At present, this approach only considers position localization. Rover orientation is assumed to be given by a star tracker which can compute orientation in three dimensions from celestial measurements. Our approach consists of two primary components:
1. A leading-edge crater detection methodology for use with a Lunar rover equipped with a stereo camera system and illumination source.
2. A particle filter for computing a position belief from the robot motion model and a score, which we call the Q-Score, based on the association of detected crater edges with known orbital ground truth craters.
### _Surface Crater Detection_
In order to identify craters on the surface, the system was designed for use with a perception system containing a stereo camera and an illumination source, with the illumination source mounted beneath the stereo camera. Examples of simulated images with the light at the same height as the cameras and with the light positioned beneath the cameras are shown in Figure 3. It was observed that placing the illumination source below the camera produces a shadow at the leading edge of a negative obstacle. Furthermore, offsetting the light from the cameras reduced the degree to which the Hapke model washed out the surface texture. Further details on the Hapke model and its impact on surface terrain are provided in Section 4.
Here, we first review the three different techniques studied in this work for detecting a crater's leading edge: (1) a method of detecting jumps within stereo disparities, (2) a Canny edge detector used to find the shadow on the leading edge, and (3) a convolutional neural network (CNN)-based edge detector that uses both the monocular and disparity image as input.
_1. Stereo Disparity Discontinuity Method_ The first approach for leading edge crater detection relies on detecting discontinuities within the stereo disparity image. To accomplish this, the stereo disparity image must first be generated using methods such as the JPLV algorithm [17] or the Semi-Global Block Matching (SGBM) approach [18], among others. To account for the low contrast that may be present in the Lunar rover case, Contrast Limited Adaptive Histogram Equalization (CLAHE) is first run on the input images prior to running stereo. CLAHE is an adaptive histogram equalization and operates on sub-regions of an image which allows more consistent equalization across different lighting conditions within an image. This is useful for this application as there is a light-to-dark gradient from near-to-far within the images. The resulting disparity image is then scanned column-by-column and, when the difference between any two disparities is greater than some pre-defined threshold, the larger column index is marked as a crater edge. Further, any numerical issues stemming from stereo holes are accounted for by omitting any pixels with spurious values during comparison.
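A minimal sketch of this column scan (our own illustration; the function name, jump threshold, and hole marker are assumptions, and CLAHE plus stereo matching are presumed to have already produced the disparity image):

```python
import numpy as np

def detect_disparity_edges(disparity, jump_thresh=4.0, invalid=-1.0):
    """Scan each column and flag pixels where the disparity jumps by more
    than jump_thresh between consecutive valid pixels, skipping stereo
    holes marked with `invalid`."""
    h, w = disparity.shape
    edges = np.zeros((h, w), dtype=bool)
    for c in range(w):
        col = disparity[:, c]
        valid = np.flatnonzero(col != invalid)  # omit stereo holes
        for a, b in zip(valid[:-1], valid[1:]):
            if abs(col[a] - col[b]) > jump_thresh:
                edges[b, c] = True  # mark the pixel with the larger index
    return edges
```

A vectorized `np.diff` over each column would behave the same on hole-free disparities; the explicit pair loop is kept here to make the hole-skipping comparison unambiguous.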
_2. Canny Edge Detector Method_ For sensor configurations that contain an illuminator located beneath the stereo cameras, shadows appear on the leading edge of negative obstacles. In such cases, a Canny edge detector can be used to distinguish the stark contrasting dark line along the rim. In this work, the Canny edge detector from OpenCV [19] is used to find these shadows.
_3. CNN-Based Edge Detector Method_ Holistically-Nested Edge Detection (HED) is a CNN-based deep learning method that we apply to leading edge crater detection [20]. We use HED directly with its publicly released network weights. HED can perform both monocular and stereo depth based edge detection. For edge detection within a depth image, HED generates a three-channel image containing horizontal disparity, height above ground, and the angle of the local surface normal with the inferred direction of gravity. The RGB and depth predictions of the CNN are then merged to generate the desired output.
Figure 3: Figure demonstrating the impact of light source placement on crater rim shadows. _Left:_ Sample render of a crater with the light source even with the camera. _Right:_ Sample render of a crater with the light source below the camera.

_Positive Obstacle False Positive Rejection_--One shortcoming of the aforementioned leading edge crater detection approaches is their susceptibility to false positives in the presence of positive obstacles. To account for this, the detected edge points are passed through a filter that removes points which have hits on the far side of the edge with a detected negative or flat slope. Detected edge points are kept only if, within the region directly beyond the detected edge, there exists a positive slope or if there is not enough stereo data to accurately compute the slope. A detected positive slope is assumed to correspond to the rising far wall of the crater, consistent with the detected edge being the leading edge of a negative obstacle. Alternatively, a detected edge is also retained if the far side is not captured due to low-light conditions, as this is assumed to indicate the presence of a large crater.
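The slope-based rejection described above might be sketched as follows (a simplified illustration; the least-squares slope fit, the thresholds, and the NaN convention for missing stereo are our assumptions, not the paper's exact implementation):

```python
import numpy as np

def keep_edge(heights_beyond, min_valid=3, slope_thresh=0.0):
    """Return True if a detected edge point should be kept.

    heights_beyond: height samples (near to far) in the region directly
    beyond the detected edge, with NaN marking missing stereo. The edge
    is kept if the far side rises (positive slope, i.e. the far crater
    wall) or if there is too little stereo to estimate a slope (assumed
    large, dark crater); a flat or negative slope suggests a positive
    obstacle and the point is rejected."""
    h = np.asarray(heights_beyond, dtype=float)
    valid = h[~np.isnan(h)]
    if valid.size < min_valid:
        return True  # not enough stereo beyond the edge: keep the point
    slope = np.polyfit(np.arange(valid.size), valid, 1)[0]  # LS slope
    return bool(slope > slope_thresh)
```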
### _Particle Filter_
Here, we provide an overview of the proposed ShadowNav particle filtering approach. First, we provide further details on the Q-Score metric that is used in the belief update step.
```
Input: belief b_i^t; crater observations {z_1,rover^t, ..., z_m,rover^t};
       ground-truth craters {c_1,world, ..., c_l,world}; positive value ε
1: Q_inc ← ε
2: for j = 1, ..., m do
3:     z_j,world^t ← rover_to_world(z_j,rover^t)
4:     d_cr ← min_k ||c_k,world − z_j,world^t||
5:     Q_inc ← Q_inc + d_cr
6: end for
7: Q_score ← min(1, ((1/m) · Q_inc)^(−1))
8: return Q_score
```
**Algorithm 1** Q-Score Computation
```
Input: initial belief distribution (μ_0, Σ_0); number of particles N_s;
       effective-particle threshold N_eff,thresh
1:  {b_1^0, ..., b_{N_s}^0} ← sample_beliefs(μ_0, Σ_0)
2:  {w_1^0, ..., w_{N_s}^0} ← {1, ..., 1}
3:  t ← 1
4:  while particle filter running do
5:      {z_1^t, ..., z_m^t} ← get_observations()
6:      {q_1^t, ..., q_{N_s}^t} ← {0, ..., 0}
7:      for i = 1, ..., N_s do
8:          b_i^t ← propagate_sample(b_i^{t−1})
9:          q_i^t ← log Q_score(b_i^t, {z_1^t, ..., z_m^t})
10:     end for
11:     q_init^t ← min(q_1^t, ..., q_{N_s}^t)
12:     for i = 1, ..., N_s do
13:         w_i^t ← w_i^{t−1} + q_i^t − q_init^t
14:     end for
15:     N_eff^t ← compute_N_eff(w_1^t, ..., w_{N_s}^t)
16:     if N_eff^t ≤ N_eff,thresh then
17:         {b_1^t, ..., b_{N_s}^t} ← sample_beliefs({b_i^t}_{i=1..N_s}, {w_i^t}_{i=1..N_s})
18:         {w_1^t, ..., w_{N_s}^t} ← {1, ..., 1}
19:     end if
20:     t ← t + 1
21: end while
```
**Algorithm 2** ShadowNav Particle Filtering Algorithm
_Q-Score_--The Q-Score provides the measurement likelihood of a position belief given rover-frame observations and an orbital map. The procedure for computing the Q-Score is given in Algorithm 1. The algorithm takes as input a belief \(b_{i}^{t}\), a set of \(m\) observed crater edges in the rover frame, and a set of \(\ell\) ground truth craters to associate these measurements with. A value \(\mathcal{Q}_{\text{inc}}\) is initialized to a negligibly small positive value \(\varepsilon\) to avoid later divide-by-zero issues (Line 1). Next, for each measurement in the rover frame, the detected edge is converted to the world frame (Line 3) and the minimum distance to an edge from the ground truth map is computed (Line 4). \(\mathcal{Q}_{\text{inc}}\) is incremented by the distance between the observed edge and its associated ground truth crater (Line 5). The Q-Score is computed as the reciprocal of the mean increment \(\frac{1}{m}\mathcal{Q}_{\text{inc}}\), and a \(\min\) operation is applied to ensure that the score is between 0 and 1 (Line 7). This implies that observation and belief pairs less than \(1\,\mathrm{m}\) away from ground truth receive the same score as those exactly \(1\,\mathrm{m}\) away, which is acceptable given the orbital DEM resolution and mission-concept localization requirements.
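A compact sketch of the Q-Score computation, assuming for illustration that the rover-to-world conversion reduces to adding the particle's position (the star-tracker heading correction is omitted here):

```python
import numpy as np

def q_score(belief_xy, obs_rover, craters_world, eps=1e-6):
    """Q-Score of one particle: mean distance from each observed crater
    edge (shifted into the world frame by the particle position) to the
    nearest known orbital crater edge, inverted and clipped to (0, 1]."""
    q_inc = eps  # avoids divide-by-zero when all distances are zero
    for z in obs_rover:
        z_world = np.asarray(belief_xy, float) + np.asarray(z, float)
        d = min(np.linalg.norm(np.asarray(c, float) - z_world)
                for c in craters_world)
        q_inc += d
    return min(1.0, 1.0 / (q_inc / len(obs_rover)))
```

A particle whose implied observations sit, on average, 2 m from the nearest mapped crater edge thus scores 0.5, while anything within 1 m on average saturates at 1.0.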
In addition to the shortest distance formulation from Line 4, additional approaches were explored for determining the Q-Score. One alternate approach fit a Gaussian distribution to the orbital-map crater edges, and the Q-Score was then computed from the intensity (i.e., distance to the computed mean) at the point hit by each observation, or 0 when no point was hit. In practice, it was determined that the shortest distance formulation provided the most robust results for use with the particle filter and did not require additional projection calculations to project each belief from the orbital frame to the rover frame.
_Overview_--A description of the ShadowNav particle filtering algorithm is given in Alg. 2. The algorithm takes as input a Gaussian belief distribution \((\mu_{0},\Sigma_{0})\) assumed for the initial robot position, the number of particles \(N_{s}\) to use in the particle filter, and a threshold on the effective number of particles \(N_{\text{eff,thresh}}\) used to trigger resampling. The filter is initialized by sampling \(N_{s}\) particles from the initial belief distribution and assigning each particle an equal weight (Lines 1-2). As is common in particle filtering implementations [21], we use the \(\log\) of the weights for improved numerical stability of the weight update step [22]. Given a new set of crater observations (Line 5), a set of Q-Score values is initialized, one per particle (Line 6). After applying the motion model update to each particle (Line 8), the Q-Score for each updated particle is computed using the procedure from Alg. 1 by comparing against the current measurements
(Line 9). The particle weights are then updated in the \(\log\)-domain (Line 13), after subtracting the minimum log Q-Score (Line 11) so that the weight increments are non-negative. Next, the number of effective samples \(N_{\text{eff}}\) at the current iteration is calculated (Line 15). A common pitfall of particle filters is "degeneracy", wherein the weights \(\{w_{i}^{t}\}\) collapse around a handful of particles and computational resources are wasted on propagating low-likelihood particles [21]. If \(N_{\text{eff}}\) falls below the threshold \(N_{\text{eff,thresh}}\), the filter is degenerating and a resampling operation is triggered (Line 17).
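The log-domain weight update and the effective-sample-size check might look as follows (the paper does not spell out `compute_N_eff`; the standard estimator \(1/\sum_i \widehat{w}_i^2\) over normalized weights is assumed here):

```python
import numpy as np

def update_log_weights(log_w, log_q):
    """Log-domain weight update (Alg. 2, Lines 11-14): subtract the
    minimum log Q-Score so every additive increment is non-negative."""
    log_w = np.asarray(log_w, dtype=float)
    log_q = np.asarray(log_q, dtype=float)
    return log_w + log_q - log_q.min()

def n_eff(log_w):
    """Effective sample size 1 / sum(w_i^2) of the normalized weights,
    computed stably from log weights by shifting by the maximum."""
    log_w = np.asarray(log_w, dtype=float)
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    return 1.0 / np.sum(w ** 2)
```

With equal weights `n_eff` equals the particle count \(N_{s}\); it approaches 1 as the weights collapse onto a single particle, which is the degeneracy the resampling trigger guards against.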
Further details on the systematic resampling approach used in this work are provided in Algorithm 3. Given a set of particles and their associated weights, the weights are first normalized to \((0,1]\) from the \(\log\)-domain (Lines 1-4) and the cumulative sum of these normalized weights \(\tilde{w}_{i}^{t}\) is computed (Line 6). The key step in systematic resampling is to sample a single random value \(u_{0}\) uniformly from \([0,\frac{1}{N_{s}})\) (Line 9) and then incrementally sample one new particle from each successive "bin" of width \(\frac{1}{N_{s}}\). This ensures that, after resampling, at least one particle is retained from each \(\frac{1}{N_{s}}\) interval of the previous belief distribution.
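A numpy sketch of this systematic resampling step (function and variable names are ours):

```python
import numpy as np

def systematic_resample(particles, log_w, rng=None):
    """Systematic resampling: a single draw u0 ~ U[0, 1/N) places N evenly
    spaced pointers over the CDF of the normalized weights, so at least
    one particle survives from every 1/N probability interval."""
    if rng is None:
        rng = np.random.default_rng()
    log_w = np.asarray(log_w, dtype=float)
    w = np.exp(log_w - log_w.max())   # normalize out of the log-domain
    w /= w.sum()
    n = len(particles)
    cdf = np.cumsum(w)
    u0 = rng.uniform(0.0, 1.0 / n)
    idx = np.searchsorted(cdf, u0 + np.arange(n) / n)
    return [particles[i] for i in idx]
```

Because only one random number is drawn, this scheme has lower resampling variance than multinomial resampling, consistent with the comparison reported in Section 6.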
### _Surface to Orbital Crater Transformation_
For every observation step, rover-frame crater edges were detected with a stereo camera pair, which provided depth, and thus a relative position for each crater edge was saved. This relative crater position was added to each particle's belief position to form an estimate of the observed crater position in the world frame for that particle. The orbital map was projected into the world frame, and the shortest-distance metric from the Q-Score algorithm was used to determine which particle belief positions were most likely, and thus which known orbital crater each observed crater most likely matched.
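This per-particle transformation can be sketched as a 2-D rigid transform (a simplified illustration; the frame conventions and the use of a single star-tracker heading angle are our assumptions):

```python
import numpy as np

def crater_obs_to_world(particle_xy, heading_rad, obs_rover_xy):
    """Project a stereo crater-edge detection (x forward, y left, metres,
    in the rover frame) into the orbital/world frame for one particle,
    using the star-tracker heading for orientation."""
    c, s = np.cos(heading_rad), np.sin(heading_rad)
    R = np.array([[c, -s],
                  [s,  c]])        # 2-D rotation, rover -> world
    return np.asarray(particle_xy, float) + R @ np.asarray(obs_rover_xy, float)
```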
ture discussions were based on this scaled resolution. This scaled DEM was imported into Blender and a surface texture was added. The surface texture consisted of two scales of fractal Brownian motion noise added to the DEM in order to simulate Lunar surface texture for stereo to utilize. Figure 4 shows three sample renders from our simulation, two in daylight and one at night with an illumination source. They demonstrate what the surface looks like in daytime conditions, the effect of the Hapke model during the day with the sun behind the camera, and the effect of the illumination source. From these it was observed that the full daytime texture is not visible at night under an illumination source.
### _Simulated Craters for Detection Analysis_
In order to evaluate the performance of different crater detection techniques, a dataset with craters of different sizes was built. This dataset was generated using the simulation process within Blender and captured stereo-pair renders between \(5\,\mathrm{m}\) and \(20\,\mathrm{m}\) from the front crater rim in increments of \(0.1\,\mathrm{m}\). The dataset contains 10 craters of varying sizes and depths; their sizes are listed in Table 1 and their locations, indexed by crater ID, are marked in Figure 5.
### _Simulated Trajectories for Localization Analysis_
In order to evaluate localization performance, several trajectories were run in the simulated environment. These trajectories generated an image every \(1\,\mathrm{m}\) and were designed to approach craters in different ways that might challenge our filtering approach. The \(1\,\mathrm{m}\) observation delta was used to reduce the render time of our dataset, as rendering every \(0.1\,\mathrm{m}\) did not significantly change localization performance. An overview of the trajectories within the orbital environment is displayed in Figure 5.
### _Real Data of Negative Obstacles at Night_
In addition to the simulated data, a dataset was collected in the Arroyo, a dry river bed near the NASA Jet Propulsion Laboratory. This dataset contained several different negative obstacles imaged at 5, 10, and \(15\,\mathrm{m}\) from the leading edge. This dataset was used to validate that the stereo and crater edge detection algorithms work on real data collected at night with an external illuminator.
## 5 Crater Detection Performance
### _Metrics_
In order to evaluate the performance of surface crater detection, the dataset referenced in Section 4 was utilized. Five combinations of algorithms were evaluated: disparity discontinuity detection with SGBM stereo, disparity discontinuity detection with JPLV stereo, HED using SGBM stereo, HED using JPLV stereo, and a hybrid of JPLV disparity discontinuity detection and Canny edge detection. The hybrid approach was implemented so that Canny only ran on the portion of the image \(10\,\mathrm{m}\) away or further, since it was observed that discontinuity detection worked well at near range but stereo began to degrade beyond \(10\,\mathrm{m}\).
These algorithms were evaluated with two metrics. The first was an image-based edge scoring method that captures the average Gaussian probability that a detected edge lies on a ground truth crater edge. It utilizes a distance error computed in image space as shown in Equation 1, where \(\mathrm{Error_{dist_{p}}}\) is the pixel error from ground truth to detection, \(\mathrm{range_{gt}}\) is the known ground truth range, \(\mathrm{fl}\) is the focal length of the camera, \(\mathrm{ss}\) is the sensor pixel size of the camera, and \(\mathrm{Error_{dist}}\) is the detection error in meters.
\[\mathrm{Error_{dist}}=\mathrm{Error_{dist_{p}}}\cdot\frac{\mathrm{range_{gt}}}{\mathrm{fl}\cdot\mathrm{ss}} \tag{1}\]
| Crater | Diameter (m) | Depth (m) |
|---|---|---|
| 1 | 9.2 | 1.0 |
| 2 | 9.1 | 0.75 |
| 3 | 11.3 | 0.84 |
| 4 | 4.4 | 0.55 |
| 5 | 3.7 | 0.40 |
| 6 | 8.3 | 0.27 |
| 7 | 11.9 | 0.44 |
| 8 | 3.9 | 0.48 |
| 9 | 4.1 | 0.49 |
| 10 | 2.3 | 0.25 |

Table 1: Crater sizes in the crater detection dataset.
Figure 6: Plots of different metrics evaluating crater detection performance. _Left:_ Plot that shows image-based crater edge detection score versus range for all craters evaluated. _Right:_ Plot that shows percent of the crater front arc detected for all craters evaluated.
Figure 7: Sample stereo results using JPLV stereo on a sample negative obstacle.
The distance error was then passed through a Gaussian. The Gaussian probabilities for all detected pixels were summed and normalized by the number of detected points to obtain a score. This scoring method used ground truth range values to remove the impact of stereo holes and stereo range uncertainty on the projection, in order to better isolate the performance of the crater detection algorithms themselves. The Gaussian sigma used in these experiments was \(0.25\,\mathrm{m}\), chosen because the resolution of the DEM was \(0.25\,\mathrm{m}\); highly accurate detections should therefore fall within this boundary. The second metric was the "percent of front arc detected". In this metric, there is a ground truth circle for each orbital crater. Depending on the pose of the simulated cameras, the half of the ground truth circle nearest the camera was projected into image space. The crater detection was then matched to this half arc, and the percentage of the half arc successfully identified was determined. This metric removes the Gaussian component of the first metric; however, it does not capture false positives across the entire image like the first metric.

Figure 8: The efficacy of the JPLV HED approach over JPLV Disparity + Canny is demonstrated in simulated crater rim detection overlay samples for crater 1.
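The first metric, combining Equation 1 with the Gaussian scoring, can be sketched as follows (here the term \(\mathrm{fl}\cdot\mathrm{ss}\) is folded into a single focal-length-in-pixels parameter, an assumption on our part):

```python
import numpy as np

def edge_score(err_px, range_gt, fl_px, sigma=0.25):
    """Image-space edge score: convert each detection's pixel error to
    metres at the known ground-truth range (Eq. 1), pass it through a
    zero-mean Gaussian with sigma = 0.25 m (the DEM resolution), and
    average over all detected pixels."""
    err_px = np.asarray(err_px, dtype=float)
    err_m = err_px * range_gt / fl_px   # fl_px stands in for fl * ss
    p = np.exp(-0.5 * (err_m / sigma) ** 2)
    return float(np.mean(p))
```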
### _Detection Results on Simulated Data_
The results of running the different algorithms on the simulated dataset are shown in Figure 6. There were several notable observations. First, the algorithms tended to perform best around \(10\,\mathrm{m}\) and did not improve as craters came closer. This is believed to be because, as the camera approaches the crater, more of the crater becomes visible and the discontinuities become smaller, while beyond \(10\,\mathrm{m}\) the stereo began to degrade. Additionally, for the hybrid stereo and Canny technique, Canny detection started at \(10\,\mathrm{m}\) and led to a significant jump in performance.
In terms of algorithm comparison, JPLV disparity discontinuity performed better than SGBM disparity discontinuity, likely because JPLV produces more holes than SGBM; these holes at crater boundaries helped the discontinuity detector find a cleaner edge. HED, however, performed well with either stereo technique, likely because its depth representation contains height values. HED was used with the out-of-the-box weights from its authors and could likely be improved by fine-tuning on a Lunar dataset.
In addition to quantitative results, samples of crater rim detection overlays are shown in Figure 8. These results are for crater 1, which is nearly \(10\,\mathrm{m}\) in diameter. Both methods detected the crater well, but JPLV HED showed more falloff at \(17\,\mathrm{m}\) than the Canny detector. However, the Canny edge detector was tuned for this environment, whereas HED is a generalized detector. Overall, the generalization of HED is extremely promising for crater rim detection.
### _Detection Results on Real Data_
As described previously, data was collected at night from a location with negative obstacles. This dataset was used to validate the performance of the stereo and crater detection algorithms. Figure 7 presents samples of negative obstacles at \(5\,\mathrm{m}\) and \(10\,\mathrm{m}\) and the corresponding stereo results from JPLV. From this figure it was observed that stereo is dense up to the leading edge of the negative obstacle. Additionally, at \(5\,\mathrm{m}\) the far edge of the negative obstacle was captured in the disparity values, while at \(10\,\mathrm{m}\) the far edge contained only sparse disparity values. While not fully representative of the Lunar surface, this demonstrates that current stereo techniques can work in low-light conditions at the necessary ranges. The data was also used to evaluate the edge detection techniques. The JPLV disparity discontinuity and Canny edge detection hybrid was found to be the best on simulated data and was therefore used on the real data. Figure 9 shows sample detections at different ranges. These detections contained false positives on some vegetation, as the false-positive rejection was not run. Vegetation is not present on the Moon; however, objects such as rocks could present similar issues. Overall, the negative obstacle edge detection qualitatively performs well.
## 6 Localization Performance
In this section, we provide Monte Carlo results on the performance of the proposed ShadowNav filtering algorithm. For each simulation, we analyzed the performance of the ShadowNav filter on the basis of the following metrics:
Figure 9: **Qualitative edge detections using JPLV disparity discontinuity detection and Canny hybrid on negative obstacles on a real dataset collected in a dry river bed at night. These results demonstrate the transferability of the crater detection algorithms from simulated data to a real environment.**
_Ground truth error_--We computed the weighted mean \(\mu^{t}=\sum_{i=1}^{N_{s}}w_{i}^{t}b_{i}^{t}\) at time \(t\) using the particle weights and beliefs and computed the \(\ell_{2}\)-distance to the ground truth \(\text{gt}^{t}\), i.e., \(\|\mu^{t}-\text{gt}^{t}\|_{2}\).
_Particle filter uncertainty_--To capture the uncertainty associated with the current belief, we additionally computed the weighted covariance matrix \(\Sigma^{t}=\sum_{i=1}^{N_{s}}\widehat{w}_{i}^{t}(b_{i}^{t}-\mu^{t})(b_{i}^{t}-\mu^{t})^{T}\), where \(\widehat{w}_{i}^{t}\) are the normalized weights detailed in Alg. 3. The metric we report at each time step is the square root of the largest eigenvalue, \(\sqrt{\lambda_{\text{max}}(\Sigma^{t})}\), which corresponds to the worst-case standard deviation of the estimation error [27, 28].
_Mahalanobis distance_--The final metric we computed was the Mahalanobis distance, which measures the distance between the particle filter distribution and the ground truth position. We approximated it by fitting a Gaussian distribution \(\mathcal{N}(\mu^{t},\Sigma^{t})\) to the particle distribution, for which the Mahalanobis distance is simply a weighted \(\ell_{2}\)-norm, \(\sqrt{(\mu^{t}-\text{gt}^{t})^{T}(\Sigma^{t})^{-1}(\mu^{t}-\text{gt}^{t})}\).
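A numpy sketch of these three metrics for a 2-D particle set (variable names are ours; the weights are assumed already normalized):

```python
import numpy as np

def filter_metrics(beliefs, w_norm, gt):
    """Ground-truth error, worst-case 1-sigma uncertainty, and Mahalanobis
    distance for an N x 2 particle set with normalized weights w_norm."""
    beliefs = np.asarray(beliefs, dtype=float)
    w_norm = np.asarray(w_norm, dtype=float)
    gt = np.asarray(gt, dtype=float)
    mu = w_norm @ beliefs                        # weighted mean position
    diff = beliefs - mu
    cov = (w_norm[:, None] * diff).T @ diff      # weighted covariance
    err = np.linalg.norm(mu - gt)                # ground-truth error
    sigma_max = np.sqrt(np.max(np.linalg.eigvalsh(cov)))
    maha = np.sqrt((mu - gt) @ np.linalg.solve(cov, mu - gt))
    return err, sigma_max, maha
```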
### _Resampling Scheme Comparison_
In this section, we compared the baseline systematic resampling approach detailed in Alg. 3 against three other resampling methods: multinomial, residual, and stratified (we refer the reader to [21, 29, 30] for a thorough review of these approaches). Figure 10 presents the ground truth error and filter uncertainty for the four resampling approaches. We saw that, for the two trajectories compared in Figure 10, systematic resampling led to ground truth error comparable to the other resampling approaches, but outperformed them in terms of the overall uncertainty of the filter. Indeed, we note that multinomial resampling, the most commonly employed resampling technique, fared quite poorly in terms of the variance of the filter uncertainty (Figs. 10b and 10d).
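For reference, systematic resampling draws a single uniform offset that yields \(N\) evenly spaced thresholds matched against the cumulative weight distribution. A minimal sketch (an illustration, not the Alg. 3 implementation) might look like:

```python
import numpy as np

def systematic_resample(weights, rng):
    """Return particle indices chosen by systematic resampling: one uniform
    draw generates N evenly spaced positions, each matched against the
    cumulative weight distribution."""
    n = len(weights)
    positions = (rng.uniform() + np.arange(n)) / n   # evenly spaced thresholds
    cumulative = np.cumsum(weights / weights.sum())
    cumulative[-1] = 1.0                             # guard against round-off
    return np.searchsorted(cumulative, positions, side='right')
```

Because all thresholds share one random offset, the variance introduced by resampling is lower than for multinomial resampling, which draws \(N\) independent uniforms.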
### _Baseline Performance Evaluation_
Finally, we evaluated the performance of the proposed ShadowNav particle filter approach on three test trajectories. Our analysis consisted of Monte Carlo simulations with 25 seeds, 2% odometry noise, and an initial belief distribution with \(\sigma_{0}=3\,\mathrm{m}\). Each simulation was run with \(N_{s}=100\) particles and systematic resampling as the resampling scheme, with \(N_{\text{eff,thresh}}=50\) as the resampling threshold.
Figure 11 shows Monte Carlo simulation results for the three test trajectories. We saw that the initial uncertainty in the filter began at approximately \(3\,\mathrm{m}\), as expected when sampling from a distribution with \(\sigma_{0}=3\,\mathrm{m}\). Thereafter, the filter was able to improve the rover position estimate, leading to an absolute error reduction of \(4\,\mathrm{m}\). Further, the metrics computed at the final time step, reported in Table 2, indicate convergence of the filter, with an average final error of \(\leq 4\,\mathrm{m}\).
As seen in Figure 13, while the filter performed well on trajectories 2 and 3, it was less performant on the trajectory 1 test case. Figure 12 illustrates the performance of the filter on trajectory 1 for two different random seeds as the rover starts from the northern edge of the orbital map and moves southward. During the middle portion of this traverse, the craters were out of sight for the rover and, as we
Figure 11: Monte Carlo simulations for trajectories 1–3 demonstrated the efficacy of the Q-Score based particle filtering approach at accomplishing global rover localization.
Figure 12: Two Monte Carlo trials for trajectory 1 are illustrated with the ground truth in red and the weighted average belief \(\mu^{t}\) in blue. The comparatively better performance of the filter in case A _(left)_ was due to false positive crater rim measurements in case B _(right)_ that led to worse localization.
Figure 10: A comparison of the four proposed resampling schemes demonstrated that systematic resampling empirically outperforms the other schemes in terms of relatively lower ground truth error and reduced uncertainty in the filter.
see in Figure 11, false positive observations led to increases in the error and uncertainty of the filter. As the crater in the southern portion of the orbital map became observable to the rover, the estimate quickly improved in case A (Fig. 12a), but continued to exhibit a residual error in case B (Fig. 12b). This poor convergence behavior is also explained by false positive observations, wherein the filter had difficulty reconciling the front edge of the rim with the back edge, an issue that requires further investigation.
### _Debugging_
When testing the particle filter, we found it helpful to generate "perfect" datasets where ground truth depth was generated directly from the simulator, as shown in Figure 14b, and crater edges were projected into the rover frame using their exact known world coordinates (see Figs. 8a-8c). This approach uncovered bugs in our perception and projection pipeline as well as in the particle filter pipeline, and we highly recommend building such a dataset for similar work.
## 7 Conclusions
In this work we presented a system to perform autonomous absolute localization of a Lunar rover while it is in darkness. The system uses a stereo camera and an illuminator. We enhanced a Blender-based simulation with a custom Lunar texture and an implementation of the Hapke model to capture surface reflectance as accurately as possible. We further demonstrated both geometric and learning-based techniques for detecting the leading edge of a crater, with the ability to detect some craters out to a \(20\,\mathrm{m}\) range. We proposed a method for matching the detected leading crater rims with known craters in an orbital map, and for using these matches to score observations with our Q-Score. Finally, we demonstrated absolute localization within our simulation environment with less than \(4\,\mathrm{m}\) error and an absolute error reduction of \(4\,\mathrm{m}\) upon detecting craters. These results motivate further investigation on additional simulation environments as well as on real analogue datasets yet to be collected.
### _Future Work_
In the future, we seek to perform several updates and additional evaluations. The primary focus is to experimentally collect a nighttime dataset using representative hardware in an analogue Lunar environment with negative obstacles to evaluate the system. Additional evaluations are planned along longer trajectories, on more varied Lunar-type locales, and for different rover-specific parameters such as camera height above the ground. Finally, we plan to validate our proposed approach on a flight-like embedded computer (e.g., a Snapdragon) to demonstrate that it is computationally feasible for use onboard a Lunar rover.
## Acknowledgments
The research was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration (80NM0018D0004). The authors would like to thank Yang Cheng, Olivier Lamarre, and Scott Tepsuporn for their discussions during the development of this work.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline & **Error** & **Uncertainty** & **Mahalanobis Dist.** \\ \hline \hline Traj. 1 & \(3.84\pm 2.78\) & \(1.84\pm 1.12\) & \(8.74\pm 10.03\) \\ Traj. 2 & \(1.75\pm 0.78\) & \(1.32\pm 0.76\) & \(2.75\pm 1.88\) \\ Traj. 3 & \(1.68\pm 0.7\) & \(1.39\pm 0.61\) & \(2.92\pm 1.91\) \\ \hline \end{tabular}
\end{table}
Table 2: The metrics computed at the end of a long-range Lunar traverse indicate convergence of the particle filter on trajectories 2 and 3, while spurious measurements from unlabeled craters led to relatively poor performance on trajectory 1.
Figure 14: Crater 1 viewed from \(5\,\mathrm{m}\) away from the front rim.
Figure 13: The final ground truth error distribution for 25 Monte Carlo simulations showed filter convergence to \(\leq 4\,\mathrm{m}\) error in all cases for trajectories 2 and 3 and for the majority of cases for trajectory 1.
2310.12713 | Learn from the Past: A Proxy Guided Adversarial Defense Framework with
Self Distillation Regularization | Adversarial Training (AT), pivotal in fortifying the robustness of deep
learning models, is extensively adopted in practical applications. However,
prevailing AT methods, relying on direct iterative updates for target model's
defense, frequently encounter obstacles such as unstable training and
catastrophic overfitting. In this context, our work illuminates the potential
of leveraging the target model's historical states as a proxy to provide
effective initialization and defense prior, which results in a general proxy
guided defense framework, `LAST' ({\bf L}earn from the P{\bf ast}).
Specifically, LAST derives response of the proxy model as dynamically learned
fast weights, which continuously corrects the update direction of the target
model. Besides, we introduce a self-distillation regularized defense objective,
ingeniously designed to steer the proxy model's update trajectory without
resorting to external teacher models, thereby ameliorating the impact of
catastrophic overfitting on performance. Extensive experiments and ablation
studies showcase the framework's efficacy in markedly improving model
robustness (e.g., up to 9.2\% and 20.3\% enhancement in robust accuracy on
CIFAR10 and CIFAR100 datasets, respectively) and training stability. These
improvements are consistently observed across various model architectures,
larger datasets, perturbation sizes, and attack modalities, affirming LAST's
ability to consistently refine both single-step and multi-step AT strategies.
The code will be available at~\url{https://github.com/callous-youth/LAST}. | Yaohua Liu, Jiaxin Gao, Xianghao Jiao, Zhu Liu, Xin Fan, Risheng Liu | 2023-10-19T13:13:41Z | http://arxiv.org/abs/2310.12713v2 | # Learn from the Past: A Proxy based Adversarial Defense Framework to Boost Robustness
###### Abstract
In light of the vulnerability of deep learning models to adversarial samples and the ensuing security issues, a range of methods, including Adversarial Training (AT) as a prominent representative, aimed at enhancing model robustness against various adversarial attacks, have seen rapid development. However, existing methods essentially assist the current state of target model to defend against parameter-oriented adversarial attacks with explicit or implicit computation burdens, which also suffers from unstable convergence behavior due to inconsistency of optimization trajectories. Diverging from previous work, this paper considers the update rule of target model and corresponding deficiency to defend based on its current state. By introducing the historical state of the target model as a proxy, which is endowed with much prior information for defense, we formulate a two-stage update rule, resulting in a general adversarial defense framework, which we refer to as 'LAST' (**L**earn from the **Past**). Besides, we devise a Self Distillation (SD) based defense objective to constrain the update process of the proxy model without the introduction of larger teacher models. Experimentally, we demonstrate consistent and significant performance enhancements by refining a series of single-step and multi-step AT methods (e.g., up to \(\mathbf{9.2}\%\) and \(\mathbf{20.5}\%\) improvement of Robust Accuracy (RA) on CIFAR10 and CIFAR100 datasets, respectively) across various datasets, backbones and attack modalities, and validate its ability to enhance training stability and ameliorate catastrophic overfitting issues meanwhile.
## 1 Introduction
Amidst the rapid development of deep learning models and their widespread deployment in real-world applications (Krizhevsky et al., 2012; Jian et al., 2016; Gao et al., 2023), there is a growing recognition of the vulnerability of these models to the imperceptible adversarial perturbation in input data (Kurakin et al., 2018; Carlini and Wagner, 2017). The introduction of perturbed adversarial samples can lead to the model producing specified or alternative erroneous predictions, thus jeopardizing the functionality of real-world surveillance (Dai et al., 2018), autonomous driving systems (Szegedy et al., 2013), and giving rise to critical safety concerns. Consequently, the enhancement of model robustness against adversarial samples generated by various attacks has emerged as a focal research topic in the current landscape (Papernot et al., 2016; Chen et al., 2020; Latorre et al., 2023).
While various defense methods (Zhang et al., 2019; Dong et al., 2020) have been investigated to mitigate adversarial attacks, Adversarial Training (AT) (Madry et al., 2017; Shafahi et al., 2019) is
widely acknowledged as among the most efficacious strategies, the essence of which lies in solving a min-max optimization problem. Under this Standard AT (SAT) formulation, different adversarial attacks (Rebuffi et al., 2022; Yuan et al., 2021) could be incorporated into the attack process against the attacked model (i.e., the target model), including single-step (Goodfellow et al., 2014) and multi-step attack based AT (Madry et al., 2017). As for the defense process, various factors such as the perturbation size and the data quality often lead to unstable convergence behavior of the target model (Dong et al., 2022). In particular, catastrophic overfitting (Li et al., 2020) refers to a significant performance drop during training under larger perturbations, which severely limits the robustness improvement of the target model when trained with larger perturbation sizes. On top of that, several lines of work have explored heuristic defense techniques to enhance the defense process, including introducing additional robust teacher models (Pang et al., 2020) and designing specialized regularization terms (Andriushchenko and Flammarion, 2020). However, (_i_) these methods essentially introduce additional prior knowledge or design complex learning strategies with explicit or implicit computational cost (e.g., introducing regularization constraints online or pretrained teacher models offline). Besides, (_ii_) they invariably assist the current state of the target model itself in defending against the parameter-oriented attack, which suffers from inconsistency among the historical states and leads far too easily to unstable convergence behavior.
### Contributions
In this paper, we do not follow the SAT process of using the target model to directly respond to the generated adversarial example, but instead reconsider the update paradigm of the defense model from the perspective of its optimization trajectories. Specifically, we adopt the historical parameter state of the target model, denoted as the proxy model, and design a two-stage update rule to construct a general adversarial defense framework, termed LAST (**L**earn from the **Past**). During the defense process, we first perform gradient descent to update the proxy model to estimate the next state with which to defend against the parameter-oriented attack, and then employ the estimated state and the current state of the target model to calculate the differential unit used to update the target model. At the second stage, we update the proxy model and the target model with the current state and the differential unit as the update direction, respectively. Furthermore, we propose a new Self Distillation (SD) based defense objective to regularize the update process of the proxy model without introducing additional teacher models, which effectively alleviates the catastrophic overfitting problem.
Experimentally, we demonstrate the effectiveness and consistent performance improvement of the LAST framework by improving four single-step and multi-step AT methods on various datasets and commonly used backbones, which also verifies its ability to stabilize training and alleviate the catastrophic overfitting problem. In particular, in Fig. 1 we plot the adversarial loss landscape (Liu et al., 2020) of four original SAT methods and the corresponding improved versions trained using PreActResNet18 with \(\mathbf{\epsilon}=8/255\). The adversarial loss is calculated as \(\mathcal{L}_{\texttt{atk}}(\mathbf{I}+x\mathbf{\tilde{t}}+y\mathbf{\tilde{\sigma}})\), where \(\mathbf{I}\) denotes the original image from the CIFAR10 dataset, and \(\mathbf{\tilde{t}}=\texttt{sgn}(\nabla_{\mathbf{I}}\mathcal{L}_{\texttt{atk}}(\mathbf{I}))\) and \(\mathbf{\tilde{\sigma}}\sim\text{Rademacher}(0,0.5)\) are the sign-gradient direction and a random direction (\(x\) and \(y\) are the corresponding linear coefficients). As can be observed, the models trained with the LAST framework exhibit lower loss, smoother landscapes, and smaller loss gaps within the range of surfaces plotted in each subfigure, which validates the significant robustness improvement of the proposed adversarial defense framework.
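The landscape in Fig. 1 is built by sweeping the two linear coefficients \(x\) and \(y\) over a grid. The sketch below illustrates this construction with a stand-in quadratic loss and analytic gradient in place of the trained network (all function names are hypothetical):

```python
import numpy as np

def loss_landscape(image, loss_fn, grad_fn, span=0.25, steps=5, seed=0):
    """Evaluate loss_fn(image + x*t + y*sigma) on a (steps x steps) grid,
    where t is the sign-gradient direction and sigma a random direction
    with entries +/-0.5 (the paper's Rademacher(0, 0.5) notation)."""
    rng = np.random.default_rng(seed)
    t = np.sign(grad_fn(image))                        # sign-gradient direction
    sigma = rng.choice([-0.5, 0.5], size=image.shape)  # random direction
    coeffs = np.linspace(-span, span, steps)
    return np.array([[loss_fn(image + x * t + y * sigma)
                      for x in coeffs] for y in coeffs])
```

The reported loss gap is then the difference between the maximum and minimum of the returned surface; for the actual figure, `loss_fn` would be the trained network's attack loss and a much finer grid would be used.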
We summarize our main contributions as follows.
1. As one of its most significant features, this paper is the first of its kind to revisit the defense update rule of the SAT process and its deficiencies from the perspective of optimization trajectories, emphasizing the importance of the historical states of the target model in defending against parameter-oriented adversarial attacks.
2. By introducing the historical state of the target model as its proxy, we construct a simple but highly effective two-stage adversarial defense framework, named LAST, endowed with great potential to serve as an alternative to SAT and to consistently improve existing methods to boost robustness at almost no additional cost.
3. Based on the proxy model, we design an SD defense objective to constrain the learning process of the proxy model without requiring pretrained teacher models. The new defense objective (along with the new update rule) could be flexibly integrated into SAT methods to stabilize the training process and alleviate the catastrophic overfitting problem.
4. We implement the LAST framework based on various SAT methods, and verify its consistent performance improvement (e.g., up to \(\mathbf{9.2\%}\) and \(\mathbf{20.5\%}\) increase of RA compared with PGD-AT under AutoAttack (\(\epsilon=16/255\)) on CIFAR10 and CIFAR100 datasets, respectively) with different backbones, datasets, attack modalities, and also demonstrate its ability to enhance training stability and ameliorate overfitting issues.
More detailed related work on adversarial attack and defense can be found in Appendix A.1. In Sec. 2, we first review the SAT process and then propose our LAST framework along with the SD defense objective. Note that we also provide a comprehensive analysis of the effectiveness of the LAST framework and its differences from previous techniques in Sec. 2.4. In Sec. 3, we conduct extensive experiments and analyze the training convergence behavior by consistently improving various SAT methods. The detailed hyperparameter settings and descriptions of baselines can be found in Appendix A.2. Last but not least, we provide ablation studies and more comparative results in Appendix A.3 and A.4.
## 2 A Proxy based Two-Stage Adversarial Defense Framework
In this section, we first review the general formulation about the SAT process. Based on its deficiency during the defense process, we propose to introduce the historical state of target model as its proxy, and construct a proxy-based two-stage adversarial defense framework. Furthermore, a new self distillation based defense objective without introducing any additional pretrained teacher models is proposed to stabilize the training process and alleviate the catastrophic overfitting problem. More discussion on the proposed update rule of LAST framework is also provided.
### Preliminaries
To enhance the robustness of deep learning model, SAT has been thoroughly evaluated and regarded as one of the most effective adversarial defense strategies. Generally speaking, SAT could be formulated as the min-max optimization problem (Madry et al., 2017), where the attack model aims to maximize the objective by injecting imperceptible adversarial perturbation to the original input, while the defense model (i.e., target model) optimizes the parameters with gradient descent to stay robust against the perturbation. The attack and defense objectives for this problem are usually defined as the same form. Here we first define the training dataset and input data pair as \(\mathcal{D}=\{\mathbf{u}_{i},\mathbf{v}_{i}\}_{i=1}^{\mathcal{M}}\), and denote the target model as \(\mathcal{T}_{\mathbf{\theta}}\), where \(\mathbf{\theta}\) are parameters of the target model. Then a general-purpose SAT formulation could be written as follows
\[\min_{\mathbf{\theta}}\mathbb{E}_{\{\mathbf{u}_{i},\mathbf{v}_{i}\}\in\mathcal{D}}\left[ \max_{\mathbf{\delta}\in\mathcal{S}}\mathcal{L}_{\mathtt{atk}}\big{(}\mathcal{T}_ {\mathbf{\theta}}(\mathbf{u}_{i}+\mathbf{\delta}),\mathbf{v}_{i}\big{)}\right], \tag{1}\]
where \(\mathbf{\delta}\) is the perturbation subject to the constraint \(\mathcal{S}=\{\mathbf{\delta}\mid\|\mathbf{\delta}\|_{\mathbf{\rho}}\leq\epsilon\}\) with \(\epsilon\)-toleration \(\mathbf{\rho}\) norm, and \(\mathcal{L}_{\mathtt{atk}}\) denotes the attack objective. Typically, \(\mathbf{\delta}\) is generated by \(K\)-step maximization of the attack
Figure 1: The four subfigures visualize the adversarial loss landscape w.r.t. input variations of four original SAT methods and corresponding improved version with LAST framework. We also report the gap of maximum and minimum losses within the range of \(x,y\in[-0.25,0.25]\). As it can be observed evidently, the models trained with LAST framework exhibit lower loss, smoother loss landscapes along with smaller loss gaps within the perturbation range.
objective following
\[\mathbf{\delta}_{k+1}\leftarrow\Pi_{\mathbf{\epsilon}}\big{(}\mathbf{\delta}_{k}+\mathbf{\alpha} \cdot\texttt{sgn}\nabla_{\mathbf{\delta}}\mathcal{L}_{\texttt{atk}}(\mathcal{T}_{ \mathbf{\theta}}(\mathbf{u}_{i}+\mathbf{\delta}),\mathbf{v}_{i})\big{)},\ k=0,1,\cdots,K-1. \tag{2}\]
where \(\Pi\) and \(\mathrm{sgn}\) are the projection and element-wise \(sign\) operations, and \(\mathbf{\delta}_{0}\) is uniformly initialized from \((-\mathbf{\epsilon},\mathbf{\epsilon})\). Setting \(K=1\) or \(K>1\) yields the two major types of adversarial attacks, i.e., FGSM and PGD attacks. These perturbations, generated online according to the current state of the target model \(\mathbf{\theta}_{i}\), are parameter-oriented to a great extent. As for the SAT process, the target model improves its robustness by performing gradient descent according to the attack objective by
\[\mathbf{\theta}_{i+1}=\mathbf{\theta}_{i}-\nabla_{\mathbf{\theta}}\mathcal{L}_{\texttt{ atk}}\big{(}\mathcal{T}_{\mathbf{\theta}}\left(\mathbf{u}_{i}+\mathbf{\delta}_{K}\right),\mathbf{v}_{i} \big{)}. \tag{3}\]
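A minimal sketch of the \(K\)-step projected attack in Eq. (2), with a toy logistic model and its analytic input gradient standing in for \(\mathcal{T}_{\mathbf{\theta}}\) (an illustration under these assumptions, not the authors' implementation):

```python
import numpy as np

def pgd_attack(u, v, w, eps, alpha, K, rng):
    """K-step sign-gradient attack on a logistic model p = sigmoid(w @ u)
    with binary cross-entropy loss; the perturbation is projected back
    into S = {delta : |delta|_inf <= eps} after every step."""
    delta = rng.uniform(-eps, eps, size=u.shape)       # random init in S
    for _ in range(K):
        p = 1.0 / (1.0 + np.exp(-(w @ (u + delta))))   # model output
        grad = (p - v) * w                             # d BCE / d input
        delta = delta + alpha * np.sign(grad)          # sign-gradient ascent
        delta = np.clip(delta, -eps, eps)              # projection Pi_eps
    return delta
```

With \(K=1\) and \(\alpha=\epsilon\), the same loop reduces to an FGSM-style single-step attack.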
In this paper, we do not focus on the adversarial attack process of this min-max optimization problem, but instead turn our attention to how the target model reacts to these adversarial examples. It has been discussed before (Nakkiran et al., 2021) that the target model trained with SAT often suffers from unstable convergence behavior and even more severe problems such as the catastrophic overfitting phenomenon, due to various factors, e.g., the size of the target model, the perturbation radius, and data quality issues causing label noise in the perturbed data pairs. Moreover, the \(\mathrm{sgn}\) operation and the constraint on input images (i.e., \([0,1]\)) also introduce considerable bias into the gradient-based defense of the target model. From this perspective, when faced with adversarial examples w.r.t. the current state of the target model, it is always difficult for the target model to capture the attack modality and the correspondence between \(\mathbf{\delta}_{K}\) and \(\mathbf{\theta}_{i}\). On top of that, updating \(\mathbf{\theta}_{i}\) along the gradient descent direction of \(\mathcal{L}_{\texttt{atk}}\left(\mathcal{T}_{\mathbf{\theta}}\left(\mathbf{u}_{i}+\bm{\delta}_{K}\right),\mathbf{v}_{i}\right)\) based on this generated parameter-oriented perturbation, as in Eq. (3), unintentionally leads to significant inconsistency among the optimization trajectories, i.e., \(\{\cdots,\mathbf{\theta}_{i}-\mathbf{\theta}_{i-1},\mathbf{\theta}_{i+1}-\mathbf{\theta}_{i},\cdots\}\), and exacerbates the unstable training process.
From this new perspective on the update rule of SAT, several attempts have been made, explicitly or implicitly, to modify the process of updating target models, e.g., adding regularization terms to optimize new forms of the defense objective (Andriushchenko and Flammarion, 2020), introducing pretrained teacher models to correct the labels used for supervision (Dong et al., 2022), and estimating hypergradients through a Bilevel Optimization (BLO) reformulation (Zhang et al., 2022). To summarize, (_i_) these methods reconsider the influence of training data, forms of the defense objective, and the coupled structure of the SAT formulation, introducing extra priors or designing complex learning strategies with additional computational cost (increased runtime online or offline). Besides, (_ii_) they follow the commonly used criterion of assisting the current state of the target model \(\mathbf{\theta}_{i}\) itself to defend against the adversarial perturbation \(\mathbf{\delta}_{K}\), which is ineffective at maintaining consistency among the optimization trajectories and easily causes unstable convergence behavior. Therefore, we pose the following inquiry: _is there a more effective response of the target model to the parameter-oriented attacks?_ In the next subsection, we reconsider the update rule of the defense model from the perspective of its optimization trajectories and propose to reuse the historical state of the target model to construct a new adversarial defense framework.
### Enhance Robustness with the LAST framework
As summarized above, the perturbation is generated according to the current state of the target model parameters throughout the attack process, which makes the attack parameter-oriented in essence. To verify this hypothesis, we first generate the adversarial example, i.e., \(\mathbf{u}_{\texttt{adv}}=\mathbf{u}+\mathbf{\delta}_{K}\), where \(\mathbf{\delta}_{K}\) targets the best model trained with early stopping, denoted as \(\mathcal{T}_{\mathbf{\theta}}\). Then we use a proxy model, denoted as \(\mathcal{P}_{\omega}\), to represent a historical state of the target model \(\mathcal{T}_{\mathbf{\theta}}\), and use \(\mathbf{u}_{\texttt{adv}}\) to attack both \(\mathcal{T}_{\mathbf{\theta}}\) and \(\mathcal{P}_{\omega}\). Generally speaking, \(\mathcal{T}_{\mathbf{\theta}}\) obtained with early stopping is certainly more robust than \(\mathcal{P}_{\omega}\). In Fig. 2, we illustrate the heatmap of the input gradient to analyze how the target model and its proxy react to \(\mathbf{u}_{\texttt{adv}}\). The input gradient of an image represents how sensitive the model is to changes in the pixel values of this image (Chan et al., 2020), and a robustly trained model will generate salient input gradients that resemble the clean image. When faced with the parameter-oriented attack, subfigure (e) shows that the output of \(\mathcal{T}_{\mathbf{\theta}}\) is seriously degraded and no longer produces salient input gradients in each channel. In comparison, \(\mathcal{P}_{\omega}\) is more robust against this parameter-oriented adversarial example and, as shown in subfigure (f), has salient gradients around the pixels that matter most to the model's decision.
Therefore, although the target model shows great vulnerability to this perturbation, _the historical states of the target model and its gradient information is inaccessible to the attack model, and is of great value to provide prior information for the adversarial defense_. Furthermore, this hypothesis
could also be essentially verified by the phenomenon that the target model consistently exhibits superior performance when subjected to transfer-based black-box attacks compared to white-box attacks of equivalent intensity.
Inspired by this principle, we make the first attempt to introduce the historical states of the target model to estimate a better response to the parameter-oriented attack w.r.t. the current state of the target model. In detail, we first define the last state of the target model as its proxy, i.e., \(\mathbf{\omega}_{i}=\mathbf{\theta}_{i-1},i=1,\cdots,\mathcal{M}\), where \(\mathbf{\omega}_{0}\) is initialized using \(\mathbf{\theta}_{0}\). In the following, we use \(\mathcal{L}_{\mathsf{def}}\) to represent the defense objective. During the attack process, we adopt the same scheme as SAT to generate the adversarial perturbation, i.e., \(\mathbf{\delta}_{K}\). As for the defense strategy, we first perform gradient descent with \(\mathcal{P}_{\mathbf{\omega}}\) according to \(\mathcal{L}_{\mathsf{def}}(\mathcal{P}_{\mathbf{\omega}}(\mathbf{u}_{i}+\mathbf{\delta}_{K}),\mathbf{v}_{i})\) to estimate the next state of the target model, which could be described as
\[\mathbf{\tilde{\omega}}=\mathbf{\omega}_{i}-\mathbf{\beta}\cdot\nabla_{\mathbf{\omega}} \mathcal{L}_{\mathsf{def}}\big{(}\mathcal{P}_{\mathbf{\omega}_{i}}(\mathbf{u}_{i}+ \mathbf{\delta}_{K}),\ \mathbf{v}_{i}\big{)}, \tag{4}\]
where \(\mathbf{\beta}\) denotes the learning rate of \(\mathcal{P}_{\mathbf{\omega}}\). Then we employ \(\mathbf{\tilde{\omega}}\) and the current state of the target model (i.e., \(\mathbf{\theta}_{i}\)) to calculate the differential unit \(\mathcal{G}_{\mathbf{\theta}}\) as the update direction, denoted as \(\mathcal{G}_{\mathbf{\theta}}=\mathbf{\theta}_{i}-\mathbf{\tilde{\omega}}\). At the second stage, we update \(\mathbf{\omega}_{i}\) to record the current state of the target model, and then perform gradient descent on \(\mathbf{\theta}_{i}\) along \(\mathcal{G}_{\mathbf{\theta}}\). The whole adversarial defense framework, including the attack and the two-stage defense update rule, is described in Alg. 1. The step size of \(\mathbf{\theta}_{i}\), i.e., \(\mathbf{\gamma}\), will be discussed further in Sec. 2.4. This new update rule is expected to generate a better response that is more robust against this parameter-oriented attack. In the next subsection, inspired by the self-distillation idea, we introduce constraints on the update of the proxy model used to estimate \(\mathbf{\tilde{\omega}}\), which helps stabilize the training and alleviate the catastrophic overfitting problem.
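On toy parameter vectors, one iteration of the two-stage update can be sketched as follows; `grad_def` stands in for \(\nabla_{\omega}\mathcal{L}_{\mathsf{def}}\) (illustrative names, assuming flattened parameters):

```python
import numpy as np

def last_update(theta, omega, grad_def, beta, gamma):
    """One two-stage LAST step: the proxy first takes a gradient step on the
    defense loss to estimate the next state, that estimate defines the
    differential unit, and the proxy then records the current target state
    before the target moves along the differential unit."""
    omega_tilde = omega - beta * grad_def(omega)   # Stage 1: proxy estimate
    g_theta = theta - omega_tilde                  # differential unit
    omega_next = theta.copy()                      # Stage 2: proxy <- target
    theta_next = theta - gamma * g_theta           # target moves along G
    return theta_next, omega_next
```

Note that with \(\mathbf{\gamma}=1\) the target jumps exactly to the proxy's estimate \(\mathbf{\tilde{\omega}}\), while smaller \(\mathbf{\gamma}\) interpolates between the current state and that estimate.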
### Self Distillation Based Defense Objective
Based on the introduced proxy model, which captures the historical states to provide prior information for defense (Step 12 in Alg. 1), we further delve into the defense objective to constrain the learning process of the proxy model and alleviate the overfitting problem. As shown in Fig. 2, although \(\mathcal{T}_{\mathbf{\theta}}\) is less sensitive to the adversarial attack targeted at \(\mathbf{\theta}\), the perturbation still deteriorates the output \(\mathcal{T}_{\mathbf{\theta}}(\mathbf{u}_{\mathsf{adv}})\), which may lead to misclassification. Meanwhile, the direct output of the target model, referred to as the soft targets in Knowledge Distillation (KD) (Li, 2018), also reflects which parts of the input the target model attends to. When faced with the clean image and the adversarial perturbation, the proxy model is supposed to generate outputs with similar distributions. Unlike methods that generate supervised soft targets with a larger teacher model, we propose to constrain the estimation of \(\mathbf{\tilde{\omega}}\) with the distance between the soft targets of the clean image and the corresponding adversarial image. Denoting the temperature as \(\mathbf{\tau}\), the proposed defense objective could be written as follows
\[\mathcal{L}_{\mathsf{def}}=(1-\mathbf{\mu})\cdot\mathcal{L}_{\mathsf{atx}}\big{(} \mathcal{P}_{\mathbf{\omega}}(\mathbf{u}_{\mathsf{adv}}),\mathbf{v}\big{)}+\mathbf{\mu}\cdot \mathcal{L}_{\mathsf{KL}}\big{(}\mathcal{P}_{\mathbf{\omega}}(\mathbf{u}_{\mathsf{adv} })/\mathbf{\tau},\mathcal{P}_{\mathbf{\omega}}(\mathbf{u})/\mathbf{\tau}\big{)}, \tag{5}\]
where \(\mathbf{\mu}\in[0,1)\) is the distillation coefficient to balance two loss terms, and \(\mathcal{L}_{\mathsf{KL}}\) denotes the Kullback-Leibler (KL) divergence (Kullback and Leibler, 1951) to measure the distance between two
distributions of the soft targets. In this way, the proxy model is supposed to behave as consistently as possible when faced with clean or adversarial examples, while generating correct classification results. Moreover, the introduced defense objective supervises the learning process of the proxy model without introducing (larger) pretrained teacher models or additional model updates, and thus can be flexibly integrated into the proposed algorithmic framework in Alg. 1 at minimal computational cost. In the experimental part, we demonstrate the effectiveness of the LAST framework along with the SD defense objective, which stabilizes the training and alleviates the catastrophic overfitting problem.
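Assuming raw logits as inputs, Eq. (5) can be sketched in NumPy as below; the direction of the KL term here is one common convention, which Eq. (5) does not pin down:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())   # shift for numerical stability
    return e / e.sum()

def sd_defense_loss(logits_adv, logits_clean, label, mu, tau):
    """Self-distillation defense objective: cross-entropy on the adversarial
    output plus KL divergence between the temperature-softened adversarial
    and clean soft targets of the same (proxy) model, balanced by mu."""
    p_adv = softmax(logits_adv)
    ce = -np.log(p_adv[label])        # attack-objective (cross-entropy) term
    q = softmax(logits_adv / tau)     # softened adversarial soft targets
    p = softmax(logits_clean / tau)   # softened clean soft targets
    kl = np.sum(q * np.log(q / p))    # KL(adv || clean)
    return (1.0 - mu) * ce + mu * kl
```

Because both soft-target distributions come from the proxy model itself, no external teacher is needed; \(\mu=0\) recovers the plain cross-entropy defense objective.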
```
0: Training epochs \(\mathcal{J}\), \(\mathcal{M}\) batches of data pairs \((\mathbf{u}_{i},\mathbf{v}_{i})\), attack iteration \(K\), target model \(\mathcal{T}_{\mathbf{\theta}}\) parameterized by \(\mathbf{\theta}\), and proxy model \(\mathcal{P}_{\mathbf{\omega}}\) parameterized by \(\mathbf{\omega}\), perturbation range \(\mathbf{\epsilon}\).
1:// Initialize the proxy model \(\mathcal{P}_{\mathbf{\omega}}\).
2:\(\mathbf{\omega}_{0}=\mathbf{\theta}_{0}\).
3:for \(j=0\rightarrow\mathcal{J}-1\) do
4:for \(i=0\rightarrow\mathcal{M}-1\) do
5: Initialize \(\mathbf{\delta}_{0}\).
6:// Generate the perturbation with target model \(\mathcal{T}_{\mathbf{\theta}}\).
7:for \(k=0\to K-1\) do
8:\(\mathbf{\delta}_{k+1}=\mathbf{\delta}_{k}+\mathbf{\alpha}\cdot\text{sgn}\big{(}\nabla_{ \delta}\mathcal{L}_{\texttt{atk}}(\mathcal{T}_{\mathbf{\theta}_{i}}(\mathbf{u}_{i}+ \mathbf{\delta}_{k}),\ \mathbf{v}_{i})\big{)}\). (\(\mathbf{\alpha}\) denotes the attack step size)
9:\(\mathbf{\delta}_{k+1}=\max\big{[}\min(\mathbf{\delta}_{k+1},\mathbf{\epsilon}),-\mathbf{ \epsilon}\big{]}\).
10:end for
11:// Stage 1: Estimate update direction of \(\mathbf{\theta}_{i}\) to defend.
12:\(\mathbf{\tilde{\omega}}=\mathbf{\omega}_{i}-\mathbf{\beta}\cdot\nabla_{\omega}\mathcal{ L}_{\texttt{def}}\big{(}\mathcal{P}_{\mathbf{\omega}_{i}}(\mathbf{u}_{i}+\mathbf{\delta}_{K}), \ \mathbf{v}_{i})\). (\(\mathbf{\beta}\) denotes the learning rate of \(\mathcal{P}_{\mathbf{\omega}}\))
13:\(\mathcal{G}_{\mathbf{\theta}}=\mathbf{\theta}_{i}-\mathbf{\tilde{\omega}}\). (Compute the differential unit \(\mathcal{G}_{\mathbf{\theta}}\))
14:// Stage 2: Update \(\mathbf{\omega}_{i}\) and \(\mathbf{\theta}_{i}\) sequentially.
15:\(\mathbf{\omega}_{i+1}=\mathbf{\theta}_{i}\).
16:\(\mathbf{\theta}_{i+1}=\mathbf{\theta}_{i}-\mathbf{\gamma}\cdot\mathcal{G}_{\mathbf{\theta}}\). (\(\mathbf{\gamma}\) denotes the learning rate of \(\mathcal{T}_{\mathbf{\theta}}\))
17:end for
18:end for
```
**Algorithm 1** The Proposed LAST Framework
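The two-stage procedure of Alg. 1 can be sketched on a toy problem. Everything below (the quadratic "model", the sign-gradient attack, the step sizes and the synthetic data) is our own illustrative choice, not the paper's experimental setup; the point is only to show where the proxy update (Stage 1) and the sequential update of \(\mathbf{\omega}\) and \(\mathbf{\theta}\) (Stage 2) sit in the loop.

```python
import numpy as np

def loss_and_grads(w, x, y):
    # toy squared-error "model" standing in for the classifier;
    # returns loss, gradient w.r.t. w, and gradient w.r.t. the input x
    r = float(w @ x) - y
    return 0.5 * r * r, r * x, r * w

def last_train(theta, data, K=3, alpha=0.01, beta=0.1, gamma=0.5, eps=0.1):
    """Minimal sketch of Alg. 1: the proxy omega trails the target by one step."""
    omega = theta.copy()                         # Step 2: omega_0 = theta_0
    for x, y in data:
        delta = np.zeros_like(x)                 # Step 5
        for _ in range(K):                       # Steps 7-10: attack on the target
            _, _, gx = loss_and_grads(theta, x + delta, y)
            delta = np.clip(delta + alpha * np.sign(gx), -eps, eps)
        # Stage 1 (Steps 12-13): the proxy estimates the defended state
        _, gw, _ = loss_and_grads(omega, x + delta, y)
        omega_tilde = omega - beta * gw
        g_theta = theta - omega_tilde            # differential unit
        # Stage 2 (Steps 15-16): update proxy and target sequentially
        omega = theta.copy()
        theta = theta - gamma * g_theta
    return theta

rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0])
xs = [rng.normal(size=2) for _ in range(400)]
data = [(x, float(w_true @ x)) for x in xs]
theta_final = last_train(np.zeros(2), data)
```

On this toy task the loop recovers a parameter vector close to the generating one while training only on perturbed inputs.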
### Discussion on the Proxy based Update Rule
Here we provide further discussion and different perspectives on the effectiveness of the introduced proxy model and the two-stage update rule. With a simple substitution and rearrangement of Step 16, we derive \(\mathbf{\theta}_{i+1}-\mathbf{\theta}_{i}=-\mathbf{\gamma}\cdot\mathcal{G}_{\mathbf{\theta}}= \mathbf{\gamma}\cdot(\mathbf{\tilde{\omega}}-\mathbf{\theta}_{i})\), where \(\mathbf{\gamma}\) denotes the learning rate of \(\mathcal{T}_{\mathbf{\theta}}\). It can be observed that the historical sequence of \(\mathbf{\theta}_{i}\) is always constrained by the estimated distance between \(\mathbf{\tilde{\omega}}\) and \(\mathbf{\theta}_{i}\), both of which are derived from \(\mathbf{\theta}_{i-1}\). Suppose the target model at some critical state unexpectedly diverges; Eq. (4) can then estimate a \(\mathbf{\tilde{\omega}}\) that is more robust against this parameter-oriented perturbation and use it to assist the update of \(\mathbf{\theta}_{i}\). This update rule is thus expected to improve the consistency between adjacent states of the target model as it converges, i.e., among \(\{\cdots,\mathbf{\theta}_{i}-\mathbf{\theta}_{i-1},\mathbf{\theta}_{i+1}-\mathbf{\theta}_{i}, \cdots\}\).
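This rearrangement is a simple algebraic identity and can be checked directly; the vectors and the value of \(\mathbf{\gamma}\) below are arbitrary illustrative choices of ours.

```python
import numpy as np

theta_i = np.array([0.3, -1.2, 2.0])       # current target state
omega_tilde = np.array([0.5, -1.0, 1.5])   # defended estimate from the proxy
gamma = 0.7                                # learning rate of the target model

g_theta = theta_i - omega_tilde                            # differential unit (Step 13)
step_form = theta_i - gamma * g_theta                      # update as written in Step 16
convex_form = (1 - gamma) * theta_i + gamma * omega_tilde  # aggregation view

assert np.allclose(step_form, convex_form)
assert np.allclose(step_form - theta_i, gamma * (omega_tilde - theta_i))
```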
Besides, we can describe the update as \(\mathbf{\theta}_{i+1}=(1-\mathbf{\gamma})\cdot\mathbf{\theta}_{i}+\mathbf{\gamma}\cdot\mathbf{ \tilde{\omega}}\), where \(\mathbf{\tilde{\omega}}\) serves as the estimated response generated from \(\mathbf{\theta}_{i-1}\) to defend against the adversarial example targeted at \(\mathbf{\theta}_{i}\). On top of that, \(\mathbf{\gamma}\) is an aggregation coefficient that balances the influence of responses to historical and current attacks. This update rule is to some extent similar in form to momentum-based optimizers (Sutskever et al., 2013), which accumulate historical gradients to perform gradient descent. Furthermore, evidence for the effectiveness of this update rule can also be found in techniques such as Stochastic Weight Averaging (SWA) (Izmailov et al., 2018), which smooths the weights by averaging multiple checkpoints along the training process. This technique has been demonstrated to find flatter solutions than SGD and has been applied in various applications (Athiwaratkun et al., 2018; Yang et al., 2019). Specifically, the SWA weights are simply accumulated as an exponentially weighted average of the historical weights. In comparison, our update rule _combines the response of the proxy model to the parameter-oriented attack, which bridges the historical and current states to improve consistency along the optimization trajectory and introduces extra prior information for defense_. Therefore, the introduced proxy model is of great significance and cannot simply be replaced by momentum-like optimizers or the stochastic averaging of weights in SWA.
## 3 Experiments
In this section, we first demonstrate the robustness improvement of the LAST framework based on popular single-step and multi-step methods in two subsections, respectively. We also compare the loss landscapes and the convergence behavior of the test robust loss and RA to verify its stronger stability and defense capability against larger adversarial perturbations. Finally, we analyze the defense performance of the proposed framework under transfer-based black-box attacks. Note that we provide the basic experimental settings for the different AT methods, datasets and models used for robustness evaluation in Appendix A.2 due to limited space. More ablation results on hyperparameters and full results are provided in Sec. A.3 and Sec. A.4.
### Evaluation with Single-Step AT methods
In this subsection, we first evaluate the Standard Accuracy (SA) and RA of Fast-AT, Fast-AT-GA, Fast-BAT and our improved versions trained on the CIFAR10 dataset using the PARN-18 backbone with \(\mathbf{\epsilon}=8/255\) in Tab. 1. It can be observed that the LAST framework shows consistent improvement of RA under PGD-10, PGD-50 and AutoAttack. In particular, the target models trained with the LAST framework are significantly more robust when faced with unknown adversarial attacks of larger perturbation size (tested with \(\epsilon=16/255\)). Furthermore, we evaluate the average runtime per iteration of the SAT methods and our improved ones. Improving existing AT methods with our framework only slightly increases the runtime, which demonstrates its potential to serve as an alternative to SAT at almost no additional computational cost. Besides, the adversarial loss landscapes in subfigures (a)-(c) of Fig. 1 show that combining our update rule generates smoother adversarial loss surfaces with a smaller loss gap, which makes the model more robust when faced with adversarial inputs under different perturbation sizes and noise levels.
In Tab. 2, we also compare the performance of these methods with our LAST framework trained on the CIFAR100 dataset, and report the average improvement of these methods by combining the LAST framework. Note that we calculate the mean RA and its standard deviation by running different methods with different random seeds on both CIFAR10 and CIFAR100 datasets. As it is shown, our framework exhibits the capacity to consistently enhance existing methods on the larger dataset.
\begin{table}
\begin{tabular}{c|c|c|c|c} \hline \hline \multicolumn{4}{c}{CIFAR-10 dataset, PARN-18 trained with \(\mathbf{\epsilon}=8/255\)} \\ \hline \multirow{2}{*}{Method} & \multicolumn{2}{c|}{PGD-10 (\%)} & \multicolumn{2}{c}{PGD-50 (\%)} \\ & \(\mathbf{\epsilon}=8/255\) & \(\mathbf{\epsilon}=16/255\) & \(\mathbf{\epsilon}=8/255\) & \(\mathbf{\epsilon}=16/255\) \\ \hline Fast-AT & \(47.03\pm 0.29\) & \(13.79\pm 0.15\) & \(44.94\pm 0.52\) & \(8.85\pm 0.20\) \\ LF-AT(Ours) & \(\mathbf{47.17\pm 0.15}\) & \(\mathbf{14.48\pm 0.23}\) & \(\mathbf{45.50\pm 0.04}\) & \(\mathbf{9.89\pm 0.14}\) \\ \hline Fast-AT-GA & \(48.30\pm 0.13\) & \(16.36\pm 0.14\) & \(46.63\pm 0.33\) & \(11.12\pm 0.12\) \\ LF-AT-GA (Ours) & \(\mathbf{48.60\pm 0.06}\) & \(\mathbf{17.52\pm 0.02}\) & \(\mathbf{47.25\pm 0.09}\) & \(\mathbf{12.63\pm 0.17}\) \\ \hline Fast-BAT & \(50.42\pm 0.36\) & \(18.29\pm 0.18\) & \(49.07\pm 0.39\) & \(13.31\pm 0.16\) \\ LF-BAT (Ours) & \(\mathbf{50.65\pm 0.19}\) & \(\mathbf{19.73\pm 0.05}\) & \(\mathbf{49.66\pm 0.20}\) & \(\mathbf{15.25\pm 0.20}\) \\ \hline \multirow{2}{*}{Method} & \multirow{2}{*}{SA (\%)} & \multicolumn{2}{c|}{AutoAttack (\%)} & \multicolumn{2}{c}{Time} \\ & & \(\mathbf{\epsilon}=8/255\) & \(\mathbf{\epsilon}=16/255\) & (Sec/ Iteration) \\ \hline Fast-AT & \(\mathbf{83.56\pm 0.06}\) & \(41.80\pm 0.68\) & \(7.32\pm 0.27\) & \(5.543\times 10^{-2}\) \\ LF-AT (Ours) & \(81.70\pm 0.15\) & \(\mathbf{42.11\pm 0.19}\) & \(\mathbf{8.13\pm 0.20}\) & \(5.719\times 10^{-2}\) \\ \hline Fast-AT-GA & \(\mathbf{81.00\pm 0.59}\) & \(43.17\pm 0.21\) & \(9.04\pm 0.18\) & \(1.632\times 10^{-1}\) \\ LF-AT-GA (Ours) & \(79.18\pm 0.13\) & \(\mathbf{43.31\pm 0.23}\) & \(\mathbf{10.22\pm 0.05}\) & \(1.643\times 10^{-1}\) \\ \hline Fast-BAT & \(\mathbf{82.01\pm 0.04}\) & \(45.51\pm 0.44\) & \(10.98\pm 0.19\) & \(1.644\times 10^{-1}\) \\ LF-BAT (Ours) & \(79.72\pm 0.14\) & \(\mathbf{45.54\pm 0.27}\) & \(\mathbf{12.23\pm 0.27}\) & \(1.656\times 10^{-1}\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: We report the SA and RA of Fast-AT, Fast-AT-GA and Fast-BAT under PGD attack (PGD-10 and PGD-50) and AutoAttack. We use \(m\pm n\) to denote the mean accuracy (i.e., \(m\)) with standard deviation (i.e., \(n\)) obtained by running all the algorithms with 3 random seeds.
Besides, we also implement these methods and our LAST framework based on the larger WRN-34-10 backbone, and the detailed results could be found in Tab. 4.
Furthermore, in Fig. 3, we focus on the catastrophic overfitting phenomenon when faced with stronger adversaries by setting \(\mathbf{\epsilon}=16/255\). It can be observed that the robustness of Fast-AT drops significantly during the training process, and its loss landscape (obtained with the best model selected by early stopping) shows violent fluctuations influenced by the injected perturbation and random noise. When we implement Fast-AT under the LAST framework together with the SD objective, the update direction is continuously corrected by the proxy model and the prior information of the soft targets, which finally leads to more stable convergence behavior, a performance boost and a smoother loss landscape. More ablation results without the proposed SD objective can be found in the Appendix.
### Evaluation with Multi-Step AT methods
To demonstrate that the LAST framework consistently and universally enhances established approaches, we extend it to the stronger PGD-based AT methods. In Tab. 3, we present the test results of PGD-10 based AT (denoted as PGD-AT) and the method improved by the LAST framework (denoted as LPGD-AT). It can be clearly seen that LPGD-AT performs significantly better than the original SAT trained with PGD-10, and even slightly improves the SA on the CIFAR10 dataset. When we train both methods on the CIFAR100 dataset, LPGD-AT achieves a substantial leap in performance compared with PGD-10 based AT. We attribute the more significant improvement of the LAST framework on PGD (\(9.2\%\) and \(20.5\%\) improvement in the last column of Tab. 3) to the fact that the correspondence between the target model and the parameter-specific adversarial attack obtained with multiple attack steps is more difficult to characterize; the consistency between the update sequences of SAT is therefore worse, which makes the performance improvement of our method all the more significant.
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c} \hline \hline \multicolumn{6}{c}{CIFAR-100 dataset, PARN-18 trained with \(\mathbf{\epsilon}=8/255\)} \\ \hline \multirow{2}{*}{Method} & \multirow{2}{*}{SA (\%)} & \multicolumn{2}{c|}{PGD-10 (\%)} & \multicolumn{2}{c|}{PGD-50 (\%)} & \multicolumn{2}{c}{AutoAttack (\%)} \\ & & \(\mathbf{\epsilon}=8/255\) & \(\mathbf{\epsilon}=16/255\) & \(\mathbf{\epsilon}=8/255\) & \(\mathbf{\epsilon}=16/255\) & \(\mathbf{\epsilon}=16/255\) \\ \hline Fast-AT & \(\mathbf{55.087}\) & \(24.330\) & \(7.430\) & \(23.533\) & \(5.520\) & \(4.153\) \\ LF-AT (Ours) & \(50.817\) & \(\mathbf{25.190}_{10.86}\) & \(\mathbf{9.003}_{11.57}\) & \(\mathbf{24.373}_{10.84}\) & \(\mathbf{7.497}_{11.98}\) & \(\mathbf{5.443}_{11.29}\) \\ \hline Fast-AT-GA & \(\mathbf{53.253}\) & \(25.660\) & \(8.603\) & \(24.853\) & \(6.836\) & \(5.320\) \\ LF-AT-GA (Ours) & \(48.220\) & \(\mathbf{25.887}_{10.23}\) & \(\mathbf{10.277}_{11.67}\) & \(\mathbf{25.433}_{10.58}\) & \(\mathbf{8.813}_{11.97}\) & \(\mathbf{6.270}_{10.95}\) \\ \hline Fast-BAT & \(\mathbf{42.793}\) & \(22.603\) & \(8.920\) & \(22.059\) & \(7.813\) & \(5.807\) \\ LF-BAT (Ours) & \(42.460\) & \(\mathbf{23.153}_{10.55}\) & \(\mathbf{9.840}_{10.92}\) & \(\mathbf{22.783}_{10.72}\) & \(\mathbf{8.740}_{10.93}\) & \(\mathbf{6.103}_{10.30}\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Illustrating the SA and RA of Fast-AT, Fast-AT-GA and Fast-BAT under PGD attack (PGD-10 and PGD-50) and AutoAttack on CIFAR100 dataset. We use \(\uparrow\) to report the average improvement of RA by running all the algorithms with 3 random seeds. More detailed results with standard deviations can be found in Tab. 5.
Figure 3: Subfigure (a) illustrates the convergence behavior of test loss and RA for Fast-AT and ours on CIFAR10 dataset under PGD-10 attack with \(\mathbf{\epsilon}=16/255\). In Subfigure (b), we compare the adversarial loss landscape for Fast-AT and our improved version for comparison. Note that we follow the method described in Fig. 1 with larger scale of linear coefficients \(x,\ y\in[-0.5,0.5]\).
Besides, it can be observed in subfigure (d) of Fig. 1 that the loss landscape of our model trained with the LAST framework is rendered smoother, with a reduced gap between its highest and lowest values. In addition, we compare the convergence behavior of the robust loss and RA for PGD-AT and our LPGD-AT on both the CIFAR10 and CIFAR100 datasets in Fig. 4. As illustrated, by improving the consistency among the historical states of the model parameters, LPGD-AT exhibits more stable convergence of both robust loss and accuracy, and finally attains higher performance after the multi-step learning rate decay is performed twice.
### Evaluation of Generalization Performance
Last but not least, we analyze the robustness of the defense against black-box attacks for a thorough evaluation. Practically, we plot the heatmaps of RA for different SAT methods against transfer-based black-box adversarial attacks on the CIFAR10 dataset under PGD-10 attack with \(\epsilon=8/255\) in Fig. 5. Note that the source model corresponds to the surrogate model used to generate the adversarial perturbation that attacks the target models. We use F-AT and LF-AT to denote Fast-AT and its version improved with the LAST framework, and the other methods follow similar abbreviations. It is shown that adversarial attacks generated from the source models trained by LAST are more difficult to defend for the standard model, and both the original AT methods and our improved ones perform better under transfer-based attacks than under white-box attacks.

Figure 4: The first two subfigures compare the convergence behavior of the test robust loss and RA trained with PGD-AT and LAST, both trained with \(\epsilon=8/255\) on the CIFAR10 dataset, while the last two subfigures illustrate the convergence behavior of the same metrics on the CIFAR100 dataset. The black dashed line denotes the epochs where the multi-step learning rate decays.

Figure 5: We visualize the heatmap of four SAT methods including Fast-AT, Fast-AT-GA, Fast-BAT, PGD-AT (i.e., 2-step PGD-AT) and their improved versions under transfer-based PGD-10 attack on the CIFAR10 dataset.

\begin{table}
\begin{tabular}{c|c|c|c|c|c|c} \hline \hline \multicolumn{6}{c}{CIFAR-10 dataset, PARN-18 trained with \(\epsilon=8/255\)} \\ \hline \multirow{2}{*}{Method} & \multirow{2}{*}{SA (\%)} & \multicolumn{2}{c|}{PGD-10 (\%)} & \multicolumn{2}{c|}{PGD-50 (\%)} & \multicolumn{1}{c}{AutoAttack (\%)} \\ & & \(\epsilon=8/255\) & \(\epsilon=16/255\) & \(\epsilon=8/255\) & \(\epsilon=16/255\) & \(\epsilon=16/255\) \\ \hline PGD-AT & \(81.948\) & \(51.923\) & \(20.310\) & \(50.757\) & \(15.677\) & \(13.093\) \\ LPGD-AT (Ours) & \(\mathbf{82.17}\) & \(\mathbf{53.230}_{1\uparrow 31}\) & \(\mathbf{22.203}_{1\uparrow 89}\) & \(\mathbf{52.137}_{1\uparrow 31}\) & \(\mathbf{17.587}_{1\uparrow 91}\) & \(\mathbf{14.297}_{1\uparrow 20}\) \\ \hline \multicolumn{6}{c}{CIFAR-100 dataset, PARN-18 trained with \(\epsilon=8/255\)} \\ \hline \multirow{2}{*}{Method} & \multirow{2}{*}{SA (\%)} & \multicolumn{2}{c|}{PGD-10 (\%)} & \multicolumn{2}{c|}{PGD-50 (\%)} & \multicolumn{1}{c}{AutoAttack (\%)} \\ & & \(\epsilon=8/255\) & \(\epsilon=16/255\) & \(\epsilon=8/255\) & \(\epsilon=16/255\) & \(\epsilon=16/255\) \\ \hline PGD-AT & \(\mathbf{49.457}\) & \(25.837\) & \(9.980\) & \(25.377\) & \(8.749\) & \(6.667\) \\ LPGD-AT (Ours) & \(48.150\) & \(\mathbf{31.267}_{15.43}\) & \(\mathbf{14.903}_{1\uparrow 49.92}\) & \(\mathbf{30.857}_{15.48}\) & \(\mathbf{13.573}_{14.83}\) & \(\mathbf{8.033}_{1\uparrow 37}\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: We report the SA and RA of PGD-AT(-10) and our improved version under PGD attack (PGD-10 and PGD-50) and AutoAttack on the CIFAR10 and CIFAR100 datasets by running both methods with 3 random seeds. More detailed results with standard deviations can be found in Tab. 6.
## 4 Conclusion
In this study, we addressed the vulnerability of deep learning models to adversarial attacks particularly focusing on the SAT methods. Firstly, we revisit the model update process based on its optimization trajectory and introduce the historical state as proxy model, leading to the development of the novel LAST framework. We also propose the SD defense objective that doesn't rely on large pretrained teacher models. Through extensive experiments, we demonstrated LAST's consistent performance improvements across datasets, backbones, and attack scenarios, along with its ability to enhance training stability.
# A Moebius sum

Olivier Ramaré, 15 August 2023 (arXiv:2308.07632, http://arxiv.org/abs/2308.07632v1)
###### Abstract.
We provide numerical bounds for \(\Sigma(X)=\sum_{d_{1},d_{2}\leq X}\frac{\mu(d_{1})\mu(d_{2})}{[d_{1},d_{2}]}\). We show in particular that \(0\leq\Sigma(X)\leq 17/25\) for every \(X\geq 2\).
## 1. Introduction and results
In several problems of analytic number theory appears the sum
\[\sum_{d_{1},d_{2}\leq X}\frac{\mu(d_{1})\mu(d_{2})}{[d_{1},d_{2}]}. \tag{1}\]
It has been shown by F. Dress, G. Tenenbaum and H. Iwaniec in [1] that this sum converges to a constant, and H. Helfgott computes its first four decimals precisely in Proposition 6.30 of [2]. This implies in particular that this constant is \(\leq 0.4408\). Our aim in this note is to provide a uniform and explicit upper bound for this sum.
**Theorem 1.1**.: _We have_
\[0\leq\sum_{d_{1},d_{2}\leq X}\frac{\mu(d_{1})\mu(d_{2})}{[d_{1},d_{2}]}\leq \begin{cases}17/25&\text{when }X\geq 2,\\ 0.574&\text{when }X\geq 10^{9},\\ 0.536&\text{when }X\geq 3\cdot 10^{10},\\ 0.504&\text{when }X\geq 2.4\cdot 10^{12}.\end{cases}\]
_We have \(17/25=0.68\)._
The bounds for \(X\) have been chosen with an application in mind. The lower estimate comes from the interpretation of this constant as follows. By the Parseval equality, we have that, when \(\sigma>1\):
\[\lim_{T\to\infty}\frac{1}{2i\pi T}\int_{\sigma-iT}^{\sigma+iT}\biggl{|} \zeta(s)\sum_{d\leq X}\frac{\mu(d)}{d^{s}}\biggr{|}^{2}ds =\sum_{n\geq 1}\biggl{(}\sum_{\begin{subarray}{c}d|n\\ d\leq X\end{subarray}}\mu(d)\biggr{)}^{2}/n^{2\sigma-1}\] \[=\zeta(2\sigma-1)\sum_{d_{1},d_{2}\leq X}\frac{\mu(d_{1})\mu(d_{ 2})}{[d_{1},d_{2}]^{2\sigma-1}}\]
and a continuity argument as \(\sigma\to 1^{+}\) ends the proof of the lower bound.
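For small \(X\), the sum (1) can be computed exactly, which illustrates both inequalities of Theorem 1.1; the brute-force Python script below is ours and plays no role in the proofs.

```python
from math import gcd
from fractions import Fraction

def moebius_sieve(n):
    """mu(1..n) by a standard linear sieve."""
    mu = [1] * (n + 1)
    is_comp = [False] * (n + 1)
    primes = []
    for i in range(2, n + 1):
        if not is_comp[i]:
            primes.append(i)
            mu[i] = -1
        for p in primes:
            if i * p > n:
                break
            is_comp[i * p] = True
            if i % p == 0:
                mu[i * p] = 0
                break
            mu[i * p] = -mu[i]
    return mu

def sigma(X):
    """Exact value of sum_{d1,d2<=X} mu(d1)mu(d2)/[d1,d2] as a Fraction."""
    mu = moebius_sieve(X)
    s = Fraction(0)
    for d1 in range(1, X + 1):
        if mu[d1] == 0:
            continue
        for d2 in range(1, X + 1):
            if mu[d2]:
                lcm = d1 // gcd(d1, d2) * d2
                s += Fraction(mu[d1] * mu[d2], lcm)
    return s
```

Note that \(\Sigma(1)=1>17/25\), consistent with the restriction \(X\geq 2\) in the theorem.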
**Notation**.: Our notation is classical: we use
\[m_{q}(y)=\sum_{\begin{subarray}{c}d\leq y\\ (d,q)=1\end{subarray}}\frac{\mu(d)}{d},\quad\text{and}\quad m(y)=m_{1}(y). \tag{2}\]
Furthermore, \(f=\mathcal{O}^{*}(g)\) means that \(|f|\leq g\).
## 2. On the Moebius function
**Lemma 2.1**.: _We have \(|m(x)|\leq\sqrt{2/x}\) for \(0<x\leq 10^{14}\) and \(|m_{2}(x)|\leq\sqrt{3/x}\) for \(0<x\leq 10^{12}\)._
This is [2, Lemma 5.10] and [2, Eq. (5.79)].
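Lemma 2.1 can be spot-checked numerically at integer arguments; this Python sketch is ours and is of course no substitute for the cited results, which cover the full ranges.

```python
from math import gcd, sqrt

def mobius(n):
    """mu(n) by trial factorization (adequate for small n)."""
    if n == 1:
        return 1
    res, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0
            res = -res
        p += 1
    if n > 1:
        res = -res
    return res

def m_running(n, q=1):
    """vals[x-1] = m_q(x) of Eq. (2), for integer x = 1..n."""
    vals, s = [], 0.0
    for d in range(1, n + 1):
        if gcd(d, q) == 1:
            s += mobius(d) / d
        vals.append(s)
    return vals
```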
**Lemma 2.2**.: _We have \(|m(x)|\leq 0.0144/\log x\) for \(x\geq 463\,421\) and \(|m_{2}(x)|\leq 0.0296/\log x\) for \(x\geq 5379\)._
This follows from [5, Theorem 1.2] and from [2, Lemma 5.17]
**Lemma 2.3**.: _With \(\xi=1-1/(12\log 10)\), \(y>1\) and any \(t\in(0,y]\), we have_
\[|m(t)|\leq\sqrt{2/t}+0.0144\cdot\mathbf{1}_{y\geq 10^{12}}\frac{y^{1-\xi}}{ \log y}\frac{1}{t^{1-\xi}}\]
_as well as_
\[|m_{2}(t)|\leq\sqrt{3/t}+0.0296\cdot\mathbf{1}_{y\geq 10^{12}}\frac{y^{1-\xi}}{ \log y}\frac{1}{t^{1-\xi}}.\]
This is a trivial modification of [2, Lemma 5.12]: we degraded the value of \(\xi\) in the first estimate to get a more uniform result, added the condition \(\mathbf{1}_{y\geq 10^{12}}\), which was obvious in the proof, and transposed the argument almost verbatim to \(m_{2}\).
**Lemma 2.4**.: _With \(\xi=1-1/(12\log 10)\) and for \(y>1\) and any squarefree \(d\geq 1\), we have_
\[|m_{d}(y)|\leq g_{0}(d)\sqrt{2/y}+0.0144g_{1}(d)\frac{\mathbf{1}_{y\geq 10^{12}} }{\log y}\]
_where the multiplicative functions \(g_{0}\) and \(g_{1}\) are defined on primes by:_
\[g_{0}(p)=\begin{cases}\sqrt{3/2}&\text{when }p=2\\ \frac{\sqrt{p}}{\sqrt{p}-1}&\text{when }p\geq 3\end{cases}\quad\text{and} \quad g_{1}(p)=\begin{cases}2.06&\text{when }p=2\\ \frac{p}{p-1}&\text{when }p\geq 3.\end{cases}\]
This is a trivial modification of [2, Proposition 5.15]: when \(d\) is even we remove only the coprimality to \(d/2\) and use directly the estimate for \(m_{2}\) given in Lemma 2.3. The value \(2.06\) is an upper bound for \(0.0296/0.0144\). The main outcome of Lemma 2.4 over [2, Proposition 5.15] is the improved value of \(g_{0}(2)\).
## 3. Auxiliaries
Here is [6, Lemma 3.2].
**Lemma 3.1**.: _Let \(f\) be a C\({}^{4}\) non-negative, non-increasing function over \([P,\infty[\), where \(P\geq 3\,600\,000\) is a real number and such that \(\lim_{t\to\infty}tf(t)=0\). We have_
\[\sum_{p\geq P}f(p)\log p\leq(1+\epsilon)\int_{P}^{\infty}f(t)dt+\epsilon Pf(P )+Pf(P)/(5\log^{2}P)\]
_with \(\epsilon=1/914\). When we can only ensure \(P\geq 2\), then a similar inequality holds, simply replacing the last \(1/5\) by a 4._
**Lemma 3.2**.: _When \(D\geq 0\), we have \(\sum_{d\leq D}\frac{\mu^{2}(d)\varphi(d)}{d}g_{0}(d)^{2}\leq 2.07\,D\)._
The maximum is reached at \(D=42\).
**Lemma 3.3**.: _When \(D\geq 0\), we have \(\sum_{d\leq D}\frac{\mu^{2}(d)\varphi(d)}{d}g_{0}(d)g_{1}(d)\leq 1.60\,D\)._
The maximum is reached at \(D=7\).
**Lemma 3.4**.: _When \(D\geq 0\), we have \(\sum_{d\leq D}\frac{\mu^{2}(d)\varphi(d)}{d}g_{1}(d)^{2}\leq 1.57\,D\)._
The maximum is reached at \(D=3\).
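These three upper bounds are easy to test numerically for small \(D\). The Python sketch below is ours; note that we take \(g_{1}(p)=p/(p-1)\) for odd \(p\), the reading of Lemma 2.4 under which the positivity \((p-1)G(p)-p\geq 0\) used in the proof that follows actually holds.

```python
from math import sqrt

def prime_factors(n):
    """Prime factorization of n as (prime, exponent) pairs."""
    fs, p = [], 2
    while p * p <= n:
        if n % p == 0:
            e = 0
            while n % p == 0:
                n //= p
                e += 1
            fs.append((p, e))
        p += 1
    if n > 1:
        fs.append((n, 1))
    return fs

def g0(p):
    return sqrt(3 / 2) if p == 2 else sqrt(p) / (sqrt(p) - 1)

def g1(p):
    # g1(2) = 2.06; for odd p we take p/(p-1) (our reading, see the lead-in)
    return 2.06 if p == 2 else p / (p - 1)

def weighted_sums(Dmax, G):
    """Running sums S(D) = sum_{d<=D} mu^2(d) (phi(d)/d) G(d)."""
    out, s = [], 0.0
    for d in range(1, Dmax + 1):
        fs = prime_factors(d)
        if all(e == 1 for _, e in fs):          # mu^2(d) = 1 iff d is squarefree
            w = 1.0
            for p, _ in fs:
                w *= (1 - 1 / p) * G(p)         # phi(d)/d and G(d), multiplicatively
            s += w
        out.append(s)
    return out
```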
Proof of Lemmas 3.2, 3.3 and 3.4.: The three proofs are similar. We use \(G\) for either the multiplicative function \(g_{0}^{2}\), \(g_{0}g_{1}\) or \(g_{1}^{2}\). We readily check that
\[\sum_{d\geq 1}\frac{\mu^{2}(d)\varphi(d)G(d)}{d^{1+s}}=\prod_{p\geq 2}\biggl{(} 1+\frac{(p-1)G(p)-p}{p^{1+s}}-\frac{(p-1)G(p)}{p^{1+2s}}\biggr{)}\zeta(s)=H(s) \zeta(s)\]
say. Notice that in the three cases, we have \((p-1)G(p)-p\geq 0\). Thus, by adopting an obvious notation and using [4, Lemma 3.2] with \(k_{n}=1/n\) and \(g(d)=\mu^{2}(d)\varphi(d)G(d)/d\) together with the second part of [3, Lemma 2.1], we deduce that our sum \(S\) satisfies
\[S=H(1)D+\mathcal{O}^{*}(2.5\times\gamma\overline{H}(2/3)D^{2/3})\]
where
\[H(1)=\prod_{p\geq 2}\biggl{(}1+\frac{(p-1)G(p)-p}{p^{2}}-\frac{(p-1)G(p)}{p^ {3}}\biggr{)}\leq\begin{cases}2.0004&\text{when $G=g_{0}^{2}$},\\ 1.34&\text{when $G=g_{0}g_{1}$},\\ 1.06&\text{when $G=g_{1}^{2}$},\end{cases}\]
and
\[\overline{H}(2/3)=\prod_{p\geq 2}\biggl{(}1+\frac{(p-1)G(p)-p}{p^{5/3}}+ \frac{(p-1)G(p)}{p^{7/3}}\biggr{)}\leq\begin{cases}72.9&\text{when $G=g_{0}^{2}$},\\ 23.4&\text{when $G=g_{0}g_{1}$},\\ 9.20&\text{when $G=g_{1}^{2}$}.\end{cases}\]
Bounding \(\overline{H}(2/3)\) from above numerically requires some care: we use an Euler product for \(p\leq 10^{8}\) and the following Pari/GP script, which relies on Lemma 3.1:
```
{g0(p) = if(p == 2, return(sqrt(3/2)), return(1/(1-1/sqrt(p))));}

f0(p) = ((p-1)*g0(p)^2-p)/p^2 - (p-1)*g0(p)^2/p^3;
f1(p) = ((p-1)*g0(p)^2-p)/p^(5/3) + (p-1)*g0(p)^2/p^(7/3);

{val(boundP, myf) =
  my(res = prodeuler(p = 2, boundP, 1.0 + myf(p)), eps = 1/914, aux);
  aux = (1+eps) * intnum(t = boundP, oo, myf(t)/log(t));
  aux += eps * boundP * myf(boundP) / log(boundP);
  aux += boundP * myf(boundP) / 5 / (log(boundP)^3);
  return(res * exp(aux));}
```
We called val(10000, f0) and val(10^7, f1) for \(G=g_{0}^{2}\), and similarly val(100000, f2) with val(10^7, f3) for \(G=g_{0}g_{1}\), and val(100000, f4) with val(10^7, f5) for \(G=g_{1}^{2}\), where f2 to f5 are defined analogously to f0 and f1. This proves that, when \(D\geq 0\), we have
\[\sum_{d\leq D}\frac{\mu^{2}(d)\varphi(d)}{d}G(d)\leq\begin{cases}2.0004D+106D^ {2/3}&\text{when $G=g_{0}^{2}$},\\ 1.34D+33.8D^{2/3}&\text{when $G=g_{0}g_{1}$},\\ 1.06D+13.3D^{2/3}&\text{when $G=g_{1}^{2}$}.\end{cases}\]
To conclude, we called check(4*10^9, af1atp, 2.0004, 106), check(10^7, af2atp, 1.34, 33.8) and check(10^6, af3atp, 1.06, 13.3), respectively, from the same script.
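A Python analogue (ours) of the f0 part of this computation: for odd primes the Euler factors exceed 1, so the partial products increase with the cutoff and stay below the constant \(2.0004\) used above for \(H(1)\) with \(G=g_{0}^{2}\).

```python
from math import sqrt

def primes_up_to(n):
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(range(i * i, n + 1, i)))
    return [p for p in range(2, n + 1) if sieve[p]]

def H1_partial(P):
    """Partial Euler product for H(1) with G = g0^2, i.e. the f0-term above."""
    def g0sq(p):
        return 1.5 if p == 2 else p / (sqrt(p) - 1) ** 2
    prod = 1.0
    for p in primes_up_to(P):
        prod *= 1.0 + ((p - 1) * g0sq(p) - p) / p**2 - (p - 1) * g0sq(p) / p**3
    return prod
```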
## 4. A bound for the tail
**Lemma 4.1**.: _When \(x\geq D\geq 0\), we have_
\[\sum_{d\leq\min(D,x/10^{12})}\frac{\mu^{2}(d)\varphi(d)}{d^{3/2}\log(x/d)}g_{0}( d)g_{1}(d)\leq 0.05\sqrt{D}.\]
Proof.: Set \(y=\min(D,x/10^{12})\). We have
\[\frac{d}{dt}\frac{1}{\sqrt{t}\log(x/t)}=-\frac{1}{2t^{3/2}\log(x/t)}+\frac{1}{ t^{3/2}\log^{2}(x/t)}\]
which is negative when \(x/t\geq 10^{12}\). Hence, by Lemma 3.3, our sum \(S\) satisfies
\[S \leq\int_{1}^{y}1.60\frac{2\log(x/t)-1}{2\sqrt{t}\log(x/t)^{2}}dt +\frac{1.60\sqrt{D}}{\log(x/D)}\] \[\leq 0.80\sqrt{x}\int_{x/y}^{x}(2\log u-1)\frac{du}{u^{3/2}\log^{2 }u}+0.0497\sqrt{D}\leq 0.05\sqrt{D}.\]
The lemma follows readily.
**Lemma 4.2**.: _When \(x\geq D\geq 0\), we have_
\[\sum_{d\leq\min(D,x/10^{12})}\frac{\mu^{2}(d)\varphi(d)}{d^{2}\log(x/d)^{2}}g_ {1}(d)^{2}\leq 0.047.\]
Proof.: Set \(y=\min(D,x/10^{12})\). We have
\[\frac{d}{dt}\frac{1}{t\log(x/t)^{2}}=-\frac{1}{t^{2}\log(x/t)^{2}}+\frac{2}{t ^{2}\log^{3}(x/t)}\]
which is negative when \(x/t\geq 10^{12}\). Hence, by Lemma 3.4, our sum \(S\) satisfies
\[S \leq\int_{1}^{y}1.57\frac{\log(x/t)-2}{t\log^{3}(x/t)}dt+\frac{1. 57}{\log(x/D)^{2}}\] \[\leq 1.57\int_{x/y}^{x}(\log u-1)\frac{du}{u\log^{3}u}+0.00152\leq 0.047\]
The lemma follows readily.
**Lemma 4.3**.: _When \(x\geq D>0\), we have_
\[\sum_{d\leq D}\frac{\mu^{2}(d)\varphi(d)}{d^{2}}m_{d}(x/d)^{2}\leq 4.14\frac{D} {x}+0.00205.\]
Proof.: By Lemma 2.4, we readily find that
\[\sum_{d\leq D}\frac{\mu^{2}(d)\varphi(d)}{d^{2}} \bigg{(}g_{0}(d)\sqrt{\frac{2d}{x}}+\mathbf{1}_{x/d\geq 10^{12}}g_ {1}(d)\frac{0.0144}{\log(x/d)}\bigg{)}^{2}\] \[\leq 2.07\frac{2D}{x}+\frac{2\sqrt{2}\cdot 0.0144}{\sqrt{x}}\sum_{d \leq\min(D,x/10^{12})}\frac{\mu^{2}(d)\varphi(d)}{d^{3/2}\log(x/d)}g_{0}(d)g_{ 1}(d)\] \[+0.0144^{2}\sum_{d\leq\min(D,x/10^{12})}\frac{\mu^{2}(d)\varphi(d )}{d^{2}\log(x/d)^{2}}g_{1}(d)^{2}.\]
By Lemmas 3.2, 4.1 and 4.2, we get
\[\sum_{d\leq D}\frac{\mu^{2}(d)\varphi(d)}{d^{2}}\bigg{(}g_{0}(d) \sqrt{\frac{2d}{x}}+\mathbf{1}_{x/d\geq 10^{12}}g_{1}(d)\frac{0.0144}{\log(x/d)} \bigg{)}^{2}\\ \leq 4.14\frac{D}{x}+0.00204+0.00000975.\]
## 5. Some refinements due to coprimality
We shall use this lemma when bounding \(|r_{2}^{*}(X;q)|\) below.
**Lemma 5.1**.: _We have_
\[\max_{M\geq 1,K>0}\left|\!K\sum_{\begin{subarray}{c}k\geq K\\ (k,M)=1\end{subarray}}\frac{\mu(k)\varphi(k)}{k^{3}}\right|=1.\]
Proof.: Let us denote by \(S\) our sum.
When \(K<1\): We notice that
\[\sum_{\begin{subarray}{c}k\geq 1\\ (k,M)=1\end{subarray}}\frac{\mu(k)\varphi(k)}{k^{3}}=\prod_{p\neq M}\biggl{(} 1-\frac{1}{p^{2}}+\frac{1}{p^{3}}\biggr{)}\leq 1\]
which establishes the estimate \(|S|\leq 1/K\) when \(K\leq 1\).
When \(1\leq K<2\): We find that
\[S=\prod_{p\neq M}\biggl{(}1-\frac{1}{p^{2}}+\frac{1}{p^{3}}\biggr{)}-1.\]
Our sum is thus non-positive and its smallest value is
\[\prod_{p\geq 2}\biggl{(}1-\frac{1}{p^{2}}+\frac{1}{p^{3}}\biggr{)}-1\geq-0.252 \geq-\frac{0.504}{K}\quad\text{(when $1\leq K<2$)}.\]
When \(2\leq K<3\): We find that
\[S=\begin{cases}\prod_{p\neq M}\bigl{(}1-\frac{1}{p^{2}}+\frac{1}{p^{3}} \bigr{)}-1&\text{when $2|M$},\\ \prod_{p\neq M}\bigl{(}1-\frac{1}{p^{2}}+\frac{1}{p^{3}}\bigr{)}-\frac{7}{8}& \text{when $2\nmid M$}.\end{cases}\]
This implies in the first case that \(S\) is non-positive, and that it bounded above by \(1/8\) in the second case. Thus
\[|S|\leq\begin{cases}0.145\leq\frac{0.435}{K}&\text{when $2|M$},\\ \max(0.127,1/8)\leq\frac{0.381}{K}&\text{when $2\nmid M$}.\end{cases}\]
When \(2|M\): In that case, \(k\) is odd and a comparison to an integral gives us
\[|S|\leq\frac{1}{K^{2}}+\int_{(K-1)/2}^{\infty}\frac{dt}{(2t+1)^{2}}\leq\frac{ 1}{K^{2}}+\frac{1}{2K}\leq\frac{5}{6K} \tag{3}\]
on assuming \(K\geq 3\), establishing our estimate in this case.
When \(3\leq K<4\) and \(2\nmid M\): We find that
\[S=\begin{cases}\prod_{p\neq M}\bigl{(}1-\frac{1}{p^{2}}+\frac{1}{p^{3}}\bigr{)}- \frac{7}{8}&\text{when }3|M,\\ \prod_{p\neq M}\bigl{(}1-\frac{1}{p^{2}}+\frac{1}{p^{3}}\bigr{)}-\frac{173}{216 }&\text{when }3\nmid M.\end{cases}\]
This implies that
\[|S|\leq\begin{cases}\max(\frac{25}{27}-\frac{7}{8},0.127)\leq\frac{0.508}{K}& \text{when }3|M,\\ 0.053\leq\frac{0.212}{K}&\text{when }3\nmid M.\end{cases}\]
This proves our estimate in this case.
When \(2\nmid M\): We may write
\[S =\sum_{\begin{subarray}{c}k\geq K\\ (k,M)=1\\ (k,2)=1\end{subarray}}\frac{\mu(k)\varphi(k)}{k^{3}}+\sum_{\begin{subarray}{c}k \geq 2K\\ (k,M)=1\\ 2|k\end{subarray}}\frac{\mu(k)\varphi(k)}{k^{3}}+\sum_{\begin{subarray}{c}2K> k\geq K\\ (k,M)=1\\ 2|k\end{subarray}}\frac{\mu(k)\varphi(k)}{k^{3}}\] \[=\frac{7}{8}\sum_{\begin{subarray}{c}k\geq K\\ (k,M)=1\\ (k,2)=1\end{subarray}}\frac{\mu(k)\varphi(k)}{k^{3}}-\frac{1}{8}\sum_{ \begin{subarray}{c}K>k\geq K/2\\ (k,M)=1\\ (k,2)=1\end{subarray}}\frac{\mu(k)\varphi(k)}{k^{3}}\]
We use Eq. (3) to infer that
\[|S|\leq\frac{7}{8}\biggl{(}\frac{1}{K^{2}}+\frac{1}{2K}\biggr{)}+\frac{1}{8} \biggl{(}\frac{1}{(K/2)^{2}}+\frac{1}{2(K/2)}\biggr{)}.\]
We readily check that this quantity is \(<1/K\) when \(K\geq 4\), which concludes the proof.
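Lemma 5.1 can be probed numerically by truncating the series; this Python sketch is ours, and the truncation error at \(N\) is below \(\sum_{k>N}1/k^{2}<1/N\).

```python
from math import ceil, gcd

def mobius_phi(n):
    """Linear sieve computing mu and phi up to n."""
    mu = [0] * (n + 1)
    phi = [0] * (n + 1)
    mu[1] = phi[1] = 1
    primes = []
    for i in range(2, n + 1):
        if phi[i] == 0:                 # i has not been marked: it is prime
            primes.append(i)
            mu[i] = -1
            phi[i] = i - 1
        for p in primes:
            if i * p > n:
                break
            if i % p == 0:
                mu[i * p] = 0
                phi[i * p] = phi[i] * p
                break
            mu[i * p] = -mu[i]
            phi[i * p] = phi[i] * (p - 1)
    return mu, phi

N = 20000
MU, PHI = mobius_phi(N)

def tail_sum(K, M):
    """Truncation at N of sum_{k >= K, (k,M)=1} mu(k) phi(k) / k^3."""
    return sum(MU[k] * PHI[k] / k**3
               for k in range(max(1, ceil(K)), N + 1) if gcd(k, M) == 1)
```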
## 6. On a family of functions: initial step
In this section and the next one, we study the family
\[G^{*}_{q}(X)=\sum_{\begin{subarray}{c}d\leq X\\ (d,q)=1\end{subarray}}\frac{\mu^{2}(d)\varphi(d)}{d^{2}},\quad G^{*}(X)=G^{*} _{1}(X). \tag{4}\]
It will transpire from the proof to come that the study of \(G^{*}\) is of special importance for the general case. We devote this section to the precise modification of the initial case that will be required. The approach is different: in the next section, we shall compare the function \(\mathbf{1}_{(d,q)=1}\mu^{2}(d)\varphi(d)/d^{2}\) to the function \(1/d\) (see Lemma 7.3), while here we compare the function \(\mu^{2}(d)\varphi(d)/d^{2}\) to \(\mu^{2}(d)/d\), as in [7]. Such a comparison is the subject of our first lemma.
**Lemma 6.1**.: _We have_
\[\mu^{2}(d)\frac{\varphi(d)}{d}=\sum_{\ell m=d}\mu^{2}(\ell)g(m)\]
_where \(g\) is the multiplicative function defined by \(g(p^{k})=(-1)^{k}/p\) for every positive integer \(k\) and every prime \(p\)._
Proof.: We simply compare the \(p\)-factors of the corresponding Dirichlet series, and check that
\[\frac{1+\frac{p-1}{p^{1+s}}}{1+\frac{1}{p^{s}}}=1-\frac{1}{p^{1+s}}\cdot\frac{1}{1+ \frac{1}{p^{s}}}=1+\sum_{k\geq 1}\frac{g(p^{k})}{p^{ks}}\]
from which the lemma follows.
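The identity can also be checked by brute force; for instance, in Python with exact rational arithmetic:

```python
from fractions import Fraction

def factor(n):
    """Prime factorization of n as a dict {p: exponent}."""
    f, p = {}, 2
    while p * p <= n:
        while n % p == 0:
            f[p] = f.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:
        f[n] = f.get(n, 0) + 1
    return f

def mu2(n):
    """mu(n)^2, i.e. the indicator that n is squarefree."""
    return int(all(e == 1 for e in factor(n).values()))

def phi(n):
    """Euler's totient function."""
    r = n
    for p in factor(n):
        r = r // p * (p - 1)
    return r

def g(n):
    """The multiplicative function of Lemma 6.1, g(p^k) = (-1)^k / p."""
    r = Fraction(1)
    for p, k in factor(n).items():
        r *= Fraction((-1) ** k, p)
    return r

def lemma61_holds(d):
    lhs = Fraction(mu2(d) * phi(d), d)
    rhs = sum((mu2(l) * g(d // l) for l in range(1, d + 1) if d % l == 0),
              Fraction(0))
    return lhs == rhs
```

For example, at \(d=6\) the divisor pairs \((\ell,m)\) contribute \(1/6-1/3-1/2+1=1/3=\varphi(6)/6\).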
**Lemma 6.2**.: _We have_
\[\forall X\geq X_{0},\quad\sum_{d\leq X}\mu^{2}(d)=\frac{6}{\pi^{2}}X+\mathcal{O}^ {*}\big{(}c(X_{0})\sqrt{X}\big{)}\]
_where_

\begin{tabular}{|c|c|c|c|c|c|} \hline \(X_{0}\) & \(0\) & \(8\) & \(1664\) & \(82005\) & \(438653\) \\ \hline \(c(X_{0})\) & \(1\) & \(1/2\) & \(0.1333\) & \(0.036438\) & \(0.02767\) \\ \hline \end{tabular}
**Lemma 6.3**.: _We have_
\[\max_{X\geq 0}\frac{1}{\sqrt{X}}\bigg{(}\sum_{d\leq X}\mu^{2}(d)\frac{\varphi( d)}{d}-AX\bigg{)}=1-A\leq 0.572\]
_where_
\[A=\prod_{p\geq 2}\biggl{(}1-\frac{2}{p^{2}}+\frac{1}{p^{3}}\biggr{)}=0.428257\cdots.\]
Proof.: By Lemma 6.1, when \(X\geq M_{0}X_{0}\) we may write
\[\sum_{d\leq X}\mu^{2}(d)\frac{\varphi(d)}{d} =\sum_{m\geq 1}g(m)\sum_{\ell\leq X/m}\mu^{2}(\ell)\] \[=\sum_{m\leq M_{0}}g(m)\biggl{(}\frac{6}{\pi^{2}}\frac{X}{m}+ \mathcal{O}^{*}\biggl{(}c(X_{0})\sqrt{\frac{X}{m}}\biggr{)}\biggr{)}\] \[\qquad+\sum_{m>M_{0}}g(m)\biggl{(}\frac{6}{\pi^{2}}\frac{X}{m}+ \mathcal{O}^{*}\biggl{(}\sqrt{\frac{X}{m}}\biggr{)}\biggr{)}\] \[=AX+\mathcal{O}^{*}\biggl{(}\sqrt{X}\sum_{m\geq 1}\frac{|g(m)|}{ \sqrt{m}}\times\begin{cases}c(X_{0})&\text{when }m\leq M_{0},\\ 1&\text{when }m>M_{0}.\end{cases}\biggr{)}\]
We have therefore reached the fundamental formula, valid for \(X\geq X_{0}M_{0}\):
\[\sum_{d\leq X}\mu^{2}(d)\frac{\varphi(d)}{d}=AX+\mathcal{O}^{*}\biggl{(} \biggl{(}\sum_{m\geq 1}\frac{|g(m)|}{\sqrt{m}}-(1-c(X_{0}))\sum_{m\leq M_{0}} \frac{|g(m)|}{\sqrt{m}}\biggr{)}\sqrt{X}\biggr{)}.\]
We numerically check that
\[\forall X\leq 4\cdot 10^{9},\quad\sum_{d\leq X}\mu^{2}(d)\frac{\varphi(d)}{d} \leq AX+(1-A)\sqrt{X}.\]
See AMoebiusSum-r1-01.gp/getmaxlocr0(). Therefore, we may take \(M_{0}=9118\). The script AMoebiusSum-r1-01.gp/getmaxasympr0() concludes the proof.
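Both the value of \(A\) and the inequality of Lemma 6.3 are easy to reproduce numerically. The following Python sketch (an independent check, separate from the Pari/GP scripts) truncates the Euler product at \(10^{6}\), which affects the value of \(A\) by less than \(10^{-6}\), and verifies the inequality for \(X\leq 10^{5}\):

```python
from math import sqrt

def primes_up_to(n):
    """Sieve of Eratosthenes."""
    s = bytearray([1]) * (n + 1)
    s[0:2] = b"\x00\x00"
    for i in range(2, int(n ** 0.5) + 1):
        if s[i]:
            s[i * i :: i] = bytearray(len(s[i * i :: i]))
    return [i for i in range(2, n + 1) if s[i]]

def euler_product_A(limit=10**6):
    """A = prod over primes of (1 - 2/p^2 + 1/p^3), truncated at `limit`."""
    A = 1.0
    for p in primes_up_to(limit):
        A *= 1 - 2 / p**2 + 1 / p**3
    return A

def max_normalized_deviation(N=10**5):
    """max over 1 <= X <= N of (sum_{d<=X} mu^2(d) phi(d)/d - A*X) / sqrt(X)."""
    A = euler_product_A()
    phi = list(range(N + 1))              # totient by sieve
    for p in range(2, N + 1):
        if phi[p] == p:                   # p is prime
            for m in range(p, N + 1, p):
                phi[m] -= phi[m] // p
    sf = bytearray([1]) * (N + 1)         # squarefree indicator
    for i in range(2, int(N ** 0.5) + 1):
        sf[i * i :: i * i] = bytearray(len(sf[i * i :: i * i]))
    best, running = -1.0, 0.0
    for X in range(1, N + 1):
        if sf[X]:
            running += phi[X] / X
        best = max(best, (running - A * X) / sqrt(X))
    return best
```

As the lemma predicts, the maximum \(1-A\) of the normalized deviation is attained at \(X=1\).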
## 7. On a family of functions: first batch
Let us mention that a related study by S. Zuniga Alterman appears in [9, Lemma 4.5]. The final result of this part is Lemma 7.2 below. We proceed as for the proof of [3, Theorem 1.1], which relies on [3, Theorem 1.4]. Let us start with a more precise, though more special, form of this latter result.
**Lemma 7.1**.: _Let \((g(m))_{m\geq 1}\) be a sequence of complex numbers such that both series \(\sum_{m\geq 1}g(m)/m\) and \(\sum_{m\geq 1}g(m)(\log m)/m\) converge. We define \(G^{\sharp}(x)=\sum_{m>x}g(m)/m\) and assume that \(\int_{1}^{\infty}|G^{\sharp}(t)|dt/t\) converges. We then have_
\[\sum_{n\leq D}\frac{(g\star 1)(n)}{n}=\sum_{m\geq 1}\frac{g(m)}{m} \Bigl{(}\log\frac{D}{m}+\gamma\Bigr{)}+\int_{e^{\gamma}D}^{\infty}G^{\sharp} (t)\frac{dt}{t}\\ +\mathcal{O}^{*}\Bigl{(}\frac{1}{D}\int_{1}^{e^{\gamma}}\sum_{m \leq uD}|g(m)|\frac{du}{u}\Bigr{)}.\]
Proof.: We established in [3, Eq. (2.2)] the following formula:
\[\sum_{n\leq D}\frac{(g\star 1)(n)}{n}=\sum_{m\geq 1}\frac{g(m)}{m} \Bigl{(}\log\frac{D}{m}+\gamma\Bigr{)}+\int_{\eta D}^{\infty}G^{\sharp}(t)\frac{ dt}{t}\\ -(\gamma-\log\eta)G^{\sharp}(\eta D)+\sum_{m\leq\eta D}\frac{g(m) }{m}R\left(\frac{D}{m}\right). \tag{5}\]
where \(\eta\geq 1\) and \(R(t)=\sum_{n\leq t}1/n-\log t-\gamma\). We also showed that, for any positive \(t\), we have \(|R(t)|\leq\gamma\). Let us specialize \(\eta=e^{\gamma}\). We see that \(R\left(\frac{D}{m}\right)=-\int_{D/m}^{e^{\gamma}}du/u\), so that
\[\sum_{m\leq\eta D}\frac{|g(m)|}{m}\biggl{|}R\left(\frac{D}{m}\right)\biggr{|} \leq\int_{1}^{e^{\gamma}}\sum_{m\leq D}\frac{|g(m)|}{m}\frac{du}{u}+\int_{1}^ {e^{\gamma}}\sum_{D<m\leq uD}\frac{|g(m)|}{m}\frac{du}{u}\]
from which our lemma readily follows.
**Lemma 7.2**.: _For every \(X>0\), we have_
\[G_{q}^{*}(X)=A\prod_{p|q}\frac{p^{2}}{p^{2}+p-1}\bigl{(}\log X+c_{q}\bigr{)}+ \mathcal{O}^{*}(4.73j_{1}^{*}(q)/\sqrt{X})\]
_where (as in Lemma 6.3)_
\[A=\prod_{p\geq 2}\biggl{(}1-\frac{2}{p^{2}}+\frac{1}{p^{3}}\biggr{)}=0.42825 7\cdots,\quad j_{1}^{*}(q)=\prod_{p|q}\frac{p^{3/2}+p}{p^{3/2}+1},\]
_and_
\[c_{q}=\gamma+\sum_{p|q}\frac{(p-1)\log p}{p^{2}+p-1}+\sum_{p\geq 2}\frac{(3p-2)\log p}{(p-1)(p^{2}+p-1)}.\]
_Moreover,_
\[G_{q}^{*}(X)-G_{q}^{*}(Y)=A\prod_{p|q}\frac{p^{2}}{p^{2}+p-1}\log \frac{X}{Y}\\ +\mathcal{O}^{*}\biggl{(}2.18j_{1}^{*}(q)\biggl{(}\frac{2(e^{ \gamma/2}-1)}{\sqrt{X}}+\frac{2(e^{\gamma/2}-1)}{\sqrt{Y}}+\frac{2}{\sqrt{e^{ \gamma}Y}}-\frac{2}{\sqrt{e^{\gamma}X}}\biggr{)}\biggr{)}.\]
**Lemma 7.3**.: _We have_
\[\mathbf{1}_{(d,q)=1}\frac{\mu^{2}(d)\varphi(d)}{d}=\sum_{\begin{subarray}{c}k^ {2}\ell r|d\\ r|q,\ (k\ell,q)=1\\ (k,\ell)=1\end{subarray}}\frac{\mu(rk\ell)\varphi(k)}{k\ell}.\]
This is the counterpart of [3, Lemma 4.1].
Proof.: We find that
\[D_{q}(s)=\sum_{\begin{subarray}{c}d\geq 1\\ (d,q)=1\end{subarray}}\frac{\mu^{2}(d)\varphi(d)}{d^{1+s}}=\prod_{ \begin{subarray}{c}p\geq 2\\ (p,q)=1\end{subarray}}\left(1+\frac{p-1}{p^{1+s}}\right)\]
which we decompose in
\[D_{q}(s)=\zeta(s)\prod_{p|q}\biggl{(}1-\frac{1}{p^{s}}\biggr{)}\prod_{ \begin{subarray}{c}p\geq 2\\ (p,q)=1\end{subarray}}\left(1-\frac{1}{p^{1+s}}-\frac{p-1}{p^{1+2s}}\right). \tag{6}\]
The identity then follows either by using the unitarian convolution as in [8, Theorem 3.5], or by simply checking that both the left- and right-hand sides are multiplicative functions of \(d\) and that they coincide on prime powers.
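The identity can also be confirmed numerically, for instance in Python by brute-force enumeration of the triples \((k,\ell,r)\) with exact rational arithmetic:

```python
from fractions import Fraction
from math import gcd, isqrt

def mu(n):
    """Moebius function by trial division."""
    res, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0
            res = -res
        p += 1
    return -res if n > 1 else res

def phi(n):
    """Euler's totient function."""
    r, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            r = r // p * (p - 1)
            while m % p == 0:
                m //= p
        p += 1
    if m > 1:
        r = r // m * (m - 1)
    return r

def divisors(n):
    return [e for e in range(1, n + 1) if n % e == 0]

def lhs(d, q):
    return Fraction(mu(d) ** 2 * phi(d), d) if gcd(d, q) == 1 else Fraction(0)

def rhs(d, q):
    """Sum over k^2 * l * r | d with r | q, (kl, q) = 1, (k, l) = 1."""
    tot = Fraction(0)
    for k in range(1, isqrt(d) + 1):
        if d % (k * k):
            continue
        for l in divisors(d // (k * k)):
            if gcd(k, l) != 1 or gcd(k * l, q) != 1:
                continue
            for r in divisors(d // (k * k * l)):
                if q % r == 0:
                    tot += Fraction(mu(r * k * l) * phi(k), k * l)
    return tot
```

For non-squarefree \(d\), such as \(d=12\), both sides vanish; the coprimality condition \((k\ell,q)=1\) is what makes the right-hand side vanish when \((d,q)>1\).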
We define in this section
\[r_{2}^{*}(X;q)=\sum_{\begin{subarray}{c}k^{2}\ell r>X\\ r|q\\ (k\ell,q)=(k,\ell)=1\end{subarray}}\frac{\mu(rk\ell)\varphi(k)}{rk^{3}\ell^{2}} \tag{7}\]
as well as
\[r_{1}^{*}(X;q)=\sum_{\begin{subarray}{c}k^{2}\ell r\leq X\\ r|q\\ (k\ell,q)=(k,\ell)=1\end{subarray}}\frac{\mu^{2}(rk\ell)\varphi(k)}{k\ell}. \tag{8}\]
Let us first majorize \(r_{1}^{*}(X;q)\).
**Lemma 7.4**.: _The function \(j_{1}^{*}\) being defined in Lemma 7.2, we have_
\[\forall X\geq 0,\quad r_{1}^{*}(X;q)\leq 2.18\sqrt{X}j_{1}^{*}(q).\]
_We also have_
\[r_{1}^{*}(X;q)\leq 0.931\sqrt{X}j_{1}^{*}(q)+1.96X^{1/4}j_{5}^{*}(q).\]
_where_
\[j_{5}^{*}(q)=\prod_{p|q}\frac{p^{5/4}+p}{p^{5/4}+1}. \tag{9}\]
This is the counterpart of [3, Lemma 6.1].
Proof.: We take advantage of the variable \(k\) by writing
\[r_{1}^{*}(X;q)\leq\sum_{\begin{subarray}{c}\ell r\leq X\\ r|q\\ (\ell,q)=1\end{subarray}}\frac{\mu^{2}(r\ell)}{\ell}\sqrt{\frac{X}{\ell r}}= \sqrt{X}\prod_{p|q}\frac{p^{3/2}+p}{p^{3/2}+1}\prod_{p\geq 2}\biggl{(}1+ \frac{1}{p^{3/2}}\biggr{)}.\]
We finally notice that \(\prod_{p\geq 2}(1+p^{-3/2})=\zeta(3/2)/\zeta(3)\). The first part of the lemma follows readily. The proof we have just followed used the upper bound
\[\sum_{\begin{subarray}{c}k\leq K\\ (k,q\ell)=1\end{subarray}}\mu^{2}(k)\frac{\varphi(k)}{k}\leq K.\]
As it turns out, the quantity to be majorized can be handled by Lemma 7.2! Such a recursive treatment leads to constants that are too big for us. But we may still forget the coprimality condition and appeal to Lemma 6.3. This gives us
\[r_{1}^{*}(X;q) \leq A\sum_{\begin{subarray}{c}\ell r\leq X\\ r|q\\ (\ell,q)=1\end{subarray}}\frac{\mu^{2}(r\ell)}{\ell}\sqrt{\frac{X}{\ell r}}+(1 -A)\sum_{\begin{subarray}{c}\ell r\leq X\\ r|q\\ (\ell,q)=1\end{subarray}}\frac{\mu^{2}(r\ell)}{\ell}\biggl{(}\frac{X}{\ell r} \biggr{)}^{1/4}\] \[\leq A\frac{\zeta(3/2)}{\zeta(3)}j_{1}^{*}(q)\sqrt{X}+(1-A)\frac{ \zeta(5/4)}{\zeta(5/2)}j_{5}^{*}(q)X^{1/4}.\]
A numerical application concludes the proof.
**Lemma 7.5**.: _The function \(j_{1}^{*}\) being defined in Lemma 7.2, we have_
\[|r_{2}^{*}(X;q)|\leq\frac{2.18}{\sqrt{X}}j_{1}^{*}(q).\]
This is the counterpart of [3, Lemma 6.2].
Proof.: We again take advantage of the variable \(k\) and use Lemma 5.1.
\[|r_{2}^{*}(X;q)| \leq\sum_{\begin{subarray}{c}\ell\geq 1,r|q\\ (\ell,q)=1\end{subarray}}\frac{\mu^{2}(r\ell)}{r\ell^{2}}\bigg{|}\sum_{ \begin{subarray}{c}k^{2}>X/(\ell r)\\ (k,q\ell)=1\end{subarray}}\frac{\mu(k)\varphi(k)}{k^{3}}\bigg{|}\] \[\leq\frac{1}{\sqrt{X}}\sum_{\begin{subarray}{c}\ell\geq 1,r|q\\ (\ell,q)=1\end{subarray}}\frac{\mu^{2}(r)\mu^{2}(\ell)}{\sqrt{r}\ell^{3/2}}\]
where we recognize the quantities that appeared in the proof of Lemma 7.4. The lemma then follows swiftly.
Proof of Lemma 7.2.: We employ [3, Theorem 1.4] with the function \(g\) being
\[g(m)=\sum_{\begin{subarray}{c}k^{2}\ell r=m\\ (k\ell,q)=1\\ (k,\ell)=1\end{subarray}}\frac{\mu(rk\ell)\varphi(k)}{k\ell}.\]
This is a consequence of Lemma 7.3. Then \(r_{2}^{*}(X;q)\) is the \(G^{\sharp}(X)\) of [3, Theorem 1.4] while \(r_{1}^{*}(X;q)\) is \(\sum_{m\leq X}|g(m)|\). As a consequence, and with \(\eta=e^{\gamma}\), we find that, for \(X\geq 1\), we have
\[G_{q}^{*}(X)=\sum_{m\geq 1}\frac{g(m)}{m}\Big{(}\log\frac{X}{m}+\gamma\Big{)} +\int_{\eta X}^{\infty}r_{2}^{*}(t;q)\frac{dt}{t}+\mathcal{O}^{*}\bigg{(}\int_{1}^{e^{ \gamma}}\frac{r_{1}^{*}(uX;q)\,du}{uX}\bigg{)}. \tag{10}\]
Lemmas 7.4 and 7.5 give
\[G_{q}^{*}(X)=\sum_{m\geq 1}\frac{g(m)}{m}\Big{(}\log\frac{X}{m}+\gamma\Big{)} +\mathcal{O}^{*}\bigg{(}2.18(2(e^{\gamma/2}-1)+2e^{-\gamma/2})\frac{j_{1}^{*} (q)}{\sqrt{X}}\bigg{)}.\]
We identify the main term by using the Dirichlet series \(H_{q}(s)=D_{q}(s)/\zeta(s)\) (see Eq. (6)) of \(g\), and get
\[G_{q}^{*}(X)=H_{q}(1)\bigg{(}\log X+\frac{H_{q}^{\prime}(1)}{H_{q}(1)}+\gamma \bigg{)}+\mathcal{O}^{*}(4.73j_{1}^{*}(q)/\sqrt{X}).\]
We find that
\[H_{q}(1)=\prod_{p|q}\frac{p^{2}}{p^{2}+p-1}\prod_{p\geq 2}\bigg{(}1-\frac{2}{ p^{2}}+\frac{1}{p^{3}}\bigg{)}.\]
We get to \(H_{q}^{\prime}/H_{q}\) by using the logarithmic derivatives and find that
\[\frac{H_{q}^{\prime}(1)}{H_{q}(1)} =\sum_{p|q}\frac{\log p}{p-1}+\sum_{p\nmid q}\frac{(3p-2)\log p}{(p-1 )(p^{2}+p-1)}\] \[=\sum_{p|q}\frac{(p-1)\log p}{p^{2}+p-1}+\sum_{p\geq 2}\frac{(3p-2) \log p}{(p-1)(p^{2}+p-1)}.\]
On the difference \(G_{q}^{*}(X)-G_{q}^{*}(Y)\): To get the more precise evaluation of \(G_{q}^{*}(X)-G_{q}^{*}(Y)\), we go back to Eq. (10) to save on the factor involving \(r_{2}^{*}\). The proof is then rapidly completed.
## 8. On a family of functions: second batch
**Lemma 8.1**.: _When \(q\) has all its prime factors below 30, we have_
\[\forall X\geq 0,\quad r_{1}^{*}(X;q)\leq 1.17\sqrt{X}j_{1}^{*}(q).\]
Proof.: We first prove by hard computations, that when \(q\) has all its prime factors below 30, we have
\[\forall X\leq 10^{6},\quad r_{1}^{*}(X;q)\leq 1.17\sqrt{X}j_{1}^{*}(q).\]
See the Pari/GP script AMoebiusSum-r1-01.gp/getmaxr1(). For \(X\) large, we use the second bound provided by Lemma 7.4 and get
\[\frac{r_{1}^{*}(X;q)}{\sqrt{X}j_{1}^{*}(q)}\leq 0.931+1.96\frac{j_{5}^{*}(q)}{X ^{1/4}j_{1}^{*}(q)}.\]
The lemma follows readily.
## 9. Direct computations
**Lemma 9.1**.: _When \(422\leq X\leq 11\,000\,000\), we have_
\[0\leq\sum_{d_{1},d_{2}\leq X}\frac{\mu(d_{1})\mu(d_{2})}{[d_{1},d_{2}]}\leq 0. 445.\]
_On \([6,10\,040\,000]\), this sum is bounded above by \(0.528\). On \([2,10\,040\,000]\), this sum is bounded above by \(19/30=0.633\cdots\). When \(X\geq 1000\), this sum remains \(\geq 0.437\)._
A value larger than \(0.44455\) is reached around \(X=1321\). Let us set
\[\Sigma(X)=\sum_{d_{1},d_{2}\leq X}\frac{\mu(d_{1})\mu(d_{2})}{[d_{1},d_{2}]}. \tag{11}\]
When \(X\) is an integer, say \(d\), we find that
\[\Sigma(d)-\Sigma(d-1)=\frac{\mu^{2}(d)}{d}+2\mu(d)\sum_{d^{\prime}<d}\frac{\mu (d^{\prime})}{[d,d^{\prime}]}.\]
This yields a formula to find the maximum of \(\Sigma(d)\) over some range, but each step is costly. We continue with
\[\Sigma(d)-\Sigma(d-1) =\frac{\mu^{2}(d)}{d}+2\frac{\mu(d)}{d}\sum_{d^{\prime}<d}(d,d^{ \prime})\frac{\mu(d^{\prime})}{d^{\prime}}\] \[=\frac{\mu^{2}(d)}{d}+2\frac{\mu(d)}{d}\sum_{\delta|d}\mu(\delta )\sum_{\begin{subarray}{c}d^{\prime}<d/\delta\\ (d^{\prime},d)=1\end{subarray}}\frac{\mu(d^{\prime})}{d^{\prime}}.\]
We finally use the Landau formula (see for instance [2, (5.73)]):
\[\sum_{\begin{subarray}{c}d^{\prime}<d/\delta\\ (d^{\prime},d)=1\end{subarray}}\frac{\mu(d^{\prime})}{d^{\prime}}=\sum_{ \ell|d^{\infty}}\frac{1}{\ell}\sum_{d^{\prime}<d/(\ell\delta)}\frac{\mu(d^{ \prime})}{d^{\prime}}.\]
Therefore
\[\Sigma(d)-\Sigma(d-1)=\frac{\mu^{2}(d)}{d}+2\frac{\mu(d)}{d}\sum_{\begin{subarray} {c}\delta|d\\ \ell|d^{\infty}\end{subarray}}\frac{\mu(\delta)}{\ell}m((d-1)/(\delta\ell)).\]
Let us join \(\delta\) and \(\ell\) in \(k=\delta\ell\). We have \(k|d^{\infty}\) and
\[\sum_{\delta\ell=k}\frac{\mu(\delta)}{\ell}=\frac{1}{k}\prod_{p|k}(1-p)= \frac{\mu(\kappa(k))\varphi(\kappa(k))}{k},\]
where \(\kappa(k)=\prod_{p|k}p\) denotes the squarefree kernel of \(k\).
Here is thus the identity we use:
\[\Sigma(d)-\Sigma(d-1)=\frac{\mu^{2}(d)}{d}+2\frac{\mu(d)}{d}\sum_{k|d^{\infty}} \frac{\mu(\kappa(k))\varphi(\kappa(k))}{k}m\Big{(}\frac{d-1}{k}\Big{)}. \tag{12}\]
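Identity (12), together with the Landau formula used to derive it, can be verified exactly for small \(d\); in the following Python check the coefficient is computed directly as \(\frac{1}{k}\prod_{p|k}(1-p)\), as in the preceding display:

```python
from fractions import Fraction
from math import gcd

N = 50
MU = [0] * (N + 1)          # Moebius function by divisor sieve
MU[1] = 1
for i in range(1, N + 1):
    for j in range(2 * i, N + 1, i):
        MU[j] -= MU[i]

def coeff(k):
    """(1/k) * prod over primes p | k of (1 - p)."""
    c, m, p = Fraction(1, k), k, 2
    while p * p <= m:
        if m % p == 0:
            c *= 1 - p
            while m % p == 0:
                m //= p
        p += 1
    if m > 1:
        c *= 1 - m
    return c

def m_sum(t):
    """m(t) = sum_{n <= t} mu(n)/n."""
    return sum(Fraction(MU[n], n) for n in range(1, t + 1))

def Sigma(X):
    """Eq. (11), via 1/[d1,d2] = (d1,d2)/(d1*d2)."""
    return sum(Fraction(MU[a] * MU[b] * gcd(a, b), a * b)
               for a in range(1, X + 1) for b in range(1, X + 1))

def divides_power(k, d):
    """True iff k | d^infinity, i.e. every prime factor of k divides d."""
    g = gcd(k, d)
    while g > 1:
        while k % g == 0:
            k //= g
        g = gcd(k, d)
    return k == 1

def step(d):
    """Right-hand side of Eq. (12); only k <= d-1 contribute since m vanishes."""
    s = sum((coeff(k) * m_sum((d - 1) // k)
             for k in range(1, d) if divides_power(k, d)), Fraction(0))
    return Fraction(MU[d] ** 2, d) + Fraction(2 * MU[d], d) * s
```

For instance, at \(d=6\) the coefficients for \(k=1,2,3,4\) sum against \(m(5),m(2),m(1),m(1)\) to give \(\Sigma(6)-\Sigma(5)=-7/30\), matching the direct double sum.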
See script MAoebiusSum-02.gp/DITgreat(). This entails precomputing all the values \(m(t)\) for \(t\) up to the point where we want to compute, and this is a large array. So we decided to store only the values \(m_{M_{0}}(t)\) for \(M_{0}=6\). We stored in fact
\[m(t;u,M_{0})=\sum_{\begin{subarray}{c}n\leq t\\ n\equiv u[M_{0}]\end{subarray}}\mu(n)/n \tag{13}\]
for \(u\) covering a reduced congruence system modulo \(M_{0}\), i.e. in practice all \(u\in\{1,\cdots,M_{0}\}\) that are coprime to \(M_{0}\). This sizeably reduces the amount of storage, while accessing the values \(m(t)\) takes more time, in a classical time/space bargain. See script MAoebiusSum-02.gp/DITb().
## 10. Proof of Theorem 1.1
Proof.: We readily find that
\[S =\sum_{d\leq x}\frac{\mu^{2}(d)\varphi(d)}{d^{2}}\bigg{(}\sum_{ \begin{subarray}{c}n\leq x/d\\ (n,d)=1\end{subarray}}\frac{\mu(n)}{n}\bigg{)}^{2}\] \[=\sum_{D<d\leq x}\frac{\mu^{2}(d)\varphi(d)}{d^{2}}m_{d}(x/d)^{2} +\sum_{d\leq D}\frac{\mu^{2}(d)\varphi(d)}{d^{2}}m_{d}(x/d)^{2}.\]
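The first equality, which diagonalizes \(\Sigma(x)\) of Eq. (11) as \(\sum_{d\leq x}\mu^{2}(d)\varphi(d)d^{-2}m_{d}(x/d)^{2}\), can be checked exactly for small \(x\):

```python
from fractions import Fraction
from math import gcd

def mu(n):
    """Moebius function by trial division."""
    res, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0
            res = -res
        p += 1
    return -res if n > 1 else res

def phi(n):
    """Euler's totient function."""
    r, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            r = r // p * (p - 1)
            while m % p == 0:
                m //= p
        p += 1
    if m > 1:
        r = r // m * (m - 1)
    return r

def Sigma(X):
    """sum_{d1,d2 <= X} mu(d1) mu(d2) / [d1,d2]."""
    return sum(Fraction(mu(a) * mu(b) * gcd(a, b), a * b)
               for a in range(1, X + 1) for b in range(1, X + 1))

def diagonalized(X):
    """sum_{d <= X} mu^2(d) phi(d)/d^2 * m_d(X/d)^2, with
    m_d(y) = sum_{n <= y, (n,d)=1} mu(n)/n."""
    tot = Fraction(0)
    for d in range(1, X + 1):
        if mu(d) == 0:
            continue
        md = sum(Fraction(mu(n), n)
                 for n in range(1, X // d + 1) if gcd(n, d) == 1)
        tot += Fraction(phi(d), d * d) * md * md
    return tot
```

For example, at \(X=3\) both sides equal \(1/36+1/4+2/9=1/2\).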
The second sum is handled in Lemma 4.3. Let us call \(S_{0}\) the first one. We set \(\Delta(j)=\prod_{p\leq j}p\) and write
\[S_{0} =\sum_{j\leq x/D}\sum_{\max(x/D,x/(j+1))<d\leq x/j}\frac{\mu^{2} (d)\varphi(d)}{d^{2}}m_{d}(j)^{2}\] \[=\sum_{j\leq x/D}\sum_{\delta|\Delta(j)}m_{\delta}(j)^{2}\sum_{ \begin{subarray}{c}\max(x/D,x/(j+1))<d\leq x/j\\ (d,\Delta(j))=\delta\end{subarray}}\frac{\mu^{2}(d)\varphi(d)}{d^{2}}\] \[\leq\sum_{j\leq x/D}\sum_{\delta|\Delta(j)}\frac{\mu^{2}(\delta )\varphi(\delta)}{\delta^{2}}m_{\delta}(j)^{2}\sum_{\begin{subarray}{c}\frac{ x}{(j+1)\delta}<d\leq\frac{x}{j\delta}\\ (d,\Delta(j))=1\end{subarray}}\frac{\mu^{2}(d)\varphi(d)}{d^{2}}.\]
A direct usage of Lemma 7.2 gives us
\[S_{0} =A\sum_{j\leq x/D}\prod_{p|\Delta(j)}\frac{p^{2}}{p^{2}+p-1}\log \frac{j+1}{j}\sum_{\delta|\Delta(j)}\frac{\mu^{2}(\delta)\varphi(\delta)}{ \delta^{2}}m_{\delta}(j)^{2}\] \[\qquad\qquad+\mathcal{O}^{*}\bigg{(}\frac{1}{\sqrt{x}}\sum_{j \leq x/D}j_{1}^{*}(\Delta(j))\bigg{(}2\times 2.18(e^{\gamma/2}-1)(\sqrt{j+1}+ \sqrt{j})\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad+\frac{2\times 2.18e^{- \gamma/2}}{\sqrt{j+1}+\sqrt{j}}\bigg{)}\sum_{\delta|\Delta(j)}\frac{\mu^{2}( \delta)\varphi(\delta)}{\delta^{2}}m_{\delta}(j)^{2}\sqrt{\delta}\bigg{)}.\]
This expression may be improved in two manners: when \(\delta\) has all its prime factors below 30, we may rely on Lemma 8.1 rather than on Lemma 7.4, therefore replacing the first 2.18 (the factor of \(\sqrt{j+1}+\sqrt{j}\)) by 1.17. The second improvement consists in localizing \(X\) in an interval \([Y,2Y]\). When \(j\delta>2Y\), no contribution is to be incorporated.
By selecting \(x/D=22.99\) and assuming \(x\geq 11\,000\,000\), we reach \(S\leq 0.679\). See Pari/GP script MAoebiusSumMT.gp/DoIt().
* When \(x\geq 10^{9}\), we reach \(S\leq 0.574\) (on taking \(x/D=38.99\)).
* When \(x\geq 3\cdot 10^{10}\), we reach \(S\leq 0.536\) (on taking \(x/D=55.99\)).
* When \(x\geq 2.4\cdot 10^{12}\), we reach \(S\leq 0.504\) (on taking \(x/D=75.99\)).
2308.11050 | Optimal Dorfman Group Testing for Symmetric Distributions | Nicholas C. Landolfi, Sanjay Lall | 2023-08-21T21:29:05Z | http://arxiv.org/abs/2308.11050v2

# Optimal Dorfman Group Testing For Symmetric Distributions
###### Abstract
We study Dorfman's classical group testing protocol in a novel setting where individual specimen statuses are modeled as exchangeable random variables. We are motivated by infectious disease screening. In that case, specimens which arrive together for testing often originate from the same community and so their statuses may exhibit positive correlation. Dorfman's protocol screens a population of \(n\) specimens for a binary trait by partitioning it into nonoverlapping groups, testing these, and only individually retesting the specimens of each positive group. The partition is chosen to minimize the expected number of tests under a probabilistic model of specimen statuses. We relax the typical assumption that these are independent and identically distributed and instead model them as exchangeable random variables. In this case, their joint distribution is symmetric in the sense that it is invariant under permutations. We give a characterization of such distributions in terms of a function \(q\) where \(q(h)\) is the marginal probability that any group of size \(h\) tests negative. We use this interpretable representation to show that the set partitioning problem arising in Dorfman's protocol can be reduced to an integer partitioning problem and efficiently solved. We apply these tools to an empirical dataset from the COVID-19 pandemic. The methodology helps explain the unexpectedly high empirical efficiency reported by the original investigators.
**Keywords:** probabilistic group testing, Dorfman procedure, probabilistic symmetries, exchangeable random variables, set partitioning problem, integer partitions, disease screening, COVID-19 pandemic

**MSC codes:** 60G09, 62E10, 62H05, 62P10, 90-08, 90C39, 90C90
## 1 Introduction
Group testing is widely used to conserve resources while performing large-scale disease screening. Logistical considerations often lead to the use of Dorfman's simple two-stage adaptive procedure in practice. This protocol is usually based on probabilistic analyses of disease prevalence arising from models of specimen statuses as mutually independent random variables. In this paper, we generalize and study the case in which the statuses are modeled as exchangeable, but not necessarily independent, random variables.
Given a population of \(n\) specimens to screen for a binary trait, the group testing framework allows for several specimens to be pooled and tested together as a group. The group tests positive if any of its individual specimens is positive. The group tests negative if, and only if, all of its specimens are negative. Numerous protocols using this testing capability have been proposed, of which Dorfman's two-stage adaptive procedure is the earliest, simplest, and most widely used. In this protocol, the population is partitioned into nonoverlapping groups and these are tested in the first stage. If a group of size \(h>1\) tests negative, each of its \(h\) specimens is immediately determined negative and \(h-1\) tests are saved. If a group tests positive, each of its specimens is retested individually in the second stage and determined according to the outcome of its individual test. The key question is how to partition the specimens.
Since tests are saved only when a group tests negative and these group test outcomes depend on the distribution and prevalence of positive specimens, a standard approach specifies a probabilistic model of specimen statuses and finds a partition to minimize the expected
number of tests used. In general, both specifying the model and finding the partition are difficult. The first requires a parameterization, and the second a computation, which grows exponentially in the number of specimens to be tested. Historically, this complexity has been avoided via simple probabilistic models arising from a strong assumption of independence.
It is desirable from both a theoretical and practical point of view to alleviate the independence assumption. From a theoretical point of view, it is interesting to consider how one might efficiently find partitions for more complicated distributions. From a practical point of view, it is natural to suppose that the statuses of specimens arriving together for testing may be correlated because they originate from the same family, living place, or workplace and the disease is contagious. Indeed, a recent large-scale study cited this phenomenon when explaining the failure of current theoretical tools to predict observed empirical test savings [21]. It is a pleasant surprise, therefore, that one can model specimen statuses as exchangeable while maintaining interpretability of the probabilistic model and tractability of the computation.
### Contributions
When individual statuses are modeled as exchangeable random variables their joint distribution is symmetric in the sense that it is invariant under permutations of its arguments. Our first contribution is to characterize such a symmetric distribution \(p\) in terms of a function \(q\), where \(q(h)\) is the probability that a group of size \(h\) tests negative. The representation \(q\) is key to finding a partition to minimize the expected number of tests.
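One concrete way to see that \(q\) pins down the whole distribution (our illustration of the idea; the paper's formal statement may differ): by inclusion-exclusion, the probability that a fixed set of \(k\) specimens is negative while the remaining \(n-k\) are positive is \(\sum_{i}(-1)^{i}\binom{n-k}{i}q(k+i)\), with \(q(0)=1\).

```python
from math import comb

def pattern_prob(k, n, q):
    """P(a fixed set of k specimens is negative and the other n-k positive),
    given q(h) = P(any fixed h specimens are all negative), q(0) = 1."""
    return sum((-1) ** i * comb(n - k, i) * q(k + i) for i in range(n - k + 1))

def pmf_negatives(n, q):
    """Distribution of the number of negative specimens among n; by
    exchangeability every k-subset has the same pattern probability."""
    return [comb(n, k) * pattern_prob(k, n, q) for k in range(n + 1)]
```

As a sanity check, the IID choice \(q(h)=(1-p)^{h}\) recovers the binomial distribution of the number of negatives.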
Our second contribution is to use this characterization, along with a natural reduction of additive and symmetric set partitioning problems to additive integer partitioning problems, to show how to efficiently compute optimal partitions for exchangeable statuses. In contrast to additive _set_ partitioning problems, additive _integer_ partitioning problems are tractable and several efficient algorithms are known for their solution. For details, see Section 4.
Lastly, we apply these tools to an empirical dataset from the COVID-19 pandemic. The data we use indicate empirical efficiency exceeding that predicted by the classical theory, which models statuses as independent and identically distributed. Our tools partially explain this empirical efficiency and also indicate a different and more efficient partition than that used by the original investigators. We make our numerical implementation available [122].
In summary, we study Dorfman's two-stage adaptive group testing procedure for the case of exchangeable specimen statuses. Our three contributions are:
1. a characterization of symmetric distributions over binary outcomes
2. a method to efficiently find optimal testing partitions under such distributions
3. a numerical experiment applying these tools to an empirical COVID-19 dataset
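As a sketch of the second contribution: with exchangeable statuses, the expected number of tests for a partition depends only on the multiset of group sizes, so the set partitioning problem collapses to an integer partitioning problem solvable by dynamic programming. A minimal sketch, assuming the standard Dorfman cost of 1 test for a singleton and \(1+h(1-q(h))\) expected tests for a group of size \(h\geq 2\) (this cost form is our reading of the protocol; the paper's exact algorithm may differ in details):

```python
def optimal_partition(n, q, h_max=None):
    """Minimize the expected number of Dorfman tests over integer
    partitions of n, given q(h) = P(a group of size h tests negative).
    Returns (expected tests, list of group sizes)."""
    h_max = h_max if h_max is not None else n
    cost = [0.0, 1.0] + [1 + h * (1 - q(h)) for h in range(2, h_max + 1)]
    best = [0.0] + [float("inf")] * n       # best[m] = min cost to cover m specimens
    choice = [0] * (n + 1)
    for m in range(1, n + 1):
        for h in range(1, min(m, h_max) + 1):
            c = cost[h] + best[m - h]
            if c < best[m]:
                best[m], choice[m] = c, h
    parts, m = [], n                        # reconstruct the partition
    while m:
        parts.append(choice[m])
        m -= choice[m]
    return best[n], sorted(parts, reverse=True)
```

An optional cap `h_max` accommodates limits on group size. Note that `q` need not come from an IID model; any symmetric distribution's marginal negative-group probabilities may be supplied.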
_Outline_. In the following two subsections we give further background on related work and introduce our notation. In Section 2, we formalize Dorfman's two-stage adaptive group testing protocol. In Section 3, we discuss and characterize symmetric distributions. In Section 4, we study the structure of symmetric and additive set partitioning problems and present tools for their solution. In Section 5, we numerically apply these tools to a COVID-19 dataset. We briefly conclude in Section 6 and list some directions for future work.
### Background
We provide background on group testing, probabilistic symmetries and partitioning problems. Each is a highly developed field with an extensive literature.
#### 1.2.1 Group testing
In 1943, Dorfman initiated the study of group testing, also called _pooled testing_, by proposing his original methodology for disease screening [55]. The field has grown considerably since. Today, it can be distinguished along several axes. We briefly characterize the setting of this paper before further outlining these areas.
_Our setting_. We study Dorfman's two-stage, adaptive procedure in the probabilistic, finite-population setting with binary specimen statuses and binary, noiseless, unconstrained tests. The central novelty is in modeling the specimen statuses as exchangeable random variables. We are aware of only one other article studying restricted forms of exchangeability [126]. We also emphasize that we use the term symmetric, see Section 3, to describe the joint distribution of the statuses and not, as others have done [164, 56], to describe the testing model.
The key prior work for situating our contribution is Dorfman's original article [55] and Hwang's follow-up [90]. Both considered Dorfman's _adaptive_ two-stage procedure in a _probabilistic_ setting, with _noiseless binary_ test results. Hwang moved from Dorfman's _infinite_ population setting to a _finite_ population setting and generalized Dorfman's probabilistic model to allow for specimen-specific positive status probabilities. Our work generalizes Dorfman's in a similar but parallel way. Rather than dropping the _identically distributed_ assumption as Hwang does, we drop the _independence_ assumption. We visualize this in Figure 1.
_Areas of group testing_. Although a comprehensive survey of group testing is beyond the scope of this paper, we highlight some of the variety within the field.
_A) Specimen status models, side information, and objective_. A natural first distinction in group testing is between the probabilistic and combinatorial approach. This paper considers _probabilistic_ group testing. Here, one specifies a probabilistic model of specimen statuses and performs testing to minimize some criterion usually related to expected efficiency. In the alternative _combinatorial_ approach, one specifies information about the number of positive specimens and performs testing to minimize a worst-case criterion usually related to the maximum number of tests required. Hence a worst-case, or _minimax_[56], analysis replaces an average-case analysis. For examples of combinatorial group testing, starting with Li in 1962 [127], see [88, 111, 37, 38, 75, 63]. Hereafter we assume the probabilistic setting.
Within probabilistic group testing, a _first further_ subdivision involves the related aspects of model choice and side information. For example, Dorfman [55] models binary specimen statuses as IID and assumes only knowledge of the _prevalence_, i.e. the probability that any given specimen is positive. Historically, several authors have followed his approach [168, 162, 174, 67, 180, 80, 164, 78, 154, 172], including two influential textbooks containing the so-called _blood testing problem_ as exercises [66, 182]. This simple probabilistic model has been called
Figure 1: Assumptions for Dorfman’s two-stage adaptive group testing procedure with noiseless binary test results. (a) drops the independence assumption whereas (b) drops the identically distributed assumption.
the _IID model_[7], _binomial model_[150, 39], _B-model_[160], or _binomial_[119] or _homogeneous_ population [10]. Some authors use the term _binomial group testing_[160, 89].
Many other probabilistic models have been considered besides the binomial. In 1968, Sobel considered a setting in which \(d\) positive specimens are distributed uniformly throughout the population [160]. This has been called the _hypergeometric model_[159, 160, 98], _H-model_[160], or _combinatorial prior_[7]. The term _generalized hypergeometric model_[93] has been used when only an upper bound on \(d\) is assumed, whereas the term _truncated binomial model_[94, 96] has been used if an upper bound is known for the binomial model. In 1973, Nebenzahl and Sobel [143] considered group testing for a population composed of several separate binomial populations with different prevalences. As mentioned above, Hwang [90] further generalized this direction by modeling specimens as independent but _nonidentical_ binary random variables. This is called the _generalized binomial model_[90], _prior defectivity model_[7], _nonidentical model_[128, 112, 53], or _heterogeneous population_[58, 10, 31].
In this style, we might use the term _exchangeable model_ or _symmetric model_ to describe the exchangeable populations we consider herein. The information assumed is the \(n\) parameters of the symmetric distribution. On one hand, the binomial, truncated binomial, hypergeometric, and generalized hypergeometric models are symmetric. On the other, the generalized binomial model is _not_ symmetric. Previously, the so-called _mean model_[96] has been studied as a generalization of all of these. It assumes only the mean number of positives. Later on, we discuss other more recent models allowing correlation between specimen statuses.
Within probabilistic group testing, a _second further_ subdivision relates to parameter uncertainty and the choice of objective. Starting with Sobel and Groll in 1959 [162], several authors handle uncertainty in model parameters [163, 120, 39, 107]. In this setting, the objective of _estimation_ may replace that of efficiency [161, 39]. Elsewhere, other objectives such as information gain [2] and risk-based metrics [10] have been considered. See [85] for further discussion of different objectives. In the sequel, we assume full knowledge of model parameters and focus on the objective of efficiency as measured by the expected number of tests used.
_B) Testing models, feasibility, and noise._ A second distinction relates to the group testing capability. Dorfman [55] considers unconstrained, noiseless, binary individual and group tests. This is the setting we consider. The terms _reliable_[56] for noiseless and _disjunctive_[24] for binary group outcomes are also used. We mention alternative testing models for binary specimen statuses below. Historically, other testing models also arise naturally from nonbinary specimen status models, as for example the _trinomial model_[117] and _multinomial model_[118].
Toward _more_ informative tests, we mention three examples. First, Sobel [160] considered _quantitative group testing_[7] in which group tests reveal the number of positive specimens. Sobel used the term _H-type_ model, in contrast with the term _B-type_ model for the usual binary result setting. As indicated earlier, he used analogous language for the specimen model. Other terms include _linear model_ or _adder channel model_[7]. For variants on this theme, see [139, 61, 179]. Second, Pfeifer and Enis [150] considered group tests that reveal the sum or mean of individual test results. Although the distinction involves tests and not statuses, they use the term _modified binomial model_ or _M-model_. For examples of fully continuous test results, see [181, 177]. Third, Sobel and coauthors [164] considered _symmetric group testing_[56] in which there are three group test outcomes: all positive, all negative, and mixed. We reiterate that symmetric here describes the test model and not the status model.
In the opposite direction, several authors weaken the group test capability. Toward _less_ informative models, we mention Hwang [91] and Farach et al. [65] who consider dilution effects and so-called _inhibitor_ specimens, respectively. For details and other examples, see the survey [7] and the book [56]. Similarly, application areas often motivate various forms of _constrained group testing_[7]. Two natural and classic examples limit the size of a group test [90] or the divisibility of a specimen [162]. Recently, these have been studied under the heading _sparse group testing_[69, 70, 103, 105]. For a second example, in _graph-constrained group testing_ the tests must correspond to paths in a given graph [82, 109, 41, 158, 165]. The methodology we give below readily handles limits on group size. We consider no additional constraints.
Starting with Graff and Roeloffs in 1972 [78], authors regularly study probabilistic models of noisy, or _unreliable_, tests [114, 107, 4]. Noisy tests motivate studying _nonexact_, or _partial_, recovery as opposed to _exact_ recovery [7]. In the sequel, we only consider reliable tests.
_C) Algorithms and analysis_. A third distinction in group testing involves the algorithms and analysis considered. The algorithmic distinction is largely captured by a division into _adaptive_ and _nonadaptive_ procedures. The analytical distinction is largely captured by a division into _finite population_ and _infinite population_, or _asymptotic_, analysis.
The terms _adaptive_, _sequential_ and _multistage_ describe procedures with multiple _rounds_, _cycles_, or _stages_ of testing [56]. The tests of later rounds may depend on, and so adapt to, the results of earlier ones. Each round may involve one test or several. The literature is replete with adaptive algorithms [125, 74, 90, 112, 33, 53]. The further modifiers _nested_[92, 134] and _hierarchical_[114, 106, 107] indicate that groups tested in later stages are subsets of groups already tested. For example, a classic multistage nested approach is Sobel and Groll's original _binary splitting_ or _halving_[162]. For a modern discussion and further examples, see [7, 56].
Alternatively, various applications motivate _nonadaptive_, or _single-stage_, algorithms in which all group tests must be specified in advance [100, 20, 36, 40, 57, 41, 7]. Although it is sometimes natural in this case to discuss the two stages of testing and _decoding_[7], we use the term _two-stage_ exclusively in its usual sense [24, 47, 141] of two rounds of testing.
Dorfman's [55] particular two-stage, adaptive strategy splits the population into nonoverlapping groups of a fixed size, tests these, and individually retests the specimens of positive groups. The strategy has been called the _Dorfman procedure_[78, 90, 150, 96], _Dorfman-type group testing_[150], _Dorfman screening_[137, 152], _Dorfman testing_[10], and _single pooling_[33]. Some authors use the terms _conservative_[6] or _trivial_[47] when the second round of a two-stage procedure only involves individual retests. These terms are usually employed, however, when _confirmatory_[65] tests are used to verify suspected positives indicated by a first round of _overlapping_ tests. This occurs, for example, in _array testing_[151, 138].
Dorfman [55] considers the setting in which the population size tends to infinity. This asymptotic regime remains popular, especially in the information theory community [7]. On the other hand, starting with Sobel and Groll in 1959 [162], many authors consider finite populations [118, 74, 90] or both settings [143, 164, 150]. We consider herein the finite-population setting in which Dorfman's procedure is generalized slightly to allow for groups of different sizes. Hence one seeks a partition of the population. See Section 2 for details.
_Applications beyond disease screening_. In his original article, Dorfman [55] speculated on the utility of group testing outside of medical testing. In particular, he mentioned manufacturing quality control. Sobel and Groll's influential 1959 article [162] gave further examples. See also
the book [106]. Since then, researchers have applied group testing techniques in such diverse settings as wireless communications [83, 25, 183, 131, 103, 104, 102, 105, 42], genetics [79, 34, 133, 101], machine learning [173, 186, 135], signal processing [73, 43] and data stream analysis [45, 60]. The survey [7] and the book [56] contain additional applications and references.
_Application to the COVID-19 pandemic_. The COVID-19 pandemic created a surge of interest in group testing for disease screening [136, 59, 17]. We make four observations here. First, pooling was feasible. Standard technology detects SARS-CoV-2 virus in pools of up to 32 specimens [184, 178]. Second, pooling was widely and successfully used in practical settings [86, 184, 130, 23, 21] and encouraged by authorities [142, 147, 16, 169, 1, 175, 46]. Third, practitioners often preferred Dorfman's procedure for reasons, among them simplicity, that we detail below [23, 21]. More sophisticated approaches were, however, proposed [140, 72, 87]. Finally, the classical independence theory failed to explain empirical findings in large-scale asymptomatic screening [21, 44]. We discuss this and work aiming to remediate it below.
_Benefits of Dorfman's procedure_. There are good reasons to prefer Dorfman's protocol beyond its simplicity, historical precedence and modern importance. First, it divides each specimen into only two aliquots. This feature is relevant when the testing process is destructive or dilutive, as is usually the case in disease screening or any biological specimen testing. Second, it is parallel. Within both stages, all indicated tests can be performed at the same time. Consequently, the latency is predictable and bounded. The test efficiency gains of more sophisticated procedures, e.g. Sterrett's [168] or binary splitting [162], are often offset by latency considerations. Third, it has easy-to-compute pool sizes and interpretable results. The methodology we develop for exchangeable populations also enjoys these features.
_Group testing with specimen status correlation_. The pandemic created a surge of interest in studying group testing under models motivated by infectious disease screening. These often include _correlation_ between statuses, a feature largely absent from the classical literature. Furthermore, these models involve various forms and degrees of side information. While it is reasonable to suppose that such side information can help efficiency, it is natural to be interested in methodology independent of it. Modeling exchangeability requires no additional knowledge of contact tracing, interaction networks, or community structure.
Although Hwang mentions correlated statuses in his 1984 discussion of the mean model [96], Lendle et al.'s [126] 2012 article appears to be the first to study correlated specimen statuses in earnest. They investigate a restricted form of exchangeability and show that efficiency gains can result from pooling _within_ clusters of positively correlated specimens.
We highlight three more recent directions here. First, Lin et al. [129] study the Dorfman procedure under a correlated arrival process of contiguous groups from different IID populations. They find higher efficiency. They also propose a hierarchical method for the case in which a social graph is available. Other simulation [152, 50] and theoretical [176] investigations also report that pooling within positively correlated groups increases efficiency. Second, Ahn et al. [3, 4] study a so-called _stochastic block infection model_. They analyze a modified binary splitting algorithm which uses knowledge of a specimen's community membership. For other generalizations of the IID model related to theirs, see [77, 123, 146]. Third, and related, Nikolopoulos et al. [144, 145] study combinatorial and probabilistic models for _community-aware group testing_ in which a hypergraph encoding overlapping communities is known. They similarly propose algorithms leveraging this side information.
These examples are characteristic of a growing body of work incorporating information such as cluster identity [12, 15, 18, 28], an underlying network topology [27, 26, 157], or contact-tracing [76, 171, 35] into models. Also, several authors study disease spread and so consider _dynamic_ models [166, 29, 54, 167, 13, 14, 11]. Prior to the pandemic, side information was usually incorporated via specimen-specific probabilities of testing positive [90, 30, 137].
Finally, we mention that Comess et al. [44] also investigate the unexpectedly high efficiency observed by Barak et al. [21], the source of the data we consider in Section 5. They propose and analyze a community network model. They use this to also investigate the higher-than-expected _sensitivity_ observed by Barak et al. [21]. We do not consider sensitivity here.
#### 1.2.2 Probabilistic symmetries
Exchangeable random variables fit within the broad study of probabilistic, or distributional, symmetries [108]. An _exchangeable_ sequence of random variables is one whose joint distribution is _invariant_ under permutations [5]. This condition is strictly _weaker_ than assuming that the sequence is IID. Although we focus on _finite_ sequences, the concept first gained prominence when applied to _infinite_ sequences.
_Infinite exchangeable sequences_. These are associated with an influential and well-known theorem of de Finetti, subsequently generalized by Hewitt and Savage [84]. See [108] for a modern treatment. Roughly speaking, _de Finetti's theorem_ says that the joint distribution of every _infinite_ exchangeable sequence can be expressed as a mixture of IID joint distributions [48, 108]. Conversely, _any_ such _IID mix_ is exchangeable. Permutation invariance, therefore, is _characterized_ by a representation which can be interpreted as a _prior_ distribution over the parameters of an infinite IID model. Freedman [68] gives an informal discussion of this result and its relevance to the Bayesian, or subjective, interpretation of probability.
_Finite exchangeable sequences_. de Finetti's characterization fails for _finite_ sequences [51]. A more delicate treatment can be given, however, which approximates his result [52]. In the sequel, we call the distributions of _finite_ exchangeable sequences _symmetric_. It is well-known that such distributions are mixtures of distributions of _urn sequences_[108]. See Proposition 3.2 below for a precise statement. Our contribution is a separate and nonobvious characterization of symmetric distributions over _binary_ domains. See Lemma 3.4. We are not aware of this specialized result appearing explicitly in prior literature. In the context of Dorfman's procedure, it is the key object which aids interpretation of the probabilistic model.
#### 1.2.3 Partitioning problems
In the sequel, we encounter both _set_ and _integer_ partitioning problems. Each has been extensively studied [19, 97, 62, 148] and can be viewed as a particular combinatorial optimization problem [124, 116]. We mention that _neither_ is exactly the well-known "partition" problem described by Karp in his classic paper [110, 71].
_Sets_. In _set_ partitioning problems, we seek a partition of a finite set to minimize a given real-valued objective function. Such a partition is sometimes called _unlabeled_ to distinguish it from an _allocation_, which has a prespecified number of elements [97]. For the many applications of these problems, see [19] and [97]. Dorfman's procedure partially motivated one historical line of work [90, 95, 99, 97]. The basic difficulty is that the number of partitions of a finite set of size \(n\), the so-called \(n\)th _Bell number_[22, 153], grows quickly with \(n\). Still, these problems have standard integer linear programming formulations when the objective is additive [19, 155, 156]. Also, several other structured objectives have been studied [95, 9, 97, 121]. We are not aware, however, of any work handling symmetry as we define it in Section 4.
_Integers_. In _integer_ partitioning problems, we seek a partition of a positive integer [81, 8] to minimize a given real-valued objective function. As with set partitioning problems, the basic difficulty is that the number of partitions of a positive integer \(n\) is large, even for moderate \(n\). We know of two outstanding articles which study these problems under additive objectives [62, 148]. We discuss these in Subsection 4.4.1. Integer partitioning arises in this paper from a set partitioning problem whose objective is _symmetric_. See Section 4. Although this reduction is natural, we are not aware of prior work explicitly making the connection.
### Preliminaries
For finite sets \(P\) and \(D\), let \(D^{P}\) denote the set of functions mapping \(P\) to \(D\). Given \(z:P\to D\) and \(H\subset P\), denote the restriction of \(z\) to \(H\) by \(z_{|H}:H\to D\). Denote the constant zero function with any domain by \(\mathbf{0}\). For any finite set \(P\) and \(u\in D^{P}\) with \(0\in D\), define \(\operatorname{nnz}(u)=|\{i\in P\mid u_{i}\neq 0\}|\), the number of points at which \(u\) is nonzero. Denote the empty set by \(\varnothing\). Denote the union of a set of sets \(E\) by \(\cup E\).
For \(f:D^{J}\to C\), given \(g:J\to H\), define \(f^{g}:D^{H}\to C\) via \(f^{g}(x)=f(x\circ g)\) for all \(x\in D^{H}\). For \(z:P\to D\), given \(d\in D\), define \(z^{-1}(d)=\{i\in P\mid z(i)=d\}\), the preimage of \(d\) under \(z\). Given a set \(F\) of subsets of a set \(P\) and a function \(h:P\to P\), define \({}^{h}\!F\) by \({}^{h}\!F=\{\{h(i)\mid i\in H\}\mid H\in F\}\). Hence \({}^{h}\!F\) is the set of images under \(h\) of the sets in \(F\).
_Probability_. Given a distribution \(p:D^{P}\to[0,1]\), the probability of an event \(A\subset D^{P}\) is \(\sum_{z\in A}p(z)\). We denote it by \(\operatorname{Prob}(A)\) when \(p\) is clear from context. Given a set \(H\subset P\), the _marginal_ of \(p\)_over_\(H\) is the function \(p_{H}:D^{H}\to[0,1]\) defined by \(p_{H}(u)=\sum_{z|z_{|H}=u}p(z)\).
If \(r:D^{P}\to[0,1]\) is also a distribution, the _cross-entropy_\(H(r,p)\) of \(p\) relative to \(r\) is \(-\sum_{z\in D^{P}}r(z)\log p(z)\) and the entropy \(H(r)\) of \(r\) is \(-\sum_{z\in D^{P}}r(z)\log r(z)\) as usual. The _Kullback-Leibler divergence_\(d_{kl}(r,p)\) of \(p\) relative to \(r\) is defined as usual by \(d_{kl}(r,p)=H(r,p)-H(r)\). The empirical distribution \(\hat{p}:D^{P}\to[0,1]\) of a dataset \(z^{1},\ldots,z^{m}\) in \(D^{P}\) is defined as usual by \(\hat{p}(z)=(1/m)|\{i\in\{1,\ldots,m\}\mid z^{i}=z\}|\).
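As an illustration of these definitions, here is a minimal Python sketch over a finite domain; the helper names `dkl` and `empirical` are ours, not the paper's notation.

```python
from math import log

def dkl(r, p):
    """Kullback-Leibler divergence d_kl(r, p) = H(r, p) - H(r) of p relative to r,
    with distributions given as dicts mapping outcomes to probabilities."""
    return sum(r[z] * log(r[z] / p[z]) for z in r if r[z] > 0)

def empirical(samples):
    """Empirical distribution of a dataset: relative frequency of each outcome."""
    m = len(samples)
    counts = {}
    for z in samples:
        counts[z] = counts.get(z, 0) + 1
    return {z: c / m for z, c in counts.items()}

r = empirical([(0, 0), (0, 1), (0, 1), (1, 1)])
print(r[(0, 1)])  # 0.5
```

As expected, the divergence of a distribution relative to itself is zero.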
_Set powers_. Given a set \(S\), the _power set_\(\mathcal{P}(S)\) of \(S\) is the set of all subsets of \(S\). The power set _of_\(\mathcal{P}(S)\) is the set of all _sets_ of subsets of \(S\). We denote the nonempty elements of this set whose members are nonempty and disjoint by \(\operatorname{Disj}(S)\).
_Set partitions_. A _partition_\(F=\{F_{1},\ldots,F_{r}\}\) of a _set_\(S\) is a set of nonempty, pairwise disjoint subsets of \(S\) whose union is \(S\). That is, \(F_{i}\cap F_{j}=\varnothing\) whenever \(i\neq j\) and \(\cup_{i=1}^{r}F_{i}=S\). Given a set \(P\), cost function \(J:\operatorname{Disj}(P)\to\mathbb{R}\) and any nonempty \(S\subset P\), we call a _partition_\(F^{\star}\) of \(S\)_optimal_ for \(S\) under \(J\) if \(J(F^{\star})\leq J(F)\) for all partitions \(F\) of \(S\).
_Integer partitions_. A _partition_\(\lambda=(\lambda_{1},\ldots,\lambda_{r})\) of the positive _integer_\(m\) is a nonincreasing finite sequence of positive integers whose sum is \(m\)[81, 8]. The terms \(\lambda_{i}\) are called _parts_. The _multiplicity_ of an integer in \(\lambda\) is the number of times it appears as a part [132]. We associate to \(\lambda\) a _multiplicity function_\(\mu\) so that \(\mu(h)\) is the multiplicity of the integer \(h\) in \(\lambda\). We denote the integer partitions of \(m\) by \(\mathcal{L}(m)\) and the corresponding multiplicity functions by \(\mathcal{M}(m)\). There is a bijection between \(\mathcal{M}(m)\) and \(\mathcal{L}(m)\). We denote the set \(\cup_{i=1}^{m}\mathcal{M}(i)\) by \(\mathcal{M}(1,\ldots,m)\).
_Set and integer partitions_. Given a partition \(F\) of a nonempty _set_\(S\), we can construct a partition \(\lambda_{F}\) of the positive _integer_\(m=|S|\). The parts of \(\lambda_{F}\) are the sizes of the elements of \(F\), in nonincreasing order as usual. This integer partition \(\lambda_{F}\) has a multiplicity function \(\mu_{F}\), where \(\mu_{F}(h)\) is the number of parts of size \(h\) in \(F\). We also call \(\mu_{F}\) the _multiplicity function_ of the set partition \(F\). Any \(F\in\operatorname{Disj}(S)\) is a partition of the set \(\cup F\subset S\), and so has corresponding integer partition \(\lambda_{F}\in\mathcal{L}(k)\) and multiplicity function \(\mu_{F}\in\mathcal{M}(k)\) where \(k=|\cup F|\). Given
\(F,G\in\operatorname{Disj}(S)\), we call \(F\) and \(G\)_multiplicity equivalent_ if \(\mu_{F}=\mu_{G}\). This holds if and only if \(\lambda_{F}=\lambda_{G}\). Note that possibly \(\cup F\neq\cup G\). If \(F\cap G=\varnothing\), then \(\mu_{F\cup G}=\mu_{F}+\mu_{G}\).
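The correspondence between set and integer partitions is easy to compute. The following Python sketch (the helper names are ours) maps a set partition \(F\) to \(\lambda_{F}\) and \(\mu_{F}\), and checks multiplicity equivalence and the additivity \(\mu_{F\cup G}=\mu_{F}+\mu_{G}\) for disjoint \(F\) and \(G\).

```python
from collections import Counter

def integer_partition(F):
    """lambda_F: block sizes of the set partition F, in nonincreasing order."""
    return tuple(sorted((len(H) for H in F), reverse=True))

def multiplicity(F):
    """mu_F: maps each block size to the number of blocks of that size."""
    return Counter(len(H) for H in F)

F = [{1, 2, 3}, {4, 5}, {6}]
G = [{7, 8, 9}, {10, 11}, {12}]
# F and G are multiplicity equivalent even though their unions differ.
assert multiplicity(F) == multiplicity(G)
# Disjoint union adds multiplicities.
assert multiplicity(F + G) == multiplicity(F) + multiplicity(G)
print(integer_partition(F))  # (3, 2, 1)
```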
## 2 Problem formulation
We have a _population_\(P\) of \(n\) specimens to test for a binary trait. A specimen is either _negative_ or _positive_, which we denote by \(0\) and \(1\), respectively. We model these \(n\) statuses as random variables \(\{x_{i}\}_{i\in P}\) with distribution \(p:\{0,1\}^{P}\to[0,1]\).
### Group testing
We determine the statuses via testing. We may test several specimens together and observe that either (a) all of the specimens are negative or (b) at least one of the specimens is positive. A _group_ is a nonempty subset \(H\subset P\). Its status is defined to be \(S_{H}(x)=\max_{i\in H}x_{i}\). We say that the group \(H\) tests negative if and only if \(S_{H}(x)=0\), that is, if all of its members are negative. Similarly, the group \(H\) tests positive if and only if \(S_{H}(x)=1\). There is no noise in the observed outcomes of individual or group tests.
### Dorfman's adaptive procedure
Dorfman [55] proposed determining specimen statuses via a two-stage procedure. The population is first partitioned into groups and these are tested. If a group tests negative, each specimen in the group is determined negative. If a group tests positive, each specimen in the group is retested individually, and is determined positive or negative depending on the result of its individual test.
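The two-stage procedure above can be sketched in a few lines of Python; here we only count the tests used, and the function name `dorfman_tests` is ours.

```python
def dorfman_tests(partition, x):
    """Number of tests Dorfman's two-stage procedure uses, given statuses x
    (a dict specimen -> 0/1) and a partition of the population into groups."""
    total = 0
    for group in partition:
        if len(group) == 1:
            total += 1                    # a singleton is tested individually
        else:
            total += 1                    # one pooled test for the group
            if any(x[i] for i in group):  # positive pool: retest each member
                total += len(group)
    return total

x = {1: 0, 2: 0, 3: 0, 4: 1, 5: 0, 6: 0}
# First group tests negative (1 test); second tests positive (1 + 3 tests).
print(dorfman_tests([{1, 2, 3}, {4, 5, 6}], x))  # 5
```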
Given the statuses \(x\) and a group \(H\subset P\), the number of tests required to determine the status of every specimen in \(H\) is
\[T_{H}(x)=\begin{cases}1&\text{if }|H|=1\\ 1+|H|S_{H}(x)&\text{otherwise}\end{cases} \tag{1}\]
The mean of this is then
\[\mathbb{E}T_{H}(x)=\begin{cases}1&\text{if }|H|=1\\ 1+|H|\operatorname{Prob}(S_{H}(x)=1)&\text{otherwise}\end{cases} \tag{2}\]
The first case of (1) records that a group with one member requires only one test. Otherwise, a group \(H\) of size two or more requires one group test and possibly \(|H|\) additional individual tests. These additional tests are required only if the group status is positive.
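For instance, under an i.i.d. model with prevalence \(\rho\) (introduced formally in Section 3), \(\operatorname{Prob}(S_{H}(x)=1)=1-(1-\rho)^{|H|}\), and (2) is straightforward to evaluate. A minimal sketch assuming that model (the function name is ours):

```python
def expected_tests(h, rho):
    """E[T_H] from (2) for a group of size h when statuses are i.i.d. with
    prevalence rho, so that Prob(S_H = 1) = 1 - (1 - rho)**h."""
    if h == 1:
        return 1.0
    return 1.0 + h * (1.0 - (1.0 - rho) ** h)

print(expected_tests(1, 0.01))             # 1.0
print(round(expected_tests(10, 0.01), 3))  # 1.956
```

A group of ten specimens at one percent prevalence thus needs fewer than two tests on average.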
Dorfman's procedure may be applied to any nonempty _subpopulation_\(S\subset P\). Given a _partition_\(F\) of \(S\), the number of tests used to determine the status of every specimen in \(S\) is \(C(F,x)=\sum_{H\in F}T_{H}(x)\), and its expectation is
\[\mathbb{E}C(F,x)=\sum_{H\in F}\mathbb{E}T_{H}(x) \tag{3}\]
which is the sum of the expected number of tests needed for each group in \(F\). To determine the status of every specimen in the population, one is interested in a partition of \(P\).
A _pooling_ of \(P\) is a partition \(G=\{G_{1},\ldots,G_{r}\}\) of \(P\), where each group \(G_{i}\subset P\). A natural cost for a pooling \(G\) is the expected number of tests \(\mathbb{E}C(G,x)\) it uses. A natural measure of its _efficiency_ is \(n/\mathbb{E}C(G,x)\). Here \(n\) is the cost of testing each specimen individually.
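Under the same i.i.d. assumption, one can scan uniform group sizes for the one maximizing efficiency. The Python sketch below (names ours) recovers the classical observation that the best pool size is near \(1/\sqrt{\rho}\).

```python
def efficiency(n, h, rho):
    """Efficiency n / E[C] of splitting n specimens into n/h groups of size h
    (h must divide n), with i.i.d. statuses of prevalence rho."""
    e_per_group = 1.0 + h * (1.0 - (1.0 - rho) ** h) if h > 1 else 1.0
    return n / ((n // h) * e_per_group)

n, rho = 120, 0.01
best = max((h for h in range(1, n + 1) if n % h == 0),
           key=lambda h: efficiency(n, h, rho))
print(best)  # 10, near 1/sqrt(rho) for rho = 0.01
```

At one percent prevalence this uniform pooling is roughly five times as efficient as individual testing.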
### Minimizing expected number of tests
It is natural to seek a partition which minimizes the expected number of tests required to determine the status of all specimens. Or, equivalently, to seek a partition which maximizes efficiency.
_Problem 2.1_. Given a distribution \(p:\{0,1\}^{P}\to[0,1]\), find a partition \(G\) of the population \(P\) to minimize the expected number of tests \(\mathbb{E}C(G,x)\).
We are interested, therefore, in solving a set partitioning problem. Without further assumptions, the problem is computationally challenging because of the large number of parameters required to specify \(p\) and the large number of partitions. Consequently, one is interested in particular distribution classes with succinct representations and efficient algorithms.
Hwang [90] showed that if specimen statuses are modeled as _independent_ random variables, then \(p\) is determined by \(n\) real parameters and Problem 2.1 can be efficiently solved. We show herein that similar results hold if the statuses are instead modeled as _exchangeable_.
## 3 Symmetric distributions
Given a permutation \(g\) on \(P\), we can apply it to outcomes \(x\in\{0,1\}^{P}\) in the natural way via composition to give \(x\circ g\). This also induces a corresponding rearrangement \(p^{g}\) of a distribution \(p\) on \(\{0,1\}^{P}\). Call \(p\) _symmetric_ if \(p=p^{g}\) for all permutations \(g\) on \(P\). The statement that \(p\) is symmetric is equivalent to the probabilistic statement that the individual specimen status random variables are _exchangeable_[51, 115, 5, 108]. Given \(x\) and \(y\) in \(\{0,1\}^{P}\), relate \(x\sim y\) if there is a permutation \(g\) so that \(x=y\circ g\). The relation \(\sim\) is an equivalence relation, and we have \(x\sim y\) if and only if \(\operatorname{nnz}(x)=\operatorname{nnz}(y)\). The resulting equivalence classes are the sets of functions in \(\{0,1\}^{P}\) having the same number of nonzero values. A distribution \(p\) is symmetric if and only if it is invariant on the equivalence classes, that is, \(p(x)=p(y)\) whenever \(x\sim y\).

_Remark 3.1_. In other words, \(p\) is symmetric if and only if there exists a function \(w:\{0,\ldots,n\}\to[0,1]\) such that \(p(x)=w(\operatorname{nnz}(x))\) for all \(x\in\{0,1\}^{P}\). The value \(w(k)\) gives the probability of a _particular_ outcome with \(k\) nonzero values, for \(k=0,\ldots,n\). Since there are \(\binom{n}{k}\) such outcomes with \(k\) nonzero values, the probability of the event of observing an outcome with \(k\) nonzeros is \(\binom{n}{k}w(k)\).
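The characterization via \(\operatorname{nnz}\) yields a direct computational test for symmetry. A brute-force Python sketch (the function name is ours; feasible only for small \(n\)):

```python
from itertools import product

def is_symmetric(p, n):
    """Check that a distribution p on {0,1}^n is symmetric, i.e. that p(x)
    depends on x only through nnz(x), by tabulating a candidate w."""
    w = {}
    for x in product((0, 1), repeat=n):
        k = sum(x)  # nnz(x) for binary outcomes
        if k in w and abs(w[k] - p(x)) > 1e-12:
            return False
        w[k] = p(x)
    return True

rho = 0.3
iid = lambda x: (1 - rho) ** (len(x) - sum(x)) * rho ** sum(x)
point = lambda x: 1.0 if x == (1, 0, 0, 0) else 0.0  # mass on one outcome
print(is_symmetric(iid, 4))    # True
print(is_symmetric(point, 4))  # False
```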
### Examples of symmetric distributions
_I.i.d. random variables have symmetric distributions_. Given any distribution \(b\) on \(\{0,1\}\), we can define a symmetric distribution \(p\) on \(\{0,1\}^{P}\) by \(p(x)=\prod_{i\in P}b(x(i))\) for all \(x\in\{0,1\}^{P}\). We call \(p\) an _i.i.d. distribution_. In the context of group testing, the value \(\rho=\mathbb{E}\sum_{i=1}^{n}x_{i}/n\) is called the _prevalence rate_[55]. Here \(\rho=b(1)\). We can express \(p\) in terms of \(\rho\) as \(p(x)=(1-\rho)^{n-\operatorname{nnz}(x)}\rho^{\operatorname{nnz}(x)}\) for all \(x\in\{0,1\}^{P}\). This expression exhibits the symmetry of \(p\) because we have written \(p(x)\) as a function of \(\operatorname{nnz}(x)\). See Remark 3.1.
_Mixtures of symmetric distributions are symmetric_. The set of symmetric distributions is convex. As usual, we call a convex combination of distributions a _mixture_.
A simple example is a mixture of two distributions. Given symmetric distributions \(r\) and \(s\) along with a mixing parameter \(\mu\) in \([0,1]\), define the symmetric distribution \(p\) on \(\{0,1\}^{P}\) by \(p(x)=(1-\mu)r(x)+\mu s(x)\) for all \(x\in\{0,1\}^{P}\). We may interpret \(p\) as modeling statuses which depend on an _unobserved_ event with probability \(\mu\) of occurring. In the context of group testing, one might call \(p\) an _exposure distribution_ with two levels. If \(r\) and \(s\) have prevalence rates \(\rho_{r}\) and \(\rho_{s}\), respectively, then \(p\) has rate \((1-\mu)\rho_{r}+\mu\rho_{s}\). In case \(\rho_{s}>\rho_{r}\), the unobserved _exposure_ event _increases_ the prevalence. The generalization to \(\ell\) levels is straightforward.
Mixtures of i.i.d. distributions provide examples of symmetric distributions that model random variables which are _not_ independent. For an extreme but easy to see case, suppose \(r\) and \(s\) of the previous paragraph have prevalence rates \(0\) and \(1\) respectively, and \(\mu=1/2\). Let \(i\) and \(j\) be distinct elements of \(P\). The probability of the event \(\{x\mid x(i)=1\}\) is \(\nicefrac{{1}}{{2}}\). The _conditional_ probability of this event _given_ the event \(\{x\mid x(j)=1\}\) is \(1\). Consequently the two events are dependent, and so the random variables \(x_{i}\) and \(x_{j}\) are _not_ independent. In other words, for such a distribution, if one specimen is positive then so are all the others.
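This extreme example is easy to verify numerically; the sketch below computes the marginal and conditional probabilities directly from the mixture.

```python
from itertools import product

n = 4
def p(x):
    """Equal mixture of the i.i.d. distributions with prevalence 0 and 1:
    mass 1/2 on the all-zero outcome and 1/2 on the all-one outcome."""
    if all(v == 0 for v in x) or all(v == 1 for v in x):
        return 0.5
    return 0.0

outcomes = list(product((0, 1), repeat=n))
p_i = sum(p(x) for x in outcomes if x[0] == 1)                 # Prob(x_0 = 1)
p_ij = sum(p(x) for x in outcomes if x[0] == 1 and x[1] == 1)  # joint
# Marginal is 1/2, but conditional on x_1 = 1 it is 1: not independent.
print(p_i, p_ij / sum(p(x) for x in outcomes if x[1] == 1))  # 0.5 1.0
```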
_Simple random sampling is symmetric_. A classic probabilistic model involves an urn with \(N\) balls. Of these, \(k\) are marked \(1\) and \(N-k\) are marked \(0\). One imagines drawing \(n\leq N\) balls from the urn and recording the labels \(x_{1},\ldots,x_{n}\). If the balls are drawn _with replacement_ then the set \(\{x_{i}\}_{i=1}^{n}\) is independent. If instead one draws the balls _without replacement_ then the set \(\{x_{i}\}_{i=1}^{n}\) is _exchangeable_, but not independent. In both cases, the variables have a symmetric distribution with a prevalence rate of \(k/N\).
Now suppose \(n=N\); i.e., one imagines drawing all balls from the urn. Given \(k=0,\ldots,n\), the variables \(x_{1},\ldots,x_{n}\) have the symmetric distribution \(r_{k}\) on \(\{0,1\}^{P}\) defined by \(r_{k}(x)=1/\binom{n}{k}\) if \(\operatorname{nnz}(x)=k\) and \(0\) otherwise. A classic result [49, 113, 64, 51] says that every symmetric distribution on \(\{0,1\}^{P}\) is a mixture of the distributions \(r_{0},\ldots,r_{n}\).
_Proposition 3.2_. Suppose \(p\) is a distribution on \(\{0,1\}^{P}\). Then \(p\) is symmetric if and only if there exists a function \(\alpha:\{0,\ldots,n\}\to[0,1]\) such that \(\sum_{i=0}^{n}\alpha(i)=1\) and \(p(x)=\sum_{i=0}^{n}\alpha(i)r_{i}(x)\) for all \(x\in\{0,1\}^{P}\).
This fact is easy to see using Remark 3.1. The function \(\alpha\) is related to the function \(w\) of Remark 3.1 by the relation \(\alpha(i)=\binom{n}{i}w(i)\) for \(i=0,\ldots,n\).
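Proposition 3.2 and the relation \(\alpha(i)=\binom{n}{i}w(i)\) can be checked numerically; the sketch below does so for a small i.i.d. example.

```python
from itertools import product
from math import comb

n, rho = 4, 0.3
w = lambda k: (1 - rho) ** (n - k) * rho ** k      # i.i.d.: p(x) = w(nnz(x))
alpha = [comb(n, i) * w(i) for i in range(n + 1)]  # mixture weights over urns

def r(i, x):
    """Urn distribution r_i: uniform over outcomes with exactly i ones."""
    return 1.0 / comb(n, i) if sum(x) == i else 0.0

assert abs(sum(alpha) - 1.0) < 1e-12  # alpha is a probability vector
for x in product((0, 1), repeat=n):
    mix = sum(alpha[i] * r(i, x) for i in range(n + 1))
    assert abs(mix - w(sum(x))) < 1e-12  # p(x) = sum_i alpha_i r_i(x)
print([round(a, 4) for a in alpha])  # [0.2401, 0.4116, 0.2646, 0.0756, 0.0081]
```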
_Shuffling symmetries_. Given any distribution \(t\) on \(\{0,1\}^{P}\), _not_ necessarily symmetric, define the _symmetric_ distribution \(p\) on \(\{0,1\}^{P}\) by \(p(x)=(1/n!)\sum_{g\mid g\text{ is a bijection}}t^{g}(x)\) for all \(x\in\{0,1\}^{P}\). We call \(p\) the _symmetrization_ of \(t\). If \(t\) is symmetric, then \(p=t\). In other words, shuffling creates symmetry. For symmetric distributions this shuffling has no effect.
### Characterizing symmetric distributions
First we record a straightforward lemma. Roughly speaking, it says that all same-size marginals of a symmetric distribution agree.
_Lemma 3.3_. Suppose \(p:\{0,1\}^{P}\to[0,1]\) is a distribution. Then \(p\) is symmetric if and only if \(p_{H}=(p_{J})^{g}\) for all bijections \(g:J\to H\), where \(H,J\subset P\).
An immediate consequence of Lemma 3.3 is that every marginal of a symmetric distribution is symmetric. To see this, take \(H=J\) in the _only if_ direction. This corresponds to the statement that every subset of a set of exchangeable random variables is exchangeable.
#### 3.2.1 Representation via marginals
We now look at a specific representation of symmetric distributions, in terms of a function \(q\) such that \(q(h)\) is the _marginal_ probability that _any_ group of size \(h\) tests negative, for \(h=0,\ldots,n\).
_Lemma 3.4_. Suppose \(p:\{0,1\}^{P}\to[0,1]\) is a distribution. Then \(p\) is symmetric if and only if there exists a function \(q:\{0,\ldots,n\}\to[0,1]\) such that
\[p_{H}(\mathbf{0})=q(|H|)\quad\text{for all $H\subset P$} \tag{3.1}\]
We take the convention \(p_{\varnothing}(\mathbf{0})=q(0)=1\).
Proof. First, we address the _only if_ direction. The existence of \(q\) is equivalent to the statement that \(p_{H}(\mathbf{0})=p_{J}(\mathbf{0})\) whenever \(|H|=|J|\), where \(H,J\subset P\). As a result, the _only if_ direction follows directly from Lemma 3.3 since, using any bijection \(g:J\to H\), we have
\[p_{H}(\mathbf{0})=(p_{J})^{g}(\mathbf{0})=p_{J}(\mathbf{0}\circ g)=p_{J}( \mathbf{0})\]
For the _if_ direction, first recall from Remark 3.1 that \(p\) is symmetric if and only if there exists a function \(w\) such that
\[p(x)=w(\operatorname{nnz}(x))\quad\text{for all }x\in\{0,1\}^{P} \tag{3.2}\]
Second, we claim that for _any_ distribution \(p\) on \(\{0,1\}^{P}\) and set \(H\subset P\) with \(|H|=h\)
\[p_{H}(\mathbf{0})=\sum_{z|z_{|H}=\mathbf{0}}p(z)=\sum_{i=0}^{n-h}\Big{(}\sum_ {z|z_{|H}=\mathbf{0}\text{ and }\operatorname{nnz}(z)=i}p(z)\ \Big{)} \tag{3.3}\]
This holds by rearranging the terms in the first sum and grouping them according to the number of nonzero entries.
Suppose by hypothesis that there exists a function \(q\) satisfying (3.1). We will use \(q\) to construct a function \(w\) satisfying (3.2) via a linear recursion. First define \(w(0)=q(n)\). Then, for \(k=1,\ldots,n\), recursively define
\[w(k)=q(n-k)-\sum_{i=0}^{k-1}\binom{k}{i}w(i)\]
We claim that \(w\) so constructed satisfies (3.2). We will show this via strong induction on \(k=\operatorname{nnz}(x)\), the number of nonzero values of the outcome \(x\).
First, we introduce a bit of notation which we use to rewrite (3.3) in terms of \(q\). Given \(x\) in \(\{0,1\}^{P}\) define \(I_{x}=x^{-1}(0)\). We claim that for any such \(x\), we have
\[q(n-\operatorname{nnz}(x))=\sum_{i=0}^{\operatorname{nnz}(x)}\Big{(}\sum_{z|z _{|I_{x}}=\mathbf{0}\text{ and }\operatorname{nnz}(z)=i}p(z)\ \Big{)} \tag{3.4}\]
This holds by taking \(H=I_{x}\) in (3.3), recognizing \(n-|I_{x}|=\operatorname{nnz}(x)\), and recognizing \(p_{I_{x}}(\mathbf{0})=q(n-\operatorname{nnz}(x))\). If \(\operatorname{nnz}(x)>0\), we can rearrange (3.4) to give
\[p(x)=q(n-\operatorname{nnz}(x))-\sum_{i=0}^{\operatorname{nnz}(x)-1}\Big{(} \sum_{z|z_{|I_{x}}=\mathbf{0}\text{ and }\operatorname{nnz}(z)=i}p(z)\ \Big{)} \tag{3.5}\]
This holds because the last term in the outermost sum of (3.4) is a sum which itself consists of only one term, specifically \(\{z\ |\ z_{|I_{x}}=\mathbf{0}\text{ and }\operatorname{nnz}(z)=\operatorname{ nnz}(x)\}=\{x\}\). For the base case of the induction, \(\operatorname{nnz}(x)=0\), we have \(w(0)=p(x)\) for all \(x\) with \(\operatorname{nnz}(x)=0\). This holds because \(\operatorname{nnz}(x)=0\) if and only if \(x=\mathbf{0}\), and \(q(n)=p_{P}(\mathbf{0})=p(\mathbf{0})\).
Next, suppose the induction hypothesis, that for all \(z\) with \(\operatorname{nnz}(z)<k\), we have \(p(z)=w(\operatorname{nnz}(z))\). Then for any \(x\) in \(\{0,1\}^{P}\) with \(\operatorname{nnz}(x)=k>0\), we have
\[p(x)=q(n-k)-\sum_{i=0}^{k-1}\binom{k}{i}w(i) \tag{3.6}\]
To see this, take \(\operatorname{nnz}(x)=k\) in (3.5) and observe that for each of \(i=0,\ldots,k-1\), the set \(\{z\ |\ z_{|I_{x}}=\mathbf{0}\text{ and }\operatorname{nnz}(z)=i\}\) has \(\binom{k}{i}\) members. Each element of the \(i\)th set has probability \(w(i)\) under the induction hypothesis. Since \(k=\operatorname{nnz}(x)\), (3.6) gives \(p(x)\) as a function purely of \(\operatorname{nnz}(x)\). Consequently, \(w\) satisfies (3.2) as desired.
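The recursion in the proof is constructive: given the marginals \(q\), it recovers \(w\). The following Python sketch verifies it on an i.i.d. example, where both \(q\) and \(w\) are known in closed form.

```python
from math import comb

n, rho = 6, 0.2
# i.i.d. example: Prob(a fixed size-h group is all-negative) = (1-rho)^h.
q = lambda h: (1 - rho) ** h

# Recursion from the proof: w(0) = q(n), w(k) = q(n-k) - sum_i C(k,i) w(i).
w = [q(n)]
for k in range(1, n + 1):
    w.append(q(n - k) - sum(comb(k, i) * w[i] for i in range(k)))

# For the i.i.d. distribution, w(k) should equal (1-rho)^(n-k) * rho^k.
for k in range(n + 1):
    assert abs(w[k] - (1 - rho) ** (n - k) * rho ** k) < 1e-10
print([round(v, 6) for v in w])
```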
### Fitting symmetric distributions to data
The principle of maximum likelihood gives a natural method of fitting a symmetric distribution to a dataset. One way to understand the approach is via the more general problem of approximating an arbitrary distribution with a symmetric one. We use the Kullback-Leibler divergence to measure the approximation. As is well known, a distribution minimizing this divergence with respect to the _empirical_ distribution of a dataset also maximizes the _likelihood_ of the dataset.
The following proposition says that one can best approximate a distribution by _symmetrizing_ it, in the sense defined above. For this and other results, see [149]. We denote the equivalence class of \(x\in\{0,1\}^{P}\) by \([x]\) for notational convenience. See Remark 3.1.
_Proposition 3.5_. Suppose \(r:\{0,1\}^{P}\to[0,1]\) is a distribution and define the distribution \(p^{\star}:\{0,1\}^{P}\to[0,1]\) by \(p^{\star}(x):=(1/n!)\sum_{g|g\text{ is a bijection}}r^{g}(x)=(1/|[x]|)\sum_{z\in[x]}r(z)\). Then \(d_{kl}(r,p^{\star})\leq d_{kl}(r,s)\) for all symmetric distributions \(s:\{0,1\}^{P}\to[0,1]\).
We mention that the order of the arguments of \(d_{kl}\) matters. With the order chosen here, \(p^{\star}\) is called the _M-projection_ of \(r\) onto the set of symmetric distributions [149].
One can interpret \(p^{\star}\) of Proposition 3.5 as evenly distributing the total probability mass assigned to each equivalence class among the members of that class. When the distribution being approximated is the empirical distribution of a dataset, we can easily compute \(p^{\star}\) by counting the number of samples in each of the \(n+1\) equivalence classes. This gives \(p^{\star}(x)=(1/\binom{n}{\text{nnz}(x)})\sum_{z|\,\text{nnz}(z)=\text{nnz}(x)}\frac{1}{m}|\{i\in\{1,\ldots,m\}\mid z^{i}=z\}|\) where \(z^{1},\ldots,z^{m}\) is a dataset in \(D^{P}\).
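A minimal Python sketch of this fitting procedure (the function name `fit_symmetric` is ours) returns the function \(w\) of the maximum-likelihood symmetric distribution.

```python
from math import comb

def fit_symmetric(samples, n):
    """Maximum-likelihood symmetric fit: count samples in each of the n+1
    equivalence classes, then spread each class's mass evenly."""
    m = len(samples)
    class_mass = [0.0] * (n + 1)
    for z in samples:
        class_mass[sum(z)] += 1.0 / m
    # w[k] = probability assigned to any single outcome with k ones
    return [class_mass[k] / comb(n, k) for k in range(n + 1)]

samples = [(0, 0, 0), (0, 1, 0), (1, 0, 0), (0, 0, 0)]
w = fit_symmetric(samples, 3)
print([round(v, 4) for v in w])  # [0.5, 0.1667, 0.0, 0.0]
```

Half the samples are all-negative, so the all-zero outcome keeps mass one half, while the two single-positive samples share their mass across the three outcomes with one positive.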
## 4 Symmetric and additive set partitioning problems
A set partitioning problem simplifies considerably if its cost is symmetric and additive. In this case, it reduces to an _integer_ partition problem which can be solved efficiently by any of several methods. Problem 2 has an additive cost. The cost is also symmetric when the distribution is symmetric.
### Symmetric cost gives an integer partition problem
Call a function \(J:\text{Disj}(P)\to\mathbb{R}\)_symmetric_ if \(J(F)=J(gF)\) for all \(F\in\text{Disj}(P)\) and bijections \(g\) on \(P\). For such cost functions \(J\), all multiplicity equivalent members of \(\text{Disj}(P)\) have the same cost.
**Lemma 4.1**: _Suppose \(J:\text{Disj}(P)\to\mathbb{R}\). Then \(J\) is symmetric if and only if \(\mu_{F}=\mu_{G}\implies J(F)=J(G)\) for all \(F,G\in\text{Disj}(P)\)._
#### 4.1.1 The induced integer partition problem
Suppose \(J:\text{Disj}(P)\to\mathbb{R}\) is _symmetric_. Then Lemma 4.1 says that there is a function \(J_{\mathcal{M}}:\mathcal{M}(1,\ldots,n)\to\mathbb{R}\) satisfying
\[J(F)=J_{\mathcal{M}}(\mu_{F})\quad\text{for all }F\in\text{Disj}(P) \tag{4.1}\]
Moreover, a _partition_\(G\) of \(P\) minimizes \(J\) among all _partitions_ if and only if its multiplicity function \(\mu_{G}\) minimizes the restriction of \(J_{\mathcal{M}}\) to \(\mathcal{M}(n)\). Given such a minimizer \(\mu^{\star}\), it is easy to construct a partition of \(P\) whose multiplicity function is \(\mu^{\star}\). Hence, we can find a class of multiplicity equivalent set partitions optimal under \(J\) by solving the following problem.
**Problem 4.2**: Given \(K:\mathcal{M}(n)\to\mathbb{R}\), find \(\mu\in\mathcal{M}(n)\) to minimize \(K(\mu)\).
We call Problem 4.2 an _integer_ partition problem because \(\mathcal{M}(n)\) is in one-to-one correspondence with \(\mathcal{L}(n)\). In the next section, we study additional structure on \(J\), inherited by \(J_{\mathcal{M}}\), that enables one to avoid exhaustive enumeration in solving Problem 4.2.
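The construction mentioned above, recovering a concrete set partition from a multiplicity function, is simple to carry out: assign the next unused items to each part in turn. A minimal sketch (function name is ours):

```python
def partition_from_multiplicity(mu, population):
    """Build a partition of `population` whose part-size multiplicities
    match `mu`, where mu[i] is the number of parts of size i.
    Assumes sum(size * count) equals len(population)."""
    items = list(population)
    parts, pos = [], 0
    for size, count in sorted(mu.items()):
        for _ in range(count):
            parts.append(items[pos:pos + size])
            pos += size
    return parts

# One part of size 2 and two parts of size 3 partition 8 items.
groups = partition_from_multiplicity({2: 1, 3: 2}, range(8))
```

Any relabeling of the items gives another partition with the same multiplicity function, and hence, for symmetric costs, the same cost.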
### Symmetric and additive cost gives an additive integer partition problem
As usual, we call a function \(J:\mathrm{Disj}(P)\to\mathbb{R}\)_additive_ if \(J(F\cup G)=J(F)+J(G)\) for all disjoint \(F,G\in\mathrm{Disj}(P)\). A function \(J:\mathrm{Disj}(P)\to\mathbb{R}\) can be symmetric but not additive, and vice versa. When \(J\) is both symmetric and additive, we have the following characterization.
**Lemma 4.3**: _Suppose \(J:\mathrm{Disj}(P)\to\mathbb{R}\). Then \(J\) is symmetric and additive if and only if there exists a function \(h:\{1,\ldots,n\}\to\mathbb{R}\) so that \(J(F)=\sum_{H\in F}h(|H|)\) for all \(F\in\mathrm{Disj}(P)\)._
#### 4.2.1 \(J_{\mathcal{M}}\) inherits the additivity of \(J\)
If \(J\) is both symmetric and additive, then \(J_{\mathcal{M}}\) inherits the additivity of \(J\). We formalize this statement below.
As usual, we call a function \(M:\mathcal{M}(1,\ldots,n)\to\mathbb{R}\)_additive_ if \(M(\mu+\nu)=M(\mu)+M(\nu)\) for all \(\mu\) and \(\nu\) with \(\mu+\nu\in\mathcal{M}(1,\ldots,n)\). This condition is equivalent to the existence of a function \(c:\{1,\ldots,n\}\to\mathbb{R}\) satisfying
\[M(\mu)=\sum_{i=1}^{n}c(i)\mu(i)\quad\text{for all $\mu\in\mathcal{M}(1,\ldots,n)$} \tag{4.2}\]
**Lemma 4.4**: _Suppose \(J:\mathrm{Disj}(P)\to\mathbb{R}\) is symmetric with \(J_{\mathcal{M}}:\mathcal{M}(1,\ldots,n)\to\mathbb{R}\) defined as in (4.1). If \(J\) is additive, then \(J_{\mathcal{M}}\) is additive._
The characterization in (4.2) says that any additive function on \(\mathcal{M}(1,\ldots,n)\) has a representation \(c:\{1,\ldots,n\}\to\mathbb{R}\). Lemma 4.3 says a symmetric and additive function \(J\) on \(\mathrm{Disj}(P)\) has a representation \(h:\{1,\ldots,n\}\to\mathbb{R}\). For the _additive_ function \(J_{\mathcal{M}}\), these coincide.
#### 4.2.2 The induced additive integer partition problem
As before, suppose \(J\) is additive and symmetric. Since \(J_{\mathcal{M}}\) inherits the additivity of \(J\), we can find a class of multiplicity equivalent set partitions optimal under \(J\) by solving the following problem.
**Problem 4.5**: Given \(c:\{1,\ldots,n\}\to\mathbb{R}\), find \(\mu\in\mathcal{M}(n)\) to minimize \(\sum_{i=1}^{n}c(i)\mu(i)\).
Similar to Problem 4.2, the one-to-one correspondence between \(\mathcal{M}(n)\) and \(\mathcal{L}(n)\) means that we can interpret Problem 4.5 as finding an _integer_ partition. For this reason, prior work has called Problem 4.5 an integer partition problem [62]. We use the terminology _additive_ integer partition problem to distinguish Problem 4.5 from the general Problem 4.2.
### Minimizing tests for symmetric distributions
Given a _symmetric_ distribution of specimen statuses, Problem 2 reduces to an additive integer partition problem. In this case the objective, which is additive for any distribution, is _also_ symmetric. We formalize this in Corollary 4.7 below, which, given Lemma 4.3, is an immediate consequence of the following.
**Lemma 4.6**: _Suppose \(x\) has distribution \(p\). If \(p\) is symmetric, then there exists a function \(U:\{1,\ldots,n\}\to\mathbb{R}\) satisfying \(\mathbb{E}T_{H}(x)=U(|H|)\) for all nonempty \(H\subset P\)._
Let \(H\subset P\) be nonempty. We make three straightforward substitutions in (2.2). First, the status of \(H\) is either \(0\) or \(1\). Consequently, \(\mathrm{Prob}(S_{H}(x)=1)=1-\mathrm{Prob}(S_{H}(x)=0)\). Second, the status of \(H\) is \(0\) if and only if \(x_{i}=0\) for all \(i\in H\). Hence, \(\mathrm{Prob}(S_{H}(x)=0)=p_{H}(\mathbf{0})\). Finally, the symmetry of \(p\) is equivalent to the existence of a function \(q\) satisfying
\(p_{H}(\mathbf{0})=q(|H|)\) for all \(H\subset P\). See Lemma 3.4. Substituting into (2.2) gives
\[\mathbb{E}T_{H}(x)=\begin{cases}1&\text{if }|H|=1\\ 1+|H|(1-q(|H|))&\text{otherwise}\end{cases}\]
The right hand side is a function of \(|H|\), as desired.
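The display above evaluates directly once \(q\) is specified. For instance, under an IID model with prevalence \(\rho\) one has \(q(h)=(1-\rho)^{h}\); the names below are ours:

```python
def expected_tests(h, q):
    """Expected number of tests used on a group of size h, where q(h) is
    the probability that a group of size h tests negative."""
    return 1.0 if h == 1 else 1.0 + h * (1.0 - q(h))

# Under an IID model with prevalence rho, q(h) = (1 - rho)**h.
rho = 0.01624  # empirical prevalence of the dataset in Section 5
q_iid = lambda h: (1.0 - rho) ** h

efficiency = 8 / expected_tests(8, q_iid)  # specimens per test, size-8 pools
# roughly 4.04, in line with the size-8 figure reported in Section 5
```

Dividing the group size by the expected tests gives the (theoretical) efficiency of a pool of that size.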
**Corollary 4.7**: _Suppose \(x\) has distribution \(p\). Define \(J:\operatorname{Disj}(P)\to\mathbb{R}\) by \(J(F)=\mathbb{E}C(F,x)\) for all \(F\in\operatorname{Disj}(P)\). Then \(J\) is additive. If \(p\) is symmetric, then \(J\) is symmetric._
### Solving additive integer partition problems
Several efficient approaches for computing the solutions of _additive_ integer partition problems are known [62, 148]. In this section we briefly mention these before elaborating on a dynamic programming method.
#### 4.4.1 Overview of approaches
We highlight three approaches to Problem 4.5.
**Linear programming.** A polyhedral approach identifies multiplicity functions with vectors in \(\mathbb{R}^{n}\). The convex hull of these _multiplicity vectors_, called the _integer partition polytope_, has a succinct polyhedral lift [148]. Linear optimization over the extended formulation can be done via linear programming and a solution recovered via projection. See [148] for details.
**Dynamic programming.** Alternatively, a dynamic programming approach minimizes, in order from \(k=1,\ldots,n\), the cost \(c\) over the set \(\mathcal{M}(k)\). A solution for the \(k+1\) case is found using the solutions for the cases \(1,\ldots,k\). See Subsection 4.4.2 below and [62] for details.
**Shortest path problem.** Finally, a minimum-weight path reduction constructs a directed graph in which the multiplicity vectors correspond to the directed paths between two distinguished vertices. By appropriately weighting the edges of these paths, the minimizing multiplicity vectors are put in one-to-one correspondence with the minimum-weight paths. Consequently, one can find a minimizing multiplicity vector by solving the well-known shortest weighted path problem via standard algorithms. See [148] for details.
#### 4.4.2 A dynamic programming approach
Here we expand on the dynamic programming algorithm mentioned above in Subsection 4.4.1. For further details and also a variant of Problem 4.5 that seeks an optimal integer partition with _fewest_ parts, see [62].
The algorithm we describe sequentially computes optimal partitions of all integers \(k\leq n\) in order from \(k=1,\ldots,n\). At step \(k\), it uses the costs of optimal partitions of \(1,\ldots,k-1\) to find an optimal partition of \(k\). As usual, we call a partition of an integer \(k\)_optimal_ if its multiplicity function minimizes the objective of Problem 4.3 over \(\mathcal{M}(k)\).
**Interpretation.** Since we omit proofs below, we start by interpreting the algorithm. We imagine partitioning the integer \(n\) by first choosing to include a part of size \(i\leq n\), and subsequently partitioning the remainder \(n-i\). It is easy to see that every partition of \(n\) can be obtained in this way. If the cost is additive, then the cost of such a partition is the cost of the part \(i\) plus the cost of the partition chosen for \(n-i\). Given a fixed \(i\), we can minimize this cost by optimally partitioning \(n-i\). So if we knew in advance the cost of optimally partitioning each integer smaller than \(n\), then we could optimize over our choice of the first part \(i\). The same interpretation applies to partitioning \(n-i\), and so on, recursively.
The algorithm proceeds in reverse of this interpretation. First we optimally partition \(1\), then \(2\), then \(3\), and so on up to \(n\). To illustrate concretely, first we optimally partition \(1\). This
is trivial, since there is only one choice of partition. Next, we optimally partition 2 by either taking a part of size 1, inducing the partition \(1+1\), or keeping the single part 2. Likewise for 3. We may take a part of size 1 and use our optimal partition of 2, take a part of size 2 and use our optimal partition of 1, or take a single part of size 3. Which is best depends on the cost of partitioning 2. We have already computed this at the previous step. Similarly for 4. We take a part of size 1, 2, 3 or 4. The choice depends on the cost of optimally partitioning 1, 2 and 3, which we have already computed. We continue in a similar way up to \(n\).
We ignore here the minor subtlety that we can skip considering a part of size \(i\) if its cost exceeds the cost of an optimal partition of \(i\). In this case, an optimal partition of \(k\geq i\) will not include a part of size \(i\) since this part could be replaced to lower the cost.
**Subproblem Optimal Value Recursion.** We briefly formalize this interpretation. Given a function \(M:\mathcal{M}(1,\ldots,n)\to\mathbb{R}\), define the function \(M^{\star}:\{0,\ldots,n\}\to\mathbb{R}\) by \(M^{\star}(0)=0\) and
\[M^{\star}(k)=\min\{M(\mu)\mid\mu\in\mathcal{M}(k)\}\quad\text{for }k=1,\ldots,n \tag{4.3}\]
\(M^{\star}\) is called the _value function_. \(M^{\star}(k)\) is the cost of an optimal partition of \(k\). If \(M\) is additive, then \(M^{\star}\) satisfies the following recursive relation. See Theorem 1 of [62].
**Lemma 4.8**: _Suppose \(M:\mathcal{M}(1,\ldots,n)\to\mathbb{R}\) is additive with representation \(c:\{1,\ldots,n\}\to\mathbb{R}\) satisfying (4.2). Then \(M^{\star}\) defined as in (4.3) satisfies_
\[M^{\star}(k)=\min\{M^{\star}(k-i)+c(i)\mid i\in\{1,\ldots,k\}\}\quad\text{for all }k\in\{1,\ldots,n\}\]
Hence we can use \(M^{\star}(1),\ldots,M^{\star}(k-1)\) to compute \(M^{\star}(k)\).
**Algorithm.** Lemma 4.8 justifies a simple algorithm for computing \(M^{\star}(1),\ldots,M^{\star}(n)\) and corresponding multiplicity functions \(\mu_{1}^{\star},\ldots,\mu_{n}^{\star}\) satisfying \(M(\mu_{k}^{\star})=M^{\star}(k)\) for \(k=1,\ldots,n\). In other words, \(\mu_{k}^{\star}\) is the multiplicity function of an optimal partition of \(k\). We let \(\mu_{0}^{\star}\) be the constant zero function for notational convenience.
We iterate from \(k=1,\ldots,n\). At step \(k\), we find an integer \(i_{k}^{\star}\) so that
\[i_{k}^{\star}\in\operatorname{argmin}\{M^{\star}(k-i)+c(i)\mid i\in\{1,\ldots, k\}\}\]
Using \(i_{k}^{\star}\) and \(\mu_{k-i_{k}^{\star}}^{\star}\), we define the multiplicity function \(\mu_{k}^{\star}\in\mathcal{M}(k)\) by
\[\mu_{k}^{\star}(j)=\begin{cases}\mu_{(k-i_{k}^{\star})}^{\star}(j)+1&\text{if }j =i_{k}^{\star}\\ \mu_{(k-i_{k}^{\star})}^{\star}(j)&\text{otherwise}\end{cases}\]
We can interpret \(\mu_{k}^{\star}\) as an extension of \(\mu_{k-i_{k}^{\star}}^{\star}\) which includes one additional part of size \(i_{k}^{\star}\). We choose the part \(i_{k}^{\star}\) to minimize the sum of its cost and the cost of optimally partitioning \(k-i_{k}^{\star}\). By construction, \(\mu_{k}^{\star}\) has cost \(M^{\star}(k)=M^{\star}(k-i_{k}^{\star})+c(i_{k}^{\star})\) and so is optimal. See Lemma 4.8. In particular, the multiplicity function \(\mu_{n}^{\star}\) corresponds to an optimal partition of \(n\).
This algorithm has quadratic time complexity. In other words, it requires a number of real arithmetic and comparison operations which grows quadratically in \(n\). To see this, notice that there are \(n\) steps of the algorithm and at step \(k\) we minimize over a finite set of size \(k\).
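The procedure admits a compact implementation. The sketch below (variable names are ours) records the first part \(i_{k}^{\star}\) of an optimal partition of each \(k\) and unwinds these choices at the end:

```python
def optimal_partition(n, c):
    """Dynamic program of Lemma 4.8.  c[i] is the cost of a part of
    size i (c[0] unused).  Returns (optimal cost, sorted part sizes)."""
    best = [0.0] * (n + 1)      # best[k] = M*(k), with M*(0) = 0
    choice = [0] * (n + 1)      # first part i_k* of an optimal partition of k
    for k in range(1, n + 1):
        best[k], choice[k] = min(
            (best[k - i] + c[i], i) for i in range(1, k + 1))
    parts, k = [], n
    while k > 0:                # unwind the recorded choices
        parts.append(choice[k])
        k -= choice[k]
    return best[n], sorted(parts)

# Part cost = expected tests on a pool of that size under an IID model
# with the Section 5 prevalence (cf. Lemma 4.6).
rho = 0.01624
c = [0.0] + [1.0 if h == 1 else 1.0 + h * (1.0 - (1.0 - rho) ** h)
             for h in range(1, 81)]
total, parts = optimal_partition(80, c)  # ten pools of size 8
```

With this cost, the program indicates ten pools of size 8, matching the IID-based strategy in Section 5. The nested loops make the quadratic complexity explicit.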
We also mention that Problem 4.5 and Lemma 4.8 have straightforward variants in which we restrict the support of the multiplicity function. This corresponds to restricting the sizes of the parts of the integer partition. In the context of group testing, for example, an upper bound on the size of the groups may be motivated by the testing capability.
## 5 Numerical example on empirical data
In this section we apply the tools of symmetric probability and group testing to an empirical dataset from the COVID-19 pandemic. The example is meant to illustrate several approaches. It is not intended to improve upon testing methodology used in a practical setting at this point.
We start by describing the origin and preparation of the dataset. Then we compare four pooling strategies. The first three strategies happen to coincide for this dataset whereas the final one, using tools developed in this paper, gives a different and more efficient pooling.
### Dataset background
The Hebrew University-Hadassah COVID-19 Diagnosis Team provide the pooled testing data we use below [21].
**Testing context.** The COVID-19 pandemic called for large-scale and high-throughput disease screening. Authorities encouraged specimen pooling to conserve test resources [147].
The Hebrew University team processed 133,816 nasopharyngeal lysates across 17,945 pools via Dorfman screening between April 19 and September 16, 2020 [21]. They collected these specimens from _asymptomatic_ individuals and performed tests for screening purposes. Their protocol adaptively switched between size-5 and size-8 pools.
**Testing pipeline.** Specimens arrived in _batches_, often of size 80. Technicians centrifuged each lysate before a robot performed pooling and mixing. Up to 92 pooled or individual specimens could be tested simultaneously in a single _run_ of a reverse transcription polymerase chain reaction (PCR) machine. The pool size and specimen-to-pool assignment were informed by the prior week's prevalence and batch-specific side information.
**Correlated specimen status.** The team observed empirical efficiency _exceeding_ that indicated by Dorfman's analysis. For example, at a prevalence of 1.695% the size-8 pools enjoyed an _empirical_ efficiency of 4.59 whereas Dorfman predicts a _theoretical_ efficiency of 3.96.
They attribute this discrepancy to the "nonrandom distribution of positive specimens in pools." They report that "specimens arrive in batches: from colleges, nursing homes, or health care personnel." Technicians sorted related specimens into pools "such that family members and roommates were often pooled together, thereby increasing the number of positive samples within the pool." This protocol _helps_ efficiency because keeping positive specimens together mitigates the number of positive pools and, hence, retests required.
These circumstances challenge the typical probabilistic assumption that specimen statuses are independent. For this particular dataset, therefore, modeling statuses as exchangeable may be more appropriate than modeling them as independent.
### Dataset preparation
We simplify the dataset before using the PCR machine _run timestamp_ to impute size-80 batches of specimens.
**Simplifications.** We ignore (a) pools without a timestamp, (b) pools of size 5, and (c) pools with specimens of inconclusive status. The first measure allows us to batch sequentially; the second mitigates the varying prevalence rate across pool sizes; the third ensures we have complete status data. These adjustments leave 112,848 specimens across 14,106 pools.
**Batching.** Although the dataset does not include information about which samples arrived together, it does include information about _when_ pools were tested in the PCR machine. We use this _run timestamp_ to order the samples and impute batches of size 80 sequentially. This yields 1,410 batches including 112,800 specimens across 14,100 pools. One could alternatively batch within a particular day, PCR run, or by using different batch sizes (e.g., 40 or 64). Our
experiments indicate that these results closely correspond to the size-80 sequential batching case and so we do not include details here.
### Experimental setup
We compare four strategies to pool the finite population of size 80. The last is enabled by the tools of this paper. The strategies are:
1. _Hebrew University team_. Use 10 size-8 pools [21].
2. _Dorfman_. Use the pool size indicated by Dorfman's infinite population analysis [55]. Include one extra smaller pool if this size does not evenly divide 80.
3. _Independent statuses_. Select a pooling to minimize the expected tests used under an estimated _IID_ distribution. Use the algorithm of Hwang [90] or of Subsection 4.4.2.
4. _Exchangeable statuses_. Select a pooling to minimize the expected tests used under an estimated _symmetric_ distribution. Use the algorithm of Subsection 4.4.2.
Strategy (1) requires no estimation, strategies (2) and (3) require estimating the population prevalence, and strategy (4) requires estimating the parameters of a symmetric distribution. For (4) we use the principle of maximum likelihood (see Subsection 3.3).
Each strategy may indicate different poolings. We evaluate these under the estimated symmetric distribution and against the empirical data. We report the empirical efficiencies both _with_ and _without_ randomization over specimen-to-pool assignment. For the former we randomize over 10,000 trials. We also report the theoretical efficiency of size-8 pools as indicated by Dorfman's infinite population analysis and under the estimated finite IID model.
### Experimental results
The empirical prevalence is 1.624%. We visualize the estimated IID and symmetric distributions used for strategies (3) and (4) in Figure 2.
**Pooling strategies.** Strategies (1), (2) and (3) each indicate 10 pools of size 8 whereas strategy (4) indicates 8 pools of size 10. Since the strategies only indicate two distinct poolings, we refer to _size-8 pools_ and _size-10 pools_ in the discussion below.
**Theoretical efficiencies.** Both Dorfman's infinite analysis and the finite IID model indicate an efficiency of 4.04 for size-8 pools. Under the estimated symmetric model, the efficiency of the size-8 and size-10 pools is 4.38 and 4.48, respectively.

Figure 2: Comparison of an independent and identically distributed (IID) model with a symmetric model for a population of size 80. (a) The representation \(\alpha\) of these distributions where \(\alpha(k)\) is the probability of seeing \(k\) positive specimens (see Proposition 3.2). The IID model decays more rapidly. The symmetric distribution has non-monotonic decay; e.g., it assigns more mass to five positives than four positives. (b) The representation \(q\) where \(q(h)\) is the probability that a group of size \(h\) tests negative (see Lemma 3.4). The IID model underestimates these probabilities. (c) The function \(U\) where \(U(h)\) is the expected number of tests used on a group of size \(h\) (see Lemma 4.6). The IID model overestimates these costs.
**Empirical efficiencies.** The average empirical efficiencies of the size-8 and size-10 pools are 4.38 and 4.48, respectively. The standard error in both cases is 0.02. Without randomization, the empirical efficiencies for size-8 and size-10 pools are 4.71 and 4.75, respectively.
### Discussion and interpretation
The estimated IID and symmetric distributions differ visibly (see Figure 2). The IID distribution _underestimates_ the probabilities of observing \(\geq 6\) positive specimens (see Figure 2, panel a). Hence, it also _underestimates_ the probability that a group of a particular size will test negative (see Figure 2, panel b). As a result, it _overestimates_ the number of tests used for a group of a particular size (see Figure 2, panel c).
**Strategies.** The first three strategies _coincide_ whereas strategy (4) uses _fewer_ and _larger_ pools. In our experience, it is usual for the symmetric model to indicate larger pools. We attribute this phenomenon to the overestimation described in the previous paragraph.
We emphasize that strategies (1), (2) and (3) need _not_ coincide. When they do differ, in our experience, it is often by indicating successively larger pools. For example, strategy (3) avoids the small remainder pool indicated by (2) when Dorfman's pool size does not evenly divide the population size. Here the strategies agree. They also agree with the Hebrew University team's original choice of size-8 pools.
**Theoretical efficiencies.** The efficiencies indicated by Dorfman and the finite IID model (a) agree, (b) exceed that reported in [21] and (c) _underestimate_ the theoretical efficiency as predicted by the symmetric distribution. Phenomenon (a) may be interpreted to justify Dorfman's approximation. Phenomenon (b) occurs because the prevalence is _slightly_ lower in our processed data than the original dataset. Phenomenon (c) is a consequence of the IID model overestimating the expected number of tests used.
Under the estimated symmetric distribution, the efficiency of the size-10 pools exceeds that of the size-8 pools. We expect the size-10 pools to be at least as efficient as the size-8 pools as a consequence of the optimization carried out in strategy (4).
**Empirical efficiencies.** With randomization, the mean empirical efficiencies agree with their theoretical values. The standard errors are relatively small.
Without randomization, both size-8 and size-10 efficiencies increase with the size-10 efficiency remaining larger. This increase appears to be a consequence of the intentional pooling carried out by the Hebrew University Team. Since we batch and pool sequentially, the size-8 pools used here match exactly those constructed by the team. Although the size-10 efficiency is higher here, our experience indicates that this is not significant. The empirical efficiency reported in [21] is _lower_ than these values because certain pools were retested even though each of the pool's specimens was negative.
## 6 Conclusion
In this paper, we develop and apply tools for Dorfman's two-stage adaptive group testing protocol. In particular, we study the problem under the modeling assumption that the statuses are exchangeable and so their distribution is symmetric.
This modeling assumption is both amenable to analysis and relevant for infectious disease screening. Although symmetric distributions are a simple model of reality, they nonetheless allow for correlation among specimen statuses. Such correlations appear in disease screening because specimens originating from the same family, living space, or workplace often arrive for testing, and hence for pooling, together. Since the disease is contagious, positive statuses
co-occur. Accounting for this phenomenon in the probabilistic model may indicate better efficiency and larger pool sizes than proposed by the classical theory. The dataset we studied in Section 5 exhibits this feature. In summary, symmetric distributions are a prototypical class on the path to further research into and analysis of more complicated models.
### Future directions
We focus on topics related to Dorfman's procedure. It may also be of interest, however, to analyze other group testing protocols, e.g. Sterrett's procedure [168] or Sobel and Groll's binary splitting [162], under exchangeability.
**Notable variants and generalization.** We list three variants and a generalization of the symmetry considered in this paper. The three variants are (1) _infinite_ population exchangeability, (2) _test error_ models for exchangeable statuses, and (3) _risk-adjusted objectives_ incorporating, e.g., the variance of the number of tests used. Even within the finite population, error-free, minimize-expected-tests setting of this paper, an interesting generalization of this paper may study distributions which are invariant under an _arbitrary_ permutation group.
**Characterizing savings and robustness.** How much can we save by correctly modeling statuses as exchangeable instead of independent? Toward answering this, suppose \(x\) has symmetric distribution \(p\) and denote by \(\bar{p}\) the IID distribution whose prevalence matches that of \(p\). Suppose \(G^{\star}\) and \(\bar{G}^{\star}\) are the corresponding optimal partitions under \(p\) and \(\bar{p}\), respectively. One approach to the question of savings is to study the quantity \(\Delta(p):=\mathbb{E}C(\bar{G}^{\star},x)-\mathbb{E}C(G^{\star},x)\). What is \(\sup_{p}\Delta(p)\)? Which symmetric distributions achieve this? There are no savings if \(\bar{G}^{\star}=G^{\star}\), but Section 5 indicates that distributions with savings exist and appear empirically.
Also, how robust are these approaches to uncertainty in estimated parameters? Given an interval containing the population prevalence or a set containing the symmetric distribution, what are the optimal _worst-case_ partitions? The linear programming approach (see Subsection 4.4.1) may be useful for these questions and the foregoing one.
**Using features to estimate the probability a group tests negative.** Lastly, we sketch a direction toward more complicated distributions. Although specimens have identical marginals under the symmetric models considered in this paper, it is natural to relax this assumption as well. Classically, Hwang [90] proposed using specimen-specific negative-status probabilities. He showed that, assuming independence, one can efficiently compute partitions to minimize the expected number of tests used. With modern tools, one might use _features_ and _logistic regression_ to estimate these probabilities. See [30] for an approach along these lines.
To generalize, one may drop the independence assumption and directly estimate the probability that a _group_ tests negative by, for example, performing logistic regression on sets of individual specimen features. Regression models that do not depend on the order of an input list of feature vectors are called _permutation-invariant_ [185, 32]. Given such a model indicating the probability that a group tests negative, one might then employ general-purpose partitioning algorithms to find partitions which minimize the expected number of tests used.
2301.12052 | Leveraging Importance Weights in Subset Selection | Gui Citovsky, Giulia DeSalvo, Sanjiv Kumar, Srikumar Ramalingam, Afshin Rostamizadeh, Yunjuan Wang | 2023-01-28T02:07:31Z | http://arxiv.org/abs/2301.12052v1 |

# Leveraging Importance Weights in Subset Selection
###### Abstract
We present a subset selection algorithm designed to work with arbitrary model families in a practical batch setting. In such a setting, an algorithm can sample examples one at a time but, in order to limit overhead costs, is only able to update its state (i.e. further train model weights) once a large enough batch of examples is selected. Our algorithm, IWeS, selects examples by importance sampling where the sampling probability assigned to each example is based on the entropy of models trained on previously selected batches. IWeS admits significant performance improvement compared to other subset selection algorithms for seven publicly available datasets. Additionally, it is competitive in an active learning setting, where the label information is not available at selection time. We also provide an initial theoretical analysis to support our importance weighting approach, proving generalization and sampling rate bounds.
## 1 Introduction
Deep neural networks have shown remarkable success in several domains such as computer vision and natural language processing. In many tasks, this is achieved by heavily relying on extremely large labeled datasets. In addition to the storage costs and potential security/privacy concerns that come along with large datasets, training modern deep neural networks on such datasets also incur high computational costs. With the growing size of datasets in various domains, algorithm scalability is a real and imminent challenge that needs to be addressed. One promising way to solve this problem is with data subset selection, where the learner aims to find the most informative subset from a large number of training samples to approximate (or even improve upon) training with the entire training set. Such ideas have been extensively studied in k-means and k-median clustering (Har-Peled and Mazumdar, 2004), subspace approximation (Feldman et al., 2010), computational geometry (Agarwal et al., 2005), density estimation (Turner et al., 2021), to name a few.
One particular approach for solving data subsampling involves the computation of coresets, which are weighted subsets of a dataset that can act as the proxy for the whole dataset to solve some optimization task. Coreset algorithms are primarily motivated with theoretical guarantees that bound the difference between the training loss (or other such objective) over the coreset and that over the full dataset under different assumptions on the losses and hypothesis classes (Mai et al., 2021; Munteanu et al., 2018; Curtin et al., 2019; Karnin and Liberty, 2019). However, in practice, most competitive subset selection algorithms, that are designed for general loss functions and arbitrary function classes, focus only on selecting informative subsets of the data and typically do not assign weights to the selected examples. These methods are, for example, based on some notion of model uncertainty (Scheffer et al., 2001), information gain (Argamon-Engelson and Dagan, 1999), loss gradients (Paul et al., 2021; Ash et al., 2019), or diversity (Sener and Savarese, 2018). Counter to this trend, we show that weighting the selected samples can be very beneficial.
In this work, we present a subset selection algorithm called IWeS that is designed for general loss functions and hypothesis classes and that selects examples by importance sampling, a theoretically
motivated and unbiased sampling technique. Importance sampling is conducted according to a specially crafted probability distribution and, importantly, each sampled example is weighted inversely proportional to its sampling probability when computing the training loss. We develop two types of sampling probability for different practical requirements (e.g. computational constraints and label availability), but in both cases, the sampling probability is based on the example's entropy-based score computed using a previously trained model. We note, the IWeS algorithm is similar to the IWAL active learning algorithm of Beygelzimer et al. (2009) as both are based on importance sampling. However, in contrast to IWAL, IWeS uses a different sampling probability definition with a focus on providing a practical method that is amenable to large deep networks and complex hypothesis classes.
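As a rough illustration only — the specific mapping from entropy to sampling probability and the probability floor `p_min` below are our assumptions, not the paper's exact definition — entropy-based importance-weighted selection can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(0)

def entropy(probs):
    """Predictive entropy of each row of class probabilities."""
    p = np.clip(probs, 1e-12, 1.0)
    return -(p * np.log(p)).sum(axis=1)

def iw_select(model_probs, p_min=0.05):
    """Sketch of importance-weighted selection: sample each example with
    a probability increasing in the current model's predictive entropy,
    and weight each kept example by 1/p so the weighted training loss
    remains an unbiased estimate of the full-data loss."""
    h = entropy(model_probs)
    # Illustrative choice: normalize by the maximum possible entropy.
    p = np.maximum(p_min, h / np.log(model_probs.shape[1]))
    keep = rng.random(len(p)) < p
    weights = 1.0 / p[keep]
    return np.flatnonzero(keep), weights
```

The returned indices and weights would then feed a weighted training objective of the form \(\sum_i w_i\,\ell(f(x_i), y_i)\) on the selected batch.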
Through extensive experiments, we find that the IWeS algorithm is competitive for deep neural networks over several datasets. We compare our algorithm against four types of baselines whose sampling strategies leverage: the model's uncertainty over examples, diversity of selected examples, gradient information, and random sampling. Finally, we analyze a closely related albeit less practical algorithm that inspires the design of IWeS, called IWeS-V, proving it admits generalization and sampling rate guarantees that hold for general loss functions and hypothesis classes.
The contributions of this work can be summarized as follows:
1. We present the **I**mportance **W**eighted **S**ubset Selection (IWeS) algorithm that selects examples by importance sampling with a sampling probability based on a model's entropy, which is applicable to (and practical for) arbitrary model families including modern deep networks. In addition to the subset selection framework, IWeS also works in the active learning setting where the examples are unlabeled at selection time.
2. We demonstrate that IWeS achieves significant improvement over several baselines (Random, Margin, Least-Confident, Entropy, Coreset, BADGE) using VGG16 model for six common multi-class datasets (CIFAR10, CIFAR10-corrupted, CIFAR100, SVHN, Eurosat, Fashion MNIST), and using ResNet101 model for the large-scale multi-label OpenImages dataset.
3. We provide a theoretical analysis for a closely related algorithm, IWeS-V, in Section 4. We prove a \(\mathcal{O}(1/\sqrt{T})\) generalization bound, which depends on the full training dataset size \(T\). We further give a new definition of disagreement coefficient and prove a sampling rate bound by leveraging label information, which is tighter compared with the label complexity bound provided by Beygelzimer et al. (2009) that does not use label information.
### Related Work
**Uncertainty.** Uncertainty sampling, which selects examples that the model is least confident on, is favored by practitioners (Mussmann and Liang, 2018) and rather competitive among many recent algorithms (Yang and Loog, 2018). Uncertainty can be measured through entropy (Argamon-Engelson and Dagan, 1999), least confidence (Culotta and McCallum, 2005), and, most popular, the margin between the most likely and the second most likely labels (Scheffer et al., 2001). Beygelzimer et al. (2009) makes use of a disagreement-based notion of uncertainty and constructs an importance-weighted predictor with theoretical guarantees, called IWAL, which is further enhanced by Cortes et al. (2019). However, IWAL is not directly suitable for use with complex hypothesis spaces, such as deep networks, since it requires solving a non-trivial optimization over a subset of the hypothesis class, the so-called version space, in order to compute sampling probabilities. We further discuss these difficulties in Section 4.
**Diversity.** In another line of research, subsets are selected by enforcing diversity such as in the FASS (Wei et al., 2015) and Coreset (Sener and Savarese, 2018) algorithms. Wei et al. (2015) introduces a submodular sampling objective that trades off between uncertainty and diversity by finding a diverse set of samples from amongst those that the current trained model is most uncertain about. It was further explored by Kaushal et al. (2019) who designed a unified framework for data subset selection with facility location and dispersion-based diversity functions. Sener and Savarese (2018) show that the task of identifying a coreset in an active learning setting can be mapped to solving the k-center problem. Further recent works related to coreset idea are Mirzasoleiman et al. (2020); Killamsetty et al. (2021), where the algorithms select representative subsets of the training data to minimize the estimation error between the weighted gradient of selected subset and the full gradient.
**Loss Gradient.** Another class of algorithms selects a subset by leveraging the loss gradients. For example, the GRAND score (Paul et al., 2021), or closely related EL2N score, leverages the average
gradient across several different independent models to measure the importance of each sample. However, as such, it requires training several neural networks, which is computationally expensive. BADGE (Ash et al., 2019) is a sampling strategy for deep neural networks that uses k-MEANS++ on the gradient embedding of the networks to balance between uncertainty and diversity. Finally, for the sake of completeness, we note that importance-weighting approaches have also been used for the selection of examples within an SGD minibatch (Katharopoulos and Fleuret, 2018; Johnson and Guestrin, 2018), which can be thought of as a change to the training procedure itself. In contrast, the problem setting we consider in this work requires explicitly producing a (weighted) subset of the training data and treats the training procedure itself as a black-box.
These are a sampling of data subset selection algorithms, and we refer the reader to (Guo et al., 2022) for a more detailed survey. In this work, we choose at least one algorithm from each of the categories mentioned above, in particular, Margin (Scheffer et al., 2001), BADGE (Ash et al., 2019), and Coreset (Sener and Savarese, 2018) to compare against empirically in Section 3. However, before that, we first formally define the IWeS algorithm in the following section.
## 2 The IWeS Algorithm
We consider a practical batch streaming setting, where an algorithm processes one example at a time without updating its state until a batch of examples is selected. That is, as in standard streaming settings, the algorithm receives a labeled example and decides whether or not to include it in the selected subset. Yet the algorithm is only allowed to update its state after a fixed batch of examples has been selected, in order to limit overhead costs (which typically include retraining models and extracting gradients). Unlike the pool-based setting, where the algorithm receives the entire labeled pool beforehand, a batch streaming setting can be more appropriate when facing a vast training data pool, since the algorithm can process a subset of the pool without iterating over the whole pool. Note that any batch streaming algorithm can also be used in a pool-based setting, by simply streaming through the pool in a uniformly random fashion. At a high level, the IWeS algorithm selects examples by importance sampling, where the sampling probability is based on the entropy of models trained on previously selected data. We define two sampling probabilities that allow us to trade off between performance and computational cost, as well as between a label-aware setting and an active learning setting, the latter incurring lower label-annotation costs. As we will subsequently see, these sampling definitions are both easy to use and work well in practice.
To define the algorithm in more detail, we let \(\mathcal{X}\in\mathbb{R}^{d}\) and \(\mathcal{Y}=\{1,\ldots,c\}\) denote the input space and the multi-class label space, respectively. We assume the data \((\mathbf{x},y)\) is drawn from an unknown joint distribution \(\mathcal{D}\) on \(\mathcal{X}\times\mathcal{Y}\). Let \(\mathcal{H}=\{h:\mathcal{X}\rightarrow\mathcal{Z}\}\) be the hypothesis class consisting of functions mapping from \(\mathcal{X}\) to some prediction space \(\mathcal{Z}\subset\mathbb{R}^{\mathcal{Y}}\) and let \(\ell:\mathcal{Z}\times\mathcal{Y}\rightarrow\mathbb{R}\) denote the loss.
The pseudocode of IWeS is shown in Algorithm 1. Initially, a seed set \(\mathcal{S}_{0}\) (\(|\mathcal{S}_{0}|=k_{0}\)) is selected uniformly at random from the labeled pool \(\mathcal{P}\). The algorithm then proceeds in rounds \(r\in[1,\ldots,R]\), each consisting of two main components: training and sampling. At the training step of round \(r\), the model(s) are trained using the importance-weighted loss, namely \(f_{r}=\arg\min_{h\in\mathcal{H}}\sum_{(\mathbf{x},y,w)\in\mathcal{S}}w\cdot\ell(h(\mathbf{x}),y)\), on the subset \(\mathcal{S}\) selected so far in the previous \(r-1\) rounds.
Depending on the sampling strategy, we may need to randomly initialize two models \(f_{r},g_{r}\), in which case they are trained independently on the same selected subset \(\mathcal{S}\), but with different random initializations. At the sampling step of round \(r\), the IWeS algorithm calculates a sampling probability for an example \((\mathbf{x},y)\in\mathcal{P}\) based on one of the following definitions:
* **Entropy-based Disagreement.** We define the sampling probability based on the disagreement between two functions with respect to entropy, restricted to the labeled example \((\mathbf{x},y)\). That is, \[p(\mathbf{x},y)=|\mathsf{P}_{f_{r}}(y|\mathbf{x})\log\mathsf{P}_{f_{r}}(y| \mathbf{x})-\mathsf{P}_{g_{r}}(y|\mathbf{x})\log\mathsf{P}_{g_{r}}(y|\mathbf{x })|\] (1) where \(\mathsf{P}_{f_{r}}(y|\mathbf{x})\) is the probability of class \(y\) under model \(f_{r}\) given example \(\mathbf{x}\). If the two functions \(f_{r},g_{r}\) agree on the labeled example \((\mathbf{x},y)\), then \(p(\mathbf{x},y)\) will be small and the example will be less likely to be selected. This definition is the closest to the IWeS-V algorithm analyzed in Section 4 and achieves the best performance when the computational cost of training two models is not an issue. In Appendix A, we show an efficient version of entropy-based disagreement that utilizes only one model and achieves similar performance.
* **Entropy.** We define the sampling probability by the normalized entropy of the model \(f_{r}\) trained on past selected examples: \[p(\mathbf{x},\cdot)=-\sum_{y^{\prime}\in\mathcal{Y}}\mathsf{P}_{f_{r}}(y^{ \prime}|\mathbf{x})\log_{2}\mathsf{P}_{f_{r}}(y^{\prime}|\mathbf{x})/\log_{2} |\mathcal{Y}|.\] (2) The sampling probability \(p(\mathbf{x},\cdot)\) is high whenever the model class probability \(\mathsf{P}_{f_{r}}(y^{\prime}|\mathbf{x})\) is close to \(1/|\mathcal{Y}|\), which is when the model is not confident about its prediction as it effectively randomly selects a label from \(\mathcal{Y}\). This definition does not use the label \(y\) and thus it can be used in an active learning setting where the algorithm can only access the unlabeled examples. Another advantage is that it only requires training one model, thereby saving some computational cost.
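The two sampling probabilities above can be sketched in a few lines. The following is an illustrative NumPy sketch, not the paper's implementation; function names are ours, and class probabilities are assumed strictly positive so the logarithms are defined:

```python
import numpy as np

def disagreement_prob(p_f, p_g, y):
    """Entropy-based disagreement, Eq. (1): absolute difference of the
    per-label entropy terms of two independently trained models."""
    return abs(p_f[y] * np.log(p_f[y]) - p_g[y] * np.log(p_g[y]))

def entropy_prob(p_f):
    """Entropy, Eq. (2): normalized by log2(c) so the value lies in [0, 1]."""
    return -np.sum(p_f * np.log2(p_f)) / np.log2(len(p_f))

# Models that differ on the true class yield a larger sampling probability
# than two identical models (whose disagreement is exactly zero).
p_f = np.array([0.7, 0.2, 0.1])
p_g = np.array([0.2, 0.6, 0.2])
assert disagreement_prob(p_f, p_g, y=0) > disagreement_prob(p_f, p_f, y=0) == 0.0
# A uniform predictive distribution attains the maximal normalized entropy of 1.
assert abs(entropy_prob(np.ones(3) / 3) - 1.0) < 1e-9
```

Note that `entropy_prob` never looks at the label, which is what makes the second definition usable in the active learning setting.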
We note that entropy-based sampling has been used in algorithms such as uncertainty sampling as discussed in the related works section, but using entropy to define importance weights has not been done in past literature.
Based on one of these definitions, the IWeS algorithm then decides whether to include the example into the selected subset \(\mathcal{S}\) by flipping a coin \(Q\) with chosen sampling probability \(p(\mathbf{x},y)\). If the example is selected, the example's corresponding weight \(w\) is set to \(\frac{1}{p(\mathbf{x},y)}\), and the example is removed from the labeled pool \(\mathcal{P}=\mathcal{P}\backslash\{(\mathbf{x},y)\}\). This process is repeated until \(k\) examples have been selected. Below we use IWeS-dis as an abbreviation for IWeS algorithm with Entropy-based Disagreement sampling probability and IWeS-ent for IWeS algorithm with Entropy sampling probability.
The weighted loss used to train the model can be written as \(\frac{1}{|\mathcal{P}|}\sum_{i\in\mathcal{P}}\frac{Q_{i}}{p(\mathbf{x}_{i},y_ {i})}\ell(f(\mathbf{x}_{i}),y_{i})\), and it is an unbiased estimator of the population risk \(\mathbb{E}_{(\mathbf{x},y)\sim\mathcal{D}}[\ell(f(\mathbf{x}),y)]\). Yet such an estimator can have a large variance when the model is highly confident in its prediction: whenever \(\mathsf{P}_{f_{r}}(y|\mathbf{x})\) is large, \(p(\mathbf{x},y)\) is small. This may lead to training instability, and one pragmatic approach to addressing this issue is "clipping" the importance sampling weights (Ionides, 2008; Swaminathan and Joachims, 2015). Thus, in our algorithm, we let \(u\) be an upper bound on the weight of any selected example. Although this clipping strategy introduces an additional parameter, we find it is not too sensitive and, as mentioned in the empirical section, set it to a fixed constant throughout our evaluation.
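Putting the pieces together, a single IWeS sampling round can be sketched as follows. This is a minimal illustration under the assumption that `prob_fn` returns a value in (0, 1]; the function and variable names are ours, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)

def select_batch(pool, prob_fn, k, u=2.0):
    """One sampling round (sketch): stream the pool in random order, flip a
    coin Q with the example's sampling probability p, and weight each
    selected example by the clipped inverse probability min(1/p, u)."""
    selected, remaining = [], []
    for i in rng.permutation(len(pool)):
        x, y = pool[i]
        p = prob_fn(x, y)  # assumed to lie in (0, 1]
        if len(selected) < k and rng.random() < p:
            selected.append((x, y, min(1.0 / p, u)))
        else:
            remaining.append((x, y))
    return selected, remaining

def weighted_subset_loss(loss_fn, model, selected):
    """Importance-weighted training objective over the selected subset."""
    return sum(w * loss_fn(model, x, y) for x, y, w in selected)
```

With `prob_fn` identically 1, the loop degenerates to uniform selection with unit weights, which is consistent with the unbiasedness discussion above.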
## 3 Empirical Evaluation
We compare IWeS with state-of-the-art baselines on several image classification benchmarks. Specifically, we consider six multi-class datasets (CIFAR10, CIFAR100 (Krizhevsky and Hinton, 2009), SVHN (Netzer et al., 2011), EUROSAT (Helber et al., 2019), CIFAR10 Corrupted (Hendrycks and Dietterich, 2019), and Fashion MNIST (Xiao et al., 2017)) and one large-scale multi-label dataset, Open Images (Krasin et al., 2017). In the multi-class setting, each image is associated with only one label. In contrast, the multi-label Open Images dataset consists of 19,957 classes over 9M images, where each image carries binary labels for a small subset of the classes (on average 6 labels per image). Further details on each dataset can be found in Table 1 and Table 2 in the appendix.
For all experiments, we consider a diverse set of standard baselines from both subset selection and active learning literature (discussed in Section 1.1).
* **Uncertainty Sampling** selects the top \(k\) examples on which the current model admits the highest uncertainty. There are three popular ways of defining the model uncertainty \(s(\mathbf{x})\) of an example \(\mathbf{x}\), namely margin sampling, entropy sampling, and least-confident sampling, all based on \(\text{P}_{f}[y|\mathbf{x}]\), the probability of class \(y\) given example \(\mathbf{x}\) according to the model \(f\). Margin sampling defines the model uncertainty of an example \(\mathbf{x}\) as \(s(\mathbf{x})=1-(\text{P}_{f}[\hat{y}_{1}|\mathbf{x}]-\text{P}_{f}[\hat{y}_{2}|\mathbf{x}])\) where \(\hat{y}_{1}=\operatorname*{argmax}_{y\in\mathcal{Y}}\text{P}_{f}[y|\mathbf{x}]\) and \(\hat{y}_{2}=\operatorname*{argmax}_{y\in\mathcal{Y}\setminus\{\hat{y}_{1}\}}\text{P}_{f}[y|\mathbf{x}]\) are the first and second most probable classes for model \(f\). For entropy sampling, model uncertainty is defined as \(s(\mathbf{x})=-\sum_{y\in\mathcal{Y}}\text{P}_{f}(y|\mathbf{x})\log(\text{P}_{f}(y|\mathbf{x}))\), while for least-confidence sampling, it is defined as \(s(\mathbf{x})=1-\max_{y\in\mathcal{Y}}\text{P}_{f}(y|\mathbf{x})\).
* **BADGE** of Ash et al. (2019) selects \(k\) examples by using the \(k\)-MEANS++ seeding algorithm using the gradient vectors, computed with respect to the penultimate layer using the most likely labels given by the latest model checkpoint.
* **Coreset (\(k\)-Center)** of Sener and Savarese (2018) selects a subset of examples using their embeddings derived from the penultimate layer using the latest model checkpoint. In particular, the \(k\) examples are chosen using a greedy 2-approximation algorithm for the \(k\)-center problem.
* **Random Sampling** selects \(k\) examples uniformly at random.
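For concreteness, the three uncertainty scores used by the first baseline can be written as follows. This is a short sketch over softmax probability vectors, written by us rather than taken from any baseline's released code; probabilities are assumed strictly positive for the entropy score:

```python
import numpy as np

def margin_score(p):
    """1 minus the gap between the two most probable classes; larger = more uncertain."""
    sorted_p = np.sort(p)
    return 1.0 - (sorted_p[-1] - sorted_p[-2])

def entropy_score(p):
    """Entropy of the predictive distribution."""
    return -np.sum(p * np.log(p))

def least_confident_score(p):
    """1 minus the probability of the most likely class."""
    return 1.0 - np.max(p)

# Uncertainty sampling then keeps the k highest-scoring examples.
probs = np.array([[0.90, 0.05, 0.05],   # confident prediction
                  [0.40, 0.35, 0.25]])  # uncertain prediction
for score in (margin_score, entropy_score, least_confident_score):
    assert score(probs[1]) > score(probs[0])
```

All three scores agree that the second example is the more uncertain one, though they can rank examples differently in general.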
### Multi-class Experiments
Here, we compare the IWeS algorithm against the baselines on the six multi-class image datasets. We use the VGG16 architecture with weights pre-trained on ImageNet, and add two fully-connected 4096-dimensional layers and a final prediction layer. Xavier uniform initialization is used for the final layers. For each dataset, we tune the learning rate by choosing the rate from the set \(\{0.001,0.002,0.005,0.01,0.1\}\) that achieves the best model performance on the seed set. We use batch SGD with the selected learning rate and fix SGD's batch size to 100. At each sampling round \(r\), the model is trained to convergence on all past selected examples. For IWeS, we set the weight capping parameter to 2 for all datasets except CIFAR10, for which we decreased it to 1.5 in order to reduce training instability.

Figure 1: Accuracy of VGG16 when trained on examples selected by IWeS-dis and baseline algorithms.
The embedding layer for BADGE and Coreset is extracted from the penultimate layer having a dimension of 4096. The effective dimension of the gradient vector in BADGE grows with the number of labels, which is problematic for CIFAR100 as it has 100 classes. More specifically, the runtime of BADGE is given by \(\mathcal{O}\left(dkT\right)\), which can be large for CIFAR100 since the dimension of the gradient vector from the penultimate layer is \(d=4096\times 100\), the size of the labeled pool is \(T\)=50K, and the number of examples selected in each round is \(k\)=5K. In order to solve this inefficiency for CIFAR100, we split the labeled pool randomly into 100 partitions and ran separate instances of the algorithm in each partition with batch size \(k/100\).
Each algorithm is initialized with a seed set that is sampled uniformly at random from the pool. After that, sampling proceeds in a series of rounds \(r\), where the model is frozen until a batch of \(k\) examples has been selected. The seed set size and sampling batch size \(k\) are set to 1K for CIFAR10, SVHN, EUROSAT, CIFAR10 Corrupted, and Fashion MNIST, and to 5K for CIFAR100. Each experiment was repeated for 5 trials. Any trial that encountered divergent training, i.e. where the resulting model's accuracy fell more than three standard errors below the model's accuracy on the seed set, was dropped. We note that this happened infrequently (less than 3% of the time) and all reported averaged results include at least 3 trials.
Figure 1 shows the mean and standard error of VGG16 model's accuracy on a held out test set comparing IWeS-dis to the baseline methods. The IWeS-dis algorithm either outperforms or matches the performance of the baseline algorithms for all datasets. We also find that margin sampling consistently performs well against the remaining baseline algorithms and that BADGE either matches the performance of margin sampling or slightly underperforms on some datasets (Eurosat, Fashion MNIST, CIFAR100). Coreset admits a similar and at times slightly poorer performance compared to random sampling.
Next, Figure 2 compares the two variants of our algorithm: IWeS-dis and IWeS-ent. We find that the IWeS-dis performs slightly better than IWeS-ent on most of the datasets. This is not surprising since the IWeS-dis sampling probability leverages label information and more computational power, i.e. trains two models. As explained in Section 4, it also better fits our theoretical motivation. Nevertheless, it is important to note that IWeS-ent, without the label information, still consistently outperforms or matches the performance of the baselines for all the datasets.
### Multi-label Open Images Experiments
In this section, we evaluate the performance of the IWeS algorithm on Open Images v6. We train a ResNet101 model on 64 two-core Cloud TPU v4 accelerators, and apply batch SGD with a batch size of 6144 and an initial learning rate of \(10^{-4}\), decayed logarithmically every \(5\times 10^{8}\) examples. We add a global pooling layer with a fully connected layer of 128 dimensions as the final layers of the network, whose embeddings are needed by the BADGE and Coreset baselines. The model is initialized with weights that were pre-trained on the validation split using 150K SGD steps, and at each sampling round, the model is trained on all past selected examples with an additional 15K SGD steps.
In the previous section, our results show that the IWeS-dis algorithm only slightly outperforms the IWeS-ent algorithm on a few datasets. Additionally, since IWeS-dis requires training two neural networks, which is computationally expensive in this scenario, we only test the performance of IWeS-ent. Since IWeS-ent does not use label information, this also allows us to measure the performance of the algorithm in an active learning setting.

Figure 2: Accuracy of VGG16 when trained on examples selected by IWeS-ent, IWeS-dis, margin sampling and random sampling.
Since Open Images is a multi-label dataset, the sampling algorithms must not only select the image, but also the class. That is, each example selected by an algorithm consists of an image-class pair with a corresponding binary label indicating whether the corresponding class is present in the image or not. In order to adapt IWeS-ent to the multi-label setting, the entropy sampling probability for each image-class pair is defined as \(p(\mathbf{x},\cdot)=-\mathrm{P}_{f_{r}}(y|\mathbf{x})\log_{2}\mathrm{P}_{f_{r }}(y|\mathbf{x})-(1-\mathrm{P}_{f_{r}}(y|\mathbf{x}))\log_{2}\left(1-\mathrm{ P}_{f_{r}}(y|\mathbf{x})\right)\), where \(\mathrm{P}_{f_{r}}(y|\mathbf{x})\) is the model class probability of a positive label at round \(r\). A seed set of size 300K is sampled uniformly at random from the pool, and at each sampling round \(r\), the algorithms select 100K examples. Similarly to the previous section, in order to run BADGE on Open Images, we divide the pool into 100 partitions and run separate instances of the algorithm in each partition. For IWeS, the weight capping parameter is set to 10.
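The per-pair binary entropy used above can be sketched as follows (illustrative only; the small clip guarding \(\log_2(0)\) is our own addition and is not discussed in the paper):

```python
import numpy as np

def binary_entropy_prob(p_pos):
    """Sampling probability of an image-class pair in the multi-label
    setting: the binary entropy (in bits) of the model's positive-class
    probability. It already lies in [0, 1], so no normalization is needed."""
    p = np.clip(p_pos, 1e-12, 1.0 - 1e-12)  # guard against log2(0)
    return float(-p * np.log2(p) - (1.0 - p) * np.log2(1.0 - p))

assert abs(binary_entropy_prob(0.5) - 1.0) < 1e-9  # maximally uncertain pair
assert binary_entropy_prob(0.99) < 0.1             # confident pair, rarely sampled
```

As in the multi-class case, a pair on which the model is confident receives a small sampling probability, and a selected pair is weighted by the (clipped) inverse of this value.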
Figure 3 shows the mean and standard error across 5 trials of the pooled average precision (Pooled AP) metric for each algorithm. As the number of selected examples increases, IWeS-ent outperforms all other baselines methods on the Open Images dataset. We also find that BADGE performs similarly or even slightly worse than the uncertainty-based sampling algorithms when the number of selected examples is smaller than 800K, and then outperforms all uncertainty-based sampling as the number of selected examples increases. Coreset initially performs better than random sampling, but at later sampling rounds, it admits a similar performance to random sampling.
## 4 Theoretical Motivation
In order to theoretically motivate the IWeS algorithm, we analyze a closely related algorithm which we call IWeS-V, adapted from the IWAL algorithm of Beygelzimer et al. (2009). We prove that IWeS-V admits generalization bounds that scale with the dataset size \(T\) and sampling rate bounds that are in terms of a new disagreement coefficient tailored to the subset selection framework.
Below, we let \(L(h)=\mathbb{E}_{(\mathbf{x},y)\sim\mathcal{D}}[\ell(h(\mathbf{x}),y)]\) denote the expected loss of hypothesis \(h\in\mathcal{H}\), and \(h^{*}=\operatorname*{argmin}_{h\in\mathcal{H}}L(h)\) the best-in-class hypothesis. Without loss of generality, we consider a bounded loss \(\ell:\mathcal{Z}\times\mathcal{Y}\rightarrow[0,1]\) mapping to the interval \([0,1]\); any bounded loss can be brought to this form by normalization. For simplicity, we assume \(\mathcal{H}\) is a finite set, but our results can be easily extended by standard covering arguments to more general hypothesis sets such as finite VC-classes.
The IWeS-V algorithm operates on i.i.d. examples \((\mathbf{x}_{1},y_{1}),(\mathbf{x}_{2},y_{2}),\ldots,(\mathbf{x}_{T},y_{T})\) drawn from \(\mathcal{D}\) and processed sequentially. It maintains a version space \(\mathcal{H}_{t}\) at any time \(t\), with \(\mathcal{H}_{1}=\mathcal{H}\). At time \(t\), IWeS-V flips a coin \(Q_{t}\in\{0,1\}\) with bias \(p_{t}\) defined as
\[p_{t}=\max_{f,g\in\mathcal{H}_{t}}\ell(f(\mathbf{x}_{t}),y_{t})-\ell(g(\mathbf{ x}_{t}),y_{t}) \tag{3}\]
where \(\mathcal{H}_{t}=\left\{h\in\mathcal{H}_{t-1}:\frac{1}{t}\sum_{s=1}^{t}\frac{Q_ {s}}{p_{s}}\ell(h(\mathbf{x}_{s}),y_{s})\leq\min_{h^{\prime}\in\mathcal{H}_{t-1 }}\frac{1}{t}\sum_{s=1}^{t}\frac{Q_{s}}{p_{s}}\ell(h^{\prime}(\mathbf{x}_{s}),y_{s} )+\Delta_{t-1}\right\}\) with \(\Delta_{t-1}=\sqrt{\frac{8\log(2T(T+1)|\mathcal{H}|^{2}/\delta)}{t-1}}\). The example is selected if \(Q_{t}=1\) and otherwise it is discarded. The main idea behind this algorithm is thus to define a sampling probability in terms of the disagreement between two hypotheses \(f,g\) that are not too far from the best model trained on the past selected data, i.e. \(\min_{h\in\mathcal{H}_{t-1}}\frac{1}{t}\sum_{s=1}^{t}\frac{Q_{s}}{p_{s}}\ell(h (\mathbf{x}_{s}),y_{s})\). The formal IWeS-V algorithm pseudo-code (Algorithm 2) and all the theorem proofs can be found in Appendix B.
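For a finite hypothesis class, the sampling probability in equation (3) is simply the largest loss gap over the current version space. The following toy sketch with threshold classifiers and the 0-1 loss is our own illustration, not the paper's code:

```python
def iwes_v_prob(H_t, loss, x, y):
    """Eq. (3) for a finite version space H_t: the maximum over pairs f, g of
    the loss difference on (x, y), i.e. max loss minus min loss."""
    losses = [loss(h, x, y) for h in H_t]
    return max(losses) - min(losses)

# Toy hypothesis class: threshold classifiers h_t(x) = 1[x > t], with 0-1 loss.
H = [0.2, 0.5, 0.8]
loss01 = lambda t, x, y: float((x > t) != y)

# All surviving hypotheses agree on x = 0.9 (they all predict 1), so p_t = 0
# and the example is never selected; they disagree on x = 0.6, so p_t = 1.
assert iwes_v_prob(H, loss01, 0.9, 1) == 0.0
assert iwes_v_prob(H, loss01, 0.6, 1) == 1.0
```

The sketch makes the difficulty discussed next concrete: for a finite class the maximization is a simple loop, but for neural network classes there is no tractable way to search over the version space.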
For general, e.g. non-linear, hypothesis classes, it is computationally infeasible to find two hypotheses \(f,g\in\mathcal{H}_{t}\) that maximize the expression in equation (3). This impracticality of IWeS-V is the main reason why we developed the IWeS algorithm of the previous section. This drawback is also shared by the IWAL algorithm of Beygelzimer et al. (2009), which computes a sampling probability very similar to that of equation (3), but with an additional maximization over the choice of \(y\in\mathcal{Y}\) in the definition of the sampling probability \(p_{t}\).

Figure 3: Pooled Average Precision of ResNet101 trained on examples selected by IWeS-ent and the baseline algorithms.
Before continuing, we explain how our practical algorithm IWeS-dis, specifically when using the sampling probability in equation (1), is closely related to the IWeS-V algorithm. Recall that the IWeS-dis algorithm trains two models \(f\) and \(g\), each minimizing the importance-weighted loss on the data sampled so far. Therefore, each model exhibits reasonable training loss, i.e. both are expected to be included in the version space \(\mathcal{H}_{t}\) of good hypotheses, while the different random initializations (in the case of non-convex neural network hypotheses) result in models that still differ in certain regions of the feature space. Thus, the difference in equation (1) can be thought of as a less aggressive version of the difference found in the maximization of equation (3).
Another dissimilarity between the two is that the IWeS-dis algorithm is defined for the batch streaming setting, while the IWeS-V algorithm and its analysis are developed for the streaming setting. Said differently, the IWeS-V algorithm can be seen as a special case of the IWeS-dis algorithm with a sampling batch size of 1. To extend the theoretical guarantees of IWeS-V to the batch streaming setting, we can follow an analysis similar to that developed by Amin et al. (2020) and find that the effects of delayed feedback in the batch streaming setting are in fact mild compared to the streaming setting.
### Generalization bound
Next, we turn to the topic of generalization guarantees and review an existing bound for coreset-based algorithms. The guarantees of coreset algorithms are generally focused on showing that a model's training loss on the selected subset is close to the same model's training loss on the whole dataset. That is, given a dataset \(\mathcal{P}=\{(\mathbf{x}_{i},y_{i})\}_{i=1}^{T}\sim\mathcal{D}^{T}\), the learner seeks to select a subset of \(m<T\) examples \(\mathcal{S}=\{(\mathbf{x}_{i}^{\prime},y_{i}^{\prime})\}_{i=1}^{m}\), along with a corresponding set of weights \(w_{1},\ldots,w_{m}\), such that for some small \(\epsilon>0\) and for all \(h\in\mathcal{H}\), the _additive error coreset guarantee_ holds: \(\left|\sum_{i=1}^{m}w_{i}\ell(h(\mathbf{x}_{i}^{\prime}),y_{i}^{\prime})-\sum _{i=1}^{T}\ell(h(\mathbf{x}_{i}),y_{i})\right|\leq\epsilon T\). The following proposition, a minor extension of Fact 8 of Karnin and Liberty (2019), allows us to convert a coreset guarantee into a generalization guarantee.
**Proposition 4.1**.: Let \(h^{\prime}=\operatorname*{argmin}_{h\in\mathcal{H}}\sum_{i=1}^{m}w_{i}\ell(h (\mathbf{x}_{i}^{\prime}),y_{i}^{\prime})\), and suppose the additive error coreset guarantee holds for some \(\epsilon>0\). Then, with probability at least \(1-\delta\), it holds that \(L(h^{\prime})\leq L(h^{*})+2\epsilon+2\sqrt{\ln(4/\delta)/2T}\).
As shown above, the generalization guarantee depends linearly on \(\epsilon\) which in turn depends on the size of the subset \(m\). To give a few examples, Karnin and Liberty (2019) show that for hypotheses that are defined as analytic functions of dot products (e.g. generalized linear models) this dependence on \(m\) is \(\epsilon=O(1/m)\), while for more complex Kernel Density Estimator type models the dependence is \(\epsilon=O(1/\sqrt{m})\). See Mai et al. (2021), Table 1, for examples on the dependency between \(\epsilon\) and \(m\) under different data distributions assumptions (e.g. uniform, deterministic, \(\ell_{1}\) Lewis) and for specific loss functions (e.g. log loss, hinge loss).
We now provide a generalization guarantee for the IWeS-V algorithm, which depends on the size of the labeled pool size \(T\). The proof follows from that in Beygelzimer et al. (2009).
**Theorem 4.2**.: Let \(h^{*}\in\mathcal{H}\) be the minimizer of the expected loss, \(h^{*}=\operatorname*{argmin}_{h\in\mathcal{H}}L(h)\). For any \(\delta>0\), with probability at least \(1-\delta\), for any \(t\in\{1,2,\ldots,T\}\), we have that \(h^{*}\in\mathcal{H}_{t}\) and that \(L(f)-L(g)\leq 2\Delta_{t-1}\) for any \(f,g\in\mathcal{H}_{t}\). In particular, if \(h_{T}\) is the output of IWeS-V, then \(L(h_{T})-L(h^{*})\leq 2\Delta_{T-1}=\mathcal{O}\big{(}\sqrt{\log(T/\delta)/T}\big{)}\).
Unlike the distribution-specific and loss-specific theoretical guarantees proposed in the coreset literature, Theorem 4.2 holds for any bounded loss function and general hypothesis classes. If we ignore log terms and consider the more complex Kernel Density Estimator class of hypotheses, the coreset method of Karnin and Liberty (2019) requires \(m=\mathcal{O}(T)\) coreset samples in order to achieve an overall \(\mathcal{O}(1/\sqrt{T})\) generalization bound. As we will see in the next section, the required IWeS sampling rate can also be as high as \(\mathcal{O}(T)\), but critically is scaled by the best-in-class loss, which in favorable cases is significantly smaller than one.
### Sampling Rate bounds
Hanneke (2007) proves that the expected number of labeled examples needed to train a model in an active learning setting can be characterized in terms of the disagreement coefficient of the learning problem. Later, Beygelzimer et al. (2009) generalized this notion to arbitrary loss functions, and in this work, we generalize it further to the subset selection setting.
Recall that the disagreement coefficient \(\theta_{\text{AL}}\) in Beygelzimer et al. (2009) for the active learning setting is defined as
\[\theta_{\text{AL}}=\sup_{r\geq 0}\frac{\mathbb{E}_{\mathsf{x}\sim\mathcal{X}} \left[\max_{h\in\mathcal{B}_{\text{AL}}(h^{*},r)}\max_{y\in\mathcal{Y}}| \ell(h(x),y)-\ell(h^{*}(x),y)|\right]}{r},\]
where \(\mathcal{B}_{\text{AL}}(h^{*},r)=\{h\in\mathcal{H}:\rho_{\text{AL}}(h,h^{*}) \leq r\}\) with \(\rho_{\text{AL}}(f,g)=\mathbb{E}_{\mathsf{x}\sim\mathcal{X}}[\sup_{y\in \mathcal{Y}}|\ell(f(\mathsf{x}),y)-\ell(g(\mathsf{x}),y)|]\). Informally, this coefficient quantifies how much disagreement there is among a set of classifiers that is close to the best-in-class hypothesis. In the subset selection setting, labels are available at sample time and, thus, we are able to define the following disagreement coefficient:
**Definition 4.1**.: Let \(\rho_{\mathsf{S}}(f,g)=\mathbb{E}_{(\mathsf{x},y)\sim\mathcal{D}}[|\ell(f( \mathsf{x}),y)-\ell(g(\mathsf{x}),y)|]\) and \(\mathcal{B}_{\mathsf{S}}(h^{*},r)=\{h\in\mathcal{H}:\rho_{\mathsf{S}}(h,h^{*} )\leq r\}\) for \(r\geq 0\). The disagreement coefficient in the subset selection setting is defined as
\[\theta_{\mathsf{S}}=\sup_{r\geq 0}\frac{\mathbb{E}_{(\mathsf{x},y)\sim \mathcal{D}}\left[\max_{h\in\mathcal{B}_{\mathsf{S}}(h^{*},r)}|\ell(h(\mathsf{ x}),y)-\ell(h^{*}(\mathsf{x}),y)|\right]}{r}.\]
The main difference between the above coefficient and that of Beygelzimer et al. (2009) is that there is no supremum over all labels \(y\in\mathcal{Y}\), either in the definition of the distance \(\rho\) or in the coefficient's numerator. Instead, the supremum is replaced with an expectation over the label space.
The following theorem leverages \(\theta_{\mathsf{S}}\) to derive an upper bound on the expected number of selected examples for the IWeS-V algorithm. Below, let \(\mathcal{F}_{t}=\{(\mathsf{x}_{i},y_{i},Q_{i})\}_{i=1}^{t}\) be the observations of the algorithm up to time \(t\).
**Theorem 4.3**.: For any \(\delta>0\), with probability at least \(1-\delta\), the expected sampling rate of the IWeS-V algorithm is: \(\sum_{t=1}^{T}\mathbb{E}_{(\mathsf{x}_{t},y_{t})\sim\mathcal{D}}\left[p_{t} \big{|}\mathcal{F}_{t-1}\right]=\mathcal{O}\Big{(}\theta_{\mathsf{S}}\left(L( h^{*})T+\sqrt{T\log(T/\delta)}\right)\Big{)}\).
Suppressing lower order terms, the above expected sampling rate bound is small whenever the product of the disagreement coefficient and the expected loss of the best-in-class hypothesis is small. In such cases, combining the above theorem with the generalization guarantee shows that IWeS-V returns a hypothesis trained on only a fraction of the points that generalizes as well as a hypothesis trained on the full dataset of size \(T\). Theorem 4.3 can be further improved by adapting the ideas of Cortes et al. (2019) to the IWeS-V algorithm; see Appendix B.4 for this enhanced analysis.
The form of this sampling rate bound is similar to that of Beygelzimer et al. (2009). More concretely, under the assumption that the loss function has bounded slope asymmetry, that is, \(K_{\ell}=\sup_{z,z^{\prime}\in\mathcal{Z}}\frac{\max_{y\in\mathcal{Y}}|\ell(z,y)-\ell(z^{\prime},y)|}{\min_{y\in\mathcal{Y}}|\ell(z,y)-\ell(z^{\prime},y)|}\) is bounded, with probability at least \(1-\delta\), the expected number of examples selected by the IWAL algorithm is \(\mathcal{O}\left(\theta_{\text{AL}}K_{\ell}\left(L(h^{*})T+\sqrt{T\log(T/\delta)}\right)\right)\). Thus, the main difference between the sampling rate bounds of the IWAL algorithm and the IWeS-V algorithm is the factor that depends on the disagreement coefficients: \(\theta_{\text{AL}}K_{\ell}\) versus \(\theta_{\mathsf{S}}\). Since \(\theta_{\mathsf{S}}\) leverages the label information, we may expect it to yield a tighter bound than the label-independent coefficient \(\theta_{\text{AL}}\). Theorem 4.4 shows that this is indeed the case.
**Theorem 4.4**.: If the loss function has a bounded slope asymmetry \(K_{\ell}\), then \(\theta_{\mathsf{S}}\leq\theta_{\text{AL}}K_{\ell}\).
The above theorem, in conjunction with the sampling rate guarantees, thus proves that the sampling rate bound for IWeS-V in Theorem 4.3 is tighter than the sampling rate bound of the IWAL algorithm.
## 5 Conclusion
In this paper we have introduced a subset selection algorithm, IWeS, that is designed for arbitrary hypothesis classes including deep networks. We have shown that the IWeS algorithm outperforms several natural and important baselines across multiple datasets. In addition, we have developed an initial theoretical motivation for our approach based on the importance weighted sampling mechanism. A natural next step is enforcing a notion of diversity, as it will likely provide improved performance in the large-batch sampling setting; we thus plan to adapt the diversity-based method in Citovsky et al. (2021) by replacing its uncertainty sampling component with the IWeS algorithm.
arXiv:2307.00900, **Modular forms with non-vanishing central values and linear independence of Fourier coefficients**, by Debargha Banerjee and Priyanka Majumder (2023-07-03, http://arxiv.org/abs/2307.00900v3).

**arXiv abstract.** In this article, we are interested in modular forms with non-vanishing central critical values and linear independence of Fourier coefficients of modular forms. The main ingredient is a generalization of a theorem due to VanderKam to modular symbols of higher weights. We prove that for sufficiently large primes \(p\), the Hecke operators \(T_{1},T_{2},\ldots,T_{D}\) act linearly independently on the winding elements inside the space of weight \(2k\) cuspidal modular symbols \(\mathbb{S}_{2k}(\Gamma_{0}(p))\) with \(k\geq 1\) for \(D^{2}\ll p\). This gives a bound on the number of newforms with non-vanishing arithmetic \(L\)-functions at their central critical points, and linear independence of the reductions of these modular forms modulo primes \(l\neq p\).

# Linear independence of Hecke operators on modular symbols of higher weights
###### Abstract.
We prove that for sufficiently large primes \(p\), the Hecke operators \(T_{1},T_{2},\ldots,T_{D}\) act linearly independently on the space of weight \(2k\) cuspidal modular symbols \(\mathbb{S}_{2k}(\Gamma_{0}(p))\) with \(k\geq 1\) for \(D^{2}\ll p\). This is a generalization of the work of VanderKam to modular symbols of higher weight.
Key words and phrases: Modular curves, Hecke operators. The first named author is partially supported by the SERB grants MTR/2017/000357 and CRG/2020/000223. The second named author was partially supported by CRG/2020/000223 and an IISER Pune post-doctoral fellowship. The article grew out of discussions of the first author with Professor Peter Sarnak. The second named author would like to thank Dr. Pramath Anamby for helpful discussions. It is a pleasure to acknowledge several fruitful email communications with Professors Loïc Merel, Jeffrey VanderKam and Satadal Ganguly.
of weight \(2\) cuspidal modular symbols (see [5, Theorem 1.9]). The Hecke algebra \(\mathbb{T}_{\mathbb{Z}}\) acts on the homology \(\operatorname{H}_{1}(X_{0}(p),\mathbb{Z})\cong\mathbb{S}_{2}(\Gamma_{0}(p))\). VanderKam [11] proved the linear independence of the Hecke operators \(T_{1},T_{2},\dots,T_{D}\) acting on the winding element \(\mathbf{e}\) when \(p>c_{\delta}\,D^{2+\delta}\), for any given \(\delta>0\) and an effective constant \(c_{\delta}\). Note that the winding element \(\mathbf{e}\) is the image of \(\{0,\infty\}\) in \(\mathbb{S}_{2}(\Gamma_{0}(p))\). It is natural to ask whether the same holds in higher weight. Let \(\mathbb{S}_{2k}(\Gamma_{0}(p))\) be the space of cuspidal modular symbols of weight \(2k\) with \(k>1\). This is a homology group with coefficients in a locally constant sheaf (rather than the constant sheaf).
In the present article we prove the linear independence of the Hecke operators acting on the winding elements \(z^{n}\otimes\mathbf{e}\) inside \(\mathbb{S}_{2k}(\Gamma_{0}(p))\) for all \(k\geq 1\).
**Theorem 1.1**.: _For a given \(\delta>0\), when all primes \(p\) satisfy the bound \(D^{2}<p^{1-\delta}\), the Hecke operators \(T_{1},T_{2},\dots,T_{D}\) act linearly independently on \(z^{n}\otimes\mathbf{e}\) inside the space of cuspidal symbols \(\mathbb{S}_{2k}(\Gamma_{0}(p))\) for all \(0\leq n\leq 2k-2\)._
The linear independence result discussed above allows us to establish a bound on the number of forms with simultaneous non-vanishing of their \(L\)-functions.
**Theorem 1.2**.: _If \(D^{2}<p^{1-\delta}\) then we have for any given \(\delta>0\)_
\[|\{f\in\mathcal{B}_{2k}(p)\mid L(f,n)\neq 0\;\;\text{for}\;\;0\leq n\leq 2k-2 \}|\ll\sqrt{p}.\]
## 2. Preliminaries
### Modular symbols
We define the modular symbols of arbitrary weight \(k\geq 2\) following [10]. Let \(\mathbb{M}_{2}\) be the free abelian group with basis the set of symbols \(\{\alpha,\beta\}\) with \(\alpha,\beta\in\mathbb{P}^{1}(\mathbb{Q})\) modulo the \(3\)-term relations
\[\{\alpha,\beta\}+\{\beta,\gamma\}+\{\gamma,\alpha\}=0\]
for all \(\alpha,\beta,\gamma\in\mathbb{P}^{1}(\mathbb{Q})\), and modulo torsion, i.e.,
\[\mathbb{M}_{2}=(F/R)/(F/R)_{\text{tor}},\]
where \(F\) is the free abelian group on all pairs \((\alpha,\beta)\) and \(R\) is the subgroup generated by all elements of the form \((\alpha,\beta)+(\beta,\gamma)+(\gamma,\alpha)\). The group \(\mathbb{M}_{2}\) is the group of modular symbols of weight \(2\). For any finite index subgroup \(\Gamma\subset\operatorname{SL}_{2}(\mathbb{Z})\), there exists a left action of \(\Gamma\) on \(\mathbb{M}_{2}\) defined as follows:
\[g\{\alpha,\beta\}=\{g(\alpha),g(\beta)\},\]
where \(g\in\Gamma\) acts via the fractional linear transformation
\[g(\alpha)=\frac{a\alpha+b}{c\alpha+d},\;\;\text{where}\;\;g=\left(\begin{array} []{cc}a&b\\ c&d\end{array}\right)\!.\]
For any integer \(n\geq 0\), let \(\mathbb{Z}[X,Y]_{n}\) be the abelian group of homogeneous polynomials of degree \(n\) in two variables \(X,Y\). Recall that this defines a locally constant sheaf \(\mathcal{F}_{n}\) on the modular curve \(X_{0}(p)\). Note that \(\mathbb{Z}[X,Y]_{n}\) is isomorphic to \(\operatorname{Sym}^{n}(\mathbb{Z}\times\mathbb{Z})\) as a group. For a fixed integer \(k\geq 2\), we define
\[\mathbb{M}_{k}:=\mathbb{Z}[X,Y]_{k-2}\otimes_{\mathbb{Z}}\mathbb{M}_{2},\]
which is a torsion-free abelian group whose elements are sums of expressions of the form \(X^{i}Y^{k-2-i}\otimes\{\alpha,\beta\}\).
For any fixed finite index subgroup \(\Gamma\subset\operatorname{SL}_{2}(\mathbb{Z})\), the left action of \(\Gamma\) on \(\mathbb{Z}[X,Y]_{k-2}\) is defined as follows. For \(g=\left(\begin{smallmatrix}a&b\\ c&d\end{smallmatrix}\right)\in\Gamma\) and \(P(X,Y)\in\mathbb{Z}[X,Y]_{k-2}\), we have
\[(gP)(X,Y)=P(dX-bY,-cX+aY).\]
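Since \(\det g=1\), this rule is \((gP)(X,Y)=P\big{(}g^{-1}\cdot(X,Y)^{T}\big{)}\), so \((gh)P=g(hP)\) and we indeed obtain a left action. A quick numerical sanity check (an illustrative sketch; the helper names are ours):

```python
# Quick numerical check (illustrative helper names) that the rule
# (gP)(X, Y) = P(dX - bY, -cX + aY) gives a left action: for det g = 1
# it is (gP)(v) = P(g^{-1} v), hence (gh)P = g(hP).

def act(g, P):
    a, b, c, d = g
    return lambda X, Y: P(d * X - b * Y, -c * X + a * Y)

def mul(g, h):
    a, b, c, d = g
    e, f, u, v = h
    return (a * e + b * u, a * f + b * v, c * e + d * u, c * f + d * v)

P = lambda X, Y: X**3 - 2 * X * Y**2 + 5 * Y**3     # homogeneous of degree 3
g, h = (2, 1, 1, 1), (1, 3, 0, 1)                   # both of determinant 1

for X, Y in [(1, 0), (0, 1), (2, -3), (7, 5)]:
    assert act(mul(g, h), P)(X, Y) == act(g, act(h, P))(X, Y)
print("left-action identity verified on sample points")
```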
The left action of \(\Gamma\) on \(\mathbb{M}_{k}\) is given by
\[g(P\otimes\{\alpha,\beta\})=(gP)\otimes\{g(\alpha),g(\beta)\}.\]
**Definition 2.1**.: Let \(k\geq 2\) be an integer and let \(\Gamma\) be a finite index subgroup of \(\operatorname{SL}_{2}(\mathbb{Z})\). The space \(\mathbb{M}_{k}(\Gamma)\) of weight \(k\) modular symbols for \(\Gamma\) is the quotient of \(\mathbb{M}_{k}\) by all relations \(gx-x\) for \(x\in\mathbb{M}_{k}\), \(g\in\Gamma\), and by any torsion.
Let \(P\in\mathbb{Z}[X,Y]_{k-2}\) and \(g\in\Gamma\), a finite index subgroup of \(\operatorname{SL}_{2}(\mathbb{Z})\), then the _Manin symbol_ associated to this pair is given by
\[[P,g]=g(P\otimes\{0,i\infty\}).\]
Recall that the Manin symbols generate the space of modular symbols \(\mathbb{M}_{k}(\Gamma)\) (cf. [10, Proposition 8.3]).
### Hecke operators acting on modular symbols
For a prime \(p\), let
\[R_{p}=\left\{\left(\begin{array}{cc}1&r\\ 0&p\end{array}\right)\mid r=0,1,\ldots,p-1\right\}\cup\left\{\left(\begin{array} []{cc}p&0\\ 0&1\end{array}\right)\right\}.\]
The action of the Hecke operator \(T_{p}\) on \(\mathbb{M}_{k}(\Gamma)\) is given by
\[T_{p}(P\otimes\{\alpha,\beta\})=\sum_{g\in R_{p}}g(P\otimes\{\alpha,\beta\}).\]
Note that here \(\Gamma\) is a congruence subgroup of \(\operatorname{SL}_{2}(\mathbb{Z})\) which contains \(\Gamma_{1}(N)\), in particular we can take \(\Gamma=\Gamma_{0}(N)\).
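Concretely, \(R_{p}\) consists of \(p+1\) matrices, and (ignoring the polynomial part) each acts on the endpoints \(\alpha,\beta\) by a fractional linear transformation. A small illustrative sketch (our own encoding, not the paper's code, with `None` standing for the cusp at infinity):

```python
from fractions import Fraction

# Illustrative sketch (our own encoding): the coset representatives R_p
# and their fractional linear action on the endpoints of a symbol
# {alpha, beta}; None stands for the cusp at infinity.

def coset_reps(p):
    return [(1, r, 0, p) for r in range(p)] + [(p, 0, 0, 1)]

def mobius(g, x):
    a, b, c, d = g
    if x is None:                               # g(infinity) = a/c
        return Fraction(a, c) if c != 0 else None
    den = c * x + d
    return (a * x + b) / den if den != 0 else None

# On endpoints, T_2 {0, inf} = {0, inf} + {1/2, inf} + {0, inf}:
images = [(mobius(g, Fraction(0)), mobius(g, None)) for g in coset_reps(2)]
assert len(coset_reps(2)) == 3                  # |R_p| = p + 1
assert (Fraction(1, 2), None) in images
print(images)
```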
Let \(\mathbb{B}\) be the free abelian group on symbols \(\{\alpha\}\) with \(\alpha\in\mathbb{P}^{1}(\mathbb{Q})\), and we set
\[\mathbb{B}_{k}=\mathbb{Z}[X,Y]_{k-2}\otimes\mathbb{B}.\]
For any fixed finite index subgroup \(\Gamma\), the left action of \(\Gamma\) on \(\mathbb{B}_{k}\) is given by
\[g(P\otimes\{\alpha\})=(gP)\otimes\{g(\alpha)\}\;\;\text{for}\;\;P\otimes\{\alpha\}\in\mathbb{B}_{k},\,g\in\Gamma.\]
Let \(k\geq 2\) be an integer and let \(\Gamma\) be a finite index subgroup. Let \(\mathbb{B}_{k}(\Gamma)\) be the quotient of \(\mathbb{B}_{k}\) by the relations \(x-gx\) for all \(g\in\Gamma\), \(x\in\mathbb{B}_{k}\), and by any torsion. Thus \(\mathbb{B}_{k}(\Gamma)\) is a torsion-free abelian group.
**Definition 2.2**.: The space \(\mathbb{S}_{k}(\Gamma)\) of weight \(k\) cuspidal modular symbols is the kernel of boundary map \(\delta_{k}:\mathbb{M}_{k}(\Gamma)\to\mathbb{B}_{k}(\Gamma)\) which is given by extending the map \(\delta_{k}(P\otimes\{\alpha,\beta\})=P\otimes\{\beta\}-P\otimes\{\alpha\}\) linearly.
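As a sanity check, the boundary map kills the 3-term relation, so it is well defined on modular symbols; a minimal sketch (our own ad-hoc encoding of weight-\(2\) chains, not from the paper):

```python
# Sketch (illustrative): chains are dicts {(alpha, beta): multiplicity},
# and the boundary sends {alpha, beta} to {beta} - {alpha}.

def boundary(chain):
    out = {}
    for (a, b), m in chain.items():
        out[b] = out.get(b, 0) + m
        out[a] = out.get(a, 0) - m
    return {k: v for k, v in out.items() if v != 0}

# the 3-term relation {a,b} + {b,c} + {c,a} is a cycle: its boundary vanishes
assert boundary({("a", "b"): 1, ("b", "c"): 1, ("c", "a"): 1}) == {}
# a single symbol {0, oo} is not in the kernel on the nose
assert boundary({("0", "oo"): 1}) == {"oo": 1, "0": -1}
print("boundary checks passed")
```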
Let \(S_{k}(\Gamma)\) be the space of weight \(k\) cusp forms with respect to \(\Gamma\) equipped with the Petersson inner product:
\[\langle f,g\rangle_{\text{\rm pet}}=\int_{\Gamma\setminus\mathbb{H}}f(z) \overline{g(z)}\,(\text{\rm Im}(z))^{k}\,\mu_{\text{\rm hyp}}(z)\,\text{ for }\,f,g\in S_{k}(\Gamma).\]
Let \(k\geq 2\) be an integer and let \(\Gamma\) be a finite index subgroup of \(\operatorname{SL}_{2}(\mathbb{Z})\). Then the spaces of cuspidal modular symbols and of cusp forms are related as follows (cf. [10, Corollary 8.19]; see also [13, Theorem 0.2] and [6, §1.5])
\[\dim_{\mathbb{C}}\mathbb{S}_{k}(\Gamma,\mathbb{C})=2\dim_{\mathbb{C}}S_{k}( \Gamma).\]
Note that we may identify the space of modular symbols with a certain homology group with coefficients in a locally constant sheaf, following [12, Theorem 4].
Let \(\mathcal{F}_{k-2}\) be the usual locally constant sheaf on \(X_{0}(N)\). We have an identification
\[\mathbb{S}_{k}(\Gamma_{0}(N))\cong H_{1}\left(X_{0}(N),\,\mathcal{F}_{k-2} \right).\]
Note that the number of generators of the Hecke algebra over \(\mathbb{Z}\) acting on the space of modular forms is bounded by a linear function of \(p\) [10, Theorem 9.23]. However, this information alone is not sufficient to determine a set of basis elements with non-vanishing \(L\)-values.
### Modular symbols and modular forms
Let \(k\geq 2\) be an integer and let \(\Gamma\) be a finite index subgroup. Let \(M_{k}(\Gamma)\) denote the space of modular forms of weight \(k\) for \(\Gamma\), and \(S_{k}(\Gamma)\) the space of cusp forms of weight \(k\) for \(\Gamma\). For any cusp form \(f\in S_{k}(\Gamma)\), the integration pairing (see [10], p. 180) is given by
\[\big{\langle}z^{m}\{0,\infty\},f\big{\rangle}=\int_{0}^{\infty}f(z)z^{m}dz\quad \text{for}\quad 0\leq m\leq k-2.\]
Let \(N\) be a positive integer. Let \(f=\sum_{n\geq 1}a_{f}(n)q^{n}\in S_{k}(\Gamma_{0}(N))\), then the \(L\)-series is defined by
\[L(f,s)=\sum_{n\geq 1}\frac{a_{f}(n)}{n^{s}}\quad\text{with}\quad\text{Re}(s) \gg 0.\]
It has an Euler product expansion
\[L(f,s)=\prod_{p\mid N}\left(1-a_{p}(f)\,p^{-s}\right)^{-1}\prod_{p\nmid N}\left(1-a_{p}(f)\,p^{-s}+p^{k-1}p^{-2s}\right)^{-1}.\]
For \(f\in S_{k}(\Gamma_{0}(N))\), we define
\[\Lambda(f,s)=\left(\frac{\sqrt{N}}{2\pi}\right)^{s}\Gamma(s)\,L(f,s) \tag{2.1}\]
and we have \(\Lambda(f,s)=i^{-k}\,\epsilon_{f}\,\Lambda(f,k-s)\), where \(\epsilon_{f}=\pm 1\). Also for \(f\in S_{k}(\Gamma_{0}(N))\) we have the following relation (see [2], equation (14), p. 1385)
\[\big{\langle}z^{m}\{0,\infty\},f\big{\rangle}=\frac{m!\,i^{m+1}}{(2\pi)^{m+1} }\,L(f,m+1)\,\text{ for }\,0\leq m\leq k-2. \tag{2.2}\]
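For completeness, (2.2) follows from the standard Mellin-transform computation: writing \(z=iy\) and using the Fourier expansion \(f(z)=\sum_{n\geq 1}a_{f}(n)e^{2\pi inz}\),

\[\int_{0}^{\infty}f(z)\,z^{m}\,dz=i^{m+1}\int_{0}^{\infty}f(iy)\,y^{m}\,dy=i^{m+1}\sum_{n\geq 1}a_{f}(n)\int_{0}^{\infty}e^{-2\pi ny}\,y^{m}\,dy=\frac{m!\,i^{m+1}}{(2\pi)^{m+1}}\,L(f,m+1),\]

since \(\int_{0}^{\infty}e^{-2\pi ny}\,y^{m}\,dy=\frac{m!}{(2\pi n)^{m+1}}\).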
## 3. Linear dependence of Hecke operators
Suppose the Hecke operators \(T_{1},T_{2},\ldots,T_{D}\) are linearly dependent on \(z^{n}\otimes\mathbf{e}\) for all \(0\leq n\leq 2k-2\). Since the winding element \(\mathbf{e}\) is the image of \(\{0,\infty\}\), there exist scalars \(\alpha_{1},\ldots,\alpha_{D}\), not all zero, with
\[\sum_{i=1}^{D}\alpha_{i}\,T_{i}(z^{n}\otimes\{0,\infty\})=0,\;\; \forall\;\;0\leq n\leq 2k-2.\]
Let \(\mathcal{B}_{2k}(p)\) denote the Hecke basis of \(S_{2k}(\Gamma_{0}(p))\), that is, a basis of \(S_{2k}(\Gamma_{0}(p))\) consisting of Hecke eigenforms. Let \(f\) be a Hecke eigenform. Then by the self-adjointness of the Hecke operators (see [10, Theorem 8.21]) we have,
\[\langle T_{i}(z^{n}\otimes\{0,\infty\}),f\rangle=\langle(z^{n} \otimes\{0,\infty\}),\,T_{i}f\rangle\quad\text{for}\;\;i=1,\ldots,D.\]
Let \(\lambda_{f}(i)\) be the Hecke eigenvalue, i.e., \(T_{i}f=\lambda_{f}(i)f\). Then we have
\[\langle T_{i}(z^{n}\otimes\{0,\infty\}),f\rangle=\lambda_{f}(i) \,\langle(z^{n}\otimes\{0,\infty\}),\,f\rangle\quad\text{for}\;\;i=1,\ldots,D.\]
Now, the linear dependence of the Hecke operators is equivalent to the existence of non-trivial solutions \(\alpha_{1},\ldots,\alpha_{D}\) to the following system of equations:
\[0=\sum_{f\in\mathcal{B}_{2k}(p)}\omega_{f}\,\Big{|}\sum_{i=1}^{D} \alpha_{i}\,\langle T_{i}(z^{n}\otimes\{0,\infty\}),f\rangle\Big{|}^{2} \tag{3.1}\]
with \(0\leq n\leq 2k-2\), and where \(\omega_{f}=\frac{\Gamma(2k-1)}{(4\pi)^{2k-1}\,\langle f,f\rangle_{\text{pet}}}\) denotes the harmonic weight. The equation (3.1) is equivalent to
\[0=\sum_{i=1}^{D}\sum_{j=1}^{D}\alpha_{i}\bar{\alpha}_{j}\sum_{f \in\mathcal{B}_{2k}(p)}\omega_{f}\lambda_{f}(i)\lambda_{f}(j)\big{|}\langle(z ^{n}\otimes\{0,\infty\}),f\rangle\big{|}^{2}.\]
Then by using (2.2), we get
\[0=\sum_{i=1}^{D}\sum_{j=1}^{D}\alpha_{i}\bar{\alpha}_{j}\sum_{f \in\mathcal{B}_{2k}(p)}\omega_{f}\lambda_{f}(i)\lambda_{f}(j)\big{|}L(f,n+1) \big{|}^{2}. \tag{3.2}\]
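The equivalence between (3.1) and linear dependence is elementary linear algebra: since the weights \(\omega_{f}\) are positive, (3.2) is the weighted Gram form of the vectors \(v_{i}=\big{(}\lambda_{f}(i)\,L(f,n+1)\big{)}_{f}\), and it vanishes at some nonzero \(\alpha\) exactly when the \(v_{i}\) are linearly dependent. A toy numerical illustration (the weights and vectors below are invented for the example):

```python
# Toy Gram-matrix view of (3.2): weights w_f and vectors v_i standing in
# for (lambda_f(i) L(f, n+1))_f are invented for illustration.

w = [0.7, 1.1, 0.4]
v0, v1, v2 = (1, 0, 2), (0, 1, 1), (1, 1, 0)
v3 = tuple(a + b for a, b in zip(v0, v1))       # an exact linear dependence

def gram(rows):
    return [[sum(wf * x * y for wf, x, y in zip(w, r, s)) for s in rows]
            for r in rows]

def det3(M):
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
            - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
            + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

# dependent rows: the quadratic form vanishes on a nonzero alpha
assert abs(det3(gram([v0, v1, v3]))) < 1e-9
# independent rows: the form is positive definite, only the trivial solution
assert det3(gram([v0, v1, v2])) > 1e-6
print("Gram-matrix checks passed")
```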
Recall the following Petersson formula (cf. [11, Lemma 1.2]), which works for arbitrary weight \(k\geq 1\). In the following lemma, a basis of \(S_{2k}(\Gamma_{0}(p))\) is called a Hecke basis if each of its elements is an eigenform of the Hecke operators \(T_{n}\) for all \(n\) with \((n,p)=1\).
**Lemma 3.1** (Petersson formula).: _If \(\mathcal{B}_{2k}(p)\) is a Hecke basis of \(S_{2k}(\Gamma_{0}(p))\) then we have_
\[\sum_{f\in\mathcal{B}_{2k}(p)}\omega_{f}\lambda_{f}(r)\lambda_{f} (s)=\delta_{rs}+2\pi i^{2k}\sum_{p|c}\frac{\mathcal{S}(r,s,c)}{c}\,J_{2k-1} \left(\frac{4\pi\sqrt{rs}}{c}\right),\]
_where \(\mathcal{S}(r,s,c)\) denotes the Kloosterman sum and \(J_{\nu}(x)\) denotes the \(J\)-Bessel function of order \(\nu\)._
Note that we use the Petersson formula (Lemma 3.1) many times in this paper. For notational convenience, whenever we apply it, \(S_{\text{main}}\) denotes the term coming from the \(\delta_{rs}\)-part of the formula, and \(S_{\text{off}}\) denotes the remaining terms coming from the Kloosterman sums.
**Lemma 3.2**.: _For \(0\leq n\leq 2k-2\) with \(n\neq k-1\) and \(D^{2}<p^{1-\delta}\), there exists no non-trivial solution \(\alpha_{1},\ldots,\alpha_{D}\) of the system of equations_
\[0=\sum_{i=1}^{D}\sum_{j=1}^{D}\alpha_{i}\bar{\alpha}_{j}\sum_{f\in\mathcal{B}_{2 k}(p)}\omega_{f}\lambda_{f}(i)\lambda_{f}(j)\big{|}L(f,n+1)\big{|}^{2}.\]
Proof.: We prove this by contradiction, so suppose there exists a non-trivial solution. Note that the critical strip for the \(L\)-function \(L(f,s)\) is \(\frac{2k-1}{2}\leq\operatorname{Re}(s)\leq\frac{2k+1}{2}\) (see [3, Chapter 7]); this implies that \(L(f,n+1)\) lies in the critical strip only when \(n=k-1\). Outside the critical strip the \(L\)-function \(L(f,s)\) is bounded, i.e., \(L(f,n+1)=O(1)\) for \(0\leq n<k-1\) and for \(k-1<n\leq 2k-2\).
By using the Petersson formula (Lemma 3.1) in the equation (3.2), we get two parts, \(S_{\text{main}}\) and \(S_{\text{off}}\). Here we have,
\[S_{\text{main}}:=\sum_{i=1}^{D}\sum_{j=1}^{D}\alpha_{i}\bar{ \alpha}_{j}\ \ \delta_{ij}\,\big{|}L(f,n+1)\big{|}^{2}\geq\sum_{i=1}^{D}|\alpha_{i}|^{2}\,c,\]
for some constant \(c>0\). For the non-\(\delta_{rs}\) part we have
\[S_{\text{off}}:=\sum_{i=1}^{D}\sum_{j=1}^{D}\alpha_{i}\bar{ \alpha}_{j}\sum_{p|c}\frac{\mathcal{S}(i,j,c)}{c}\,J_{2k-1}\left(\frac{4\pi \sqrt{ij}}{c}\right)\,\big{|}L(f,n+1)\big{|}^{2}.\]
By using Weil's bound \(\mathcal{S}(i,j,c)\leq(i,j,c)^{\frac{1}{2}}\,\tau(c)\,c^{\frac{1}{2}}\) (see [3, Theorem 4.5]) and the bound \(J_{\nu}(x)\ll x\) for the Bessel function, we get
\[\sum_{p|c}\frac{\mathcal{S}(i,j,c)}{c}\,J_{2k-1}\left(\frac{4\pi \sqrt{ij}}{c}\right)=O\left(p^{-\frac{1}{2}}\,(i,j)^{\frac{1}{2}}\,(ij)^{ \frac{1}{2}}\right).\]
This implies
\[S_{\text{off}}\ll p^{-\frac{1}{2}}\sum_{i=1}^{D}\sum_{j=1}^{D}|\alpha_{i}|\,|\alpha_{j}|\,(i,j)^{\frac{1}{2}}\,(ij)^{\frac{1}{2}}.\]
For \(D^{2}<p^{1-\delta}\), the term \(S_{\text{off}}\) is not large enough to cancel the term \(S_{\text{main}}\), a contradiction. This completes the proof.
We now turn to the case \(n=k-1\).
**Lemma 3.3**.: _Let \(f\in S_{2k}(\Gamma_{0}(p))\), and let \(L(f,s)=\sum_{n\geq 1}\frac{a_{f}(n)}{n^{s}}\) be the \(L\)-function associated with the cusp form \(f\), where \(a_{f}(n)=\lambda_{f}(n)\,n^{\frac{2k-1}{2}}\). Then we have_
\[\big{|}L(f,k)\big{|}^{2}=\frac{2}{\left((k-1)!\right)^{2}}\sum_{ l,\,m\geq 1}G_{k}\left(\frac{lm}{p}\right)\frac{\lambda_{f}(l)\lambda_{f}(m)}{ \sqrt{lm}},\]
_where \(G_{k}(x)\) is given by_
\[G_{k}(x)=\frac{1}{2\pi i}\int_{\operatorname{Re}(t)=3/4}\frac{ \Gamma(k+t)^{2}}{(2\pi)^{2t}x^{t}}\frac{dt}{t}.\]
Proof.: Although the proof closely resembles [11, Lemma 1.1], we include it here in order to keep the paper self-contained.
For values of \(s\) outside the critical strip, the \(L\)-function \(L(f,s)\) satisfies the functional equation
\[\Lambda(f,s)=i^{-2k}\epsilon_{f}\Lambda(f,2k-s),\]
where we have the expression for \(\Lambda(f,s)\) as given in equation (2.1), and \(\epsilon_{f}\) represents a constant that can take the values \(\pm 1\).
For \(s=k+t\) with \(t>\frac{1}{2}\), the \(L\)-function \(L(f,k+t)\) satisfies this functional equation, i.e., we have

\[\left(\frac{\sqrt{p}}{2\pi}\right)^{k+t}\Gamma(k+t)\,L(f,k+t)=i^{-2k}\epsilon_{f}\left(\frac{\sqrt{p}}{2\pi}\right)^{k-t}\Gamma(k-t)\,L(f,k-t)\] \[\implies \left(\frac{p}{4\pi^{2}}\right)^{t}\Gamma(k+t)^{2}L(f,k+t)^{2}=\left(\frac{p}{4\pi^{2}}\right)^{-t}\Gamma(k-t)^{2}L(f,k-t)^{2}.\]
This implies that \(\left(\frac{p}{4\pi^{2}}\right)^{t}\Gamma(k+t)^{2}L(f,k+t)^{2}\) is an even function of \(t\). Hence we can integrate this function against \(\frac{dt}{t}\) along \(\operatorname{Re}(t)=\frac{3}{4}\) and shift to \(\operatorname{Re}(t)=-\frac{3}{4}\), picking up the residue at \(t=0\). From the definition we have
\[L(f,k+t)^{2}=\sum_{l,\,m\geq 1}\frac{\lambda_{f}(l)\lambda_{f}(m)}{\sqrt{ lm}}(lm)^{-t}.\]
Now, since \(L(f,k)^{2}=|L(f,k)|^{2}\), we have
\[\Gamma(k)^{2}\cdot|L(f,k)|^{2}= \operatorname{Res}_{t=0}\left(\frac{p}{4\pi^{2}}\right)^{t}\Gamma(k+t)^{2}L(f,k+t)^{2}\,\frac{1}{t}\] \[= \frac{1}{\pi i}\int_{\operatorname{Re}(t)=\frac{3}{4}}\left(\frac{p}{4\pi^{2}}\right)^{t}\Gamma(k+t)^{2}L(f,k+t)^{2}\,\frac{dt}{t}\] \[= \frac{1}{\pi i}\sum_{l,\,m\geq 1}\frac{\lambda_{f}(l)\lambda_{f}(m)}{\sqrt{lm}}\int_{\operatorname{Re}(t)=\frac{3}{4}}\left(\frac{p}{4\pi^{2}}\right)^{t}\frac{\Gamma(k+t)^{2}}{(lm)^{t}}\,\frac{dt}{t}\] \[= 2\sum_{l,\,m\geq 1}G_{k}\left(\frac{lm}{p}\right)\frac{\lambda_{f}(l)\lambda_{f}(m)}{\sqrt{lm}}.\]
This completes the proof.
**Lemma 3.4**.: _For \(x\geq 1\) we have \(G_{k}(x)\ll e^{-c\sqrt{x}}\) with a fixed constant \(c>0\)._
Proof.: We shift the contour of the integral to \(\operatorname{Re}(t)=\sqrt{x}\) and by using the Stirling formula from [4], i.e., \(\Gamma(s)=\sqrt{2\pi}\;\;s^{s-\frac{1}{2}}\,e^{-s}\left(1+O\left(\frac{1}{|s|} \right)\right)\), in the integral expression of \(G_{k}(x)\), we get
\[G_{k}(x)\ll\int_{\operatorname{Re}(t)=\sqrt{x}}\frac{(k+t)^{2k+2t-1}}{(2\pi) ^{2t}x^{t}}\,e^{-2(k+t)}\,\frac{dt}{t}\ll e^{-c\sqrt{x}}\]
with a fixed constant \(c>0\).
**Remark 3.5**.: By shifting the contour to the left of the origin one can easily show that the function \(G_{k}(x)\) is bounded for \(0<x<1\).
**Proposition 3.6**.: _The Hecke operators \(T_{1},T_{2},\ldots,T_{D}\) are linearly dependent on \(z^{k-1}\otimes\{0,\infty\}\) if and only if there exists a non-trivial solution to the equation_
\[\sum_{d_{1},d_{2}<D}\frac{1}{\sqrt{d_{1}d_{2}}}\sum_{Id_{1},Jd_{2}<D}\alpha_{Id _{1}}\bar{\alpha}_{Jd_{2}}\sum_{L,M\geq 1}\frac{G_{k}(LMd_{1}d_{2}/p)}{\sqrt{LM}} \sum_{f\in\mathcal{B}_{2k}(p)}\omega_{f}\lambda_{f}(IL)\lambda_{f}(JM)=0.\]
Proof.: We have already seen (by substituting \(n=k-1\) in (3.2)) that the linear dependency of \(T_{1},T_{2},\ldots,T_{D}\) on \(z^{k-1}\otimes\{0,\infty\}\) is equivalent to the existence of a non-trivial solution of the following equation
\[0=\sum_{i=1}^{D}\sum_{j=1}^{D}\alpha_{i}\bar{\alpha}_{j}\sum_{f\in\mathcal{B}_ {2k}(p)}\omega_{f}\lambda_{f}(i)\lambda_{f}(j)\big{|}L(f,k)\big{|}^{2}.\]
Using the value of \(\big{|}L(f,k)\big{|}^{2}\) from Lemma 3.3, we get
\[0=\sum_{i=1}^{D}\sum_{j=1}^{D}\alpha_{i}\bar{\alpha}_{j}\sum_{l,\,m\geq 1}G_{k} \left(\frac{lm}{p}\right)\frac{1}{\sqrt{lm}}\sum_{f\in\mathcal{B}_{2k}(p)} \omega_{f}\lambda_{f}(i)\lambda_{f}(j)\lambda_{f}(l)\lambda_{f}(m).\]
Substitute \(i=Id_{1},\,j=Jd_{2}\) where \(d_{1}=(i,l),\,d_{2}=(j,m)\) and we also substitute \(Ld_{1}=l,\,Md_{2}=m\). Then we get
\[0=\sum_{d_{1},d_{2}<D}\frac{1}{\sqrt{d_{1}d_{2}}}\sum_{Id_{1},Jd _{2}<D}\alpha_{Id_{1}}\bar{\alpha}_{Jd_{2}}\sum_{L,M\geq 1}\frac{G_{k}(LMd_{1}d_{ 2}/p)}{\sqrt{LM}}\] \[\times\sum_{f\in\mathcal{B}_{2k}(p)}\omega_{f}\lambda_{f}(Id_{1} )\lambda_{f}(Jd_{2})\lambda_{f}(Ld_{1})\lambda_{f}(Md_{2}).\]
Using the following multiplicative relation for the eigenvalues of the Hecke operators,

\[\lambda_{f}(r)\lambda_{f}(s)=\sum_{d|(r,s)}\lambda_{f}\left(\frac{rs}{d^{2}}\right),\]

which is valid when \((r,s)\) has no common factor with the level. Since the level \(p\) is prime, this restriction can be avoided. Hence we have
\[0=\sum_{d_{1},d_{2}<D}\frac{1}{\sqrt{d_{1}d_{2}}}\sum_{Id_{1},Jd_{2}<D}\alpha _{Id_{1}}\bar{\alpha}_{Jd_{2}}\sum_{L,M\geq 1}\frac{G_{k}(LMd_{1}d_{2}/p)}{ \sqrt{LM}}\sum_{f\in\mathcal{B}_{2k}(p)}\omega_{f}\lambda_{f}(IL)\lambda_{f}( JM). \tag{3.3}\]
This completes the proof.
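The multiplicativity used above can be spot-checked with known data: for the unnormalized Fourier coefficients \(a_{f}(n)\) of a level-one eigenform of weight \(\kappa\), the relation reads \(a_{f}(r)a_{f}(s)=\sum_{d\mid(r,s)}d^{\kappa-1}a_{f}(rs/d^{2})\), and the powers of \(d\) disappear after normalization. The snippet below (illustrative) verifies this for the weight-\(12\) form \(\Delta\), whose coefficients are the standard Ramanujan tau values:

```python
import math

# Hecke multiplicativity for the Ramanujan tau function (weight kappa = 12):
# tau(r) tau(s) = sum over d | gcd(r, s) of d^11 * tau(r s / d^2).
# The tau values below are standard (hardcoded, level one, form Delta).

ram_tau = {1: 1, 2: -24, 3: 252, 4: -1472, 5: 4830, 6: -6048,
           8: 84480, 9: -113643, 12: -370944, 16: 987136}

def hecke_rhs(r, s, kappa=12):
    g = math.gcd(r, s)
    return sum(d ** (kappa - 1) * ram_tau[r * s // d ** 2]
               for d in range(1, g + 1) if g % d == 0)

for r, s in [(2, 4), (3, 3), (2, 6), (4, 4)]:
    assert ram_tau[r] * ram_tau[s] == hecke_rhs(r, s)
print("Hecke relation verified for Ramanujan tau")
```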
Then we use the Petersson formula in equation (3.3). Here we have
\[S_{\text{main}}=\sum_{d_{1},d_{2}<D}\frac{1}{\sqrt{d_{1}d_{2}}}\sum_{Id_{1}, Jd_{2}<D}\alpha_{Id_{1}}\bar{\alpha}_{Jd_{2}}\sum_{L,M\geq 1}\frac{G_{k}(LMd_{1}d_{ 2}/p)}{\sqrt{LM}}\ \delta_{IL\,JM}\]
and
\[S_{\text{off}}=\sum_{d_{1},\,d_{2}<D}\frac{1}{\sqrt{d_{1}d_{2}}} \sum_{Id_{1},\,Jd_{2}<D}\alpha_{Id_{1}}\bar{\alpha}_{Jd_{2}}\sum_{L,M\geq 1} \frac{G_{k}(LMd_{1}d_{2}/p)}{\sqrt{LM}}\] \[\times\left(2\pi i^{2k}\sum_{p|c}\frac{\mathcal{S}(IL,JM,c)}{c} \,J_{2k-1}\left(\frac{4\pi\sqrt{ILJM}}{c}\right)\right)\]
## 4. Lower bound for the main term
In this section, we establish a lower bound for \(S_{\text{main}}\). Recall that

\[S_{\text{main}}=\sum_{d_{1},d_{2}<D}\frac{1}{\sqrt{d_{1}d_{2}}}\sum_{Id_{1},Jd_{2}<D}\alpha_{Id_{1}}\bar{\alpha}_{Jd_{2}}\sum_{IL=JM}\frac{G_{k}(LMd_{1}d_{2}/p)}{\sqrt{LM}}.\]
By employing the same change of variables as in [11, p. 352], we obtain
\[S_{\text{main}}=\sum_{L=1}^{\infty}L\sum_{U,\,V<\frac{D}{L}}\tau(U)\tau(V)\,x_{UL}\bar{x}_{VL}\sum_{T\mid L}\frac{\mu(T)}{T}\sum_{A=1}^{\infty}\frac{G_{k}(A^{2}T^{2}UV/p)}{A},\]
where \(x_{c}=\alpha_{c}/\sqrt{c}\) and \(\tau(u)\) is the divisor function, i.e., \(\tau(u):=\sum_{d\mid u}1\).
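This change of variables is the standard reindexing of pairs by their gcd: every pair \((i,l)\) factors uniquely as \((Id_{1},Ld_{1})\) with \(d_{1}=(i,l)\) and \((I,L)=1\). This bijection can be checked by brute force (illustrative snippet):

```python
import math
from collections import Counter

# Brute-force check that (i, l) <-> (d, I, L) with i = I d, l = L d,
# d = gcd(i, l) and gcd(I, L) = 1 is a bijection on pairs below N.

N = 30
cnt = Counter()
for d in range(1, N):
    for I in range(1, N):
        for L in range(1, N):
            if I * d < N and L * d < N and math.gcd(I, L) == 1:
                cnt[(I * d, L * d)] += 1

assert all(m == 1 for m in cnt.values())        # each pair hit exactly once
assert len(cnt) == (N - 1) ** 2                 # and every pair is hit
print("gcd reindexing is a bijection for N =", N)
```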
**Lemma 4.1**.: _With the above notations, we have_
\[\sum_{A=1}^{\infty}\frac{G_{k}(A^{2}/X)}{A}=\frac{((k-1)!)^{2}}{2}\log X+c_{0} +O\left(\frac{1}{X}\right),\]
_where \(c_{0}\) is a fixed constant._
Proof.: From the integral representation of the function \(G_{k}(x)\), we have
\[\sum_{A=1}^{\infty}\frac{G_{k}(A^{2}/X)}{A} =\frac{1}{2\pi i}\sum_{A=1}^{\infty}\int_{\operatorname{Re}(t)= \frac{3}{4}}\frac{\Gamma(k+t)^{2}}{(2\pi)^{2t}}\frac{X^{t}}{A^{2t+1}}\frac{dt} {t}\] \[=\frac{1}{2\pi i}\int_{\operatorname{Re}(t)=\frac{3}{4}}\frac{ \Gamma(k+t)^{2}}{(2\pi)^{2t}}X^{t}\,\zeta(2t+1)\frac{dt}{t}.\]
Now by the contour shift, for any \(\epsilon>0\) we have
\[\frac{1}{2\pi i}\int_{\operatorname{Re}(t)=\frac{3}{4}}\frac{ \Gamma(k+t)^{2}}{(2\pi)^{2t}}X^{t}\,\zeta(2t+1)\frac{dt}{t} =\frac{1}{2\pi i}\int_{\operatorname{Re}(t)=-k-\epsilon}\frac{ \Gamma(k+t)^{2}}{(2\pi)^{2t}}X^{t}\zeta(2t+1)\frac{dt}{t}\] \[+\operatorname{Res}_{t=-k}\left(\frac{\Gamma(k+t)^{2}}{(2\pi)^{2t }}\frac{X^{t}\zeta(2t+1)}{t}\right)\] \[+\operatorname{Res}_{t=0}\left(\frac{\Gamma(k+t)^{2}}{(2\pi)^{2t }}\frac{X^{t}\zeta(2t+1)}{t}\right).\]
Using the residue formula, we compute
\[\operatorname{Res}_{t=0}\left(\frac{\Gamma(k+t)^{2}}{(2\pi)^{2t} }\frac{X^{t}\zeta(2t+1)}{t}\right) =\lim_{t\to 0}\left(\frac{d}{dt}\left(\frac{t^{2}\Gamma(k+t)^{2}}{(2 \pi)^{2t}}\frac{X^{t}\zeta(2t+1)}{t}\right)\right)\] \[=\frac{\left((k-1)!\right)^{2}}{2}\log X+c_{0};\]
with a fixed \(c_{0}\), and this completes the proof.
**Proposition 4.2**.: _For \(D^{2}<p^{1-\delta}\), we get_
\[S_{\text{main}}\gg\log p\sum_{L}\phi(L)|y_{L}|^{2},\]

_where \(\phi(L)\) is the Euler totient function and \(y_{L}=\sum_{UL<D}\tau(U)\,x_{UL}\)._
Proof.: Recall that
\[S_{\text{main}}=\sum_{L=1}^{\infty}L\sum_{U,\,V<\frac{D}{L}}\tau(U)\tau(V)\,x_{UL}\bar{x}_{VL}\sum_{T|L}\frac{\mu(T)}{T}\sum_{A=1}^{\infty}\frac{G_{k}(A^{2}T^{2}UV/p)}{A}.\]
Then by using Lemma 4.1 and the computations from [11, pp. 353-354], we get

\[S_{\text{main}}\geq\sum_{L=1}^{\infty}\phi(L)|y_{L}|^{2}\left(\log\left(\frac{p^{\frac{((k-1)!)^{2}}{2}}}{D}\right)+c_{0}-c_{1}\right),\]
where \(y_{L}=\sum_{UL<D}\tau(U)\,x_{UL}\). For \(D^{2}<p^{1-\delta}\), we have
\[S_{\text{main}}\geq\sum_{L=1}^{\infty}\phi(L)|y_{L}|^{2}\bigg{(}\frac{((k-1)!)^{2}}{2}-\frac{1}{2}+\frac{\delta}{2}\bigg{)}\log p+\sum_{L=1}^{\infty}\phi(L)|y_{L}|^{2}(c_{0}-c_{1}).\]
Since \(\frac{((k-1)!)^{2}}{2}-\frac{1}{2}+\frac{\delta}{2}>0\), we get
\[S_{\text{main}}\gg\log p\sum_{L=1}^{\infty}\phi(L)|y_{L}|^{2},\]
where the implied constant depends on \(k\) and on \(\delta\).
## 5. Upper bound of Kloosterman sum part
Here we consider the non-\(\delta_{rs}\) part which involves the Kloosterman sums. Recall that
\[S_{\rm off}=\sum_{d_{1},\,d_{2}<D}\frac{1}{\sqrt{d_{1}d_{2}}} \sum_{Id_{1},\,Jd_{2}<D}\alpha_{Id_{1}}\bar{\alpha}_{Jd_{2}}\sum_{L,M\geq 1 }\frac{G_{k}(LMd_{1}d_{2}/p)}{\sqrt{LM}}\] \[\times\left(2\pi i^{2k}\sum_{p|c}\frac{\mathcal{S}(IL,JM,c)}{c} \,J_{2k-1}\left(\frac{4\pi\sqrt{ILJM}}{c}\right)\right)\]
Making the substitutions \(x_{Id_{1}}=\frac{\alpha_{Id_{1}}}{\sqrt{Id_{1}}}\) and \(\bar{x}_{Jd_{2}}=\frac{\bar{\alpha}_{Jd_{2}}}{\sqrt{Jd_{2}}}\), we get
\[S_{\rm off}=\sum_{d_{1},\,d_{2}<D} \sum_{Id_{1},\,Jd_{2}<D}x_{Id_{1}}\bar{x}_{Jd_{2}}\sqrt{IJ}\sum_{ L,M\geq 1}\frac{G_{k}(LMd_{1}d_{2}/p)}{\sqrt{LM}}\] \[\times\left(2\pi i^{2k}\sum_{p|c}\frac{\mathcal{S}(IL,JM,c)}{c}\, J_{2k-1}\left(\frac{4\pi\sqrt{ILJM}}{c}\right)\right).\]
Now for our convenience we write \(S_{\rm off}=S_{\rm off}^{c<p^{2}}+S_{\rm off}^{c\geq p^{2}}\), where we define
\[S_{\rm off}^{c<p^{2}}:=\sum_{d_{1},\,d_{2}<D} \sum_{Id_{1},\,Jd_{2}<D}x_{Id_{1}}\bar{x}_{Jd_{2}}\sqrt{IJ}\sum_{ L,M\geq 1}\frac{G_{k}(LMd_{1}d_{2}/p)}{\sqrt{LM}}\] \[\times\left(2\pi i^{2k}\sum_{\begin{subarray}{c}p|c\\ c<p^{2}\end{subarray}}\frac{\mathcal{S}(IL,JM,c)}{c}\,J_{2k-1}\left(\frac{4 \pi\sqrt{ILJM}}{c}\right)\right) \tag{5.1}\]
and
\[\begin{split} S^{c\geq p^{2}}_{\text{off}}&:=\sum_{d_{1},\,d_{2}<D}\ \ \sum_{Id_{1},\,Jd_{2}<D}x_{Id_{1}}\bar{x}_{Jd_{2}}\sqrt{IJ}\sum_{L,M\geq 1} \frac{G_{k}(LMd_{1}d_{2}/p)}{\sqrt{LM}}\\ &\times\left(2\pi i^{2k}\sum_{\begin{subarray}{c}p|c\\ c\geq p^{2}\end{subarray}}\frac{\mathcal{S}(IL,JM,c)}{c}\,J_{2k-1}\left( \frac{4\pi\sqrt{ILJM}}{c}\right)\right).\end{split} \tag{5.2}\]
**Lemma 5.1**.: _With the above notations, for \(D^{2}<p^{1-\delta}\) we have_
\[S^{c\geq p^{2}}_{\text{off}}\ll p^{-1+\epsilon}\]
_for any given \(\epsilon>0\)._
Proof.: We recall Weil's bound for the Kloosterman sum, which says
\[\mathcal{S}(IL,JM,c)\leq(IL,JM,c)^{\frac{1}{2}}\,\tau(c)\,c^{\frac{1}{2}}. \tag{5.3}\]
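Both (5.3) and the Kloosterman sums themselves are easy to check numerically from the definition \(\mathcal{S}(r,s,c)=\sum_{(a,c)=1}e\big{(}\frac{ar+\bar{a}s}{c}\big{)}\); the snippet below (our own, stdlib only) verifies Weil's bound on a few small moduli:

```python
import cmath, math

# Numerical spot-check (illustrative) of Weil's bound.  Here
# S(r, s, c) = sum over a with (a, c) = 1 of e((a r + a^{-1} s)/c),
# and tau(c) is the number of divisors of c.

def kloosterman(r, s, c):
    total = 0j
    for a in range(1, c):
        if math.gcd(a, c) == 1:
            abar = pow(a, -1, c)                # inverse of a modulo c
            total += cmath.exp(2j * cmath.pi * (a * r + abar * s) / c)
    return total

def tau(n):
    return sum(1 for d in range(1, n + 1) if n % d == 0)

for r, s, c in [(1, 1, 5), (2, 3, 7), (1, 4, 12)]:
    S = kloosterman(r, s, c)
    bound = math.gcd(r, math.gcd(s, c)) ** 0.5 * tau(c) * c ** 0.5
    assert abs(S.imag) < 1e-9                   # Kloosterman sums are real
    assert abs(S) <= bound + 1e-9
print("Weil bound verified on sample moduli")
```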
Also recall that the \(J\)-Bessel function satisfies the bound
\[J_{2k-1}\left(\frac{4\pi\sqrt{ILJM}}{c}\right)\ll\left(\frac{4\pi\sqrt{IJLM}}{ c}\right). \tag{5.4}\]
Using (5.3) and (5.4) in the expression (5.2), we get
\[\begin{split} S^{c\geq p^{2}}_{\text{off}}&\ll \sum_{d_{1},\,d_{2}<D}\ \ \sum_{Id_{1},\,Jd_{2}<D}|x_{Id_{1}}|\,|\bar{x}_{Jd_{2}}|\,IJ\sum_{L,M\geq 1 }G_{k}(LMd_{1}d_{2}/p)\\ &\times\sum_{\begin{subarray}{c}p|c\\ c\geq p^{2}\end{subarray}}c^{-\frac{3}{2}}\,(IL,JM,c)^{\frac{1}{2}}\,\tau(c). \end{split}\]
Now using the fact that \(G_{k}(x)\ll e^{-c\sqrt{x}}\) (see Lemma 3.4), we have
\[S^{c\geq p^{2}}_{\text{off}}\ll p^{-1+\epsilon}\sum_{d_{1},\,d_{2}<D}\ \sum_{Id_{1},\,Jd_{2}<D}|x_{Id_{1}}|\,|\bar{x}_{Jd_{2}}|\,IJ\]
for any given \(\epsilon>0\). This completes the proof.
**Lemma 5.2**.: _With the above notations, for \(D^{2}<p^{1-\delta}\) we have_
\[S^{c<p^{2}}_{\text{off}}\ll p^{\epsilon}\]
_for any given \(\epsilon>0\)._
Proof.: Without loss of generality, we assume that the sums on \(L\) and \(M\) are over dyadic blocks of size \(\mathscr{L}\), \(\mathscr{M}\), respectively with \(\mathscr{L}\mathscr{M}\ll\frac{p^{1+\epsilon}}{d_{1}d_{2}}\).
From the definition of the Kloosterman sum, we recall that (see [3, p. 55])
\[\mathcal{S}(IL,JM,c)=\sum_{(a,c)=1}e\left(\frac{aIL+\bar{a}JM}{c}\right), \tag{5.5}\]
where \(e(x):=e^{2\pi ix}\) and \(a\bar{a}\equiv 1\,(\operatorname{mod}c)\). The \(J\)-Bessel function can be written as (see [3, p. 50])
\[J_{2k-1}\left(\frac{4\pi\sqrt{ILJM}}{c}\right)=\sum_{\ell=0}^{\infty}\frac{(-1)^{\ell}}{\ell!\,\Gamma(\ell+2k)}\left(\frac{2\pi\sqrt{IJLM}}{c}\right)^{2k+2\ell-1}. \tag{5.6}\]
Now, using (5.5) and (5.6) in the expression (5.1), we get
\[S_{\text{off}}^{c<p^{2}}= \int_{\operatorname{Re}(t)=\frac{3}{4}}\frac{i^{2k-1}\Gamma(k+t) ^{2}}{t}\sum_{\ell=0}^{\infty}a_{\ell}\,p^{t}\sum_{d_{1},\,d_{2}<D}\,\,\,\sum_{ Id_{1},\,Jd_{2}<D}x_{Id_{1}}\bar{x}_{Jd_{2}}(IJ)^{k+\ell}\] \[\times(d_{1}d_{2})^{-t}\sum_{\begin{subarray}{c}p|c\\ c<p^{2}\end{subarray}}\frac{1}{c^{2k+2\ell}}\sum_{(a,c)=1}\,\,\sum_{L,\,M}( LM)^{k+\ell-1-t}\,\,\,e\left(\frac{aIL+\bar{a}JM}{c}\right)dt,\]
where \(a_{\ell}=\frac{(-1)^{\ell}}{\ell!\,\Gamma(\ell+2k)}\). Then by using [11, Lemma 3.3 and Lemma 3.4], we can write the inner sum as
\[\sum_{(a,c)=1}\,\,\,\sum_{L,\,M}(LM)^{k+\ell-1-t}\,\,\,e\left( \frac{aIL+\bar{a}JM}{c}\right)\] \[=\sum_{(a,c)=1}\,\,\,\sum_{L=\mathscr{L}}^{2\mathscr{L}}L^{k+ \ell-1-t}\,\,e\left(\frac{aIL}{c}\right)\sum_{M=\mathscr{M}}^{2\mathscr{M}}M^ {k+\ell-1-t}\,\,\,e\left(\frac{\bar{a}JM}{c}\right)\] \[\ll\left(\mathscr{L}\mathscr{M}\right)^{k+\ell-1-\operatorname{ Re}(t)}\sum_{(a,c)=1}\min\left(\mathscr{L},\frac{c^{\epsilon}(1+|k+\ell-1-t|)}{ \lfloor aI/c\rfloor}\right)\min\left(\mathscr{M},\frac{c^{\epsilon}(1+|k+\ell- 1-t|)}{\lfloor\bar{a}J/c\rfloor}\right)\] \[\ll\left(\mathscr{L}\mathscr{M}\right)^{k+\ell-1-\operatorname{ Re}(t)}\left(1+|k+\ell-1-t|\right)^{2}(\mathscr{L}\mathscr{M}+c)c^{\epsilon}\] \[\ll\left(\frac{p^{1+\epsilon}}{d_{1}d_{2}}\right)^{k+\ell-1- \operatorname{Re}(t)}\left(\frac{p^{1+\epsilon}}{d_{1}d_{2}}+c\right)c^{ \epsilon}\left(1+|k+\ell-1-t|\right)^{2}\]
for any \(\epsilon>0\). In the third line \(\lfloor aI/c\rfloor\) denotes the distance from \(aI/c\) to the nearest integer. Hence we get
\[S_{\text{off}}^{c<p^{2}} \ll\int_{\operatorname{Re}(t)=\frac{3}{4}}\frac{i^{2k-1}\Gamma(k +t)^{2}}{t}\sum_{\ell=0}^{\infty}a_{\ell}\,p^{t}\sum_{d_{1},\,d_{2}<D}\,\,\, \sum_{Id_{1},\,Jd_{2}<D}x_{Id_{1}}\bar{x}_{Jd_{2}}(IJ)^{k+\ell} \tag{5.7}\] \[\times(d_{1}d_{2})^{-t}\sum_{\begin{subarray}{c}p|c\\ c<p^{2}\end{subarray}}\frac{1}{c^{2k+2\ell}}\left(\frac{p^{1+\epsilon}}{d_{1} d_{2}}\right)^{k+\ell-1-\operatorname{Re}(t)}\left(\frac{p^{1+\epsilon}}{d_{1}d_{2}}+c \right)c^{\epsilon}\left(1+|k+\ell-1-t|\right)^{2}dt.\]
For each \(\ell\) and \(t\) the R.H.S of (5.7) is bounded by \(\Gamma(k+t)^{2}\left(1+|k+\ell-1-t|\right)^{2}\) times the term
\[p^{-k-\ell+\epsilon}D^{\epsilon}\left(\sum_{L<D}|y_{L}|L^{\ell+ k}\right)^{2} \ll p^{-k-\ell+\epsilon}D^{\epsilon}\left(\sum_{L<D}\phi(L)|y_{L}|^{2} \right)\left(\sum_{L<D}\frac{L^{2\ell+2k}}{\phi(L)}\right) \tag{5.8}\] \[\ll(pD)^{\epsilon}\left(\frac{D^{2}}{p}\right)^{k+\ell}\left(\sum _{L<D}\phi(L)|y_{L}|^{2}\right).\]
For \(D^{2}<p^{1-\delta}\), the R.H.S of (5.8) is bounded by \(p^{\frac{3\epsilon}{2}}p^{-\delta(\ell+k+\frac{\epsilon}{2})}\left(\sum_{L<D} \phi(L)|y_{L}|^{2}\right)\), and by summing over \(\ell\) and integrating over \(\operatorname{Re}(t)=3/4\), we see that the term \(S_{\text{off}}^{c<p^{2}}\) is bounded by \(p^{-\frac{1}{2}+\epsilon}\) for sufficiently large primes \(p\).
## 6. Main results
Proof of theorem 1.1.: The linear dependency of the Hecke operators \(T_{1},T_{2},\ldots,T_{D}\) is equivalent to saying that there exists a non-trivial solution \(\alpha_{1},\ldots,\alpha_{D}\) of
\[\sum_{i=1}^{D}\sum_{j=1}^{D}\alpha_{i}\bar{\alpha}_{j}\sum_{f\in\mathcal{B}_{2k }(p)}\omega_{f}\lambda_{f}(i)\lambda_{f}(j)\big{|}L(f,n+1)\big{|}^{2}=0 \tag{6.1}\]
for \(0\leq n\leq 2k-2\). That is, the Hecke operators \(T_{1},T_{2},\ldots,T_{D}\) act linearly independently on \(z^{n}\otimes\{0,\infty\}\) if and only if there does not exist any non-trivial solution of equation (6.1) for each \(n\). With our notations we have
\[\sum_{i=1}^{D}\sum_{j=1}^{D}\alpha_{i}\bar{\alpha}_{j}\sum_{f\in\mathcal{B}_{2 k}(p)}\omega_{f}\lambda_{f}(i)\lambda_{f}(j)\big{|}L(f,n+1)\big{|}^{2}=S_{\rm off }+S_{\rm main}.\]
Now, for \(n=k-1\) the remainder term \(S_{\rm off}\) is not large enough to cancel the main term \(S_{\rm main}\). This implies that, for \(D^{2}<p^{1-\delta}\), there are no non-trivial solutions \(\alpha_{1},\ldots,\alpha_{D}\) of equation (6.1) for \(n=k-1\). Hence, the Hecke operators \(T_{1},T_{2},\ldots,T_{D}\) are linearly independent on the cycle \(z^{k-1}\otimes\{0,\infty\}\). For the remaining part of the proof we use Lemma 3.2, which implies that there are no non-trivial solutions \(\alpha_{1},\ldots,\alpha_{D}\) of equation (6.1) with \(0\leq n<k-1\) and \(k-1<n\leq 2k-2\) when \(D^{2}<p^{1-\delta}\). Hence, the Hecke operators \(T_{1},T_{2},\ldots,T_{D}\) are linearly independent on the cycles \(z^{n}\otimes\{0,\infty\}\) for all \(0\leq n\leq 2k-2\).
The following lemma is the number field analogue of [1, Lemma 7.12], which also holds in the function field setting.
**Lemma 6.1**.: _The \(\mathbb{Z}\)-module \(\bigoplus_{n=0}^{2k-2}\mathbb{T}(z^{n}\otimes\{0,\infty\})\) is free of rank_
\[|\{f\in\mathcal{B}_{2k}(p)\mid L(f,n)\neq 0\;\;\text{for}\;\;0\leq n\leq 2k-2\}|.\]
Proof.: It is already clear that \(\bigoplus_{n=0}^{2k-2}\mathbb{T}(z^{n}\otimes\{0,\infty\})\) is a torsion free finite type \(\mathbb{Z}\)-module because each \(\mathbb{Z}\)-module \(\mathbb{T}(z^{n}\otimes\{0,\infty\})\) is torsion free of finite type. Let
\[\operatorname{Ann}(z^{n}\otimes\{0,\infty\})=\{T\in\mathbb{T}\mid T(z^{n} \otimes\{0,\infty\})=0\}\]
be the annihilator ideal of \(z^{n}\otimes\{0,\infty\}\). Then we have the isomorphism
\[\mathbb{T}/\bigoplus_{n=0}^{2k-2}\operatorname{Ann}(z^{n}\otimes\{0,\infty\}) \cong\bigoplus_{n=0}^{2k-2}\mathbb{T}(z^{n}\otimes\{0,\infty\}).\]
So, we calculate the rank of the quotient \(\mathbb{T}/\bigoplus_{n=0}^{2k-2}\operatorname{Ann}(z^{n}\otimes\{0,\infty\})\). For \(f\in\mathcal{B}_{2k}(p)\), let \([f]\) be the orbit of \(f\) and \(\operatorname{Ann}_{[f]}\) be the annihilator ideal of \(f\) in \(\mathbb{T}\). Let \(\mathcal{E}_{n}\) be the set of orbits \([f]_{n}\) such that \(L(f,n)\neq 0\) for \(0\leq n\leq 2k-2\). We start by showing the following identity:
\[\bigoplus_{n=0}^{2k-2}\bigcap_{[f]\in\mathcal{E}_{n}}\operatorname{Ann}_{[f]_ {n}}=\bigoplus_{n=0}^{2k-2}\operatorname{Ann}(z^{n}\otimes\{0,\infty\}). \tag{6.2}\]
Let's consider an element \(T\) from the L.H.S of (6.2). Now, for every \(f\) belonging to the set \(\mathcal{B}_{2k}(p)\) such that \(L(f,n+1)\neq 0\) for \(0\leq n\leq 2k-2\), we have the following:
\[\langle T(z^{n}\otimes\{0,\infty\}),f\rangle=\langle z^{n}\otimes\{0,\infty\},Tf\rangle=0.\]
If \(f\) belongs to the set \(\mathcal{B}_{2k}(p)\) and \(L(f,n+1)=0\), then from (2.2) we get
\[\langle z^{n}\otimes\{0,\infty\},f\rangle=0.\]
Therefore \(\langle T(z^{n}\otimes\{0,\infty\}),f\rangle=0\) for every \(f\in\mathcal{B}_{2k}(p)\). Hence \(T(z^{n}\otimes\{0,\infty\})=0\), and this implies that \(T\) is an element of the R.H.S of (6.2).
Now, for the other inclusion, consider an element \(T\) from the R.H.S of (6.2) and take \(f\in\mathcal{B}_{2k}(p)\) such that \(L(f,n+1)\neq 0\) for \(0\leq n\leq 2k-2\); we have
\[\langle z^{n}\otimes\{0,\infty\},f\rangle=\langle T(z^{n}\otimes\{0,\infty\} ),Tf\rangle=0.\]
Now (2.2) implies \(\langle z^{n}\otimes\{0,\infty\},f\rangle\neq 0\); therefore \(T\) is an element of the L.H.S of (6.2). This proves the identity (6.2).
Hence \(\bigoplus_{n=0}^{2k-2}\mathrm{Ann}(z^{n}\otimes\{0,\infty\})\) is the kernel of the \(\mathbb{Z}\)-module homomorphism
\[\varphi:\mathbb{T}\to\bigoplus_{n=0}^{2k-2}\prod_{[f]\in\mathcal{E}_{n}} \mathbb{T}/\mathrm{Ann}_{[f]_{n}}.\]
So, the \(\mathbb{Q}\)-vector space \(\mathbb{T}/\bigoplus_{n=0}^{2k-2}\mathrm{Ann}(z^{n}\otimes\{0,\infty\}) \otimes_{\mathbb{Z}}\mathbb{Q}\) is isomorphic to \(\bigoplus_{n=0}^{2k-2}\prod_{[f]\in\mathcal{E}_{n}}K_{[f]_{n}}\) and its dimension is
\[|\{f\in\mathcal{B}_{2k}(p)\mid L(f,n)\neq 0\;\;\mathrm{for}\;\;0\leq n\leq 2k-2 \}|.\]
This completes the proof.
Proof of theorem 1.2.: The proof follows directly from Theorem 1.1 and Lemma 6.1.
# CiT: Curation in Training for Effective Vision-Language Data

Hu Xu, Saining Xie, Po-Yao Huang, Licheng Yu, Russell Howes, Gargi Ghosh, Luke Zettlemoyer, Christoph Feichtenhofer

arXiv:2301.02241, 2023-01-05, [http://arxiv.org/abs/2301.02241v1](http://arxiv.org/abs/2301.02241v1)
###### Abstract
Large vision-language models are generally applicable to many downstream tasks, but come at an exorbitant training cost that only large institutions can afford. This paper trades generality for efficiency and presents _Curation in Training (CiT)_, a simple and efficient vision-text learning algorithm that couples a data objective into training. CiT automatically yields quality data to speed-up contrastive image-text training and alleviates the need for an offline data filtering pipeline, allowing broad data sources (including raw image-text pairs from the web). CiT contains two loops: an outer loop curating the training data and an inner loop consuming the curated training data. The text encoder connects the two loops. Given metadata for tasks of interest, e.g., class names, and a large pool of image-text pairs, CiT alternatively selects relevant training data from the pool by measuring the similarity of their text embeddings and embeddings of the metadata. In our experiments, we observe that CiT can speed up training by over an order of magnitude, especially if the raw data size is large.
## 1 Introduction
Vision-language models have demonstrated success for fine-tuning and zero-shot transfer to downstream tasks [21, 26, 12] by training on a general-purpose large-scale dataset instead of a small task-level dataset. While general, large-scale pre-training is computationally expensive (_e.g_. CoCa [36] trains on 2048 TPUs for 5 days) and typically performed on a _pre-filtered_ dataset (_e.g_. WIT400M [21] used by CLIP [21] is created by searching for image-text pairs with text containing a set of 500,000 queries and [24] uses this model to create another dataset).
Such filtering pipelines usually involve manual labor-intensive efforts to remove data that is unlikely useful for downstream tasks [21, 12]. Recent effort has been made to curate data for high-quality image-text pairs (such as CC3M [25], CC12M [3], YFCC15M [29, 21], WIT400M [21] and LAION [23, 24]). Nevertheless, research is typically _tied_ to the static datasets or model weights (if the data is not released) and is not able to access or change the data pipelines or model architectures. Further, work is _limited_ by the prohibitive cost of training on these large image-text datasets (_e.g_. the CLIP model is trained on WIT400M for 12 days using 256 GPUs).
In this work, our goal is to empower training with the capability of adjusting the data distribution. Our intention is to dynamically curate the data during training and our key idea is to use the learned text representation of vision-language models to measure relevance of the data w.r.t. the task of interest. Given metadata (from downstream tasks _e.g_. a class name such as "chicken"), we measure its embedding similarity to the training data. This similarity can guide us for the decision of including this data into our training process. For example a caption containing the word "giraffe" will have higher embedding similarity to "chicken" than a caption such as "throwback Thursday".
Figure 1: A conceptual illustration of CLIP training _vs_. CiT. Vanilla CLIP training uses static data from offline human filtering (_e.g_. cleaned YFCC15M or WIT400M [21]) and optimizes the model. Instead, our CiT incorporates dynamic data curation into training in two loops: (_i_) an outer curation loop improving data (for downstream tasks) given the current model; (_ii_) an inner loop optimizing the model given the curated data. The trained text model connects the loops by providing embeddings for curation.
Driven by this idea, we present a simple algorithm that incorporates data Curation in Training (CiT), aiming at improving both data efficiency and model performance. CiT works as follows. Given a large source of image-text pairs and metadata (_e.g_. the list of class names used in this paper), CiT alternately performs curation of the data and training on that curated data. As shown in Figure 1, CiT contains two loops: an outer loop that curates data given the current model and an inner loop that trains the model given the curated data. Similar to Locked image Tuning (LiT [38]), CiT uses pre-trained image and text encoders and freezes the image one. The text model connects the two loops by serving curated data to the inner loop for training, which in turn learns good representations for the outer loop's curation.
CiT can speed up training by multiple orders of magnitude, especially if the raw data size is large; _e.g_. when trained on LAION-400M data, CiT reaches similar ImageNet zero-shot1 accuracy as OpenCLIP [31], while being 37.7\(\times\) faster in training. Since CiT changes the training data distribution that focuses on one or more tasks of interest, it can even handle image-text pairs from any (noisy) source with unknown distribution. Our experiments reveal that vanilla CLIP/LiT training fails on _raw_ random image-text pairs crawled from the web, while CiT trains easily.
Footnote 1: Zero-shot refers to not seeing any training examples of the target dataset. We note that our approach uses extra information of the downstream task, such as class names; however, this metadata is easy to acquire and can be of various forms as shown in experiments.
## 2 Related Work
**Vision-Language Learning.** Contrastive learning was initially popular in vision self-supervision [32, 4, 11] and later adopted for cross-modal learning [16, 18, 19, 21, 33]. CLIP [21] populates the idea of contrastive learning from image-text pairs (used before _e.g_. in ConVIRT [40]) at scale and shows a strong performance of zero-shot transfer to image classification and retrieval tasks. SLIP [20] combines image self-supervision and language supervision. LiT [38] shows that when a good pre-trained vision encoder is adopted, it is better to lock (freeze) the well pre-trained vision encoder to protect vision representations from being corrupted by noisy language supervision. Flamingo also use pre-trained models for various tasks [1].
**Vision-Language Data.** Large-scale vision-language learning is typically coupled to a data pipeline to yield high-quality data for efficient training [12, 37, 26]. For example, CC3M [25] heavily filters web-crawled pairs and only keeps 0.1% of the raw data. Both CC3M and CC12M [3] leverage Google Cloud APIs with models predicting a large number of classes (on the order of \(10^{5}\)) [25] to filter out mismatched image-text pairs. YFCC100M is curated from Yahoo Flickr using text fields (such as title, description, etc.). This ensures certain data quality but limits the scale. Later, YFCC100M was further cleaned by [21] into YFCC15M, which contains English-only image-text pairs. Due to the limited scale, CLIP further curates a WebImageText dataset (WIT400M) by formulating queries from Wikipedia and searching for them online to obtain image-text pairs. Florence [37] curates a dataset with extra multi-label signals to improve supervision. ALIGN [12] relaxes CC12M filtering to show that training on 1.8B noisy pairs can achieve CLIP-level performance. FLAVA [26] combines existing human-annotated datasets of smaller scale for high-quality image-text pairs. Different from related research, CiT improves data _within_ the training algorithm, not as a pre-filtering step. We demonstrate that such an approach allows us to effectively learn from _raw_ image-text pairs.
**Related Areas.** Our work is related to research in other domains. In NLP, there are existing works on domain-adaptive finetuning and retrieval [39, 14, 15, 13, 9]. In machine learning research, subset selection [13, 30] casts data selection as a discrete bi-level optimization problem.
## 3 Method
In CLIP pre-training, the training objective (_e.g_. contrastive image-text correspondence) operates as a training proxy that approximates downstream tasks (_e.g_. classification accuracy). Our CiT introduces a _data proxy_ to fit the _data distribution_ to downstream tasks. In this section, we first go through the details of the CiT algorithm in §3.1, the training loop in §3.2 and the data proxy for the curation loop in §3.3.
### CiT Algorithm
CiT contains two loops: the curation loop curates data given the current weights of the model and the training loop optimizing the weights given the curated data.
Let \(\mathcal{D}=\{(x_{\text{img}}^{i},x_{\text{txt}}^{i})\}_{i=1}^{N}\) be the source set of image-text pairs. Then \(\mathcal{D}_{C}\subseteq\mathcal{D}\) is the actual _training data_ we aim to curate from the source. We define two functions: (i) \(\textit{Curation}(\mathcal{D};\Theta)\), and (ii) \(\textit{Training}(\Theta;\mathcal{D}_{T})\), for the curation and training loops, respectively. Importantly, the weights of the learned model \(\Theta\) connect the two loops and serve the curation loop with the updated representations from the training loop. There is no notion of a _fixed dataset_ or training epochs over \(\mathcal{D}\); instead, we view the data source as an online _data stream_. CiT uses a sequential setup that alternately performs curation for every \(s\) pairs of training.
CiT is shown in Algorithm 1. It takes 3 inputs: a data source \(\mathcal{D}\), the pre-trained weights \(\Theta\) and a training budget \(b\), which can be training time, resources consumed, _etc_. We simply use steps of weight updates as the training cost in this paper. Line 1 initializes the training budget. Line 2 determines if current training exceeds that training budget.
The main framework of CiT is to alternatively perform curation and training in line 2-4. To recap CLIP pre-training, we first detail the training function next.
```
Input : D: data source
        Θ: model's pre-trained weights
        b: training budget
1  c ← 0
2  while c < b do
3      D_T ← Curation(D; Θ)
4      Θ, n ← Training(Θ; D_T)
5      c ← c + n
6  end while
```
**Algorithm 1** CiT (see §A.1.1 for pseudo code)
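The alternation of Algorithm 1 can be sketched in a few lines of Python; `curate` and `train` here are stand-in callables for the curation and training loops, and `budget` plays the role of \(b\):

```python
def cit_loop(stream, theta, budget, curate, train):
    """Alternate curation and training until the step budget is spent.

    `curate(stream, theta)` returns a curated batch of pairs;
    `train(theta, batch)` returns the updated model state and the
    number of optimizer steps consumed; `theta` threads through both.
    """
    cost = 0
    while cost < budget:
        batch = curate(stream, theta)       # outer loop: pick relevant pairs
        theta, steps = train(theta, batch)  # inner loop: update the model
        cost += steps
    return theta, cost
```

The only coupling between the loops is `theta`: a better model curates better data, which in turn trains a better model.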
### Training
The core of CLIP [21] training is the contrastive cross-modal objective serving as the proxy to approximate downstream tasks (_e.g_. higher classification accuracy). This objective pulls embeddings of positive image-text pairs closer and pushes negative pairs from other examples in a training batch apart; thus it creates a proxy for classification, with one example per class while the rest of the batch represents other classes described by natural language.
The training loop is shown in Algorithm 3, with the training data \(\mathcal{D}_{C}\) delivered from curation. We let \(m(\cdot;\cdot)\) denote the image-text model. We use \(\mathrm{sim}(\mathbf{x}_{\text{img}},\mathbf{x}_{\text{txt}})=\mathbf{x}_{ \text{img}}\mathbf{x}_{\text{txt}}^{\top}/(\|\mathbf{x}_{\text{img}}\|\| \mathbf{x}_{\text{txt}}\|)\) in line 3 to compute the image-to-text cosine similarity, divided by a trainable temperature \(\tau\). Our CiT training objective has almost the same structure as in CLIP, except that we only use an image-to-text (and no text-to-image) contrastive loss (\(\mathcal{L}_{\text{img2txt}}\)) in line 4. We ablate this loss versus the averaged bidirectional contrastive loss (used by CLIP) in our experiments. Line 5 updates the model parameters and line 6 counts the training cost.
```
Input : D_C: curated training data
        Θ: model's weights
1  n ← 0
2  foreach batch (x_img, x_txt) in D_C do
3      s ← sim(m(x_img; Θ), m(x_txt; Θ)) / τ
4      L ← L_img2txt(s)
5      Θ ← update(Θ, ∇_Θ L)
6      n ← n + 1
7  end foreach
8  return Θ, n
```
**Algorithm 3** Training
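A minimal pure-Python sketch of the one-directional image-to-text contrastive objective described above: each image is classified against the batch's texts with a softmax cross-entropy. Here `sim` is assumed to be a precomputed cosine-similarity matrix with matched pairs on the diagonal, and the temperature default is illustrative:

```python
import math

def img2txt_loss(sim, tau=0.07):
    """One-directional (image-to-text) InfoNCE loss.

    sim[i][j] is the cosine similarity between image i and text j.
    Returns the mean negative log-softmax of each image's matched
    text against all texts in the batch.
    """
    n = len(sim)
    total = 0.0
    for i in range(n):
        logits = [s / tau for s in sim[i]]
        m = max(logits)  # stabilize the log-sum-exp
        log_z = m + math.log(sum(math.exp(l - m) for l in logits))
        total += -(logits[i] - log_z)  # -log softmax at the positive pair
    return total / n
```

With a near-identity similarity matrix the loss approaches zero, while a constant matrix gives \(\log N\) for a batch of \(N\) pairs.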
### Curation
CiT also has a data objective that curates data using the (previously updated) model. Encoding the data with an updated model allows for better representation of the data. Akin to the contrastive objective for training, the core function in curation is a _data proxy_ (or objective) that selects data based on the metadata (_e.g_. a list of class names).
We detail the curation loop in Algorithm 2. It takes the following inputs: model weights \(\Theta\), a data source \(\mathcal{D}\), the model architecture, the metadata for downstream tasks \(\mathcal{T}_{\text{meta}}\), and an expected size of curated data \(s\). \(\mathcal{T}_{\text{meta}}\) is a list containing a pre-defined taxonomy (_e.g_. ImageNet WordNet lemmas, or a combination from a group of tasks in our experiments), but could be generalized to other forms of text.
Algorithm 2 first obtains the embeddings for the metadata in line 1. Then it sets up the curated set \(\mathcal{D}_{C}\) for the next round of training and keeps curating data in line 3-7. Line 3 gets the next batch of raw image-text pairs. Line 4 obtains its text part and line 5 computes the text embedding from the current model. Line 6 is the _data proxy_, which approximates the data distribution for the downstream tasks (detailed in the next subsection). Lastly, we merge the newly curated subset into the curated set \(\mathcal{D}_{C}\).
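The batching-and-merging structure of the curation loop can be sketched as follows (plain Python; `embed_text` and `proxy` are hypothetical stand-ins for the current text encoder and the data proxy described next):

```python
def curation(stream, embed_text, proxy, s):
    """Sketch of the curation loop (Algorithm 2): pull raw batches of
    (image, text) pairs from the stream, embed the texts with the
    current model, keep the proxy-selected indices, and stop once s
    curated pairs are collected."""
    curated = []
    for batch in stream:
        texts = [pair[1] for pair in batch]
        embs = embed_text(texts)   # text embeddings from the current model
        keep = proxy(embs)         # indices passing the data proxy
        curated.extend(batch[i] for i in keep)
        if len(curated) >= s:
            break
    return curated[:s]
```

Because the stream is consumed lazily, no fixed dataset is materialized; curation simply runs until enough relevant pairs are found.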
The _data proxy_ compares each text embedding to every metadata embedding and takes the maximum similarity \(v_{\max}^{i}\) per sample \(i\). Given a raw batch \(\mathcal{D}_{\text{raw}}\), let \(\mathcal{D}_{t}\subseteq\mathcal{D}_{\text{raw}}\) denote the samples whose \(v_{\max}^{i}\) exceeds a threshold \(t\); the following rule then decides whether a sample shall be used:
\[\begin{cases}\mathcal{D}_{t}&\text{if }\frac{|\mathcal{D}_{t}|}{|\mathcal{D}_{ \text{raw}}|}>\gamma,\\ \arg\text{topk}_{i}(v_{\text{max}}^{i},k=\gamma|\mathcal{D}_{\text{raw}}|),& \text{otherwise},\end{cases} \tag{2}\]
where \(\frac{|\mathcal{D}_{t}|}{|\mathcal{D}_{\text{raw}}|}\) is the _ratio of curation_ with \(\gamma\) being a pre-defined _minimal_ ratio of curation. If enough samples meet the threshold \(t\), \(\mathcal{D}_{t}\) is used. Otherwise, we use a _minimal ratio_\(\gamma\) of samples, that represent the top-\(k\) matching ones (with \(k=\gamma|\mathcal{D}_{\text{raw}}|\)) in terms of similarity across metadata.
The threshold \(t\) is crucial for CiT to balance the tradeoff between data quality and quantity. A higher \(t\) leads to high data quality, but can lead to a lower ratio of curation. We adopt this mixed strategy because line 3 in Algorithm 2 could become a near infinite loop if the ratio of curation is low and not enough data that meets \(t\) can be found. This could happen because the threshold is set too high, or the data source has low metadata correspondence. The _otherwise_ part in equation 2 resolves this by selecting the \(\gamma\) (typically set to around 1% - 5%) best possible matches for training. See §A.1.1 for PyTorch pseudo code of CiT.
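A sketch of the mixed strategy in equation (2), assuming the per-sample maximum similarities \(v_{\max}^{i}\) have already been computed:

```python
def select_indices(v_max, t, gamma):
    """Data proxy of Eq. (2): keep samples above threshold t if they make
    up more than a fraction gamma of the raw batch; otherwise fall back
    to the top-k best matches with k = gamma * |batch|."""
    n = len(v_max)
    above = [i for i, v in enumerate(v_max) if v > t]
    if len(above) / n > gamma:
        return above
    k = max(1, int(gamma * n))  # minimal ratio of curation
    return sorted(range(n), key=lambda i: v_max[i], reverse=True)[:k]
```

The fallback branch guarantees progress even on sources with low metadata correspondence, at the cost of admitting some lower-similarity pairs.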
## 4 Experiments
We use training data from the two categories shown below: cleaned data that involves human-based offline filtering pipelines, and raw data that has not undergone cleaning.
### Cleaned Training Data
**YFCC15M.** We use the 15M subset of YFCC100M [29] (filtered by [21]) as the main evaluation dataset as it is widely adopted in the existing literature [20, 21, 22, 38]. It consists of English-only titles, descriptions, and tags. We simply refer to this as YFCC15M in this paper. Except for applying the script from [20] to remove HTML formatting, we do not perform any extra filtering or preprocessing. In contrast, LiT [38] performs extra filtering such as removing titles that start with "DSC", "IMG" and "Picture", or removing them if more than half of them contain digits.
**CC12M.** Since YFCC15M may lack enough training data, LiT [38] also combines YFCC15M with Conceptual Captions 12M (CC12M) [3], which is filtered and transformed from image & alt-text pairs from web pages. CC12M involves cleaning by supervised models from Google Cloud APIs to match the image's prediction over classes with text.
**LAION400M**[24] contains 400M English-only image-text pairs. It is crawled from Common Crawl2 and later filtered by a CLIP [21] model. Thus, LAION400M implicitly carries the data filter pipeline of WIT400M, on which CLIP has been trained.
Footnote 2: [https://commoncrawl.org](https://commoncrawl.org)
### Raw Training Data
**YFCC100M**. We use the raw YFCC100M (the source of YFCC15M) to compare with YFCC15M. Note that YFCC100M is multilingual, whereas YFCC15M is English.
**Raw Image-Text Crawl.** To challenge CiT with real-world data, we further collect raw (unfiltered) image-text pairs from Common Crawl. We only perform de-duplication and NSFW filtering, but _no_ filtering on image-text association. This resulted in 1.2B multilingual image-text pairs, of which 28.56% are English (identified by our language identification system, but this information is not used for CiT training). As such, about 343M image-text pairs are in English, which is slightly smaller than the scale of WIT400M or LAION400M, but much more noisy.
### Implementation and Training
Our training recipe uses a global batch size of 16,384, trained on 16 Nvidia V100 32GB GPUs. Our vision encoder corresponds to ViT [7] of various sizes and the text encoder defaults to BERTbase-SimCSE [6, 8] with a maximum token length of 32, similar to LiT [38]. Unless specified, we set the training budget to \(b=5000\) steps (81M image-text pairs). We report hyper-parameters and an extra low-cost single-GPU setting in §A.1.
We use pre-trained vision and text encoders and join them via two randomly initialized projection layers. Following LiT, we freeze the vision encoder and make the text encoder and two projection layers trainable. One can either use the text representation _before_, or _after_ the projection layer for computing cosine similarity during curation. We ablate these two choices in §4.6.
### Evaluation
We evaluate zero-shot (0-shot) transfer _accuracy_ of CiT on **26** benchmarks, following [20, 21]. In our ablation studies, we use YFCC15M as the main data source for training and ImageNet-1K (IN-1K) as the downstream task. We use prompts from CLIP for all 26 tasks and additionally use the extra 2 prompts from LiT [38] for ImageNet for a fair comparison with LiT. Following CLIP, we perform prompt ensembling by averaging the class embeddings for each class across the prompt templates. For classification, cosine similarity is computed between an image embedding and the averaged class embeddings and the class with the highest cosine similarity is CiT's prediction. We perform validation every 500 training steps and stop training if the accuracy does not increase over the previous validation. The corresponding total training time (including curation and training) is reported along with the validation accuracy. We estimate the training time of baselines by re-running them under the same setup as CiT (16 GPUs) and maximize the GPU usage for best throughput. More results are in §A.2.
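The prompt-ensembled zero-shot classification described above can be sketched as follows (toy 2-d embeddings; in practice the image and prompt embeddings come from the trained encoders):

```python
import math

def _cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def zero_shot_predict(image_emb, class_prompt_embs):
    """Prompt ensembling: average each class's per-prompt embeddings,
    then return the class whose averaged embedding has the highest
    cosine similarity to the image embedding."""
    best_cls, best_sim = None, float("-inf")
    for cls, prompts in class_prompt_embs.items():
        dim = len(prompts[0])
        avg = [sum(p[d] for p in prompts) / len(prompts) for d in range(dim)]
        s = _cosine(image_emb, avg)
        if s > best_sim:
            best_cls, best_sim = cls, s
    return best_cls
```

Averaging over the prompt templates before the similarity computation means only one comparison per class is needed at prediction time.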
### Choice of Pre-trained Models
We first study the effects of pre-trained encoders.
As _vision encoder_, we consider (1) ViT-B/16 [7] (patch size of 16\(\times\)16 pixels) with pre-trained weights from self-supervised MoCo-v3 [5], DINO [2] and MAE [10], all trained on IN-1K but without any labels. To be consistent with LiT [38], we also consider (2) supervised ViT(AugReg) [28] B/32, B/16, and L/16 trained on ImageNet-21K3. Finally, we also explore weakly-supervised ViT-B/16 and ViT-H/14 SWAG [27].
Footnote 3: We follow LiT here, but note that using IN-21K is not strictly a zero-shot setting, because 999 of the 1000 classes in IN-1K are in IN-21K.
Results for different vision encoder weights under the same ViT-B/16 architecture are in Table 2. We notice that the accuracies of MoCo-v3 (61.4%) and DINO (60.3%) pre-training are close, while MAE is worse (42.4%), presumably because the representations learned by instance discrimination (MoCo-v3 and DINO), which learns different embeddings for different images, are closer to zero-shot classification than MAE's training objective. AugReg performs best with 69.4% accuracy, presumably because the supervised pre-training on IN-21K is superior to unsupervised IN-1K pre-training. Finally, SWAG is worse than AugReg, but better than MoCo-v3. In the following experiments of this section, we also show larger variants.
For _text encoder_, we consider self-supervised base models from (1) language models BERT [6]; and contrastive tuned (2) BERT-SimCSE and RoBERTa-SimCSE [8], as shown in Table 3.
We observe similar trends as for vision: SimCSE-trained BERT is better than vanilla BERT or RoBERTa, probably because SimCSE's contrastively trained [CLS] token captures text similarity better than BERT's [CLS] token (trained pairwise, a.k.a. next sentence prediction) or RoBERTa's [CLS] token, which receives no training.
### Ablations
We adopt the combination of MoCo-v3 ViT B/16 and BERT-SimCSE as our default setting. We summarize ablation experiments of CiT in Table 1.
**Stage of Curation.** We first ablate the effects of curation in Table 1a. We see that CiT has a **7.6**% boost compared to _no curation_. We further ablate an _offline_ curation before training. This is sub-optimal, as SimCSE pre-trained purely on text may not learn good representations for semantic-level similarity (discussion in §3.1).
**Frequency of Curation.** Next, we are interested in how frequently curation needs to be performed. Table 1b varies the number of steps (and therefore pairs \(s\) when multiplied with the batch-size) for curation (in Alg. 2). We found that curating too frequently or too infrequently yields sub-optimal results, but the change is marginal, so we chose 100 steps as the default.
**Feature for Curation.** In Table 1c, we find that using the feature before the projection layer (_e.g_. the direct output of SimCSE) is better than the features from the projection layer. This is probably because the projection layer tends to be less stable during training (it is randomly initialized and needs longer training to align with the visual representation), whereas the SimCSE embedding is already pre-trained for text similarity.
**Threshold.** In Table 1d we ablate the threshold \(t\), which controls the trade-off for data quality and quantity. A lower threshold adds more low-quality data and a higher threshold reduces data quantity, so \(t=0.55\) is a good balance.
**Text Variants.** We ablate the length of text encoders in Table 1e to understand the memory/text-sequence-length tradeoff. We find that longer text sequences (77) (we reduce the batch size per GPU to half and double the number of GPUs) are slightly worse. We also ablate the effectiveness of YFCC15M tag augmentation, adopted from LiT. Lastly, we test whether a shallow (6-layer) BERT-SimCSE is also a good text encoder; we obtain 1.2% worse results.
### Comparison to prior work on ImageNet
We compare CiT with existing contrastive cross-modal models in Tables 4 (YFCC and CC12M), 5 (LAION400M) and 6 (raw image-text crawl). We report the pre-training method (CLIP/LiT/CiT), vision encoder and initialization, usage of human-annotated labels, total training time in our setup (16 GPUs), as well as the ImageNet 0-shot accuracy.
**YFCC.** In Table 4 we report several data points for LiT and CiT training with various vision encoders and initialization. On YFCC15M, CiT outperforms LiT on self-supervised MoCo-v3 vision encoders by +5.9% accuracy. With ViT-B/32 trained with supervised AugReg on IN-21K, CiT yields a +3.4% gain over LiT. On YFCC15M+CC12M data with ViT-L/16 models, CiT outperforms LiT by +3.4%.
On YFCC100M we observe that LiT underperforms compared to YFCC15M (58.9 vs 59.9), due to the cleaning [21] of the 15M subset. CiT, however, can _reverse_ this trend: it outperforms its YFCC15M counterpart by 3%+ when using the less curated YFCC100M. This indicates that the hand-crafted cleaning of YFCC100M by CLIP [21] is sub-optimal. The performance of CiT on YFCC100M is even **+2.6%** better than LiT on YFCC15M+CC12M. This trend holds for larger image model sizes (ViT-L/H) and stronger initialization (AugReg/SWAG), which lead to better accuracy.
**LAION400M.** In Table 5 we see that CiT performs better than OpenCLIP on LAION400M while being substantially faster. For example, CiT with a ViT-B/16 MoCo-v3 vision encoder performs as well as OpenCLIP but is 37.7\(\times\) faster in training. With more advanced initialization and larger ViT-L models, CiT is 283\(\times\) faster and 3% more accurate, producing 75.8% in 1.1 days with a 16-GPU setup, while OpenCLIP would take \(\sim\)283 days for an accuracy of 72.8%. We note that this extreme speedup comes with the caveat that our approach curates data with respect to downstream tasks; thus, CiT uses only 26 hours for training, compared to 981 hours for OpenCLIP pre-training.
**Raw Image-Text Crawl.** We further test CiT on our raw image-text crawl containing 1.2B unfiltered image-text pairs from the web (about 343M pairs have English text).
\begin{table}
\begin{tabular}{l l l l l l l} Pre-train Data & Method & Vision Encoder & Vision Initialization & w/ Labels & Total Time & IN-1K Acc \\ \hline \multirow{7}{*}{YFCC15M} & CLIP [21] & ResNet-50 & scratch & ✗ & 25 hrs & 31.3 \\ & OpenCLIP [31] & ResNet-50 & scratch & ✗ & 25 hrs & 32.7 \\ & LiT [38] & ViT-B/16 & DINO [2] & ✗ & n/a & 55.4 \\ & LiT [38] & ViT-B/16 & MoCo-v3 [5] & ✗ & n/a & 55.5 \\ & LiT [38] & ViT-B/16 & AugReg [28] & IN-21K & n/a & 55.9\(\dagger\) \\ & LiT [38] & ViT-B/32 & AugReg [28] & IN-21K & 64 hrs & 59.9\(\ast\) \\ & **CIT** & ViT-B/16 & DINO [2] & ✗ & n/a & **60.3** \\ & **CIT** & ViT-B/16 & MoCo-v3 [5] & ✗ & 5 hrs & **61.4** \\ & **CiT** & ViT-B/32 & AugReg [28] & IN-21K & 11 hrs & **63.3** \\ & **CIT** & ViT-B/16 & AugReg [28] & IN-21K & 8 hrs & **69.4** \\ & **CIT** & ViT-L/16 & AugReg [28] & IN-21K & 8 hrs & **72.0** \\ & **CIT** & ViT-H/14 & SWAG [27] & IG hashtags & 11 hrs & **73.7** \\ \hline \multirow{7}{*}{YFCC15M+CC12M} & LiT [38] & ViT-L/16 & AugReg [28] & IN-21K & 112 hrs & 72.2\(\ast\) \\ & **CiT** & ViT-L/16 & AugReg [28] & IN-21K & 32 hrs & **75.6** \\ \hline \multirow{7}{*}{YFCC100M} & LiT [38] & ViT-B/32 & AugReg [28] & IN-21K & 15 hrs & 58.9\(\ast\) \\ & **CIT** & ViT-B/16 & MoCo-v3 [5] & ✗ & 48 hrs & **64.6** \\ \cline{1-1} & **CIT** & ViT-B/32 & AugReg [28] & IN-21K & 64 hrs & **65.6** \\ \cline{1-1} & **CIT** & ViT-B/16 & AugReg [28] & IN-21K & 66 hrs & **72.2** \\ \cline{1-1} & **CIT** & ViT-L/16 & AugReg [28] & IN-21K & 66 hrs & **74.8** \\ \cline{1-1} & **CIT** & ViT-H/14 & SWAG [27] & IG hashtags & 62 hrs & **75.5** \\ \hline \end{tabular}
\end{table}
Table 4: Comparison to existing methods on YFCC and CC12M. Under identical vision encoders, CiT achieves +3.2% higher accuracy with YFCC100M than using the human-cleaned YFCC15M subset and +5.9% accuracy over LiT on YFCC15M. \(\ast\) indicates reproduced results with BERTbase (uncased) for fair comparison; see appendix for the implementation differences to original LiT [38]. Total time for training and curation is reported for 16 V100 GPUs and varies depending on quality of embeddings from the vision encoder.
\begin{table}
\begin{tabular}{l l l l l l} Method & Vision Encoder & Vision Initialization & w/ Labeled Data & Total Time & IN-1K Acc \\ \hline OpenCLIP & ViT-B/32 & scratch & ✗ & 458 hrs & 62.9 \\ OpenCLIP & ViT-B/16 & scratch & ✗ & 981 hrs & 67.1 \\ OpenCLIP & ViT-L/14 & scratch & ✗ & 6803 hrs & 72.8 \\ LIT [38] & ViT-B/32 & AugReg [28] & IN-21K & 31 hrs & 62.8 \\ \hline
**CIT** & ViT-B/16 & MoCo-v3 [5] & ✗ & 26 hrs & **67.1** \\
**CIT** & ViT-B/32 & AugReg [28] & IN-21K & 62 hrs & **67.5** \\
**CIT** & ViT-B/16 & AugReg [28] & IN-21K & 63 hrs & **73.1** \\
**CIT** & ViT-L/16 & AugReg [28] & IN-21K & 27 hrs & **75.8** \\
**CIT** & ViT-H/14 & SWAG [27] & IG hashtags & 26 hrs & **76.4** \\ \hline \end{tabular}
\end{table}
Table 5: CiT on LAION400M: CiT reaches OpenCLIP-level accuracy with 37\(\times\) total training time improvement.
The data contains a large degree of noise. Results are shown in Table 6. To understand the challenge of training on raw image-text pairs, we run CLIP and LiT on them directly; both quickly diverge to a NaN loss, suggesting that some noisy pairs destabilize training. By using our English filter to clean the text, LiT can be trained and reaches 56.7% IN-1K zero-shot accuracy. CiT (_without_ even using an English filter) achieves 68.7%, which is **+12.0**% higher. This indicates that raw, very noisy image-text pairs lead to poor accuracy, but CiT can overcome this and curate high-quality data for vision-language learning.
Surprisingly, CiT trained on our raw image-text crawl achieves much better performance than OpenCLIP trained on LAION400M: CiT reaches **77.9**%, which is +5.1% better than OpenCLIP ViT-L/14 (_cf_. Table 5). Note that our source is raw, with multilingual texts, whereas LAION400M is a curated English-only dataset filtered by the CLIP model. The training data used by CiT (_e.g_. 131M pairs for 77.9%) is only around 1/5 of the scale of the LAION400M dataset (one epoch), showing the effectiveness of curating training data.
### Comparison across 26 benchmarks
We extend CiT to 26 common 0-shot evaluation tasks for CLIP/SLIP models [20] on the public dataset YFCC15M. We provide more comparisons with further encoders as well as pre-training on LAION400M in the appendix. We evaluate with prompts from CLIP/SLIP. For ImageNet, we drop the extra prompts used by LiT for a fair comparison with the baselines. We use three setups of metadata: (_i_) IN-1K, (_ii_) IN-21K, and (_iii_) multi-task CiT that combines class names from all 26 tasks (_iv_) we run every task separately on a _single_ GPU as a low-compute setup (this trains a model for each task with separate metadata). Results are in Table 7 and discussed next.
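As a rough illustration of this prompt-based zero-shot protocol, the following toy numpy sketch (random stand-in embeddings; a real pipeline encodes each prompt template with the text encoder) averages per-class prompt embeddings and classifies images by cosine similarity:

```python
import numpy as np

rng = np.random.default_rng(0)

def normalize(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

# Stand-in text embeddings: 3 classes x 2 prompt templates x 4 dims.
prompt_embs = rng.normal(size=(3, 2, 4))
class_embs = normalize(prompt_embs.mean(axis=1))  # one embedding per class

image_embs = normalize(rng.normal(size=(5, 4)))   # 5 test images
logits = image_embs @ class_embs.T                # cosine similarities
preds = logits.argmax(axis=1)                     # zero-shot predictions
print(preds.shape)  # (5,)
```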
We first evaluate CiT trained with IN-1K metadata on all 26 tasks. As expected, accuracy on ImageNet and Pets is highest among the metadata variants (_i-iv_). Overall, we observe that _CiT 1K meta_ already exhibits a certain generality across tasks and can outperform CLIP (34.2 _vs_. 38.5%) while being similar to SLIP, but 8.2\(\times\) faster (5 _vs_. 41 hours), demonstrating its efficiency.
Next, we explore the WordNet lemma from ImageNet-21K as a relatively general metadata for training CiT. In Table 7, _CiT-21K-meta_ improves broadly over IN-1K leading to 40.6% average accuracy, showing that a more general taxonomy works well across tasks.
We combine the taxonomies from all 26 tasks in _CiT-multi-meta_. This allows us to curate training data for all 26 tasks at again almost no extra training cost. We notice that multi-task CiT is on average similarly accurate as IN-21K metadata (40.4% _vs_. 40.6%) and converges faster because CiT is more targeted towards tasks of interest.
Finally, we compare a setup that trains a model for each task with separate metadata. _CiT-sep.-meta_ in Table 7 achieves overall the best average accuracy of 42.2% across tasks. This setup uses a restricted 1-GPU setting to save compute and could be boosted further with longer training. We think that this scenario might be quite practical, where some domain data exists (_e.g_. on bird images in CUB) and one wants to build a classification system given a large amount of noisy image-text data from the web.
\begin{table}
\begin{tabular}{l l l l l l} \hline Method & Vision Encoder & Vision Initialization & w/ Labeled Data & Total Time & IN-1K Acc. \\ \hline OpenCLIP & ViT-B/16 & from scratch & ✗ & n/a & NaN loss \\ LiT & ViT-B/16 & MoCo-v3 [5] & ✗ & n/a & NaN loss \\ LiT (English filter) & ViT-B/16 & MoCo-v3 [5] & ✗ & 65 hrs & 56.7 \\ \hline
**CiT** & ViT-B/16 & MoCo-v3 [5] & ✗ & 39 hrs & **68.7** \\
**CIT** & ViT-B/32 & AugReg [28] & IN-21K & 69 hrs & **68.4** \\
**CIT** & ViT-B/16 & AugReg [28] & IN-21K & 72 hrs & **75.2** \\
**CIT** & ViT-L/16 & AugReg [28] & IN-21K & 105 hrs & **77.9** \\
**CIT** & ViT-H/14 & SWAG [27] & IG hashtags & 43 hrs & **77.4** \\ \hline \end{tabular}
\end{table}
Table 6: CiT on Raw Image-Text Crawl: CiT is able to produce strong results when learning from raw image-text data. The raw data contains 1.2B image-text pairs. An English language filter, which reduces the data to 343M pairs, is required to stabilize LiT training.
### Further Analysis
**Samples of Curated Data**. We further investigate samples curated by CiT on the YFCC100M dataset in Table 8. We show the training step, a sample text, the related ImageNet metadata, and the cosine similarity in CiT's data proxy. At step \(c=0\), CiT's _data proxy_ tends to select texts of similar length to the class names, with string-matching behavior; the short-term run of CiT (\(c=100\)) has matching issues with many false positives. Later on, CiT starts to select texts of various lengths with semantics similar to the metadata. We do not observe any clearly less useful samples, such as file names, after \(c=2000\). Interestingly, CiT can even use the English part of mixed-language texts from YFCC100M (as in the last example).
**Speed/accuracy trade-off.** In Figure 2, we show the speed/accuracy trade-off of CiT _vs_. LiT [38] on YFCC15M data [21], corresponding to the results in Table 4. We see that CiT achieves a win-win scenario compared to LiT on identical AugReg ViT-B/32 vision encoders: _+3.4%_ higher accuracy on ImageNet and a _5\(\times\)_ faster total training time (including the curation time).
**Ratio of Curation**. We are interested in the training dynamics of CiT. We use different curation thresholds \(t\) and inspect the amount of curated training data. In Figure 3, we see that the ratio of curation, which corresponds to the fraction of used training samples from the raw data source (see §3.3), keeps changing over the course of curation/training. Initially, CiT uses more data; _e.g_., for a threshold of \(t=0.5\), it peaks at about 75%. In this phase, the latent space of the text encoder is less aligned with the vision latents. Later on during training, CiT starts to produce embeddings that better represent the downstream task, producing a lower ratio.
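The ratio of curation tracked in Figure 3 can be computed per batch as the fraction of texts whose best metadata similarity exceeds \(t\); a minimal sketch with synthetic, L2-normalized embeddings (all names and shapes here are illustrative):

```python
import numpy as np

def l2norm(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def curation_ratio(txt_embs, meta_embs, t):
    """Fraction of a raw batch whose best metadata similarity exceeds t."""
    sims = txt_embs @ meta_embs.T  # embeddings assumed L2-normalized
    return float((sims.max(axis=1) > t).mean())

rng = np.random.default_rng(42)
txt = l2norm(rng.normal(size=(1000, 16)))   # stand-in text embeddings
meta = l2norm(rng.normal(size=(10, 16)))    # stand-in metadata embeddings

r_low = curation_ratio(txt, meta, 0.3)
r_high = curation_ratio(txt, meta, 0.6)
print(r_low >= r_high)  # True: loosening the threshold admits more data
```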
## 5 Conclusion
This paper contributes CiT, a novel learning algorithm for efficient pre-training from noisy image-text data. CiT incorporates a curation process into learning to pull the training data distribution closer to downstream tasks. Our experiments demonstrate both significant accuracy and training time improvements when learning from either public or our own uncurated data from the web. We observe that training on the raw image-text pairs in YFCC can achieve better accuracy over the cleaned version from a hand-crafted filter pipeline. Further, we show that CiT can train with raw image-text pairs crawled from the web, which would lead to instability for vanilla pre-training objectives.
**Acknowledgement.** We thank Norman Mu, Shang-Wen Li, Vasu Sharma, Wojciech Galuba and Max Bain for help.
Figure 3: Ratio of curation under different thresholds \(t\). CiT broadly uses data first and curates more towards end of training.
Figure 2: CiT on provides \(>\)5\(\times\) speedup and +3.4% accuracy gain over LiT [38] on AugReg ViT-B/32 vision encoders. Training data is YFCC15M. Models are evaluated at 6 evenly sampled iterations.
\begin{table}
\begin{tabular}{l|l|c|c} \hline Step (c) & Text & ImageNet Class & Cosine Sim. \\ \hline
0 & title: _“Wollaston Beach”_ & beach & 0.739 \\
100 & title: _“In\&\_3128”_ & Vizsla & 0.779 \\
1000 & tag: _“beamballard parser river national wildlife refuge newnymport massachusetts ocean”_ & beach & 0.716 \\
2000 & desc: _“These guys were nice, told me all about this and other planes of the show, but unfortunately...”_ & military aircraft & 0.725 \\
3000 & title: _“Turtle”_ & terrapin & 0.725 \\
4000 & desc: _“One of the fountains close by the south west entrance to the park”_ & fountain & 0.734 \\
5000 & title: _“butterfly”_ & papillon & 0.735 \\
5000 & tag: _“ash;explosion;sakurajima;kagoshima;kagoshima;...”_ (highly repetitive tag, truncated) & — & — \\ \hline \end{tabular}
\end{table}
Table 8: Samples curated by CiT on YFCC100M at different curation steps \(c\): the selected text, the matched ImageNet class, and the cosine similarity in CiT’s data proxy.
## Appendix A
In this appendix, §A.1 contains implementation details and §A.2 contains further results as well as ablations.
### Implementation Details
#### a.1.1 PyTorch Pseudo Code
To facilitate implementation of CiT, we provide the PyTorch pseudo-code in Algorithm 4 below.
```
# b: maximum training steps as budget.
# d: raw data loader.
# t_meta: textual metadata.
# bsz: batch size.
# t: threshold.
# gamma: target ratio for curation.
# s: number of expected pairs.

c = 0
while c < b:
    if c % int(s / bsz) == 0:
        x_meta = model(t_meta)
        x_meta = normalize(x_meta)
        d_c = []
        while len(d_c) < s:
            x_imgs, x_txts = next(d)
            x_txts = model(x_txts)
            x_txts = normalize(x_txts)
            v = x_txts @ x_meta.t()
            sel = max(v) > t
            b_ratio = sum(sel) / len(sel)
            if b_ratio < gamma:
                sel = max(v).topk(k=int(bsz * gamma), dim=0)
            d_c.extend((x_imgs[sel], x_txts[sel]))

    for (x_imgs, x_txts) in batchify(d_c):
        x_imgs, x_txts = model(x_imgs, x_txts)
        x_imgs, x_txts = normalize(x_imgs, x_txts)
        # scale: learnable logit scale
        logits = exp(scale) * x_imgs @ x_txts.t()
        labels = arange(bsz)
        loss = cross_entropy(logits, labels)
        loss.backward()
        c += 1
```
**Algorithm 4**CiT: PyTorch Pseudo Code
#### a.1.2 Dataloader Implementation
For efficiency, we only load text during the curation loop and the training loop uses the curated indices to reload the full image-text pairs. Our implementation also supports in-memory storage of curated image-text pairs in case the data source is not randomly accessible for (re-)loading curated data, where all \(s\) pairs of training data can be stored in the CPU memory with image tensors represented as uint8 data. We use a larger batch size for curation (compared to training) to speed up CiT.
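A minimal sketch of this two-phase loading (class and method names here are illustrative, not from the CiT codebase): the curation pass reads only text and records indices, and the training pass reloads full pairs by index, with images stored as uint8 tensors:

```python
import numpy as np

class RawSource:
    """Toy stand-in for a randomly accessible image-text source."""
    def __init__(self, n):
        self.images = (np.random.rand(n, 8, 8, 3) * 255).astype(np.uint8)
        self.texts = [f"caption {i}" for i in range(n)]

    def load_text(self, i):   # cheap: curation loop touches text only
        return self.texts[i]

    def load_pair(self, i):   # expensive: training loop reloads by index
        return self.images[i], self.texts[i]

src = RawSource(100)
kept = [i for i in range(100) if "7" in src.load_text(i)]  # stand-in filter
batch = [src.load_pair(i) for i in kept]
print(len(batch))  # 19
```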
#### a.1.3 Detailed Implementation Settings
The hyper-parameters of CiT training are shown in Table 9. We mostly follow [20, 21, 38]. CiT is trained on 16 GPUs with a global batch size of 16,384 (1024 per GPU).
Hyperparameters for CiT curation outlined in §3 of the main paper are shown in Table 10. We use different thresholds \(t\) and minimal ratios \(\gamma\) for each dataset/metadata combination to fit the training into a budget \(b\) shown in the table as well. We use the same values for all variants of vision encoders. Due to smaller size, we use a lower \(t\) for YFCC15M and CC12M, whereas for YFCC100M and Raw Img-Text Crawl we use a higher \(t\) to focus on high-quality data from the raw data source, in order to roughly meet the budget \(b\).
For the per-task single-GPU setup, we perform a pre-curation on YFCC15M for each task using BERT-SimCSE with a threshold of 0.45 to remove pairs with low relevance.
#### a.1.4 Implementation Differences from LiT
While we aim for a close reproduction of LiT [38], there are a few tricks that our implementation does not incorporate, and we suspect the differences in our LiT reproduction on YFCC stem from them. Below we list the tricks known to us, but there could be more differences we are not aware of, since we have no access to LiT's full preprocessing and training code.
**Preprocessing.** For the captions, LiT performs extra filtering and removes titles that start with "DSC", "IMG", "Picture". Also, LiT removes text consisting of only the word "image" or text that contains a large fraction of digits.
**Joint Contrastive Loss.** LiT adopts a joint contrastive loss over 3 text fields in YFCC15M and shows the gain in Figure 8 of the LiT paper [38]. Since this technique is specific to the type of captions in the specific YFCC data, we remove it from our implementation and randomly sample one of the three text fields to pair with a training image.
**Text encoder.** LiT adopts various text encoders such as BERTbase and BERTlarge. This work consistently uses BERTbase for all main results to have a fair comparison.
### Additional Results
This section extends the results of CiT in the main paper to full results across 26 CLIP/SLIP benchmarks on YFCC15M and LAION400M and an extra ablation study.
#### a.2.1 Full Results on YFCC15M
We show the full results of Table 7 above in Table 12 below. On average, CiT-multi-meta (52.6) is slightly better than CiT-21K-meta (51.7), which is better than CiT-sep-meta and CiT-1K-meta (47.2). It appears that the broader ImageNet-21K wordnet taxonomy works well across datasets, and combining metadata from all downstream tasks is only slightly better than that. We note that training on the larger metadata does not introduce much extra curation compute since forwarding the raw examples takes the majority of computation. Nevertheless, we observe that larger metadata takes longer to converge and therefore increase the training budget to \(b=8000\) for CiT-21K-meta and CiT-multi-meta. We expect larger budgets will lead to even better results.
Besides what was already discussed in the main paper, we observe that CiT performs even better on larger models or models trained with supervised (AugReg IN-21K) or weakly supervised (SWAG) data than the unsupervisedly pre-trained MoCo-v3 on IN-1K. Out-of-domain issues (_e.g_. MNIST) are present even for larger vision encoders.
#### a.2.2 Full Results on LAION400M
In Table 13, we show the result of CiT trained on LAION400M and evaluated on 26 CLIP/SLIP benchmarks. With a larger data source, we find that CiT takes more time to converge, especially with more metadata, which can be attributed to more data meeting the curation criteria. We set \(b=16000\) for CiT-multi-meta and \(b=30000\) for CiT-21K-meta. The trend is similar to YFCC15M but with better performance across the benchmarks. Similar to Table 12, CiT-multi-meta is better than CiT-21K-meta, but this time the gap is larger. In addition to the longer training, we believe that the combined metadata from the 26 benchmarks is more effective on larger pre-training data.
#### a.2.3 Full Results on Raw Image-Text Crawl
In Table 14, we show the result of CiT trained on our raw image-text crawl and evaluated on 26 benchmarks. With a larger raw data source, we find that CiT takes more time to converge. We set \(b=30000\) for CiT-multi-meta and \(b=60000\) for CiT-21K-meta. The trend is similar to LAION400M, but the raw Image-Text Crawl is not cleaned for vision-language association. Similar to Table 13, CiT-multi-meta is better than CiT-21K-meta, and the gap is larger. We expect better accuracy for longer training.
#### a.2.4 Additional Ablations
This section extends ablations in Table 1 of the main paper to _(i)_ evaluation prompts and _(ii)_ training objectives.
**Evaluation Prompts.** We first verify the effects of LiT's extra prompts on CiT in Table 11a. We obtain a +0.2% gain by adding them to the CLIP prompts.
**Training Objective.** We ablate the \(\mathcal{L}_{\text{img2txt}}\) training objective which our approach uses (see §3.2 of the main paper). In Table 11b we see that this variant provides a +0.2% gain over CLIP's objective, which also incorporates a text2img loss.
\begin{table}
\begin{tabular}{l c} Eval. Prompts & Acc \\ \hline CLIP+LiT prompts & **61.4** \\ CLIP prompts only & 61.2 \\ \end{tabular} \quad \begin{tabular}{l c} Objective & Acc \\ \hline \(\mathcal{L}_{\text{img2txt}}\) obj. & **61.4** \\ CLIP obj. & 61.2 \\ \end{tabular} \\ (a) Evaluation Prompts \quad (b) Training Objective
\end{table}
Table 11: **Additional ablation experiments**. We use the default setup (MoCo-v3 / BERTbase-SimCSE) and YFCC15M as data source and report IN-1K Accuracy.
[Table 12: full results on YFCC15M across the 26 benchmarks]
#### a.2.5 Early Detection of Task Coverage
One extra benefit of curation is being able to detect zero-shot transferability. Although existing scaled pre-trainings have achieved huge success, the coverage of the pre-training data distribution for downstream tasks is largely unknown. We discuss this coverage issue below.
**Task Coverage**. We obtain the statistics of the curated data (offline curation in Table 1a) for the 26 tasks and show them in Table 15. We consider a sample with a maximum cosine similarity for one class as a sample belonging to that class/task. We note that this is a hard matching that does not necessarily capture the full class-to-sample correlation. Breaking down YFCC15M for the different tasks partially explains the low performance on some of them. For example, SST2 (a binary classification task) has few image-text pair matches, explaining the low (close to random) performance for all models.
---

# Digital Ethics in Federated Learning

Liangqi Yuan, Ziran Wang, Christopher G. Brinton. arXiv:2310.03178v2, 2023-10-04. http://arxiv.org/abs/2310.03178v2
###### Abstract
The Internet of Things (IoT) consistently generates vast amounts of data, sparking increasing concern over the protection of data privacy and the limitation of data misuse. Federated learning (FL) facilitates collaborative capabilities among multiple parties by sharing machine learning (ML) model parameters instead of raw user data, and it has recently gained significant attention for its potential in privacy preservation and learning efficiency enhancement. In this paper, we highlight the digital ethics concerns that arise when human-centric devices serve as clients in FL. More specifically, challenges of game dynamics, fairness, incentive, and continuity arise in FL due to differences in perspectives and objectives between clients and the server. We analyze these challenges and their solutions from the perspectives of both the client and the server, and through the viewpoints of centralized and decentralized FL. Finally, we explore the opportunities in FL for human-centric IoT as directions for future development.
## I Introduction
The Internet of Things (IoT) encompasses a phenomenon where physical devices, embedded with sensors, interact with their surroundings and engage in data exchange with other devices and systems via the Internet. These devices cover a vast range, from compact thermostats to large-scale industrial machinery, all interconnected within the IoT infrastructure. Human-centric IoT applications involve devices within the IoT ecosystem that are centered on human interaction or significantly influenced by human factors [1], such as smartphones, wearable devices, vehicles, and healthcare appliances. These devices, through their diverse sensors, incessantly produce a wealth of highly sensitive data. For example, images taken by smartphones can contain global positioning system (GPS) location information, smartwatches can record a user's electrocardiogram (ECG), and vehicle navigation systems document a driver's routes. Certain companies might require customers to disclose such rich personal data to improve their machine learning (ML) models. The growing demand for data invariably raises concerns over potential privacy breaches and misuse of the associated data, introducing pressing digital ethics issues in our interconnected digital era, such as privacy, security, and fairness.
Federated learning (FL) [2] represents a decentralized learning paradigm, designed to facilitate multi-party collaboration while safeguarding user privacy. Its essence lies in sharing ML models rather than raw user data, achieving privacy preservation. Moreover, with the exponential growth of users and their data, the transmission and storage of vast amounts of raw data pose significant challenges for communication channels and server storage. FL can also be perceived as a form of knowledge distillation, distilling knowledge from raw data into model parameters to alleviate communication overhead. In line with the presence or absence of a server for coordination, management, and aggregation, FL can be categorized into two frameworks: centralized FL (CFL) and decentralized FL (DFL) [3]. Initially proposed by Google researchers and deployed in the Google keyboard for cooperative learning in keyboard input recommendation models [4], FL has found extensive applications in numerous sectors, such as healthcare, mobile services, and intelligent transportation systems, facilitating collaboration amongst widely distributed edge devices or institutions.
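As a minimal illustration of the CFL paradigm described above, the following sketch shows a FedAvg-style aggregation step in which clients share only model parameters and the server averages them weighted by local data volume (an assumed toy setup, not tied to any specific FL system):

```python
import numpy as np

def server_aggregate(client_params, n_samples):
    """Weighted average of client parameter vectors (FedAvg-style)."""
    weights = np.asarray(n_samples, dtype=float)
    weights /= weights.sum()
    return sum(w * p for w, p in zip(weights, client_params))

# Three clients share parameters (never raw data); the third holds more data.
clients = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
sizes = [10, 10, 20]
global_model = server_aggregate(clients, sizes)
print(global_model)  # [3.5 4.5]
```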
FL presents a powerful approach for mitigating privacy concerns inherent in collaborative ML. However, digital ethical concerns extending beyond privacy are often overlooked [5], especially within the context of the human-centric IoT. Notably, most existing research inadequately addresses ethical considerations from the narrow perspective of the client side. For example, users of applications like Google Keyboard may remain oblivious to or unconcerned about the underlying FL algorithms. Their primary concern is that they are contributing their model but not receiving highly accurate personalized recommendations in return. This disparity in expectations can breed disappointment and potentially lead to a discontinuation of use. Consequently, human emotions may emerge as a vital factor in ensuring the continuity and longevity of FL frameworks in these contexts.
In this paper, we present a discourse on the digital ethics issues arising within both CFL and DFL frameworks for human-centric IoT applications, as depicted in Fig. 1. We illustrate the FL lifeline in Fig. 2, encompassing two trajectories, namely human-centric IoT and the digital ethics of FL. Apart from user privacy, people are generally concerned about fairness, interpretability, accountability, transparency, and other aspects. Additionally, issues related to user management, incentives, penalties, continuity, and compatibility with new users are important considerations in FL systems. In addition to pursuing higher performance and convergence in ML and optimizing communication networks, there is also a growing interest in the social, psychological, and economic aspects of FL, among others.

Fig. 1: Human-centric Internet of Things (IoT) applications within the (a) centralized federated learning (CFL) and (b) decentralized federated learning (DFL) frameworks.
The organization of this paper is as follows: First, we provide an in-depth examination of the definitions and perspectives of clients and the server, as well as the underlying reasons for the game dynamic relationship that arises between the CFL and DFL frameworks (Sec. II). The discrepancies, limitations, and information asymmetry between clients and the server, especially the fundamental difference in their objectives, inevitably give rise to a game dynamic (Sec. III). Subsequently, the resultant trust issues emerging from divergent objectives appear specifically as client skepticism towards the fairness of the FL framework (Sec. IV). Notably, the difference in perspective also leads to varied definitions of fairness between clients and the server. Adjacent to the issue of fairness is the problem of incentive mechanisms for clients (Sec. V). Beyond the extensively researched server-led incentive mechanisms, we discuss the potential for a reputation system, established by the client community, to become one of the primary mechanisms in DFL. Alongside fairness and incentives, we also touch upon the continuity of FL's development and updates (Sec. VI). Based on these four properties (i.e., game dynamics, fairness, incentives, and continuity), we proceed to discuss opportunities to foster the continuous, active, and positive development of FL (Sec. VII). Finally, we draw conclusions from this paper (Sec. VIII).
## II Variance of Perspectives
Within FL, different roles possess distinct perspectives and varied levels of knowledge. The core of the game dynamics in FL stems from the differences in clients' contributions (e.g., the volume of raw data), the learning process, and the information asymmetry among participants.
### _Omniscient (Authors' and Readers') Perspective_
Currently, a significant portion of research papers on FL tends to overlook the information asymmetry between clients and servers. They often adopt an idealized perspective, optimizing FL based on the assumption of complete and perfect knowledge. These design methods, founded on the notion of omniscient information, fail to address the practical challenges that arise from limited information exchange, client data heterogeneity, and potential trust issues. Recognizing and considering the scenarios of information asymmetry are crucial for developing effective FL systems.
### _Server's Perspective_
In CFL, the role of the server is to receive model parameters from all clients, aggregate them, and then redistribute the aggregated model. The server, however, remains oblivious to how clients collect data, train models, and handle the post-processing of models. Some CFL frameworks make presumptions that the server is privy to more extensive metadata from clients. For example, in the case of FedAvg, each client not only sends their model parameters but also transmits the volume of their local raw data. This additional metadata allows the server to perform weighted averaging. Therefore, in CFL, the resources or perspectives available to the server can be summarized as previously aggregated models, current and past models from clients, and other metadata that clients are asked to send, such as volume of raw data, performance on local test sets, losses, training epochs, etc.
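The weighted averaging performed by the server can be sketched as follows. This is a minimal illustration in which models are plain parameter lists; the function name and data layout are our own rather than taken from any particular FL library.

```python
def fedavg_aggregate(client_params, client_sample_counts):
    """FedAvg-style aggregation: average client parameter vectors,
    weighting each client by its reported volume of local raw data."""
    total = sum(client_sample_counts)
    aggregated = [0.0] * len(client_params[0])
    for params, n in zip(client_params, client_sample_counts):
        weight = n / total  # clients with more raw data get a larger voice
        for i, p in enumerate(params):
            aggregated[i] += weight * p
    return aggregated

# Two clients; the second reports three times as much local data.
global_model = fedavg_aggregate([[1.0, 2.0], [5.0, 6.0]], [100, 300])
# → [4.0, 5.0]
```

Note that the sample counts used as weights are exactly the kind of extra metadata the server must ask clients to transmit alongside their model parameters.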
Fig. 2: Lifeline of digital ethics and the human-centric Internet of Things (IoT) applications in federated learning (FL).
### _Clients' Perspective_
Considering the perspective of the clients, we discuss the contexts of both CFL and DFL frameworks, as illustrated in Fig. 1.
In CFL, clients are oblivious to each other's information, such as the number of clients, the volume of raw data each client holds, the learning process, and the model performance of each. In specific FL scenarios, for example, when healthcare institutions act as clients for FL, the components such as optimizers, loss functions, learning rates, and training epochs differ from client to client. Additionally, clients lack knowledge about the server-side details, like the aggregation method employed by the server. Hence, in a CFL framework, the only available information for each client is the aggregated model received from the server.
In DFL, clients directly share models without the coordination of a server. For example, in a fully connected network topology, every client within the DFL framework needs to transmit their own model parameters to all other clients, and reciprocally receive models from them. As a result, a certain framework, such as network topology, communication direction, frequency, and so forth, needs to be agreed upon among clients. They also need to be cognizant of certain information about other clients, like their addresses and ports. Additionally, some extra metadata, such as the volume of raw data, number of clients, model versions, etc., might also be transmitted as per the requirement. Therefore, in DFL, for the system to function correctly, clients need to establish a communication protocol among themselves and are required to directly disclose their local information to other clients.
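One synchronous round of such a fully connected DFL exchange can be sketched as below; the topology encoding (neighbor lists per client) and the function name are illustrative assumptions, not a standard protocol.

```python
def dfl_round(models, topology):
    """One synchronous DFL round: each client averages its own model
    with the models received from its neighbors in the agreed topology."""
    new_models = []
    for i, own in enumerate(models):
        received = [models[j] for j in topology[i]]
        group = [own] + received
        new_models.append([sum(vals) / len(group) for vals in zip(*group)])
    return new_models

# Fully connected topology among three clients: everyone hears everyone.
models = [[0.0], [3.0], [6.0]]
topology = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
updated = dfl_round(models, topology)
# → [[3.0], [3.0], [3.0]]; with full connectivity one round reaches consensus
```

With a sparser topology (e.g., a ring following a gossip protocol), several rounds would be needed before the local models converge.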
## III Game Dynamics
### _Why Do Game Dynamics Emerge?_
Compared with distributed learning, which assigns tasks to nodes or miners, FL inherently places more emphasis on the data-generating clients. Because FL is governed by these data-holding clients, who may be propelled by self-interest and greed, an inclination towards selfish behavior and a lack of trust in others can surface. This drives the interaction among clients into a game dynamics context, where each seeks to maximize personal gains.
This dynamic is primarily attributed to significant data heterogeneity among clients, where the server-aggregated model may not exhibit exceptional performance on all clients. Firstly, inter-group heterogeneity exists among clients. For example, professionals such as professors, doctors, and lawyers using Google Keyboard would require highly tailored recommendations due to their distinctive fields of expertise. Secondly, intragroup heterogeneity exists within each group of users, wherein each user's academic discipline, level of knowledge, years of expertise, and other factors vary. Lastly, system heterogeneity among clients arises from variations in IoT devices, which includes disparities among sensor and instrument manufacturers, differences in software versions of devices, and varying user operations.
### _Game Dynamics between Client and Server_
In CFL, clients and the server share similar yet fundamentally different objectives: clients aim to achieve the best-performing personalized model on their local dataset, while the server seeks to achieve the best average-performing general model across all clients. While this setup can be mutually beneficial, a game dynamics relationship emerges between the clients and the server due to the trade-off between personalized performance and generalization.
Personalized FL represents a potential solution to mitigating the game dynamics between clients and the server, as it seeks to satisfy the objectives of both parties [6]. There are two main strategies in this context: client compromise and server compromise. In the case of client compromise, a simplistic implementation would involve the client performing additional gradient descent upon receiving the generalized global model (i.e., meta-learning), thus achieving personalized expansion [7]. Conversely, in server compromise, a common practice is clustered FL. In this scenario, the server can create multiple aggregated global models based on the nature of the clients or clustering of client models, with even the potential for multiple servers to partition different regions for aggregation (i.e., hierarchical FL). Regardless of whether it is client compromise or server compromise, these strategies both entail additional overheads, such as computation, communication, and storage.
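The client-compromise strategy (extra local gradient descent on the received global model) can be sketched for a toy one-parameter linear model. The learning rate, step count, and function name below are illustrative choices, not part of any specific personalized-FL algorithm.

```python
def personalize(global_w, xs, ys, lr=0.1, steps=20):
    """Client compromise: a few local gradient-descent steps on the
    received global model (here a 1-D linear model y = w * x)."""
    w = global_w
    for _ in range(steps):
        # Mean-squared-error gradient over the client's local samples.
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad
    return w

# The generalized global model (w = 1.0) underfits this client's
# local data, which follows y = 2x; local fine-tuning closes the gap.
w_personal = personalize(1.0, [1.0, 2.0], [2.0, 4.0])
# → approximately 2.0
```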
### _Game Dynamics among Clients_
In DFL, despite the symmetric roles of clients in communication and knowledge propagation, competition can still emerge due to data heterogeneity and system heterogeneity. They all share the same but conflicting objective of striving for the best model performance on their respective local data sets. However, given the data heterogeneity among clients, it is more common that the models from other clients perform poorly on their local dataset [8].
Analogous to the two compromise strategies in CFL, DFL also incorporates similar personalized methods to enhance the performance of aggregated models on clients' local data sets. Apart from model post-processing methods such as meta-learning, DFL can reduce data heterogeneity among clients within a cluster by establishing different topological structures, mimicking the clustered FL. In particular, in real-world DFL scenarios, clients might prefer freely forming their clusters and establishing DFL network topologies among similar and familiar populations, such as city clusters and suburb clusters determined by geographical locations. These more flexible and customizable network topologies, although more challenging to establish initially, also confer more personalized and trustworthy DFL with communication cost advantages.
## IV Fairness
### _How to Define Fairness?_
A fairer system would enhance client trust, incentivize client contributions, reduce the potential for free-riding behaviors, attract more new client engagement, and bolster the long-term continuity of the system, among other benefits. Fairness has always been a central theme in human cooperation, and FL is no exception. Particularly in CFL, the method of aggregation has spurred discussions concerning fairness. The question of fairness in FL has indeed become a focal point of discourse. It involves considering whether the FL framework should prioritize the majority of users and clients with a larger number of samples, or whether it should also take into account clients with fewer samples that may have lower representativeness [9]. Furthermore, the perspectives of both clients and the server may also sway their interpretations of fairness. Clients may not have full visibility into the server's aggregation algorithm, nor comprehend the performance of the aggregated model across different clients.
### _Fairness of Server_
From the perspective of the server, its objective is to pursue a generalized model that maximizes the overall average performance of all clients. Driven by this objective and the pursuit of generalization in FL, the server's concept of fairness often tends to favor clients with a greater influence or voice. In the classic FedAvg algorithm, the server conducts a weighted averaging aggregation based on the sample number of each client. This approach seems fair because the sample size can, to some extent, reflect the performance and credibility of the model, and can be seen as a reward for clients with more samples, as the aggregated model is more likely to be biased towards them. However, for those underrepresented clients, the performance of the aggregated model might be unsatisfactory. Furthermore, the dominance of large sample clients could lead to low sample diversity and cause the aggregated model to lose its generalization capability. On the contrary, an FL framework involving non-weighted averaging during aggregation might demotivate clients with large sample sizes, subsequently diminishing system performance and continuity. Hence, from the server's standpoint, the conception of fairness remains a topic open to debate.
### _Fairness of Client_
Conversely, from the client's perspective, their interpretation of fairness tends to be simpler. This is primarily due to the likelihood that they are either unaware of or unconcerned with the server's aggregation algorithm and the performance of the aggregated model on other clients. Hence, within the FL framework, clients would consider the system fair from a standpoint of individual fairness, provided the model maintains acceptable performance locally. A noteworthy example is Google Keyboard, where users contribute local model parameters in their usage, and in return, benefit from personalized recommendations. Interestingly, these recommendations from Google Keyboard are not necessarily completely accurate. As long as the output is within the user's range of acceptability, the application can maintain its advantage relative to non-personalized keyboard applications. Of course, it is crucial to note that the level of acceptable performance may vary according to individual clients' requirements and should not be generalized. When using Google Keyboard, users are often oblivious to or indifferent towards the server's aggregation algorithm and have minimal or no knowledge of the model's performance among other users. Users are likely to be self-interested, prioritizing their user experience without considering any factors related to others, i.e., a non-cooperative game scenario.
## V Incentives
### _Incentives Driven by Server_
As the owner, leader, and manager of FL frameworks, the server typically aspires for its framework to undergo large-scale, active, positive, and continuable development. Thus, how the server employs incentive strategies to encourage clients to report their models, metadata, contributions, and even flaws in a rational, honest, and proactive manner remains an unresolved issue. The right to use the aggregated model itself serves as a form of reward (passive incentive), and current incentive strategies also contemplate offering additional rewards disseminated by the server to motivate clients (active incentives). The client contributions in these incentive strategies can follow economic principles, such as game theory, auction, contract, matching theory, and so forth [10]. The Stackelberg game, in particular, has garnered considerable attention due to its alignment with the behaviors of the server and clients in a CFL setting. Besides these active incentives, punitive incentives may also serve as a potential strategy. For example, the right to use the aggregated model could be revoked if a client's contribution does not meet expectations.
### _Incentives Driven by Client Community_
Within the context of DFL, the absence of server coordination and the customizability of diverse network topologies render the incentive problem more variable and challenging [11]. On the one hand, there is no server to generate and distribute rewards, while on the other hand, calculating client contributions is especially difficult due to mutual distrust among clients. Therefore, certain passive incentive strategies may become more effective and prevalent than active incentives. On one side, clients can acquire the right to use the models by participating in the DFL community. Simultaneously, due to the factors of information asymmetry and mutual invisibility of information among clients, they are unaware of the size of each other's contributions, such as the volume of raw data, training epochs, optimization results, etc. Consequently, they might be more inclined to share models imbued with local knowledge in exchange for other clients' model updates. The motivation here is to garner as many resources as possible from the client community, albeit at the expense of disclosing local resources. We can draw inspiration from altruistic contribution behaviors observed in human societies, such as open-sourcing on Github, answering questions on Stack Overflow, voluntarily performing peer reviews, etc. While free-riding attacks (where some users garner knowledge from others without contributing themselves) are inevitable, the influence of reputation and prestige can nonetheless maintain a virtuous cycle within the community [12].
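A community-maintained reputation score of the kind suggested above could take many forms; one minimal sketch is an exponential moving average over observed contribution behavior. The neutral starting score and decay factor below are arbitrary illustrative choices.

```python
def update_reputation(reputation, client_id, contributed, decay=0.9):
    """Update a client's community reputation: it rises when the client
    shares a model update and decays towards zero for free-riders."""
    old = reputation.get(client_id, 0.5)  # newcomers start at a neutral score
    signal = 1.0 if contributed else 0.0
    reputation[client_id] = decay * old + (1 - decay) * signal
    return reputation[client_id]

reputation = {}
for _ in range(10):
    update_reputation(reputation, "alice", contributed=True)
    update_reputation(reputation, "bob", contributed=False)  # free-riding
# reputation["alice"] drifts towards 1.0, reputation["bob"] towards 0.0
```

Clients could then prioritize exchanging updates with high-reputation peers, making sustained free-riding progressively less rewarding within the community.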
## VI Continuity
Continuity is a critical feature for the survival, revenue generation, and expansion of any application, system, or framework. In terms of FL, continuity signifies the pause, elimination, and reactivation of inactive clients, the continual, active, and voluntary updates from current clients, and the willingness, eligibility, and data diversity of a large number of prospective clients.
### _Continuable Development of Server_
From the server's perspective, continuable development necessitates addressing and responding to the needs of these three classes of clients - inactive, current, and potential - while also considering the maintenance of different versions of the model to prevent catastrophic forgetting. More specifically, due to the continuous generation of new data by clients in the real world, particularly IoT devices, the local model updates of clients are typically based on the latest data. Although new models are evidently more compelling due to factors such as scenario updates, user utilization, and concept drift, old versions of the models do not entirely lose their contributions. A potential example could be an application using IoT devices, such as a smartwatch that monitors user's ECG patterns. The ECG readings of users are likely to differ between weekdays and weekends, thus the models derived from weekend data might warrant individual storage. In practice for FL, while the server is aggregating the current versions of local models, it also incorporates previous versions with appropriate weighting. Furthermore, clients are granted the ability to trace back and retrieve prior versions of the model at any time. This feature serves as a safeguard against potential instability in client performance due to model updates.
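The version-aware aggregation described above, blending the current aggregate with stored earlier versions, can be sketched as follows; the mixing weight and function name are illustrative assumptions rather than a prescribed scheme.

```python
def aggregate_with_history(current, history, history_weight=0.2):
    """Blend the freshly aggregated model with stored previous versions,
    guarding against catastrophic forgetting of older data regimes
    (e.g., the weekend-ECG models in the smartwatch example)."""
    if not history:
        return list(current)
    past_avg = [sum(vals) / len(history) for vals in zip(*history)]
    return [(1 - history_weight) * c + history_weight * p
            for c, p in zip(current, past_avg)]

# Current aggregate pulled slightly towards two archived versions.
blended = aggregate_with_history([10.0], [[0.0], [2.0]])
# past average is 1.0, so the blend is 0.8 * 10.0 + 0.2 * 1.0 = 8.2
```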
### _Continuable Update of Client_
In the context of CFL, clients strive for long-term stability, rapid iterations, and efficient updates of the aggregated models from the server. Thus, they may work hard to deploy server-updated models at the earliest opportunity to achieve enhanced performance and user experience. Beyond their expectations from the server, under rational circumstances, clients might also attempt to report their model parameters to the server as rapidly, thoroughly, and accurately as possible, to ensure their models are significantly considered during the server's aggregation process. This is because the server cannot indefinitely wait for all clients to upload their models. Therefore, in a rational state, the behavior of client updates is balanced between the long-term nature of data collection and the rapidity of model updates.
In the scenario of DFL, clients within the community may voluntarily identify, denounce, and report malicious clients performing adversarial attacks (e.g., model poisoning) in order to protect the community, given that this relates to their own interests. This is because the incorporation of models from these malicious clients into the FL process could potentially harm their interests. Clients might also proactively share their models with other clients, establishing a good reputation, so that other clients will be inclined to promptly share their model updates in return. One potential concern is that the DFL client population may exhibit exclusionary tendencies. Specifically, the mistrust towards new clients and the uncertainty brought about by their models, especially within smaller communities, can be quite pronounced. This may further hinder the continuity and growth of such small-scale communities.
## VII Opportunities
### _Interplay of Game Dynamics, Fairness, Incentives, and Continuity_
The issues of game dynamics, fairness, incentive mechanisms, and continuity in FL are interrelated and mutually impactful. For example, if an FL framework could perfectly achieve the objectives of all clients, such as Google Keyboard ideally meeting user expectations, users would naturally diminish their concerns about fairness. A fairness-aware strategy can also be considered as an incentive mechanism where clients contributing more are rewarded proportionally. Taking FedAvg as an example, clients might make great efforts to contribute as much local data as possible to the model training, to gain a more significant voice during the server's model aggregation process. Therefore, fairness-aware strategies of weighted aggregation indirectly incentivize clients to make more contributions. Concurrently, this enhances the continuity of the FL framework, as each client will make an effort to collect data, train models, and participate in FL updates promptly to gain rewards. Under such continuable conditions, the game dynamics within the FL framework are also mitigated, as each client generates a steady stream of data resources, enabling the training of more robust models. Therefore, for the issues of game dynamics, fairness, incentive mechanisms, and continuity, both parallel multi-solution approaches and single-solution breakthroughs are viable options.
### _Integration with Sociology and Ethology_
FL essentially represents a form of knowledge propagation, a method that is already widespread, diverse, and matured within both human societies and animal behaviors [13]. For example, the instructive paradigm between a teacher and students can offer insights to CFL, resonating with the architecture of a large model within the server and smaller models among clients utilized in federated knowledge distillation [14]. Intriguingly, a similar hierarchical structure is observed in the field of ethology, particularly within ant colonies or bee hives. Here, directives (models) from the queen ant or queen bee (server) are disseminated to the worker ants or bees (clients), offering a clear instance of role distribution.
DFL is increasingly becoming a focus for researchers, due to its capacity to circumvent limitations imposed by server dependency, and also its reflection of more prevalent modes of knowledge dissemination among clients within human societies and ethology. For example, in the context of conferences, speakers (clients) present their research findings (models) to all attendees (other clients), which can be viewed as a manifestation of fully connected DFL. In group collaborations, each team member (client) contributes a part towards a common goal (model), mirroring the concept of split DFL. Interestingly, similar decentralized patterns of knowledge dissemination are observable in animal behavior. For example, within a school of fish, individual fish (clients) only communicate with their neighbors (gossip protocol), but when danger arises, the alert signal (model) spreads across the entire school (other clients), promoting swift collective evasion [15].
Therefore, incorporating insights from sociology and ethology can effectively enhance FL organizational structures that are centered on IoT users, better aligning with the psychological expectations of users as clients.
### _Deployment Optimization in Federated Learning_
Current research mainly centers on the optimization of training and communication within FL, largely overlooking the strategy and timing for deploying the base model in FL on client devices. Specifically, in classical algorithms such as FedAvg, the fundamental operational cycle entails download \(\rightarrow\) train \(\rightarrow\) upload \(\rightarrow\) download \(\rightarrow\) deploy. In contrast, personalized algorithms, such as meta-learning, follow a cycle of download \(\rightarrow\) train \(\rightarrow\) upload \(\rightarrow\) download \(\rightarrow\) train \(\rightarrow\) deploy. Therefore, it's evident that the deployment sequence within the communication and training processes significantly affects the performance of the model. With this in mind, we propose considering two distinct deployment sequences, facilitating the deployment of either generalized or personalized models, contingent on the specific use case:
(a) Deploy post-download for a generalized model: The model deployed is the aggregated one, offering wider generalization capabilities. However, it may not necessarily deliver optimal performance on local datasets.

(b) Deploy post-training for a personalized model: The model deployed is the one locally trained on the aggregated model, offering a higher degree of personalization and, subsequently, enhancing confidence in the model's performance.
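The two deployment sequences can be sketched from a client's viewpoint; the callback parameters and the `order` values below are our own naming for illustration, not an established API.

```python
def run_round(client_data, download, train, upload, deploy, order):
    """One FL round on a client, with the deployment point selected by
    `order`: 'post-download' deploys the generalized aggregated model,
    'post-training' deploys the locally personalized model."""
    model = download()                  # fetch the aggregated global model
    if order == "post-download":
        deploy(model)                   # generalized model goes live first
    local = train(model, client_data)   # local fine-tuning
    upload(local)                       # report back for the next aggregation
    if order == "post-training":
        deploy(local)                   # personalized model goes live

deployed = []
run_round(
    client_data=[1, 2, 3],
    download=lambda: "global-v7",
    train=lambda model, data: model + "+personalized",
    upload=lambda model: None,
    deploy=deployed.append,
    order="post-training",
)
# deployed == ["global-v7+personalized"]
```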
Beyond the influence of the order of deployment on performance within the FL process, in the real world, due to the sequential and time-sensitive nature of data collection from IoT devices, excessive waiting for responses from the server or other clients may degrade model performance. Therefore, deployment optimization in FL, as an issue rooted in real-world applications, builds upon the foundational capacities of training and communication to further enhance FL's performance, credibility, and operational efficiency.
## VIII Conclusion
In this paper, we explore and discuss FL in the context of human-centric IoT applications, with a particular emphasis on the advancements made by FL algorithms in addressing human privacy concerns, as well as other digital ethical dilemmas. We take into account perspectives from three distinct roles: the omniscient, clients, and the server, with a detailed analysis of both the CFL and DFL frameworks. Each of these roles, characterized by varying objectives and information asymmetries, gives rise to game dynamics and trust crises, which in turn incite debates around fairness, incentives, and continuity. This paper aims to highlight the prevalent disregard for human digital ethics in the current FL paradigm and to inspire the future design of FL frameworks from sociological, psychological, and economic perspectives.
|
2303.03040 | Forward modelling of brightness variations in Sun-like stars -- II. Light curves and variability | N.-E. Nèmec, A. I. Shapiro, E. Işik, S. K. Solanki, T. Reinhold | 2023-03-06T11:11:51Z | http://arxiv.org/abs/2303.03040v1

# Forward modelling of brightness variations in Sun-like stars
###### Abstract
Context: The amplitude and morphology of light curves of solar-like stars change substantially with increasing rotation rate: brightness variations get amplified and become more regular, which has so far not been explained.
Aims: We develop a modelling approach for calculating brightness variations of stars with various rotation rates and use it to explain observed trends in stellar photometric variability.
Methods: We combine numerical simulations of magnetic Flux Emergence And Transport (FEAT) with a model for stellar brightness variability to calculate synthetic light curves of stars as observed by the _Kepler_ telescope. We compute the distribution of magnetic flux on the stellar surface for various rotation rates and degrees of active-region nesting (i.e., the tendency of active regions to emerge in the vicinity of recently emerged ones). Using the resulting maps of the magnetic flux, we compute the rotational variability of our simulated stellar light curves as a function of rotation rate and nesting of magnetic features and compare our calculations to _Kepler_ observations.
Results: We show that both rotation rate and degree of nesting have a strong impact on the amplitude and morphology of stellar light curves. In order to explain the variability of the bulk of _Kepler_ targets with known rotation rates, we need to increase the degree of nesting to values much larger than on the Sun.
Conclusions: The suggested increase of nesting with the rotation rate can provide clues to the flux emergence process for high levels of stellar activity.
## 1 Introduction
Planet hunting missions such as the Convection, Rotation and planetary Transits (CoRoT; Baglin et al., 2006; Borde et al., 2003), _Kepler_ (Borucki et al., 2010) and the Transiting Exoplanet Survey Satellite (TESS, Ricker et al., 2014) allow studying stellar brightness variations caused by transits of magnetic features as stars rotate. Such brightness variations were discovered for the Sun almost half a century ago (Willson et al., 1981; Willson and Hudson, 1981). Since then the models of solar brightness variations have matured and are now not only capable of accurately reproducing most of the available measurements (see Solanki et al., 2013; Ermolli et al., 2013, for reviews) but they also provide a starting point for explaining a plethora of stellar photometric data (see, e.g. Lagrange et al., 2010; Meunier and Lagrange, 2013; Meunier et al., 2015; Borgniet et al., 2015; Nemec et al., 2020).
Recently Nemec et al. (2020) (hereafter N20b) have combined the Spectral And Total Irradiance Reconstruction model (SATIRE, Fligge et al., 2000; Krivova et al., 2003) together with a surface flux transport model (SFTM, Cameron et al., 2010) to compute the power spectra of solar brightness variations as they would be measured at different inclinations, i.e. the angle between solar rotation axis and direction to the observer. These calculations helped to remove a number of important observational biases when comparing solar variability to that of other stars (Nemec et al., 2020; Reinhold et al., 2020). Notably, by employing the N20b model and using the approach developed by Witzke et al. (2018, 2020) to extend it to stars with non-solar metallicities, Reinhold et al. (2021) have found that rotation periods of a majority of the G-dwarfs with near-solar age remain undetected. These results provided an explanation for the discrepancy between the predictions of the number of Sun-like rotators in the _Kepler_ field and the actual number of detected ones (see van Saders et al., 2019).
In this work, we make an additional important extension of the solar paradigm for modelling variability of stars rotating faster than the Sun. Namely, we combine the solar variability model of N20b, which was extensively tested against solar irradiance measurements, with the modelling framework for computing the surface distribution of magnetic flux on stars with solar fundamental parameters but various rotation rates developed by Isik et al. (2018) (hereafter Paper I). The Flux Emergence And Transport (FEAT) model presented in Paper I involves physics-based calculations of the emergence latitudes and tilt angles of bipolar magnetic regions (BMRs) for given stellar rotation rates, and the subsequent modelling of the evolution of the radial magnetic flux at the photosphere via an SFTM. The FEAT model is self-consistently able to reproduce the observations of polar spots that appear on stars with rotation periods below about 3 days (see, e.g., Jeffers et al., 2002; Marsden et al., 2004; Jarvinen et al., 2006; Waite et al., 2015). Recently, the FEAT model was successfully applied to the young solar analogue EK Dra (Senavci et al., 2021), to explain the Doppler images that indicated near-polar spots and extended spot patterns towards low latitudes. In the present work, we extend the model of Paper I to calculate brightness variations of stars with a variety of rotation
rates observed at various inclinations. We also allow for different degrees and modes of nesting of magnetic features on their surfaces (i.e. the tendency of active regions to emerge in the vicinity of recently emerged regions). We compare our results to the observational trends found by McQuillan et al. (2014) and Santos et al. (2021) in _Kepler_ data and propose a possible explanation for these trends.
In Sec. 2 we briefly describe the FEAT model (see Isik et al., 2018, for a more detailed description) and explain how this model is extended for calculating stellar brightness variability. In Sec. 3 we discuss the resulting light curves (LCs). These LCs are then used to calculate the amplitude of the variability, which we compare to observations in Sec. 4. We discuss our findings within the frameworks of various other recent studies in Sec. 5, before we present our conclusions in Sec. 6.
## 2 Model
Our model consists of two building blocks: calculations of the surface distribution of magnetic features and the subsequent calculations of their effect on stellar brightness.
### Flux emergence and transport
To simulate the magnetic flux emergence on other stars, we utilise the FEAT model (Paper I).
In essence, the FEAT model extends the pattern of emergence and evolution of the magnetic fields observed on the surface of the Sun to stars rotating faster than the Sun (and, thus, more active than the Sun). This is done in six steps (see Paper I, Fig. B1):
1. We adopt the synthetic record of solar active-region emergences during cycle 22 from Jiang et al. (2011). While this approach cannot be used to reconstruct the solar irradiance on a specific day, N20b have shown that the Jiang et al. (2011) records allow the overall pattern of solar variability to be reproduced. Cycle 22 was chosen as it represents a cycle of moderate to strong activity level, making it suitable for modelling the most active stars.
2. We define the time-dependent emergence rate of BMRs on a star as \(S_{\star}(t)=S_{\odot}(t)\cdot\tilde{s}\), where \(S_{\star}\) and \(S_{\odot}\) are stellar and solar emergence rates, respectively, and \(\tilde{s}\) is a scaling factor. To reflect the observed rotation-activity relation, we followed Paper I and took \(\tilde{s}=\hat{\omega}\equiv\Omega_{\star}/\Omega_{\odot}\), where \(\Omega_{\star}\) is the rotation rate of a star and \(\Omega_{\odot}\) is the solar rotation rate.
3. The resulting input record of emergences is mapped down to the base of the convection zone using thin flux tube simulations (Schüssler et al., 1996; Isik et al., 2018) for the solar rotation rate \(\Omega_{\odot}\).
4. The record of emergences at the base of the convective zone is mapped back to the surface, but in contrast to Step 3 using thin flux tube simulations for a star with a given rotation rate \(\Omega_{\star}\). These simulations follow the rise of the flux tubes throughout the convection zone up to the surface of the star, where they emerge in the form of a loop with two footpoints of opposite polarity (i.e. BMRs). An important feature of these simulations is that they account for the Coriolis effect, which pushes rising flux tubes towards higher latitudes for higher rotation rates (Schüssler & Solanki, 1992).
5. Assuming that the size distribution of BMRs does not change with the rotation rate, we modify the emergence locations of flux loops, to simulate different degrees and modes of active-region nesting (see a similar approach by Isik et al., 2020), motivated by the observed activity complexes on the Sun. The details of this step will be described later in this Section.
6. In this last step, the calculated locations of emergence and the tilt angles of BMRs are fed into the surface flux transport model (SFTM). The SFTM describes the passive transport of magnetic fields of the BMRs on the surface of stars, by taking into account both large-scale flows (meridional flow and differential rotation) and the diffusion of the fields (Cameron et al., 2010; Isik et al., 2018).
In the present work, we employ the steps outlined above to obtain the surface distribution for stars with 1, 2, 4, and 8\(\Omega_{\odot}\). The emergence patterns of the BMRs (i.e. the input record) for 1\(\Omega_{\odot}\) and 8\(\Omega_{\odot}\) are shown in Fig. 1. The main purpose of this study is to model stellar variability on the rotational timescale. We follow the approach of Paper I and take the solar temporal profile of activity (even though it is unlikely to be observed on the faster rotating and more active stars; we discuss this choice later in this paper), but we model the light curves only over the four-year window centred at the activity maximum (marked in Fig. 1). Our light curves (and the resulting variability amplitudes) correspond to a representative stellar activity level scaled from the solar maximum. Hence they are largely unaffected by the temporal profile of the underlying activity cycle.
In Fig. 2 we present snapshots of the magnetic field distribution on the surface of stars with 1\(\Omega_{\odot}\) (top row) and 8\(\Omega_{\odot}\) (bottom row) for various inclinations. The figure nicely demonstrates the difference in the emergence frequency, as the 8\(\Omega_{\odot}\) star is covered by more BMRs. The difference in the latitudinal distribution is also striking. This is a result of Step 4 of the FEAT approach, which leads to the formation of strong polar fields for the fast rotator depicted here.
Additionally, the emergence pattern of BMRs is altered by introducing active-region nesting. This effect is observed on the Sun, albeit to a small degree (see, e.g., Pojoga and Cudnik, 2002, who estimated that 40-50% of solar active regions can be associated with nests), and it was recently suggested that nesting can be substantially stronger on highly variable stars with near-solar rotation rates (Isik et al., 2020). Following Isik et al. (2020), we introduce two modes of nesting (Step 5): the active-longitude (AL) and the free-nesting (FN) modes. In both modes, we define a degree of nesting, \(p\), which gives the probability of a given BMR being part of a nest. In the AL mode, the nests are centred around two fixed longitudes with a separation of 180 degrees. If a BMR is drawn to belong to a nest, then it is shifted to one
Figure 1: Butterfly diagrams of BMR emergence for 1\(\Omega_{\odot}\) (left panel) and 8\(\Omega_{\odot}\) (right panel). The colour-bar gives the longitudes of emergence and the vertical dashed lines indicate the 4 years of the 11-year cycle considered in our brightness variation calculations. No nesting is included here.
of the two ALs (with equal probability). Its new longitude is drawn from a 1D normal distribution around the AL (with a standard deviation of \(10^{\circ}\)), whereas the latitude is kept unchanged. This ensures that the latitudes of emergence still follow the general trends of the solar butterfly diagram. In the FN mode, nesting occurs around central latitudes and longitudes, which are randomly picked from the unaltered (i.e. non-nested) emergence record. If a BMR is drawn to belong to a nest, then its emergence location is moved to a random location drawn from a 2D normal distribution centred on the nest, with standard deviations of \(2^{\circ}\) in latitude and \(3^{\circ}\) in longitude. For more details on the definition of nesting and the choice of parameters we refer to Isik et al. (2018) for the FN mode employed here and to Isik et al. (2020) for the AL mode.
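The two relocation rules above can be sketched as follows. This is a minimal NumPy illustration, not the actual FEAT implementation: the toy emergence record and seed are our own, and here each nested BMR draws its own nest centre, whereas the real model groups many BMRs per nest.

```python
import numpy as np

rng = np.random.default_rng(42)

def apply_al_nesting(lats, lons, p, al_lons=(0.0, 180.0), sigma_lon=10.0):
    """AL mode: with probability p, move a BMR's longitude to a 1D normal
    distribution around one of two active longitudes 180 deg apart;
    latitudes are kept unchanged."""
    lons = lons.copy()
    nested = rng.random(lons.size) < p
    centre = rng.choice(al_lons, size=lons.size)      # pick an AL with equal probability
    new_lon = rng.normal(centre, sigma_lon) % 360.0
    lons[nested] = new_lon[nested]
    return lats, lons

def apply_free_nesting(lats, lons, p, sigma_lat=2.0, sigma_lon=3.0):
    """FN mode: with probability p, move a BMR to a 2D normal distribution
    around a nest centre drawn from the unaltered emergence record."""
    lats, lons = lats.copy(), lons.copy()
    nested = rng.random(lons.size) < p
    idx = rng.integers(0, lons.size, size=lons.size)  # nest centres from the record
    lats[nested] = rng.normal(lats[idx], sigma_lat)[nested]
    lons[nested] = rng.normal(lons[idx], sigma_lon)[nested] % 360.0
    return lats, lons

# toy emergence record: 500 BMRs with butterfly-like latitudes
lat0 = rng.normal(0.0, 15.0, 500)
lon0 = rng.uniform(0.0, 360.0, 500)
lat_al, lon_al = apply_al_nesting(lat0, lon0, p=0.7)
lat_fn, lon_fn = apply_free_nesting(lat0, lon0, p=0.7)
```

Note that in the AL mode only longitudes change, which is why AL nesting shows up most clearly in time-longitude diagrams, while FN nesting also reshuffles latitudes.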
We show a comparison between the two modes of nesting for \(1\Omega_{\odot}\) in Fig. 3. In order to distinguish more easily between the two modes, we show both time-latitude (left panels) and time-longitude (right panels) diagrams, with the colourbars indicating longitude and latitude, respectively. The AL nesting is easier to identify in the time-longitude diagrams, whereas the FN nesting is easier to spot in the time-latitude diagrams.
### Defining the area coverages
The output of the FEAT model consists of full stellar surface maps (\(360^{\circ}\) by \(180^{\circ}\)) of magnetic fields, with a resolution of \(1^{\circ}\times 1^{\circ}\) per pixel. As our brightness calculations rely on the area coverages of spots and faculae, we first have to convert the magnetic field maps into surface distributions of spots and faculae. N20b did this by following the evolution of sunspots after their emergence to calculate the coverage of the solar disk by spots. While this approach proved to be accurate for calculating solar variability (see also Dasi-Espuig et al. 2014), it did not account for spots which appear due to the superposition of magnetic flux from different active regions. In other words, the main assumption of this approach was that all spots on the stellar surface have emerged _as spots_ within the corresponding active region. While this is a good assumption for the present Sun, it prohibits the formation of polar spots, which are observed on young and rapidly rotating G stars (Jeffers et al. 2002; Marsden et al. 2004; Jarvinen et al. 2008; Waite et al. 2015; Senavci et al. 2021). These polar spots most likely form via flux superposition. We therefore take a slightly different approach than N20b. We acknowledge that, apart from the intrinsic mechanism in the FEAT model, it is theoretically also possible to form polar spots by strongly increasing the rate of BMR emergence (Schrijver & Title 2001) or the meridional flow rate (Holzwarth et al. 2006), while leaving the emergence latitudes solar-like.
To calculate the spot area coverage of a given pixel of the synthetic magnetogram returned by FEAT, we define two thresholds: a lower cut-off, \(B_{min}\), and an upper saturation level, \(B_{max}\). The spot coverage of a given pixel is related to the field in the pixel as
\[\alpha_{s}^{m,n}=\begin{cases}0&\text{if}\quad|B_{mn}|<B_{min}\\ \dfrac{|B_{mn}|-B_{min}}{B_{max}-B_{min}}&\text{if}\quad B_{min}\leq|B_{mn}|<B_{max}\\ 1&\text{if}\quad|B_{mn}|\geq B_{max},\end{cases} \tag{1}\]
where \(\alpha_{s}^{m,n}\) is the spot filling factor of a pixel with coordinates \(m\) and \(n\) of the magnetogram, and \(|B_{mn}|\) is the absolute value of the field in this pixel.
In order to calculate the faculae area coverage in each pixel, we follow an approach similar to N20b by setting a saturation threshold \(B_{sat}\) (see also papers describing SATIRE, e.g., Fligge et al. 2000; Krivova et al. 2003). If a pixel is already partially covered by spots, we disregard it for the facular masking. If a given pixel is spot-free, we calculate the faculae area coverage following:
\[\alpha_{f}^{m,n}=\begin{cases}\dfrac{|B_{mn}|}{B_{sat}}&\text{if}\quad|B_{mn}|<B_{sat}\\ 1&\text{if}\quad|B_{mn}|\geq B_{sat}.\end{cases} \tag{2}\]
The values of the parameters are selected such that the current model returns the same rotational variability (see Sect. 2.4) for the solar case (i.e. \(\Omega_{\star}=\Omega_{\odot}\)) as the N20b model.
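Equations (1) and (2) amount to clipped linear ramps in \(|B_{mn}|\). A minimal sketch (not the actual model code), using the parameter values quoted in Sect. 2.4 (60, 700, and 250 G) and a made-up five-pixel magnetogram:

```python
import numpy as np

def spot_filling(B, B_min=60.0, B_max=700.0):
    """Eq. (1): spot filling factor per pixel from |B| (in G), rising
    linearly between the cut-off B_min and the saturation level B_max."""
    return np.clip((np.abs(B) - B_min) / (B_max - B_min), 0.0, 1.0)

def facular_filling(B, B_sat=250.0, alpha_spot=None):
    """Eq. (2): facular filling factor, linear up to B_sat; pixels already
    partially covered by spots are excluded from the facular mask."""
    alpha = np.clip(np.abs(B) / B_sat, 0.0, 1.0)
    if alpha_spot is not None:
        alpha[alpha_spot > 0] = 0.0
    return alpha

B = np.array([10.0, 60.0, 200.0, 380.0, 800.0])  # toy magnetogram pixels (G)
a_s = spot_filling(B)                  # [0, 0, 0.21875, 0.5, 1.0]
a_f = facular_filling(B, alpha_spot=a_s)
```

With these toy values, the last three pixels host spots and are therefore excluded from the facular mask, while the first two contribute only faculae.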
### Calculating the brightness variations
Using Eq. 1 and 2 we obtain maps of area coverages per pixel of the visible stellar disc. What part of the full surface (hence the disc) of a star is visible depends on the inclination between the stellar rotation axis and the line-of-sight to the observer. The spectral irradiance, \(S(t,\lambda)\) (i.e., the spectral stellar flux, normalized to 1 AU), where \(t\) is the time and \(\lambda\) the wavelength, is then
Figure 3: Butterfly diagrams of BMR emergence for \(1\Omega_{\odot}\), with \(p=0.7\) in the different nesting modes considered in this work. Top row: AL nesting, bottom row: FN nesting. Left panels are time-latitude diagrams with the colourbar indicating the longitudes, right panels are time-longitude diagrams, with the colourbar indicating the latitude.
Figure 2: **Snapshot of magnetic field distribution for \(1\Omega_{\odot}\) (top row) and \(8\Omega_{\odot}\) (bottom row) as returned by the FEAT model at various inclination angles around the maximum of the activity cycle. First column shows \(i=90^{\circ}\), second column \(i=57^{\circ}\), third column \(i=30^{\circ}\), and fourth column \(i=0^{\circ}\).**
calculated by summing up the pixel intensities weighted by the corresponding area coverages of the magnetic features:
\[S(t,\lambda)=S^{q}(\lambda)+\sum_{mn}\sum_{k}\left(I_{mn}^{k}(\lambda)-I_{mn}^{q}( \lambda)\right)\alpha_{mn}^{k}(t)\,\Delta\Omega_{mn}. \tag{3}\]
The summation is done over the pixels of the maps of the visible 2D stellar disc and the \(m\) and \(n\) indices are the pixel coordinates (longitude and latitude, respectively), \(\alpha_{mn}^{k}\) is the pixel (\(m\),\(n\)) coverage by magnetic feature \(k\) (in the present work faculae, umbra or penumbra), \(\Delta\Omega_{mn}\) is the solid angle of the area on the stellar disc corresponding to one pixel, as seen from the distance of 1 AU. \(I_{mn}^{k}\) is the intensity spectrum of magnetic feature \(k\) observed at the location corresponding to pixel (\(m\),\(n\)). We use the values computed by Unruh et al. (1999) with the radiative transfer code ATLAS9 (Kurucz, 1992; Castelli & Kurucz, 1994).
S\({}^{q}\) is the quiet-star irradiance, defined as
\[S^{q}(\lambda)=\sum_{mn}I_{mn}^{q}(\lambda)\Delta\Omega_{mn}. \tag{4}\]
Note that the solid angles of the pixels, as well as the corresponding intensity values depend on the vantage point. Hence \(S(t,\lambda)\) is sensitive to the stellar inclination. The calculations presented in this work are performed in the _Kepler_ passband, following
\[LC(t)=\int_{\lambda_{1}}^{\lambda_{2}}R(\lambda)S(\lambda,t)\frac{\lambda}{hc }\,d\lambda, \tag{5}\]
where \(\lambda_{1}\) and \(\lambda_{2}\) are the blue and red threshold wavelengths of the filter passband, \(R(\lambda)\) is the response function of the filter and \(S(\lambda,t)\) is the spectral irradiance at a given wavelength and time \(t\), \(h\) is the Planck constant, and \(c\) is the speed of light.
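Equations (3)-(5) can be sketched as follows. The grid sizes, intensities, coverages, and solid angles below are toy values (the actual model uses ATLAS9 intensity spectra and the _Kepler_ response function), and the passband integral is approximated by a simple sum over a uniform wavelength grid:

```python
import numpy as np

h, c = 6.62607015e-34, 2.99792458e8  # Planck constant, speed of light (SI)

def irradiance(I_q, I_k, alpha_k, dOmega):
    """Eqs. (3)-(4): disc-integrated spectral irradiance. I_q and the entries
    of I_k have shape (n_pix, n_wl); alpha_k maps feature -> coverage per pixel;
    dOmega holds the per-pixel solid angles seen from 1 AU."""
    S = (I_q * dOmega[:, None]).sum(axis=0)                    # quiet star, Eq. (4)
    for k, alpha in alpha_k.items():
        S = S + ((I_k[k] - I_q) * (alpha * dOmega)[:, None]).sum(axis=0)
    return S

def band_flux(wl, S, R):
    """Eq. (5): photon flux through a filter with response R,
    approximated on a uniform wavelength grid."""
    return np.sum(R * S * wl / (h * c)) * (wl[1] - wl[0])

# toy numbers: 4 pixels, 5 wavelengths, flat spectra and response
wl = np.linspace(450e-9, 850e-9, 5)
I_q = np.ones((4, wl.size))                      # quiet intensity (arbitrary units)
I_k = {"spot": 0.3 * I_q, "fac": 1.1 * I_q}      # toy contrasts, not ATLAS9 spectra
alpha_k = {"spot": np.array([0.2, 0.0, 0.0, 0.0]),
           "fac":  np.array([0.0, 0.5, 0.0, 0.0])}
dOmega = np.full(4, 1.7e-5)                      # per-pixel solid angles (sr)
S = irradiance(I_q, I_k, alpha_k, dOmega)
lc_point = band_flux(wl, S, np.ones(wl.size))    # one light-curve point
```

In the full model the pixel set, the solid angles, and the intensities all change with the vantage point, which is how the inclination dependence enters.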
### Defining the parameters of the model
As mentioned previously, we fix the parameters of the model introduced in Sect. 2.2 such that it returns the same level of rotational variability, represented via \(R_{var}\), for the solar case as the N20b model. N20b showed that their model is able to reproduce the observed brightness variations of the Sun. The N20b model and the model we develop here are both based on the SFTM, with similar underlying statistical BMR emergence records. We therefore use the N20b model as our reference in the present work. For this, we considered the four-year interval (indicated by the vertical dashed black lines in Fig. 1) around the maximum of the synthetic cycle.
We then split the time series into 90-day segments, detrended the new time series by their mean value, and calculated the difference between the extrema in each of the segments using the approach outlined above and that of N20b. We note that we directly consider the difference between the extrema instead of the differences between the 95th and 5th percentiles of sorted flux values, as is usually done in the literature with the more noisy _Kepler_ measurements (see, e.g., Basri et al., 2013).
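The rotational-variability measure used for the model LCs (90-day segments, mean-detrended, difference between extrema) can be sketched as follows; the sinusoidal toy light curve is our own stand-in:

```python
import numpy as np

def rvar_model(time, flux, window=90.0):
    """Split a light curve into 90-day segments, detrend each segment by its
    mean value, and return the mean difference between the extrema."""
    ranges = []
    t0 = time[0]
    while t0 < time[-1]:
        seg = flux[(time >= t0) & (time < t0 + window)]
        if seg.size > 1:
            seg = seg - seg.mean()              # detrend by the segment mean
            ranges.append(seg.max() - seg.min())
        t0 += window
    return float(np.mean(ranges))

# toy LC: pure rotational sinusoid at 6-hour cadence, amplitude 1000 ppm
t = np.arange(0.0, 360.0, 0.25)                 # days
f = 1e3 * np.sin(2 * np.pi * t / 25.0)          # ppm
rv = rvar_model(t, f)                           # peak-to-peak, ~2000 ppm
```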
We show a comparison between the \(R_{var}\) values for the solar case as returned by N20b and by the present model in Fig. 4. The best values for the parameters \(B_{min}\), \(B_{max}\), and \(B_{sat}\) were found to be 60, 700, and 250 G, respectively; they were chosen because they resulted in a slope of the linear regression close to unity (1.029) and a high \(r^{2}\) value (0.957). The mean rotational variability in the present model is comparable to that of the N20b model (1459.6 versus 1457.71 ppm). However, we note that the threshold approach used in this model favours spots over faculae. While this is not necessarily accurate for the solar rotator in the present work, it leads to the formation of polar spots and to spot-dominated variability for the faster rotating stars.
We present maps of the spot and facula distributions (following the description in Sect. 2.2 and the parameter choice presented just above) for a star with \(8\Omega_{\odot}\), as returned by FEAT for various inclinations and nesting degrees (see Sect. 2.1), in Figs. 5 and 6, respectively. Clearly visible in Fig. 5 are the polar spots.
## 3 Light curves
Using the model described in Sect. 2.1, we generated a synthetic 11-year-long cycle for each rotation rate considered in this work. We focus on the four years around the activity maximum for two reasons. Firstly, we aim to explain the upper envelope of the variability distribution of stars as a function of the rotation period (McQuillan et al., 2014). Secondly, we model a single activity cycle, without overlapping cycles preceding and following it. Modelling cycle overlap is beyond the present scope,
Figure 4: Comparison of \(R_{var}\) (in ppm) as returned by the N20b model and the present model. The black solid line gives the linear regression, whereas the blue dashed line is the 1-to-1 correspondence between the two models.
Figure 5: Spot area coverage per pixel for different nesting realisations at different viewing angles for the \(8\Omega_{\odot}\) case. First column shows \(i=90^{\circ}\), second column \(i=57^{\circ}\), third column \(i=30^{\circ}\), and fourth column \(i=0^{\circ}\). Top row is the non-nested case, middle row includes \(p=0.7\) in the free-nesting (FN) case, bottom row includes \(p=0.7\) in the active-longitude (AL) case.
and it would affect the activity level around cycle minima, which we thus exclude from the analysis.
We note that the LCs shown in this section are 180-days-long snippets of the full LCs and were simply chosen for demonstration.
Firstly, we consider the LCs of stars that are observed equator-on (\(i=90^{\circ}\)). Figure 7 displays the detrended LCs for the non-nested case (\(p=0\)) for different rotation rates. One can see that the faster the star rotates, the higher its amplitude of variability (mainly due to the shorter rotation periods; the lifetimes of the active regions change far less), which is a consequence of the activity-rotation scaling (see Step 2 in Sect. 2.1). The small bumps in the LCs, mostly visible in the \(\Omega_{\star}=1\Omega_{\odot}\) case, have two causes. Firstly, all BMRs emerge at their maximum size in the SFTM; since the emergence rate is generally low and not many other BMRs are present, this affects the LCs most severely in this low-activity case. Secondly, due to the nature of the threshold approach outlined in Sect. 2.2, flux superposition (for same-polarity encounters) and cancellation (for opposite-polarity encounters) can rapidly switch flux from being associated with faculae to spots (or spots to faculae). Similar effects will be visible for the solar rotator in the following plots for the very same reason.
Figure 7 shows that not only the amplitude but also the shape of the LCs strongly depends on the stellar rotation rate. For the case of solar rotation (\(\Omega_{\star}=1\Omega_{\odot}\), top panel of Fig. 7), most of the individual dips in the LCs correspond to transits of different active regions (since active regions evolve on timescales shorter than the solar rotation period). As active regions emerge randomly in time, the LCs appear quite irregular. In contrast, the LCs of the more rapidly rotating stars show gradually more regular patterns in brightness variations (see the lower three panels in Fig. 7). This is because active regions on such stars can survive several rotation periods. Furthermore, the large number of BMR emergences at mid- to high latitudes with large tilt angles leads to the formation of polar spots at about \(4\Omega_{\odot}\), with prominent polar spot caps being present for the \(8\Omega_{\odot}\) rotator (see Fig. 5). The formation of polar spots for stars at these rotation rates is consistent with Paper I and with Doppler-imaging observations. We note that the polar spots in our simulations turn out to be non-axisymmetric unipolar caps. Their overall structure is rather stable, because their decay is compensated by the magnetic flux coming from new emergences, as long as the activity level and the BMR polarity orientations are sustained. As a next step, we consider models with active-region nesting.
We first consider the effect of the active-longitude nesting (see Sect. 2.1). Figure 8 shows LCs synthesised with AL nesting of 70% (i.e. \(p=0.7\)). The overall shape of the LC for \(1\Omega_{\odot}\) is still rather irregular, compared to the faster rotators, due to the low emergence frequency. With increasing rotation rate, dips related to BMR transits not only occur at a separation of the rotation period, but also at half of the rotation period. In addition to a change in the morphology of the LCs, the variations are amplified with respect to the corresponding non-nested case at each rotation rate (black solid lines versus coloured lines).
Figure 9 gives the most extreme case of AL nesting, where we assume that all BMRs emerge in one of the two active longitudes (i.e. \(p=1\)). Clearly, both dips (at one- and half-rotation-period intervals) occur in all of the cases and the LC amplitudes are further augmented. The LCs in Fig. 9 additionally show that the two dips have almost the same amplitude for the faster rotators. For the solar rotator, with its low emergence frequency, temporary asymmetries between the two ALs can arise when a large BMR emerges. However, with increasing emergence frequency, the two ALs become more and more symmetric, and hence the amplitudes of the two dips the ALs produce as the star rotates are, to a large extent, comparable.
We now focus on the behaviour of the LCs in the FN mode. In Fig. 10 we present the LCs produced with \(p=0.7\). One can see that the amplitudes of the variability increase for all four rotation rates shown. The light curves also appear more regular than those calculated with \(p=0\). We increase the nesting even further to \(p=0.99\) in Fig. 11. The change in the LCs with respect to the non-nested case is remarkable. The amplitude of the LCs is enhanced in all cases and the runs with \(p=0.99\) exhibit regular patterns, even in the solar case (top panel of Fig. 11). Interestingly, the displayed LCs show not only dips separated by the rotation period, but also dips separated by half of the rotation period (most prominently seen for the \(4\Omega_{\odot}\) star). This is noteworthy because, in contrast to the AL case, these half-period dips appear and disappear rather than persisting. We will discuss this further in Sect. 5.
Next, we consider the inclination effect on the LCs. For demonstration, we limit ourselves to the non-nested cases with different rotation rates. In Fig. 12 we show the time-span of 0-90 days from Fig. 7 for inclinations of \(90^{\circ}\), \(60^{\circ}\) and \(30^{\circ}\). In the given timespan, for \(1\Omega_{\odot}\), the amplitude of the variability decreases
Figure 6: Similar to Fig. 5, but showing the faculae area coverages.
Figure 7: Synthetic light curves (LCs) for stars with different rotation rates as they would be observed in the _Kepler_ passband at an inclination of \(i=90^{\circ}\). Shown are non-nested cases (\(p=0\)) with rotation rate values of \(1\Omega_{\odot}\) (blue), \(2\Omega_{\odot}\) (orange), \(4\Omega_{\odot}\) (green), and \(8\Omega_{\odot}\) (red).
with decreasing inclination. The shape of the transits also changes. For 2 and 4\(\Omega_{\odot}\), the LC amplitudes decrease over the inclinations shown here as well. Interestingly, the situation changes for the 8\(\Omega_{\odot}\) case. The amplitude increases from \(i=90^{\circ}\) to \(60^{\circ}\) and then decreases from \(i=60^{\circ}\) to \(i=30^{\circ}\). Also, the amplitude of variability observed at \(i=30^{\circ}\) is larger than that at \(i=90^{\circ}\). These inclination dependencies can be explained with the help of the magnetic field maps in Fig. 5. For the case of 1\(\Omega_{\odot}\), all regions emerge within \(\pm 30^{\circ}\) around the equator. The Coriolis effect gets stronger with increasing rotation rate, so that on the 8\(\Omega_{\odot}\) star BMRs can emerge at latitudes up to \(\pm 70^{\circ}\), while a latitudinal belt free of active regions opens up around the equator between \(\pm 20^{\circ}\) latitude (see Sect. 5). For \(i=90^{\circ}\), the high-latitude spots appear close to the limb, where their effect on the brightness is significantly reduced by foreshortening. With decreasing inclination, the majority of spots shift towards the centre of the visible disc and their effect on brightness increases, so that at intermediate inclinations the variability reaches a maximum (see Fig. 13, panel a), before it starts decreasing again as the spots move towards the limb of the visible disc once more.
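The geometric part of this argument can be illustrated with the foreshortening factor alone: for a single spot at high latitude, the cosine of the viewing angle, and hence the depth of the rotational modulation, peaks at intermediate inclinations. A minimal sketch with our own toy geometry, ignoring limb darkening and finite spot size:

```python
import numpy as np

def mu(lat, phase, incl):
    """Cosine of the viewing angle of a surface element at latitude `lat`,
    rotational phase `phase`, seen at stellar inclination `incl` (radians).
    The element is on the visible disc when mu > 0, and its brightness
    contribution scales roughly with mu (foreshortening)."""
    return (np.sin(incl) * np.cos(lat) * np.cos(phase)
            + np.cos(incl) * np.sin(lat))

phase = np.linspace(0.0, 2.0 * np.pi, 1000)
lat = np.deg2rad(65.0)                      # a high-latitude spot (8 Omega_sun regime)
ptp = {}
for i_deg in (90, 60, 30):
    vis = np.clip(mu(lat, phase, np.deg2rad(i_deg)), 0.0, None)
    ptp[i_deg] = vis.max() - vis.min()      # modulation depth over one rotation
# for this spot the modulation is deepest at the intermediate inclination
```

In the full model, many spots, limb darkening, and the facular contribution modify this single-spot picture, but the non-monotonic inclination dependence survives.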
## 4 Comparison to observations
In the following, we compare our model to observational records of Sun-like stars obtained by the _Kepler_ telescope. Since our model builds on the solar paradigm, we select stars with near-solar effective temperatures between 5500 and 6000 K and surface gravity \(\log g\geq 4.2\), using the updated fundamental parameters of _Kepler_ stars from Berger et al. (2020). Next, we consider stars with known rotation periods using the data sets of McQuillan et al. (2014) (hereafter McQ14) and Santos et al. (2021) (hereafter S21). These constraints lead to a sample of 6,228 stars when using McQ14 rotation periods and 11,493 stars when using S21 rotation periods.
We express the variability through the quantity \(R_{var}\), first introduced by Basri et al. (2010, 2011). We use a slightly modified version of the range \(R_{var}\): we compute the difference between the 95th and 5th percentiles of the sorted differential intensities. Even in the latest _Kepler_ data reduction (DR25), some quarters still include instrumental systematics that might influence the range. We therefore compute the median absolute deviation (MAD) of all \(R_{var}\) values and remove those quarters that deviate by more than six times the MAD. Afterwards, the median is taken over all
Figure 11: Similar to Fig. 8, with free nesting and \(p=0.99\).
Figure 8: Similar to Fig. 7, where the black curves represent the calculations with \(p=0\) for each rotation rate and the coloured curves represent those with added AL-type nesting at \(p=0.7\).
Figure 10: Similar to Fig. 8, with free nesting and \(p=0.7\).
Figure 9: Similar to Fig. 8 with AL-type nesting and \(p=1\).
remaining quarters; we call this value \(R_{var}\). Since the SFTM returns instantaneous values at a 6-hour cadence, we take every 12th data point of the _Kepler_ LCs (the _Kepler_ long cadence is \(\approx\)30 min) and compute \(R_{var}\) for this down-sampled time series. All computations are based on the latest _Kepler_ data release (DR25) using the PDC-MAP light curves. We also cross-checked our calculated variabilities, represented through \(R_{var}\), against the metric \(R_{per}\) used by McQuillan et al. (2014) and found very good agreement.
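The observational \(R_{var}\) described above (per-quarter 5th-to-95th percentile range, 6-MAD quarter rejection, median over surviving quarters) can be sketched as follows; the synthetic "quarters" and the injected systematic trend are our own toy data:

```python
import numpy as np

def quarter_ranges(quarters):
    """5th-to-95th percentile range of the flux values in each quarter."""
    return np.array([np.percentile(q, 95) - np.percentile(q, 5)
                     for q in quarters])

def rvar_obs(quarters, clip=6.0):
    """Drop quarters whose range deviates from the median range by more than
    `clip` times the MAD, then return the median range of the survivors."""
    r = quarter_ranges(quarters)
    med = np.median(r)
    mad = np.median(np.abs(r - med))
    keep = np.abs(r - med) <= clip * mad
    return float(np.median(r[keep]))

rng = np.random.default_rng(7)
good = [rng.normal(0.0, 100.0, 4000) for _ in range(8)]  # well-behaved quarters (ppm)
bad = [np.linspace(-5e4, 5e4, 4000)]                     # quarter with a strong systematic
rv = rvar_obs(good + bad)   # close to the ~329 ppm range of N(0, 100 ppm)
```

The MAD rejection removes the quarter with the instrumental trend, so the final value reflects the well-behaved quarters only.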
Before computing \(R_{var}\) of the models, we need to add noise to the light curves. We follow the same strategy as described in detail in Reinhold et al. (2020). Here, we multiply the noise by a factor of \(\sqrt{3h/30min}=\sqrt{6}\) to account for the different time bins. For each inclination, 1000 noise realizations are considered, and the mean and standard deviations are computed. This has been done only for the \(1\Omega_{\odot}\), \(p=0\) case, as we found that the noise level is significantly lower than the actual stellar variability in all other cases.
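A sketch of this noise step; the white-noise model, its 100 ppm level, and the percentile-based range used below are our own simplifying assumptions (Reinhold et al. 2020 describe the actual noise treatment):

```python
import numpy as np

rng = np.random.default_rng(11)

def rvar_noise_stats(flux_ppm, noise_ppm, n_real=1000):
    """Add white noise inflated by sqrt(3 h / 30 min) = sqrt(6) to mimic the
    coarser model cadence, and return the mean and standard deviation of a
    percentile-based R_var over the noise realizations."""
    scale = np.sqrt(6.0)
    vals = np.empty(n_real)
    for j in range(n_real):
        noisy = flux_ppm + rng.normal(0.0, scale * noise_ppm, flux_ppm.size)
        vals[j] = np.percentile(noisy, 95) - np.percentile(noisy, 5)
    return vals.mean(), vals.std()

# noise floor alone: a flat LC with an assumed 100 ppm white-noise level
m, s = rvar_noise_stats(np.zeros(4000), 100.0)
```

For a flat light curve this returns the pure noise floor of the range metric, which is why the noise only matters for the least variable (solar-rotation, non-nested) case.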
In Figs. 13 and 14 we show the comparison between the calculated \(R_{var}\) values of our simulated stars for various degrees of nesting in the AL and the FN modes, respectively, and the two samples of _Kepler_ stars, shown as grey (McQ14) and black (S21) dots. Evidently, if we do not include any nesting (\(p=0\)), our calculated variabilities underestimate the bulk of the observed variability amplitudes, especially for the faster rotators. With increasing nesting level (panels b, c, d, and e of Figs. 13 and 14), \(R_{var}\) increases and the values move towards the upper edge of the distribution. Interestingly, \(p=0.99\) in the FN mode (Fig. 14 e) overestimates the variability of the solar rotators but leads to variability values similar to the upper envelope of the distributions of the faster rotating stars. We note that, while the number of BMRs emerging for a given rotation rate is the same in the non-nested and nested cases, the spot disc area coverage increases with the nesting degree due to the formation of spots by flux superposition. As a consequence, the spot area coverage is not preserved, in contrast to the approach presented in Isik et al. (2020), and nesting has a stronger effect on variability in our model than in Isik et al. (2020). We will elaborate on this point further in Sect. 5.
For a more detailed comparison between the simulations and the observations, we bin the distribution of observed variabilities. Namely, we compare the variabilities returned by our model for a star rotating X times faster than the Sun with the variabilities of stars with periods in the range [23/X, 27/X] days from the _Kepler_ samples. This comparison is shown in Fig. 15. The histograms in grey and black display the range of variabilities within each of the rotation-period bins. We note that the number of stars within each rotation bin decreases from panel a to d. Similarly to Figs. 13 and 14, Fig. 15 shows that the calculations with \(p=0\) clearly underestimate the variability for the \(2\Omega_{\odot}\) and \(4\Omega_{\odot}\) sub-samples (while the small number of _Kepler_ stars in the \(8\Omega_{\odot}\) rotation bin makes the interpretation of panel d rather difficult).
For the stars with near-solar rotation periods (i.e. the \(1\Omega_{\odot}\) case), there is a substantial difference between the stars in McQ14 and S21. The sample of S21 contains many more stars with variabilities lower than the solar variability at the maximum of cycle 22 (blue curve in the left panel of Fig. 15), while both samples contain roughly the same number of stars substantially more variable than the Sun (i.e. the high-variability tail). This result again raises the question of whether the Sun could also become as variable as the stars in the high-variability tail (Reinhold et al., 2020). Moreover, it agrees with the conclusion drawn in Reinhold et al. (2020, 2021) that the solar variability is not unexpectedly low but that the rotation periods of many stars with similar variabilities have simply been missed by previous rotation-period surveys (such as McQ14).
At the same time, the \(p=0.99\) FN calculations in Fig. 15 lie towards the upper bound of the variability distribution of the faster rotators, while highly overestimating the variability of stars with near-solar rotation periods. One can see that different nesting modes can lead to similar values of median \(R_{var}\), especially if the inclination of the stellar rotation axis is not known. This degeneracy might be lifted if alternative metrics are used. We discuss further metrics of characterising stellar variability including the morphology of the LCs in Sect. 5.
## 5 Discussion
The premise of this work was to extend the solar paradigm to model the distribution of magnetic features on stars rotating faster than the Sun, and to use these distributions to subsequently calculate the stellar light curves and the amplitude of the variability. Figure 15 shows that while our calculations with a rather high degree of nesting can reproduce the bulk of the variabilities in the _Kepler_ sample, they do not reach the maximum of the variability distribution. This might be because we have used the emergence frequency of BMRs in solar cycle 22 as the reference and scaled the emergence frequency as a function of the rotation rate relative to that cycle (i.e. a star rotating twice as fast as the Sun exhibits twice as many BMR emergences). First, solar cycle 22 does not
Figure 13: Comparison of \(R_{var}\) as a function of the rotation period between stars with effective temperatures of 5500–6000 K and \(\log g>4.2\) with detected rotation periods from McQ14 (grey dots) and S21 (black dots) and the modelled stars. Each panel corresponds to a different nesting probability \(p\) in the form of active-longitude (AL) nesting. The different colours indicate the inclination of the modelled stars.
represent the maximum level of activity the Sun is capable of (see, e.g. Usoskin, 2017). Second, the activity-rotation scaling itself is rather approximate (see Isik et al., 2018, for more details). We refer here to Isik et al. (2020), who considered the effect of activity increase on the variability with a considerably simpler model limited to stars with near-solar rotation rates.
Empirical studies suggest that faster rotating stars have activity cycles shorter than the present Sun (Bohm-Vitense, 2007). In spite of this, as the main purpose of this study is to model stellar variability on the rotational timescale, we follow the approach of Paper I and assume that the solar temporal profile of activity remains unchanged for the faster rotators, for simplicity. However, we only consider the maximum level of activity (in a four-year window during activity maximum, see Fig. 1). Thus, our light curves (and the resulting peak-to-peak variability) correspond to the maximum of the stellar activity cycle and are largely unaffected by the temporal profile of the underlying activity cycle.
Another parameter space that we have not taken into account in this study is the effect of stellar fundamental parameters (e.g. the effective temperature and the stellar metallicity). While we limited our sample of _Kepler_ stars to a temperature range of 5500-6000K, it contains stars with a rather broad range of metallicity values. Witzke et al. (2020) have shown that changing the metallicity for a star with solar level of activity enhances the possibility of recovering its rotation period, as the star moves out of the compensation regime between facular and spot contribution on the rotational timescale. It is less intuitive, however, how the change in metallicity will affect the calculations presented in this paper. According to our calculations, the rotational variability in the rapid rotators is primarily driven by spots, yet the spot component is less affected by the change in the metallicity than the facular component (see Witzke et al., 2018, for a more coherent discussion). However, metallicity might have an impact on the activity of a star and on the surface distribution of magnetic features (Amard and Matt, 2020; Amard et al., 2020; See et al., 2021) - an effect that we deem to be outside the scope of the present study.
The latitude of emergence calculated by flux-tube simulations (see Sect. 2 and Paper I) puts a well-defined lower limit for the latitude of emergence, which becomes more visible for fast rotators (see Fig. 1). This is a consequence of the inward-directed Coriolis force in the rotating frame, consistently acting on rising flux tubes having a pro-grade azimuthal flow (i.e., they rotate faster than their locality). This leads to a well-defined minimum latitude of emergence, corresponding to the minimum non-zero latitude of injection near the equator at the base of the convection zone. Whether such a latitudinal gap around the equator occurs on rapid rotators is unknown (see Senavci et al., 2021, for a discussion), but one would expect that stochastic effects (e.g. convection) can induce scatter around the minimum latitude of emergence. We reckon, however, that this additional scatter will not affect the photometric variability to a large degree.
When comparing different rows in Fig. 5, one interesting feature becomes apparent: the spot area coverage is larger for the free nesting mode than for the non-nested case and active longitude nesting. This is because the proximity of neighbouring magnetic flux elements leads to a high possibility of same-polarity encounters. Magnetic flux, which accumulates this way, leads to the formation of spots. We note that the possibility of spontaneous spot formation via flux superposition has been detected in numerical simulations (Kitiashvili et al., 2010), but so far such a formation has not been observed on the Sun, probably due to its relatively low activity level and a small degree of nesting of solar magnetic features.
Figures 7-11 show that nesting affects not only the amplitude of the variability, but also the morphology of the LCs (see also discussion in Sect. 3), in parallel with the computations by Isik et al. (2020). For example, in the case of AL nesting, dips in the LCs mainly occur each half of the stellar rotation period. Basri and Nguyen (2018) have introduced the "single/double ratio" (SDR) metric, which provides information about the ratio of time a star spends in single- or double-dip modes (i.e. its LC shows one or two peaks per period, respectively). The SDR was proposed to be an effective metric for characterizing stellar LCs. The SDR as well as other metrics might be useful for testing the models and constraining the distribution of stellar magnetic features. Indeed, Fig. 15 shows that both AL- and FN-type modes of nesting can result in the same amplitude of variability, albeit at different nesting degrees. A metric such as \(R_{\rm var}\), which is used in the present work, does not take LC morphology into account, as it only measures the peak-to-peak variability. Using metrics sensitive to the morphology of the LCs will help to distinguish between various modes of nesting. This will be addressed in a forthcoming publication.
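As an illustration of how such a morphology-sensitive metric works, a toy sketch of the single/double ratio follows: each rotation period of the light curve is classified by the number of significant dips, and the SDR is the logarithmic ratio of time spent in double- versus single-dip mode. The dip-detection criterion below is a deliberately simplified assumption; the published metric involves additional smoothing and significance tests.

```python
import math

def count_dips(segment, depth=0.5):
    """Count local minima in one rotation-period segment that fall below
    `depth` times the segment's full range (a toy significance criterion)."""
    lo, hi = min(segment), max(segment)
    if hi == lo:
        return 0
    threshold = hi - depth * (hi - lo)
    return sum(
        1
        for i in range(1, len(segment) - 1)
        if segment[i] < segment[i - 1]
        and segment[i] <= segment[i + 1]
        and segment[i] < threshold
    )

def single_double_ratio(flux, points_per_period):
    """SDR = log10(time spent in double-dip mode / time in single-dip mode)."""
    singles = doubles = 0
    for start in range(0, len(flux) - points_per_period + 1, points_per_period):
        dips = count_dips(flux[start:start + points_per_period])
        if dips == 1:
            singles += 1
        elif dips >= 2:
            doubles += 1
    if singles == 0:
        return float('inf')
    if doubles == 0:
        return -float('inf')
    return math.log10(doubles / singles)

# One spot -> one dip per rotation; two spots on opposite hemispheres -> two
one_spot = [math.sin(2.0 * math.pi * t / 25.0) for t in range(250)]
two_spot = [math.sin(4.0 * math.pi * t / 25.0) for t in range(250)]
print(single_double_ratio(one_spot, 25), single_double_ratio(two_spot, 25))
```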
Finally, our calculations are based on the assumption that the size distribution of emerging spots is the same on solar-like stars and the Sun, i.e. it does not depend on the rotation rate and stellar activity level. Specifically, the source term in Jiang et al. (2011) used in this study (see Step 1 in Sect. 2.1) is based on the sunspot size distribution during solar cycle 22. This neglects that the spot size distribution might depend on the activity level (see, e.g., Solanki and Unruh, 2004; Krivova et al., 2021). We note that an increase of the spot sizes in the source term of the SFTM would simultaneously amplify the variability of a star and make
Figure 14: Similar to Fig. 13, with different degrees of free nesting (FN).
its LC more regular. Thus, the change of distribution of emerging spots might be another mechanism capable of explaining _Kepler_ observations (together with nesting investigated in this study). Since the lifetime of sunspots depends on their sizes, a change in the size distribution of spots would also lead to a change in their lifetimes, which would have a very direct effect on the variability amplitude and the LC statistics, with longer-living spots making the LC more regular. In that sense extending the lifetime of a spot should have a similar effect as stronger nesting. This is an important parameter whose influence will be studied in a future publication.
## 6 Conclusions
We coupled the model for the emergence and surface transport of magnetic flux in Sun-like stars (Isik et al., 2018, Paper I) with a model for calculating stellar brightness variations (partly based on the approach presented in Nemec et al., 2020). This allowed us to compute light curves of stars with rotation rates between 1 and 8\(\Omega_{\odot}\) as they would be observed by _Kepler_ at different inclination angles. Following up on the findings of Isik et al. (2018) and Isik et al. (2020), we investigated the impact of active-region nesting on the light curves and, in particular, the amplitude of the variability.
We compared the output of our model to the observed variabilities of _Kepler_ stars in the temperature range 5500-6000K. In particular, we aimed at explaining the dependence of the amplitude of the variability on the rotation rate. Recently, Isik et al. (2020) showed that the model without nesting underestimates the variability of stars with known near-solar rotation periods (see also Reinhold et al., 2020). We found that the same is true for stars rotating faster than the Sun. Our runs without nesting dramatically underestimate the stellar variability at all rotation rates.
We showed that the observed dependence of _Kepler_ variabilities on the rotation period, for stars with detected rotation periods, can be explained by an increase of the nesting degree with the rotation rate, in parallel with the increasing activity level. As both modes of nesting used in this work lead to similar levels of variability for stars with different rotation rates, we plan to further investigate the use of metrics that consider LC morphologies (instead of the peak-to-peak variability), to retrieve more information regarding the surface magnetic activity of stars. Additionally, we plan to include the effects of different rotation-activity relationships, cycle lengths, and stellar fundamental parameters (i.e. effective temperature and metallicity) on the variability. The applications of the FEAT model extend beyond the stellar photometric variability presented in this work. The model has been adapted to study the astrometric jitter introduced by stellar magnetic activity (Sowmya et al., 2021, 2022) and Doppler imaging (Senavci et al., 2021). It can also be used to study the magnetic contamination of high- and low-resolution transmission spectra (see, e.g. Rackham et al., 2018, 2019; Dravins et al., 2021; Rackham et al., 2022).
###### Acknowledgements.
The research leading to this paper has received funding from the European Research Council under the European Union's Horizon 2020 research and innovation program (grant agreement No. 715947).
|
2308.01525 | VisAlign: Dataset for Measuring the Degree of Alignment between AI and
Humans in Visual Perception | AI alignment refers to models acting towards human-intended goals,
preferences, or ethical principles. Given that most large-scale deep learning
models act as black boxes and cannot be manually controlled, analyzing the
similarity between models and humans can be a proxy measure for ensuring AI
safety. In this paper, we focus on the models' visual perception alignment with
humans, further referred to as AI-human visual alignment. Specifically, we
propose a new dataset for measuring AI-human visual alignment in terms of image
classification, a fundamental task in machine perception. In order to evaluate
AI-human visual alignment, a dataset should encompass samples with various
scenarios that may arise in the real world and have gold human perception
labels. Our dataset consists of three groups of samples, namely Must-Act (i.e.,
Must-Classify), Must-Abstain, and Uncertain, based on the quantity and clarity
of visual information in an image and further divided into eight categories.
All samples have a gold human perception label; even Uncertain (severely
blurry) sample labels were obtained via crowd-sourcing. The validity of our
dataset is verified by sampling theory, statistical theories related to survey
design, and experts in the related fields. Using our dataset, we analyze the
visual alignment and reliability of five popular visual perception models and
seven abstention methods. Our code and data is available at
https://github.com/jiyounglee-0523/VisAlign. | Jiyoung Lee, Seungho Kim, Seunghyun Won, Joonseok Lee, Marzyeh Ghassemi, James Thorne, Jaeseok Choi, O-Kil Kwon, Edward Choi | 2023-08-03T04:04:03Z | http://arxiv.org/abs/2308.01525v3 | # VisAlign: Dataset for Measuring the Degree of Alignment between AI and Humans in Visual Perception
###### Abstract
AI alignment refers to models acting towards human-intended goals, preferences, or ethical principles. Given that most large-scale deep learning models act as black boxes and cannot be manually controlled, analyzing the similarity between models and humans can be a proxy measure for ensuring AI safety. In this paper, we focus on the models' visual perception alignment with humans, further referred to as _AI-human visual alignment_. Specifically, we propose a new dataset for measuring _AI-human visual alignment_ in terms of image classification, a fundamental task in machine perception. In order to evaluate _AI-human visual alignment_, a dataset should encompass samples with various scenarios that may arise in the real world and have gold human perception labels. Our dataset consists of three groups of samples, namely _Must-Act_ (_i.e._, Must-Classify), _Must-Abstain_, and _Uncertain_, based on the quantity and clarity of visual information in an image and further divided into eight categories. All samples have a gold human perception label; even _Uncertain_ (_e.g._, severely blurry) sample labels were obtained via crowdsourcing. The validity of our dataset is verified by sampling theory, statistical theories related to survey design, and experts in the related fields. Using our dataset, we analyze the visual alignment and reliability of five popular visual perception models and seven abstention methods. Our code and data is available at [https://github.com/jiyoungelee-0523/VisAlign](https://github.com/jiyoungelee-0523/VisAlign).
## 1 Introduction
AI alignment [65] seeks to align models to act towards human-intended goals [50; 81], preferences [69; 64], or ethical principles [29]. Alignment is a prerequisite before deploying AI models in the real world. Misaligned models may show unexpected and unsafe behaviors which can bring about negative outcomes, including loss of human lives [57; 81]. This is particularly true for high-capacity models like deep neural networks, where there is little manual control of feature interaction. In such cases, analyzing the alignment between models and humans can be a proxy measure for safe behavior [47]. Well-aligned models induce more agreeable and acceptable results to human society in the targeted domain [38].
In this paper, we particularly focus on alignment in _visual_ perception, henceforth referred to as _AI-human visual alignment_, and propose a new dataset for measuring this alignment. Note that recent
work in AI-human alignment tends to focus on societal topics with ethical implications, such as racial or gender bias [73, 12, 44]. In this work, however, we use image classification as the target task, which is more fundamental to machine perception but is less contentious.
Despite its seeming simplicity, image classification presents significant challenges for deployed visual AI systems due to noise, artifacts, and spurious correlations in the images. When confronted with an image lacking any object from the designated classes, humans typically abstain from making an incorrect decision. In contrast, machine learning models may still generate an output unless they are explicitly trained to abstain from making predictions under certain confidence levels. Similarly, when an image provides imperfect information (_e.g._, due to blurred vision or a dark environment), human decisions tend to waver between a correct prediction and abstention. Conversely, machines often make overconfident predictions [48]. Given this discrepancy between human and model behaviors, we focus on image classification as a foundational starting point. Before we delve into more complex and potentially contentious topics, we view this work as a crucial initial step in measuring visual perception alignment.
As AI alignment aims to guide an AI to resemble human behaviors and values for a safe use of AI, _AI-human visual alignment_, being a subcategory of AI alignment, aims to guide the AI to resemble the aforementioned human behaviors in visual perception (_i.e._, abstaining from making incorrect decisions, wavering between a correct prediction and abstention) to ensure safety across diverse use cases. Our dataset, VisAlign, encapsulates these behaviors across three distinct groups: _Must-Act_, _Must-Abstain_, and _Uncertain_. _Must-Act_ contains identifiable photo-realistic images that humans can correctly classify (see Figure 1 green box). _Must-Abstain_ includes images that most humans would abstain from classifying due to their lack of photo-realism or because they clearly contain no objects within the target classes (see Figure 1 red box). _Uncertain_ category hosts images that have been cropped or corrupted in diverse ways and at varying intensities (see Figure 1 orange box). For this last group, we provide gold human labels from multiple annotators via crowd-sourcing. Given a moderately corrupted image, some people might be able to recognize the true class, while others might not. In Section 3, we further elaborate on crucial requirements that a visual alignment dataset must meet and provide details about our survey design, which has been validated using relevant statistical theories.
Figure 1: The overview of VisAlign. The example images are given with reference to the class Zebra. _Category 1_. A photo-realistic image of a zebra. _Category 2_. A zebra crossing a road. _Category 3_. A slight noise is added to the Category 1 image. _Category 4_. A picture of a truck. _Category 5_. A head and two limbs of an elephant with the remaining body of a zebra. _Category 6_. A donkey. _Category 7_. A zebra illustrated on a piece of clothing. _Category 8_. Two pictures, one with cropping and the other frosted glass blur, respectively, of a zebra.
_Must-Act_ and _Must-Abstain_ have been addressed in previous studies under the purview of robustness [23; 75; 26] and Out-of-Distribution Detection (OOD) [52; 77], respectively. However, most studies overlook _Uncertain_ samples, which are frequently found in real-world scenarios where visual input can continuously vary in aspects such as brightness and resolution. To the best of our knowledge, VisAlign is the first dataset to explore the diverse aspects of visual perception, including _Uncertain_ samples, under the concept of _AI-human visual alignment_. Furthermore, all decisions regarding the construction of VisAlign were based strictly on statistical methods for survey design [67; 9] and expert consultations to maximize the validity of the alignment measure (see Section 3).
We benchmark various image classification methods on our dataset using two different metrics. Firstly, we measure the visual alignment between the gold human label distribution and the model's output distribution using the distance-based method (Section 4.1). Secondly, considering visual alignment as a potential proxy method for measuring a model's reliability, we evaluate the model's _reliability score_ (Section 4.2). We test models with various architectures, each combined with various ad-hoc abstention functions that endow the model with the ability to abstain. Our findings suggest that current robustness and OOD detection methods cannot be directly applied to _AI-human visual alignment_, thus highlighting the unique challenges posed by our task as compared to conventional ones.
Our contributions can be summarized as follows:
* To the best of our knowledge, this is the first work to construct a test benchmark for quantitatively measuring the visual perception alignment between models and humans, referred to as _AI-human visual alignment_, across diverse scenarios (8 categories in total).
* We propose VisAlign, a dataset that captures varied real-world situations and includes gold human labels. The construction of our dataset was carried out meticulously, adhering to statistical methods in survey designs (_i.e_., the number of samples in a dataset [9], intra and inter-consistency in surveys [15], and the required minimum number of participants [67]) and expert consultations.
* We benchmarked visual alignment and reliability on VisAlign using five baseline models and seven popular abstention functions. The results underscore the inadequacy of existing methods in the context of visual alignment and emphasize the need for novel approaches to address this specific task.
## 2 Related Works
Related Datasets.Previous datasets only focus on one aspect or do not have human gold labels. Mazeika et al. [43] focus on subjective interpretations and collected human annotations on emotions (_e.g_., amusement, interest, adoration). Existing corruption datasets [23; 45; 75] apply slight corruptions to study the robustness of deep neural networks. These works overlook the moderately or severely corrupted images that appear in the real world. Although the dataset by Park et al. [51] applied brightness corruptions on hand X-ray images with multiple severities, it does not have gold human labels. Out-of-Distribution (OOD) datasets [52; 77] only handle two cases where the label space or the semantic space changes. OpenOOD [77] includes both cases by dividing the two situations into _far-OOD_ and _near-OOD_. Plex [72] uses a compilation of different datasets to study the reliability of models; however, it does not test on ambiguous or uncertain samples. CIFAR10H [54] is a dataset that collects a distribution of soft human labels for CIFAR10 images [31] to represent human perceptual uncertainty. However, the images' uncertainty only comes from low fidelity, which does not represent diverse cases. Our dataset sits apart from existing datasets by handling various scenarios and providing gold human labels. Similarly, Schmarje et al. [68] collected multiple annotations per image. There are three key differences that distinguish our dataset from prior works that focus on uncertainty in object recognition. First, we applied corruption and cropping with different intensities ranging from 1 to 10 to reflect the continuity of uncertainty: as uncertainty is continuous, it is critical to test models on samples where uncertainty increases in stages. Second, we obtained 134 human annotations per image to obtain numerically robust annotations.
Third, while previous datasets include soft labels distributed only among classes, we include soft labels distributed among classes and abstention, which can represent recognizability uncertainty (_i.e._, whether an image itself is recognizable or not). Visual perception includes not only object identification (predicting that it is an elephant) but also object recognizability (whether the object itself is recognizable). In this sense, we cover broader scenarios compared to previous works, as we include object-recognizability uncertainty in our uncertain category.
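As a sketch of how per-image soft labels of this kind can be formed, the snippet below aggregates hypothetical annotator votes into a distribution over the 10 classes plus an abstention option. The vote counts are invented; only the class list and the 134-annotator count come from the paper.

```python
from collections import Counter

# The 10 VisAlign classes plus an explicit abstention option
CLASSES = ["Tiger", "Rhinoceros", "Camel", "Giraffe", "Elephant",
           "Zebra", "Gorilla", "Kangaroo", "Bear", "Human", "Abstain"]

def soft_label(annotations):
    """Aggregate raw annotator choices into a gold soft-label
    distribution over the 10 classes plus 'Abstain'."""
    counts = Counter(annotations)
    total = sum(counts.values())
    return [counts.get(c, 0) / total for c in CLASSES]

# Hypothetical votes for a moderately blurred zebra image:
# most annotators still recognize it, some abstain, a few misread it
votes = ["Zebra"] * 90 + ["Abstain"] * 30 + ["Camel"] * 14  # 134 annotators
print([round(p, 3) for p in soft_label(votes)])
```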
Visual Alignment with Humans.Alignment is more broadly studied, including the gap between data collection and model deployment [2], natural language modeling [38], and object similarity [30; 53]. For visual alignment specifically, previous works [19; 20; 55; 80] use only corrupted or perturbed datasets to compare the humans' and models' decisions. Similarly, Rajalingham et al. [56] analyzes patterns that confuse the decision-making process of deep neural networks, humans, and monkeys. Other studies [39; 17] induce models to take similar steps as humans before making the final prediction. Jozwik et al. [30] compares the semantic space, where the task is to predict the human-generated semantic similarities given different object images. Zhang et al. [79] and Bomatter et al. [5] show that both models and humans achieve better object recognition when given more context information. Both papers provided human-model correlations to describe their relative trends across conditions. However, our study on visual perception alignment is not about following human trends, but about measuring how well the model replicates human perception sample-wise. Geirhos et al. [18] and Bhojanapalli et al. [4] test the robustness of models to perturbations that do not affect the object identity. Peterson et al. [54] only test their models on in-class (_i.e._, Category 1) and out-of-class samples (_i.e._, Category 2 and Category 3), and Schmarje et al. [68] only tested their models on in-class samples (_i.e._, Category 1). In order to thoroughly evaluate visual alignment, models should also be tested under various scenarios with out-of-distribution properties (_i.e._, Category 3 and Category 4). We prepared VisAlign to include these out-of-distribution properties and, if needed, generated the samples ourselves, of which details are in Section 3.2. Furthermore, they showed only the accuracy and cross entropy (which is analogous to KL divergence) of the models.
Therefore, they did not test their models on various possible scenarios and did not use a proper measurement, as KL divergence is not an optimal choice for visual perception alignment, as will be described in Section 4.1. In short, although previous works trained their models with the goal of achieving visual perception alignment, none of them have thoroughly verified how well the models actually achieve visual perception alignment under diverse situations with an appropriate measurement. In contrast, we quantitatively measure visual perception alignment across various scenarios with multiple human annotations on uncertain images. In addition, we adopt the Hellinger distance to precisely calculate visual perception alignment, after careful consideration of other distance-based metrics. More details of the comparison to previous works are in Appendix H.
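For reference, the Hellinger distance between two discrete distributions p and q is H(p, q) = sqrt(sum_i (sqrt(p_i) - sqrt(q_i))^2) / sqrt(2); unlike KL divergence it is symmetric, bounded in [0, 1], and well defined when the supports differ. A minimal sketch of the standard definition (not the paper's exact evaluation code):

```python
import math

def hellinger(p, q):
    """Hellinger distance between two discrete distributions:
    H(p, q) = sqrt(sum_i (sqrt(p_i) - sqrt(q_i))**2) / sqrt(2).
    Symmetric and bounded in [0, 1], unlike KL divergence."""
    return math.sqrt(sum((math.sqrt(a) - math.sqrt(b)) ** 2
                         for a, b in zip(p, q))) / math.sqrt(2)

# Identical distributions give 0; fully disjoint ones give 1
print(hellinger([0.5, 0.5], [0.5, 0.5]))  # 0.0
print(hellinger([1.0, 0.0], [0.0, 1.0]))  # 1.0
```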
## 3 Dataset Construction
We have carefully considered what conditions must be met in a visual alignment dataset during the process of selecting the classes and the contents of VisAlign. We define four requirements that a visual alignment dataset must satisfy:
Requirement 1: Clear Definition of Each Class.Each class must be distinctly and precisely defined. This criterion proves more challenging to meet than initially anticipated, given that most everyday objects are defined in relatively vague terms and therefore do not lend themselves to rigorous classification. For example, the term "automobile," which is defined by the Cambridge Dictionary as a synonym for "car", is described as "a vehicle with an engine, four wheels, and seats for a few people."1 The phrase "seats for a few people" is ambiguous, and the definition is broad enough to encompass trucks. Despite this, certain parties may contend that "automobile" and "truck" are distinctly separate classes, a view reflected in datasets like CIFAR-10 [31] and STL-10 [8], which treat automobiles and trucks as separate classes.
Footnote 1: [https://dictionary.cambridge.org/dictionary/english/car](https://dictionary.cambridge.org/dictionary/english/car)
Requirement 2: Class Familiarity to Average Individuals.The classification target (_i.e_., each class) must be known to average people. This is because we employ hundreds of MTurk workers to derive statistically robust ground-truth labels for a subset of images.
Requirement 3: Coverage of Diverse and Realistic Scenarios.The dataset must contain samples covering a wide range of scenarios that are likely to occur in reality. This includes samples outside of the defined classes, out-of-distribution samples (_i.e._, Category 3 or 4), and confusing samples that people might not be able to recognize or identify. The test will fail to sufficiently evaluate the AI's alignment with human visual perception without this diversity.
Requirement 4: Ground Truth Label for Each Sample.Each sample must have an indisputable or, at the very least, reasonable ground truth. Our dataset's ground truth is human-derived, as we aim to measure the degree of alignment between AI and human visual perception.
### Class Selection
For our dataset to serve as a universal benchmark that any model can be tested on, the classes should have clear definitions so that model developers can easily prepare their models and training strategy. To meet Requirement 1, we cannot choose under-specified class definitions. For example, the class definitions in CIFAR10 [31] can be disputed, as shown in the example of 'automobile' and 'truck' in Requirement 1. Likewise, the MNIST [35] classes cannot be used since numbers are recognized via trivial geometric patterns. After careful consideration, we use the taxonomic classification in biology, which is the meticulous product of decades of effort by countless domain experts to hierarchically distinguish each species as accurately as possible. Following Requirement 2, familiarity is one of the critical criteria since we conducted an MTurk survey to build a subset of our dataset. For example, CIFAR100 [31] uses species of flowers (orchids, poppies) that may not be commonly known. The ImageNet [63] class space is also challenging to use for similar reasons. Therefore, among animal species, we select mammals that are familiar to the average person.
In summary, animal species were selected that 1) can be grouped under one scientific name for clear definitions, 2) are visually distinguishable from other species to avoid multiple correct answers, 3) have characteristic visual features allowing them to be identified by a single image, and 4) are familiar to humans, facilitating participation in our survey.
The final 10 classes are _Tiger_, _Rhinoceros_, _Camel_, _Giraffe_, _Elephant_, _Zebra_, _Gorilla_, _Kangaroo_, _Bear_, and _Human_. This selection was revised and verified by two zoologists according to the aforementioned criteria. The scientific names and subspecies for each class can be found in Table 6 of Appendix C.
### Sample Categories
Our dataset, depicted in Figure 1, is partitioned into three groups based on the quantity and clarity of visual information: _Must-Act_, _Must-Abstain_, and _Uncertain_. To avoid misclassifications due to background objects, all samples exclusively contain one object. The authors manually scrutinized all test samples to ensure this. In line with Requirement 3, these three groups are further subdivided into eight categories to account for as many real-world scenarios as possible. Each category comprises 100 samples, with the exception of Category 8 comprising 200, totaling 900 samples. To establish the reliability of the dataset as a valid benchmark, Cronbach's alpha [9] was used, a metric that evaluates the reliability of tests. The dataset was deemed reliable, with a minimum of 100 samples per category. The complete calculation for Cronbach's alpha is detailed in Appendix D.1.
Footnote 2: As category 8 contains a diverse set of croppings and corruptions of varying intensities, we double the number of samples for more reliable evaluation.
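For reference, Cronbach's alpha on a respondents-by-items score matrix follows the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). The sketch below uses invented scores and is illustrative only; the paper's actual calculation is in its Appendix D.1.

```python
def cronbach_alpha(scores):
    """Cronbach's alpha for a list of respondents, each a list of k item
    scores: alpha = k/(k-1) * (1 - sum(item variances) / var(total scores)).
    Population variances are used throughout."""
    k = len(scores[0])

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_vars = [var([row[j] for row in scores]) for j in range(k)]
    total_var = var([sum(row) for row in scores])
    return k / (k - 1) * (1.0 - sum(item_vars) / total_var)

# Perfectly consistent items yield alpha close to 1 (invented scores)
print(cronbach_alpha([[1, 1, 1], [2, 2, 2], [3, 3, 3], [4, 4, 4]]))
```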
* **MUST-ACT** contains clearly identifiable photo-realistic samples belonging to only one of the 10 classes. We intentionally restricted our dataset to photo-realistic samples to avoid ambiguous boundaries between in-class and out-of-class, such as abstract paintings or sculptures (_e.g._, claiming that a box with four sticks at the bottom and a sinusoidal line on the side is an elephant). Individuals with no visual impairments and familiarity with the 10 mammals can consistently classify these images correctly.
* Category 1: Unaltered samples from the designated classes are included. This category serves as the most basic step required for visual perception alignment. We sourced images from ImageNet1K [63] and images.cv. Footnote 3: [https://images.cv/](https://images.cv/)
* Category 2: Image classification models have been known to sometimes base decisions on unrelated features, such as the background of an image [26; 60]. We aim to challenge the models by testing them with samples that feature incongruous backgrounds, _i.e._, images of animals in environments where they are not commonly seen. Well-aligned models should accurately classify objects regardless of changes in the background. Samples were generated using Stable Diffusion [62]. Examples of text prompts used for generating samples are provided in the Appendix D.2.
* Category 3: Another case of images that humans can easily identify but models cannot is perturbed images used for adversarial attacks [21; 32]. Well-aligned models would not be influenced by noise or adversarial attacks intentionally designed to deceive them. Here we include Category 1 samples with adversarial perturbations to test such cases. We use the Fast Gradient Sign Method (FGSM) [21] to inject adversarial perturbations. The gradients are produced by pre-trained image classifiers available in PyTorch. Footnote 4: [https://pytorch.org/](https://pytorch.org/)
* **MUST-ABSTAIN** are images that qualified individuals always abstain from classifying.
* Category 4: This category includes images that do not belong to any one of VisAlign's 10 mammals. Examples might include other animal species (e.g., birds, cats, dogs), textures (e.g., bubbly, banded), or objects (e.g., truck, inline skate, guitar). This category tests the model's ability to abstain from classifying objects outside its defined scope. Well-aligned models should be able to disregard infinitely diverse objects outside the target classes. The space of Category 4 is inexhaustible; thus, the authors use their best efforts to include as diverse samples as possible to represent this space. Samples were collected from ImageNet1K [63], Describable Textures Dataset [7], and Caltech 101 [14].
* Category 5: While Category 2 tests whether models focus on relevant features of the class definition, it is also important to assess if a model evaluates the object as a whole, rather than focusing on specific portions of a sample. Thus, we included images of creatures that incorporate features from two different animals (e.g., a creature with the head and two limbs of an elephant but the body of a zebra). Recent advances in text-to-image models [58; 59; 66] enable us to rapidly and easily generate images of objects that do not naturally exist. We used Stable Diffusion [62] to create these images. Details of prompts are in Appendix D.2.
* Category 6: An image may contain an object that does not belong to the target class but has features closely resembling those of the target classes. Given the challenging nature of these near-miss cases, we include Category 6, featuring mammals that are biologically close to the 10 target mammals according to scientific taxonomy (e.g., donkeys are close to zebras). The primary purpose of Category 6 is to test the model's abstention ability on seemingly similar yet different samples. This category can be considered a more challenging version of Category 4. We have set aside this category because these samples test the model's visual alignment on samples near the natural evolutionary boundary. Samples are collected from ImageNet21K [61].
* Category 7: This category includes images in styles other than photo-realistic (e.g., a drawing of an elephant, a sculpture of a giraffe). Considering that MUST-ACT samples are photo-realistic images confirmed by humans, well-aligned models should be able to discern styles that deviate from photo-realism. The images were collected from DomainNet [52] and ImageNet-R [25].
* **UNCERTAIN** includes images that are cropped or corrupted in various styles and at different intensities.
* Category 8: This category includes images that are either cropped at varying sizes and regions or corrupted using one of the 15 corruption types5. The original samples were collected from ImageNet21K [61]. Well-aligned models should be able to correctly classify slightly corrupted images while abstaining from making decisions on indistinguishably corrupted images. The corruption process follows the approach outlined in ImageNet-C [23], with corruption intensities varying from 1 to 10. Footnote 5: We leveraged open-sourced code available at [https://github.com/hendrycks/robustness](https://github.com/hendrycks/robustness)
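The FGSM step used to build Category 3 can be sketched in a few lines. As a minimal, self-contained illustration, the toy logistic classifier below (with a hand-derived gradient) stands in for the pre-trained PyTorch classifiers used in the paper; the weights and \(\epsilon\) are illustrative only:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fgsm(x, w, b, eps):
    """One FGSM step, x' = x + eps * sign(grad_x loss), for a toy logistic
    model p(y=1|x) = sigmoid(w.x + b) with true label y = 1."""
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    # gradient of the loss -log p(y=1|x) w.r.t. x is (p - 1) * w
    grad = [(p - 1.0) * wi for wi in w]
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(g) for xi, g in zip(x, grad)]

w, b = [2.0, -1.0], 0.0
x = [1.0, 0.5]
x_adv = fgsm(x, w, b, eps=0.25)
p_before = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
p_after = sigmoid(sum(wi * xi for wi, xi in zip(w, x_adv)) + b)
# the perturbation lowers the model's confidence in the true label
```

In the dataset itself, the same sign-of-gradient step is applied to image pixels, with gradients taken from the pre-trained classifiers.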
### Uncertain Group Label Generation
One challenging yet intriguing aspect of the _Uncertain_ group is the variability of these samples' gold standard labels, which fluctuates depending on corruption types and intensities. For instance, it would be optimal to correctly classify images with slight corruptions as they remain identifiable. However, when dealing with a severely darkened image, the object might resemble a tiger, a jaguar, or be entirely unrecognizable. In such scenarios, determining whether a human observer would classify it as a tiger or abstain from decision-making becomes challenging. Therefore, we derive a gold human ratio (_i.e._, the distribution over classes provided by human annotators), rather than assigning one label per image as in _Must-Act_ and _Must-Abstain_, because human perception of an image can vary, and
approximating the ratio for each image offers the best test of alignment6. To derive the gold ratio across the 11 classes (10 mammals + abstention), we employ MTurk workers to classify images in the _Uncertain_ group.
Footnote 6: Some might wonder why the machines should settle for aligning with human visual perception, rather than aiming to correctly classify even the most corrupted images (_i.e._ aim for superhuman visual perception). We provide arguments for the necessity of the former in Appendix E.
Every MTurk worker is asked to classify 35 images, 10 of which are distractors: Category 4 images corrupted with a severity between 1 and 10. This is to minimize MTurk workers' potential biases; _e.g._, a severely dark image can be perceived as anything other than the 10 mammals. After reviewing the task description and image samples for each class, MTurk workers select either one of the 10 mammals or an option labeled "None of the 10 mammals, uncertain, or unrecognizable", which is equivalent to abstention. To ensure response quality, we disregard MTurk results where anything other than abstention was chosen for the distractor images.
In accordance with Requirement 4, we ask 134 individuals per image to estimate the indisputable ground truth distribution within an error bound of 5%, following the survey sampling theory. Proofs are provided in Appendix F. Additionally, we calculate the Fleiss' Kappa [15] to assess two types of consistency among the MTurk workers' answers: intra-annotator and inter-annotator consistency. Intra-annotator consistency measures the consistency of a single worker's responses. To calculate this, we inserted two sets of identical images in random order. If a worker selects the same answers for these identical images, we consider the worker's responses to be consistent. Inter-annotator consistency, on the other hand, measures the agreement among different workers. Our results show an intra-annotator consistency value of \(\kappa=0.91\), indicating almost perfect agreement, and an inter-annotator consistency value of \(\kappa=0.80\), demonstrating substantial agreement. Details on survey instructions, response filtering process, and participant statistics are provided in Appendix F.
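The Fleiss' Kappa used for both consistency checks can be computed directly from per-image category counts. A minimal sketch follows (the toy counts are illustrative, not the actual survey data):

```python
def fleiss_kappa(counts):
    """Fleiss' kappa for counts[i][j] = number of raters who assigned
    subject i to category j; all subjects need the same rater total."""
    n_sub = len(counts)
    n_cat = len(counts[0])
    n_rat = sum(counts[0])
    # overall proportion of ratings falling into each category
    p = [sum(row[j] for row in counts) / (n_sub * n_rat) for j in range(n_cat)]
    # per-subject observed agreement among rater pairs
    P = [(sum(c * c for c in row) - n_rat) / (n_rat * (n_rat - 1)) for row in counts]
    P_bar = sum(P) / n_sub
    P_e = sum(pj * pj for pj in p)  # expected chance agreement
    return (P_bar - P_e) / (1.0 - P_e)

kappa = fleiss_kappa([[3, 0], [0, 3]])  # complete agreement among raters
```

The same count matrix, divided row-wise by the number of raters, yields the gold human ratio over the 11 classes described above.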
### Dataset
We prepare three datasets: the train set, the open test set, and the closed test set. The train set is a subset of ImageNet-21K [61], consisting only of Category 1 samples. By doing so, we ensure the trained models are tested on a variety of unseen categories, reflecting a real-world scenario. Please note that our test sets are universal benchmarks that any model can be tested on regardless of its train set. We highly encourage users to compile their own train set and use our train set as a basic reference. The labels in ImageNet-21K follow WordNet synset relations, resulting in classes for both species and higher-level taxonomies (for instance, "brown bear" and "bear," respectively). For each of our 10 classes, we randomly sample a uniform number of images from all related ImageNet-21K classes. We collected a total of 1250 images per class, using one-tenth of this data for validation. The creation processes of both the open and closed test sets are identical, as described above. We provide the open test set to allow developers to evaluate their models' visual perception alignment. Developers wishing to evaluate their models on the closed test set can submit their models to us. Table 1 presents a comparison of VisAlign and other datasets in terms of fulfilling the four requirements.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Dataset & Req. 1 & Req. 2 & Req. 3 & Req. 4 \\ \hline ImageNet-C [23] & ✗ & ✗ & \(\triangle\) & ✓ \\ ImageNet-A [26] & ✗ & ✗ & ✗ & ✓ \\ OpenOOD [77] & ✗ & ✗ & \(\triangle\) & ✓ \\ Background Challenge [76] & ✗ & ✗ & ✓ \\ MNIST [35] & ✗ & ✓ & ✗ & ✓ \\ CIFAR10 [31] & ✗ & ✓ & ✗ & ✓ \\ CIFAR10 [54] & ✗ & ✓ & \(\triangle\) & ✓ \\ PLEX [72] & ✗ & ✗ & ✓ & ✓ \\ Park et al. [51] & ✓ & ✗ & \(\triangle\) & ✗ \\ DCC [68] & ✗ & ✗ & \(\triangle\) & ✓ \\ \hline VisAlign & ✓ & ✓ & ✓ & ✓ \\ \hline \hline \end{tabular}
\end{table}
Table 1: The comparison between VisAlign and other related datasets on the requirements we define. \(\triangle\) indicates that only a subset of our scenarios are covered.
## 4 Metrics
In addition to constructing VisAlign, we introduce a distance-based metric to measure _AI-human visual alignment_. Furthermore, as visual perception alignment can serve as a proxy for model reliability (_i.e_., safety, trustworthiness), we present a reliability score table to explore the correlation between a model's visual perception alignment and model reliability.
### Distance-Based Visual Perception Similarity Metric
We propose a distance-based metric to measure the distance between two multinomial distributions: the human visual distribution and the model output distribution over 11 classes (10 mammals + abstention). We opt for a distance-based metric for two reasons: 1) it does not depend on additional hyperparameters such as abstention threshold, and 2) comparison across all classes, rather than solely on the true class, provides a more accurate measure of visual alignment. For example, consider a _Must-Act_ tiger sample with the gold human label as a one-hot vector for the label _tiger_. Suppose one model outputs a probability of 0.7 for _tiger_ and 0.3 for _abstention_, and another model yields a probability of 0.7 for _tiger_ and 0.1 for _zebra_, _elephant_, and _giraffe_ respectively. These two models differ in visual perception alignment: the former is uncertain between two classes, whereas the latter is indecisive among four classes. If we were to consider only the gold label's probability, both models would yield the same result, which would not accurately represent visual alignment. Hence, we employ a distance-based metric calculated across all 11 classes, as opposed to using the maximum or gold label probability.
Specifically, we employ the Hellinger distance [49] to measure the difference between the two probability distributions, as summarized in Eq. 1. Compared to other metrics for comparing two multinomial distributions, the Hellinger distance produces smooth, bounded values even for extreme (_e.g_., one-hot) distributions (unlike KL Divergence [10]) and considers all classes while calculating the distance (unlike Total Variation distance). For instance, when the human visual distribution is one-hot and a model assigns zero probability to the gold label, the KL Divergence becomes infinite, whereas the Hellinger distance remains at its maximum of 1. For the non-degenerate human ratios of the _Uncertain_ group, the Hellinger distance accounts not only for the gold label probability but also for the probabilities of all other labels. Additionally, as its range lies between 0 and 1, it provides an intuitive indication of model alignment.
\[h(P,Q)=\frac{1}{\sqrt{2}}\left\lVert\sqrt{P}-\sqrt{Q}\right\rVert_{2}=\frac{1}{\sqrt{2}}\sqrt{\sum_{i}\left(\sqrt{p_{i}}-\sqrt{q_{i}}\right)^{2}} \tag{1}\]
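Eq. 1 can be implemented directly. The sketch below also illustrates the boundedness property: when a model puts zero mass on the gold label of a one-hot human distribution, KL Divergence diverges, while the Hellinger distance saturates at its maximum of 1 (the distributions here are illustrative):

```python
import math

def hellinger(p, q):
    """Hellinger distance between two discrete distributions (Eq. 1)."""
    s = sum((math.sqrt(pi) - math.sqrt(qi)) ** 2 for pi, qi in zip(p, q))
    return math.sqrt(s) / math.sqrt(2.0)

human = [1.0, 0.0, 0.0]      # one-hot gold label
model = [0.0, 0.5, 0.5]      # zero mass on the gold label
d = hellinger(human, model)  # bounded at 1.0, whereas KL(human||model) is infinite
```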
### Reliability Score with Abstention
Beyond measuring the distance between human visual distributions and model outputs, we also assess the model's reliability based on its final action. This process involves two steps. First, a model abstains if the abstention probability surpasses an abstention threshold, \(\gamma\); otherwise, it makes a prediction. Next, if a model decides to act, its prediction is one of the 10 mammal classes with the highest prediction probability. Table 2 details the reliability scores for each case. We devise separate metrics for _Must-Act_ and _Must-Abstain_ instances. For _Uncertain_ samples, they are treated as _Must-Act_ if the probability of the original label exceeds a threshold \(\lambda\); otherwise, they are treated as _Must-Abstain_. We set an initial \(\lambda\) value at 0.5, but this can be adjusted according to the specific objective. We denote the reliability score as \(RS_{c}(x)\), where \(c\) is the cost of an incorrect prediction. The main criterion for assigning scores is the consequences of the model's decision. The model earns a score of 1 per prediction when it aligns best with human
\begin{table}
\begin{tabular}{c c c} \hline \hline Sample Type & Model Action & \(RS_{c}(x)\) \\ \hline \multirow{3}{*}{Must-Act} & Correct Prediction & \(+1\) \\ \cline{2-3} & Incorrect Prediction & \(-c\) \\ \cline{2-3} & Abstention & 0 \\ \hline \multirow{3}{*}{Must-Abstain} & Original Label Prediction\({}^{*}\) & 0 \\ \cline{2-3} & Other Prediction & \(-c\) \\ \cline{2-3} & Abstention & \(+1\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Reliability score table. The optimal outcomes earn a score of 1. Abstention in _Must-Act_ and Original Label Prediction in _Must-Abstain_ get 0. The worst case receives \(-c\), where \(c\) is the cost value. \({}^{*}\)Note that the original label prediction can only happen in Uncertain samples that fall under Must-Abstain.
recognition: making a correct prediction in Must-Act and abstaining in Must-Abstain. On the other hand, if the model's decision is erroneous and could potentially result in significant cost--in our case, a wrong prediction--the model receives a score of \(-c\). A score of zero indicates that the prediction is neither beneficial nor detrimental. Original Label Prediction is a special case only applied for Uncertain samples treated as Must-Abstain. In this case, a model correctly classifies a corrupted image that most humans cannot recognize. Although most humans disagree with the model's decision, it does not have a negative impact since it is a correct answer. The total score, \(RS_{c}\), is the summation over all test samples, \(\sum_{i}RS_{c}(x_{i})\).
The proper value of cost \(c\) depends on the industry and the use case. \(c\) can be set as an integer ranging from 0 to the total size of the test set. A value of 0 for \(c\) implies 0% strictness, while the maximum value of \(c\) implies 100% strictness. This means that even a single mistake would result in a negative score, and abstaining from all decisions on Must-Act samples would be deemed more reliable than making even one incorrect prediction. We designed this metric to enable both absolute and relative reference points. As an absolute reference point, if the final score is at or above 0 (non-negative reliability score), it demonstrates that the model satisfies the user-defined minimum reliability. A relative reference point is a comparison between different models; of two reliability scores, the model with the higher score is more reliable. In this paper, we set the value of \(c\) as 0, 450, or 900.
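The scoring rules in Table 2 can be sketched as a single function. The class ordering (abstention as index 10), the default \(\gamma\), and the argument names are illustrative choices, not the paper's exact implementation:

```python
def reliability_score(probs, sample_type, gold_label, c, gamma=0.5):
    """RS_c(x) for one sample, following Table 2.
    probs: model distribution over 11 classes (index 10 = abstention).
    sample_type: 'must_act' or 'must_abstain' (Uncertain samples are mapped
    to one of the two via the lambda threshold before scoring).
    gold_label: gold class index for must_act; for Uncertain samples treated
    as must_abstain, the original label index (otherwise None)."""
    if probs[10] > gamma:  # the model abstains
        return 1 if sample_type == 'must_abstain' else 0
    pred = max(range(10), key=lambda i: probs[i])  # act: most likely mammal
    if sample_type == 'must_act':
        return 1 if pred == gold_label else -c
    # must_abstain: predicting the original label costs nothing, else -c
    return 0 if pred == gold_label else -c
```

The total \(RS_{c}\) is then the sum of this per-sample score over the whole test set.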
Table 3: Visual alignment (Hellinger distance, lower is better) per category and reliability scores \(RS_{0}\), \(RS_{450}\), and \(RS_{900}\) (higher is better) for all model and abstention function combinations on the open test set.
## 5 Experiment
### Experiment Settings
We perform experiments with Transformer-based [74], CNN-based [34], and MLP-based models to determine which architecture shows the best alignment and reliability on our benchmark. We use ViT [11] and Swin Transformer [40] for Transformer-based models, and DenseNet [28] and ConvNeXt [41] for CNN-based models. For the MLP-based model, we use MLP-Mixer [71]. All models are trained on our train set and tested on the open test set.
We chose abstention functions that satisfy the following three conditions: 1) they must be applicable to any model architecture, 2) they do not require OOD or other Must-Abstain samples during training, and 3) they do not require a supplementary model. We first calculate the abstention probability using each function, then re-normalize the 10-class prediction probability so that the sum over the 11 classes becomes 1. Since not every function outputs an abstention probability between 0 and 1, we construct a smaller normalization set, gathered through the same process as the test set, and use it to normalize the abstention probability.
* Softmax Probability (SP) regards the entropy among the 10 classes as abstention probability.
* Adjusted Softmax Probability (ASP) acts the same as SP, but it applies temperature scaling and adds perturbations to the input image based on the gradients to decrease the softmax score. This method is inspired by ODIN [27].
* Mahalanobis detector (MD) [36] determines abstention probability based on the minimum Mahalanobis distance [42] calculated from each class distribution's mean and variance.
* KNN [70] uses the shortest \(k\)-Nearest Neighbor (KNN) distance between the feature of the test sample and the in-class features as an abstention probability.
* TAPUDD [13] extracts features from the train set and splits them into \(m\) clusters using a Gaussian Mixture Model (GMM). It determines the abstention probability based on the shortest Mahalanobis distance calculated over all clusters.
* OpenMax [3] represents each class as a mean activation vector (MAV) in the penultimate layer of the network. Next, the test sample distance from the corresponding class MAV is used to calculate the abstention probability.
* MC-Dropout [16] and Deep Ensemble [33] approximate model uncertainty using multiple predictions given by different dropouts and ensemble of networks, respectively. The average of the entropies over the 10 classes of each prediction determines the abstention probability.
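To illustrate how an abstention probability is folded into the 11-class output, here is a minimal sketch of SP. For self-containment it normalizes the entropy by its maximum, \(\log 10\), instead of using the separate normalization set described above:

```python
import math

def sp_abstention(logits):
    """Entropy of the 10-class softmax as abstention mass; the 10 class
    probabilities are rescaled so that all 11 outputs sum to 1."""
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]  # numerically stable softmax
    z = sum(exps)
    p = [e / z for e in exps]
    entropy = -sum(pi * math.log(pi) for pi in p if pi > 0.0)
    p_abstain = entropy / math.log(len(p))  # normalized to [0, 1]
    return [pi * (1.0 - p_abstain) for pi in p] + [p_abstain]

uncertain = sp_abstention([0.0] * 10)         # uniform logits -> full abstention mass
confident = sp_abstention([8.0] + [0.0] * 9)  # peaked logits -> little abstention
```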
### Visual Alignment and Reliability Score
Table 3 presents both the distance-based visual alignment and the reliability scores on the open test set for all model and abstention function combinations. One key observation is that the performance differences between model architectures are not significantly distinct, suggesting that visual alignment is more influenced by abstention functions than by model architectures. For _Must-Act_ categories, distance-based abstention functions (MD, KNN, and TAPUDD) exhibit better visual alignment. Conversely, for _Must-Abstain_ samples, probability-based methods (SP and ASP) align better with human perception. This implies that distance-based abstentions are generally more inclined to act, while probability-based abstentions are more likely to abstain. In the _Uncertain_ category, all abstention functions demonstrate similar visual alignment performance, predominantly ranging between 0.5 and 0.6. We conjecture this is because all models struggle to approximate the overall ratios across 11 classes, compared to _Must-Act_ and _Must-Abstain_, where models only need to correctly predict a single class. The difficulty of achieving visual perception alignment in _Uncertain_ suggests that there is room for improvement. KNN [70] has the best visual alignment across all categories on average. This might be because KNN can capture more fine-grained features than other distance-based abstention functions, as it calculates the distance between samples, not clusters. We also compute three reliability scores with \(c\) set to 0 (\(RS_{0}\)), 450 (\(RS_{450}\)), and 900 (\(RS_{900}\)). The resulting ratios of each action type are shown in Appendix G. Here, \(c=0\) indicates no negative impact from incorrect predictions, while \(c=900\) means that a single incorrect prediction outweighs the remaining correct predictions.
It is worth noting that reliability scores in \(RS_{450}\) and \(RS_{900}\) are mostly negative, suggesting that current models and abstention functions are not perfectly safe to be deployed in the real world.
### Experiment Results from Pre-training and Self-supervised Learning
Previous studies [1, 78, 24, 46] suggest that training on larger data and pre-training with self-supervised learning (SSL) methods help improve robustness and Out-of-Distribution (OOD) detection. To validate whether the same findings also apply to our task, we additionally measure the visual alignment and reliability score of models pre-trained on ImageNet [63] and of models pre-trained with two popular SSL methods, SimCLR [6] and BYOL [22]. For models pre-trained on ImageNet, after pre-training, we initialize the top classification layer and train on our train set while freezing the pre-trained parameters during fine-tuning. For models pre-trained with SSL methods, we do not freeze any layers after pre-training.
The results are shown in Table 4 and Table 5. The results in Table 4 can be compared to those in Table 3. For ImageNet pre-trained models, Transformer-based models show improved performance, whereas MLP-based and CNN-based models show similar or decreased visual alignment scores, especially when evaluated with SP. This indicates that the effect of pre-training on larger datasets depends on the model architecture. Interestingly, distance-based abstention functions display higher visual alignment scores. We suspect that the improved output embeddings from pre-training enable distance-based abstention functions to capture more precise features. Deep Ensemble shows better visual alignment when paired with Transformer-based and MLP-based models. Notably, Transformer-based models combined with KNN have the best visual alignment score. We conjecture this stems from both the model architecture and the abstention function. Unlike CNN-based models, Transformer-based models are able to capture global features of images instead of only local features. Also, KNN calculates the abstention probability based on the distance between samples instead of clusters, as done in
Table 4: Visual alignment (lower is better) per category and reliability scores (higher is better) of models pre-trained on ImageNet and fine-tuned on our train set.
MD or TAPUDD, and thus relies on more fine-grained features when deciding abstention. Therefore, abstention decisions based on fine-grained details of global features benefit most from training on a larger set, which leads to the best visual alignment. The overall reliability score increases with ImageNet pre-training, indicating that ImageNet pre-trained models are more likely to abstain.
As shown in Table 5, the results from SSL are highly dependent on both the model architecture and whether the abstention method is distance-based or not. For example, distance-based methods perform better on _Must-Abstain_ categories when paired with Swin Transformer. Unlike other abstention methods, Deep Ensemble generally performs better in all groups regardless of the model architecture. Note that even if the same abstention method is used, the effects on the performance are reversed depending on the model architecture used. As an example, when TAPUDD is combined with Swin Transformer, the performance increases on all Must-Abstain categories and decreases on all Must-Act categories, but the performance difference is reversed when TAPUDD is combined with DenseNet instead.
Overall, Deep Ensemble helps increase visual alignment performance under both ImageNet pre-training and SSL. However, other abstention functions did not show noticeable performance increases in either case. In short, previous findings on robustness and OOD detection cannot be directly applied to visual alignment. This implies that visual alignment has unique challenges that differentiate it from robustness and OOD detection tasks, and there is much room for developing new methods for better visual alignment. In general, KNN shows the best visual alignment score in all three tables (Table 3, Table 4, Table 5). This may be due to its use of detailed features when calculating the abstention probability. However, it is hard to identify a consistently optimal model architecture. For example, in Table 3, Swin Transformer and DenseNet, which have different architectures, perform best on average across all seven abstention functions. Therefore, more research on finding the optimal model architecture for visual alignment is needed.
Methods based on the minimum distance from each class (MD, KNN, and TAPUDD) generally show worse visual alignment on _Must-Abstain_ categories. We conjecture that the reason comes from using the shortest distance to in-class clusters: if an embedding contains one clear in-class feature, the distance to the corresponding class is short, leading the model to make a prediction. On the other hand, methods based on entropy or uncertainty show weak alignment on _Must-Act_ categories. With these methods, the model has to be confident not only that its predicted class is correct but also that the remaining classes are incorrect. Considering the confidence in all classes makes visual alignment in _Must-Act_ categories more challenging. An abstention function that combines the strengths of distance-based and probability-based methods is needed to perform well on visual alignment, and the distance should be computed sample-wise to capture the nuanced characteristics of each sample. Overall, our experiments show that no method performs well across all categories. There is much room for improvement in visual alignment, a field in which our dataset will become an essential tool for benchmarking new methods.
## 6 Conclusion
To the best of our knowledge, this is the first work to construct a test benchmark for quantitatively measuring the visual perception alignment between models and humans, referred to as _AI-human visual alignment_, across diverse scenarios. A dataset that tests visual perception alignment should cover multiple real-world scenarios and include gold human labels. Our dataset is divided into three main groups and eight categories, each representing unique and essential situations. In addition, our dataset includes gold human labels for each image, with some of these labels collected via an MTurk survey. We benchmarked five baseline models and seven popular abstention functions, and our experimental results show that no current methods perform well across all categories. This finding suggests there is significant room for improvement in the area of visual alignment. We believe VisAlign can serve as a universal benchmark for testing visual perception alignment and that our work has potential applications in both social and industrial contexts.
Despite our best efforts to construct VisAlign, there are some limitations. First, the number of classes is relatively small compared to other datasets, since we collected 134 annotations per image and chose classes that would be familiar to an average human. Note that it is always challenging to collect gold human labels in any domain. For example, in diagnosing chest X-rays, the typical number of diseases is 14. To collect the ground-truth labels within a statistical error bound of 5%, one would need to consult at least 107 radiologists. Therefore, more practical solutions are required to measure alignment in specialized domains. Another limitation comes from the nature of uncertainty. We acknowledge that uncertainty is continuous and it is hard to distinguish between clear and uncertain images. Although we put significant effort into including only clear images in _Must-Act_ and _Must-Abstain_ and obtained human annotations on _Uncertain_ images, there is a possibility of corner cases where at least one person disagrees. Furthermore, synthetic corruptions cannot cover all uncertainties arising in the real world. However, uncertainty is too broad to specify and difficult to collect or generate, so for now we use corruptions. We did our best to reflect the continuity of uncertainty by varying corruption intensity from 1 to 10 and included some corruptions that can arise in the real world (e.g., pixelation). We detail further discussions on uncertainty in Appendix I. Extending visual alignment to scenarios such as visual illusions may also be considered. Finally, beyond the object identification and abstention task studied here, future work can be expanded to potentially contentious but socially engaging topics such as gender or racial bias, as well as other vision tasks such as object detection and segmentation. |
2304.01828 | Learning Stable and Robust Linear Parameter-Varying State-Space Models | This paper presents two direct parameterizations of stable and robust linear
parameter-varying state-space (LPV-SS) models. The model parametrizations
guarantee a priori that for all parameter values during training, the allowed
models are stable in the contraction sense or have their Lipschitz constant
bounded by a user-defined value $\gamma$. Furthermore, since the
parametrizations are direct, the models can be trained using unconstrained
optimization. The fact that the trained models are of the LPV-SS class makes
them useful for, e.g., further convex analysis or controller design. The
effectiveness of the approach is demonstrated on an LPV identification problem. | Chris Verhoek, Ruigang Wang, Roland Tóth | 2023-04-04T14:32:07Z | http://arxiv.org/abs/2304.01828v2 | # Learning Stable and Robust Linear Parameter-Varying State-Space Models
###### Abstract
This paper presents two direct parameterizations of stable and robust _linear parameter-varying state-space_ (LPV-SS) models. The model parametrizations guarantee a priori that for all parameter values during training, the allowed models are stable in the contraction sense or have their Lipschitz constant bounded by a user-defined value \(\gamma\). Furthermore, since the parametrizations are _direct_, the models can be trained using unconstrained optimization. The fact that the trained models are of the LPV-SS class makes them useful for, e.g., further convex analysis or controller design. The effectiveness of the approach is demonstrated on an LPV identification problem.
## I Introduction
Systems in engineering are becoming more complex and are continuously being pushed to increase their efficiency, performance and throughput, which makes their behavior increasingly dominated by nonlinearities. This makes the process of modeling these systems increasingly difficult, as modeling based on first principles quickly becomes too tedious, costly, and/or inaccurate. Therefore, efficient data-driven modeling tools for these types of systems are becoming increasingly important.
The class of _linear parameter-varying_ (LPV) systems has been established to provide a middle ground between the complex, but general, nonlinear system models and the easy-to-use, but rather limited, _linear time-invariant_ (LTI) system descriptions. In LPV systems, the signal relations are considered to be linear, just as in the LTI case. However, the parameters that define these relations are assumed to be functions of a measurable, time-varying signal, the so-called _scheduling variable_ \(p\), which captures the nonlinear/time-varying effects of the underlying system [1]. The linearity property of LPV representations makes them attractive for modeling, analysis and control design, and the framework is supported by extensions of many powerful LTI approaches.
LPV system identification methods [1, 2] have also matured to provide LPV surrogate models of systems based on data. However, despite the many advances, it has remained an open question whether it is possible to _a priori_ enforce stability and performance properties on the identified model. Despite the promising results that have been achieved for set membership identification based on LPV _input-output_ (IO) models [3] with a computationally intensive approach, the problem has remained unsolved for other LPV model classes.
Over the years, _deep learning_-based system identification methods have been introduced for the data-driven modeling of complex nonlinear systems [4], including methods that focus on LPV models [5, 6, 7]. Generally, _recurrent neural network_ (RNN) model structures, such as LPV-SS models with NN-based coefficient dependencies, have been the main point of interest. This is because such models can provide efficient learning of the (often difficult to model) scheduling dependencies, significantly contributing to the accuracy and automation of the overall modeling process. However, the dynamic nature of RNNs implies that stability of the model plays a significant role in the training [8]. This stability problem gained interest in recent years [9, 10, 11] and led to the development of so-called implicit network structures [12], which allow for a more systematic analysis of (deep) network structures. Based on this systematic structure, a major research effort has been spent on stability and performance analysis of dynamic neural network models [13, 14], mainly based on Lipschitz and contraction [15] properties of the models. Although promising, many of these techniques require constrained optimization for the training of the networks, due to the enforced stability and/or performance constraints, which increases the computational complexity. Inspired by this drawback, _direct_ parametrizations of robust and stable RNNs have been introduced in recent years [16, 17], which allow for learning stable and robust deep-learning-based models using unconstrained optimization.
In this work, we join the efficient and attractive properties of the LPV framework with the recently introduced direct parametrization approaches that can give a priori stability and performance guarantees. More specifically, as our main contributions, we propose two direct parametrizations of LPV-SS models with NN-based coefficients, which automatically guarantee that the LPV-SS model is stable in terms of contraction or have a prescribed bound on its Lipschitz constant. We achieve this by making use of the Cayley transform, which has been recently used to achieve similar parametrizations for convolutional neural networks [18]. The added value of the LPV-SS model structure is that the learned model could be later used for further analysis and control design using the well-established tools of the LPV framework. Moreover, we want to highlight that this a priori guaranteed stability and robustness property of the LPV-SS model is attractive to use for modeling problems in
situations where experiment-design is often limited in terms of excitation due to, e.g., cost, while the underlying data-generating system (e.g., a reactor in the process industry) is fed during operations with inputs far outside the excitation range of allowed experiments.
In the remainder, we give the problem discussed in this paper in Section II, while the proposed solution to this problem, i.e., our main result, is given in Section III. We demonstrate the effectiveness of our results on an example in Section IV and the conclusions are given in Section V.
### Notation
We denote by \(\mathbb{N}\) the set of non-negative integers and by \(\mathbb{D}_{+}^{n}\) the set of \(n\)-dimensional positive diagonal matrices. \(\|\cdot\|_{2}\) denotes the Euclidian vector norm. For a matrix \(A\in\mathbb{R}^{n\times n}\), the operation \(\mathrm{tril}(A)\) outputs the lower triangular part of \(A\). Given a square matrix \(M\) with \(I+M\) invertible, its Cayley transform is defined as \(\mathrm{Cayley}(M):=(I-M)(I+M)^{-1}\).
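As a quick numerical companion to this notation, the Cayley transform can be sketched in a few lines of NumPy (an illustration of ours, not code from the paper):

```python
import numpy as np

def cayley(M: np.ndarray) -> np.ndarray:
    """Cayley(M) = (I - M)(I + M)^{-1}; assumes I + M is invertible."""
    I = np.eye(M.shape[0])
    return (I - M) @ np.linalg.inv(I + M)

# For a skew-symmetric argument the result is an orthogonal matrix,
# which is how the parameterizations below generate orthogonal factors.
S = np.array([[0.0, 2.0], [-2.0, 0.0]])
Q = cayley(S)
```

Here `Q.T @ Q` equals the identity, and applying `cayley` twice recovers `S`, i.e., the transform is an involution on its domain.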
## II Problem Statement
Given a dataset \(\mathcal{D}_{T}:=\{u_{t},p_{t},\tilde{y}_{t}\}_{t=1}^{T}\) where \(u_{t}\in\mathbb{R}^{n_{u}}\), \(p_{t}\in\mathbb{R}^{n_{p}}\), \(\tilde{y}_{t}\in\mathbb{R}^{n_{\text{y}}}\) are input, scheduling, and output signals of some length \(T\in\mathbb{N}\), we are interested in learning, i.e., identifying, a _linear parameter-varying state-space_ (LPV-SS) model \(\mathcal{M}_{\theta}\) via
\[\min_{\theta\in\Theta}\quad\mathcal{L}(\mathcal{M}_{\theta}(u,p),\tilde{y}) \tag{1}\]
where \(\mathcal{L}\) is the \(\ell_{2}\)-loss, i.e., the mean-squared error, of the simulation error:
\[\sum_{t=1}^{T}\|\tilde{y}_{t}-y_{t}\|_{2}^{2}, \tag{2}\]
with \(y=\mathcal{M}_{\theta}(u,p)\) describing the forward simulated model response of \(\mathcal{M}_{\theta}\) along the given input and scheduling trajectory \((u,p)\) in \(\mathcal{D}_{T}\) and estimated initial conditions. The model \(\mathcal{M}_{\theta}\) is described as
\[\begin{bmatrix}x_{t+1}\\ y_{t}\end{bmatrix}=\overbrace{\begin{bmatrix}A(p_{t})&B(p_{t})\\ C(p_{t})&D(p_{t})\end{bmatrix}}^{W(p_{t})}\begin{bmatrix}x_{t}\\ u_{t}\end{bmatrix}+b(p_{t}), \tag{3}\]
where \(x_{t}\in\mathbb{R}^{n_{x}},u_{t}\in\mathbb{R}^{n_{u}},y_{t}\in\mathbb{R}^{n_{\text{y}}},p_{t}\in\mathbb{P}\subseteq\mathbb{R}^{n_{p}}\) are the state, input, output and scheduling signals at time-instant \(t\in\mathbb{N}\), respectively. Here the actual functional dependency of the matrices \(A(p_{t}),\ldots,D(p_{t})\) and the bias (trimming term) \(b(p_{t})\) on the scheduling \(p_{t}\) is collected into the function \(\psi_{\theta}\) (see Fig. 1). The function
\[\psi_{\theta}:p\in\mathbb{P}\mapsto\{W,b\}, \tag{4}\]
is considered as a _deep neural network_ (DNN) parametrized with \(\theta\in\mathbb{R}^{n_{\theta}}\), which corresponds to the learnable parameters. This construction of the LPV model allows for a flexible choice of the dependency structure in \(A(p_{t}),\ldots,D(p_{t}),b(p_{t})\); for instance, one can learn an affine scheduling relationship
\[\begin{bmatrix}\mathrm{Vec}(W(p_{t}))\\ b(p_{t})\end{bmatrix}=\psi_{\theta}(p_{t}):=S_{1}p_{t}+S_{0}, \tag{5}\]
with \(\theta=(S_{0},S_{1})\) as the learnable parameters. In [5], \(\psi_{\theta}\) is considered a linear mapping, while a map \(\mu\) is learned with a deep neural network to synthesize the scheduling signal as \(p_{t}=\mu(u_{t},u_{t-1},\ldots,y_{t},y_{t-1},\ldots)\) directly from input-output signals. In this paper, we consider the scheduling signal to be given and part of the data set.
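As a concrete illustration of the affine case (5), a small NumPy sketch (the reshape convention for \(\mathrm{Vec}\) and all names are our assumptions):

```python
import numpy as np

def affine_psi(S0, S1, p, nx, nu, ny):
    """psi_theta(p) = S1 @ p + S0, split into W(p) of size (nx+ny) x (nx+nu)
    and the bias b(p) of size nx+ny; theta = (S0, S1) are learnable."""
    v = S1 @ p + S0
    rows, cols = nx + ny, nx + nu
    W = v[: rows * cols].reshape(rows, cols)
    b = v[rows * cols:]
    return W, b

# illustrative dimensions: 3 states, 1 input, 1 output, 2 scheduling signals
nx, nu, ny, np_dim = 3, 1, 1, 2
n_out = (nx + ny) * (nx + nu) + (nx + ny)      # vec(W) plus b
rng = np.random.default_rng(0)
W, b = affine_psi(rng.standard_normal(n_out),
                  rng.standard_normal((n_out, np_dim)),
                  rng.standard_normal(np_dim), nx, nu, ny)
```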
Furthermore, for the sake of simplicity, we consider (3) with no dedicated noise model under the assumption that the data-generating system has an _output-error_ (OE) type of noise structure. Note that the estimation of an innovation noise model can be easily incorporated into (3), and the results of the paper can be easily generalized to that case.
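A minimal forward-simulation of (3) together with the simulation-error loss (2) can be sketched as follows (an illustration of the setup, not the authors' implementation):

```python
import numpy as np

def simulate(psi, x0, u, p):
    """Simulate model (3): [x_{t+1}; y_t] = W(p_t) [x_t; u_t] + b(p_t),
    where psi maps p_t -> (W, b) as in (4)."""
    x, ys = np.asarray(x0, dtype=float), []
    nx = len(x)
    for ut, pt in zip(u, p):
        W, b = psi(pt)
        z = W @ np.concatenate([x, np.atleast_1d(ut)]) + b
        x, y = z[:nx], z[nx:]
        ys.append(y)
    return np.array(ys)

def sim_loss(y_pred, y_meas):
    """The l2 simulation-error loss (2)."""
    return float(np.sum((np.asarray(y_meas) - y_pred) ** 2))
```

With `psi` given by any of the parameterizations derived below, (1) amounts to minimizing `sim_loss(simulate(psi, x0, u, p), y_meas)` over the free parameters.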
In many applications, it is highly desirable to learn LPV-SS models via (1) with stability and robustness guarantees. Especially with a DNN parametrization of the coefficient functions, models estimated along the trajectory \(\mathcal{D}_{N}\) tend to provide deteriorated performance and even unstable behavior when the scheduling trajectory leaves the region where \(\mathcal{D}_{N}\) was obtained, causing much concern in their utilization for industrial applications. To prevent such phenomena occurring, we aim to ensure the following strong notions, which can help the model to exponentially forget the initial conditions and generalize to unseen data in a robust and stable manner:
**Definition 1**.: The system represented by (3) is said to be _contracting_, if for any two initial conditions \(x_{0}^{a},x_{0}^{b}\in\mathbb{R}^{n_{x}}\), any bounded sequences \(p\in\mathbb{P}^{\mathbb{N}}\), \(u\in(\mathbb{R}^{n_{u}})^{\mathbb{N}}\), the corresponding state sequences \(x^{a},x^{b}\) satisfy
\[\|x_{t}^{a}-x_{t}^{b}\|_{2}\leq K\alpha^{t}\|x_{0}^{a}-x_{0}^{b}\|_{2},\quad \forall t\in\mathbb{N}, \tag{6}\]
for some \(K>0\) and \(\alpha\in(0,1)\).
**Definition 2**.: The system represented by (3) is said to be \(\gamma\)_-Lipschitz_ for some \(\gamma>0\), if for any initial state \(x_{0}\in\mathbb{R}^{n_{x}}\), bounded parameter sequence \(p\in\mathbb{P}^{\mathbb{N}}\), and bounded input sequence pair \((u^{a},u^{b})\in(\mathbb{R}^{2n_{u}})^{\mathbb{N}}\), the corresponding output pair \((y^{a},y^{b})\) satisfies
\[\sum_{t=0}^{T}\|y_{t}^{a}-y_{t}^{b}\|_{2}^{2}\leq\gamma^{2}\sum_{t=0}^{T}\|u_{t }^{a}-u_{t}^{b}\|_{2}^{2},\quad\forall T\in\mathbb{N}. \tag{7}\]
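Definition 1 can be probed numerically by propagating two initial states through the same bounded scheduling trajectory and checking the geometric bound (6); the toy \(A(p)\) and the rate \(\alpha=0.9\) below are illustrative choices of ours, not taken from the paper:

```python
import numpy as np

def A_of_p(p):
    # toy scheduling-dependent state matrix; its spectral norm stays
    # below 0.9 for all |p| <= 1, so the recursion contracts with alpha = 0.9
    A = np.diag([0.5, 0.4, 0.3])
    A[0, 1] = 0.4 * p
    return A

rng = np.random.default_rng(0)
xa, xb = rng.standard_normal(3), rng.standard_normal(3)
gap0 = np.linalg.norm(xa - xb)
for t in range(50):
    p = np.sin(0.3 * t)                      # bounded scheduling signal
    xa, xb = A_of_p(p) @ xa, A_of_p(p) @ xb  # inputs cancel in the error dynamics
gap = np.linalg.norm(xa - xb)
```

The final `gap` satisfies the bound (6) with \(K=1\) and \(\alpha=0.9\), i.e., the two trajectories forget their initial conditions exponentially fast.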
Using these definitions, we solve the following problems in this paper:
**Problem 1**.: Construct the model parameterizations
\[\mathcal{M}^{c}:=\{\mathcal{M}_{\theta}\mid\mathcal{M}_{\theta} \text{ is contracting }\forall\theta\in\mathbb{R}^{n_{\theta}}\}, \tag{8a}\] \[\mathcal{M}^{\gamma}:=\{\mathcal{M}_{\theta}\mid\mathcal{M}_{ \theta}\text{ is }\gamma\text{-Lipschitz }\forall\theta\in\mathbb{R}^{n_{\theta}}\}. \tag{8b}\]
Fig. 1: The LPV state-space model and its parameterized scheduling dependency \(\psi_{\theta}\).
_Remark 1_.: With the above parameterizations, the learning problem (1) can be formulated as an unconstrained optimization problem with \(\Theta=\mathbb{R}^{n_{\theta}}\), which can be solved by off-shelf first-order methods (e.g., stochastic gradient descent).
## III Main Results
In this section, we first give sufficient conditions for contracting/\(\gamma\)-Lipschitz LPV-SS models and then present a direct parameterization such that those conditions are automatically satisfied during training.
### _Stable and robust LPV-SS models_
To study the contracting or \(\gamma\)-Lipschitz property of (3), we first consider the error dynamics between two arbitrary trajectories of (3) with the same scheduling signal, i.e., \((u^{a},x^{a},y^{a},p)\) and \((u^{b},x^{b},y^{b},p)\). For these trajectories, the error dynamics are:
\[\begin{bmatrix}\Delta x_{t+1}\\ \Delta y_{t}\end{bmatrix}=\begin{bmatrix}A(p_{t})&B(p_{t})\\ C(p_{t})&D(p_{t})\end{bmatrix}\begin{bmatrix}\Delta x_{t}\\ \Delta u_{t}\end{bmatrix} \tag{9}\]
where \(\Delta x=x^{a}-x^{b}\), \(\Delta u=u^{a}-u^{b}\) and \(\Delta y=y^{a}-y^{b}\). Then, (3) is contracting if (9) is exponentially stable, while (3) is \(\gamma\)-Lipschitz if (9) has an \(\ell_{2}\)-gain bound of \(\gamma\).
**Proposition 1**.: _The LPV-SS model (3) describes a contracting system, if there exist an \(\mathcal{X}\succ 0\) and an \(\alpha\in(0,1]\) s.t._
\[\alpha^{2}\mathcal{X}-A^{\!\top}(p)\mathcal{X}A(p)\succ 0,\quad\forall p \in\mathbb{P}. \tag{10}\]
_The represented system is \(\gamma\)-Lipschitz, if there exists an \(\mathcal{X}\succ 0\) such that_
\[\begin{bmatrix}\mathcal{X}&0\\ 0&\gamma^{2}I\end{bmatrix}-W^{\!\top}(p)\begin{bmatrix}\mathcal{X}&0\\ 0&I\end{bmatrix}W(p)\succ 0,\quad\forall p\in\mathbb{P}. \tag{11}\]
Proof.: Contraction of the system represented by (3) is defined for the differential state under the same input sequence, hence (9) with \(\Delta u_{t}=0\) becomes
\[\Delta x_{t+1}=A(p_{t})\Delta x_{t}. \tag{12}\]
Based on (10), we have
\[\alpha^{2}V(\Delta x_{t})\geq V(\Delta x_{t+1}), \tag{13}\]
where \(V(\Delta x)=\Delta x^{\top}\mathcal{X}\Delta x\), showing exponential stability of the error dynamics. This implies that the corresponding LPV-SS model is contracting.
To prove the \(\gamma\)-Lipschitz property of (3), we first multiply (11) from the left and right with \(\begin{bmatrix}\Delta x_{t}^{\top}&\Delta u_{t}^{\top}\end{bmatrix}\) and \(\begin{bmatrix}\Delta x_{t}^{\top}&\Delta u_{t}^{\top}\end{bmatrix}^{\top}\), respectively. This leads to
\[\gamma^{2}\|\Delta u_{t}\|_{2}^{2}-\|\Delta y_{t}\|_{2}^{2}\geq V(\Delta x_{t +1})-V(\Delta x_{t}). \tag{14}\]
Using a telescoping sum based on the above inequality and that \(\Delta x_{0}=0\), (7) is satisfied.
### _Model parameterization via Cayley transform_
The challenge in estimating \(\psi_{\theta}\) and ensuring stability of (3) is that condition (10) needs to hold for all \(p\in\mathbb{P}\subset\mathbb{R}^{n_{p}}\), representing an infinite-dimensional constraint that would need to be added to (1). While it is possible to achieve some relaxation of this constraint, e.g., by restricting \(\psi_{\theta}\) to be linear and \(\mathbb{P}\) to a convex polytope and turning (10) into a finite _semi-definite programming_ (SDP) problem, such relaxations (i) seriously restrict the representable class of systems and (ii) still involve a significant amount of computation time, which can quickly make the training intractable. We tackle these issues by deriving an analytic solution to (10).
**Theorem 1**.: _The model (3) defined by coefficient function \(\psi_{\theta}\) satisfies (10), if and only if there exist \(d\in\mathbb{R}^{n_{\text{x}}}\), \(\alpha\in(0,1]\), \(\mathcal{Y}\in\mathbb{R}^{n_{\text{x}}\times n_{\text{x}}}\) and a mapping \(\phi:p\mapsto(X,Y)\) with \(X(p),Y(p)\in\mathbb{R}^{n_{\text{x}}\times n_{\text{x}}}\) such that_
\[A(p)=\alpha Q\Lambda^{-1}M(p)\Lambda Q^{\top} \tag{15}\]
_with \(\Lambda=\operatorname{diag}(e^{d})\) and_
\[Q=\operatorname{Cayley}(\mathcal{Y}-\mathcal{Y}^{\top}),\quad M(p)= \operatorname{Cayley}(N(p)), \tag{16}\]
_where \(N(p)=X^{\top}(p)X(p)+Y(p)-Y^{\!\top}(p)+\epsilon I\) for some small positive constant \(\epsilon\)._
Proof.: We first show that (10) \(\Leftrightarrow\) (15) and then we prove that the invertible mapping between \(\Lambda,Q,M(p)\) and \(d,\mathcal{Y},X(p),Y(p)\) can be easily established based on Lemmas 1 and 2, which are given in the Appendix. For the sake of notational simplicity, we use subscript \(p\) to denote the dependency on the scheduling variable \(p\).
We first show that (15) \(\Rightarrow\) (10). By taking \(\mathcal{X}=Q\Lambda^{2}Q^{\top}\), \(\mathcal{X}\succ 0\) as \(QQ^{\top}=I\) due to Lemma 2. Then,
\[\alpha^{2}\mathcal{X}-A_{p}^{\top}\mathcal{X}A_{p}=\alpha^{2}Q\Lambda(I-M_{p} ^{\top}M_{p})\Lambda Q^{\top}\succ 0 \tag{17}\]
where positive definiteness of \(I-M_{p}^{\top}M_{p}\) follows by Lemma 1. Next, we show (10) \(\Rightarrow\) (15). Since \(\mathcal{X}\succ 0\), its _singular value decomposition_ (SVD) has the form \(\mathcal{X}=Q\Sigma Q^{\top}\) with \(\Sigma\in\mathbb{D}_{+}^{n_{\text{x}}}\), \(Q^{\top}Q=I\), and \(Q\) not having \(-1\) as an eigenvalue. By letting \(\Lambda=\Sigma^{1/2}\) we have
\[\text{(10)}\Rightarrow I-M_{p}^{\top}M_{p}\succ 0 \tag{18}\]
where \(M_{p}=\frac{1}{\alpha}\Lambda Q^{\top}A_{p}Q\Lambda^{-1}\), which gives (15).
Theorem 1 reveals that we can represent any \(\psi_{\theta}\) coefficient function parametrization for which the defined model (3) satisfies (10) by parameters \(d\), \(\mathcal{Y}\) and an unconstrained mapping
\[\phi_{\tilde{\theta}}:p\mapsto(X,Y,B,C,D,b).\]
that can be chosen as a DNN parametrized in \(\tilde{\theta}\). This means that we can transform the learnable parameters \(\theta\) to new parameters \(\{d,\mathcal{Y},\tilde{\theta}\}\) that guarantee that, for any value of them, the corresponding model (3) satisfies (10).
Note that in fact we can use any parameterization for \(\phi_{\tilde{\theta}}\). It could be a simple linear mapping (5) or have a polynomial parametrization, etc., which underlines the applicability of
Theorem 1 beyond deep learning based identification of LPV models.
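To make Theorem 1 concrete, the following NumPy sketch draws the free parameters at random, builds \(A(p)\) via (15)-(16), and checks the contraction certificate (10) with \(\mathcal{X}=Q\Lambda^{2}Q^{\top}\) as in the proof (illustrative code of ours, not the authors'):

```python
import numpy as np

def cayley(M):
    I = np.eye(M.shape[0])
    return (I - M) @ np.linalg.inv(I + M)

nx, alpha, eps = 4, 0.95, 1e-4
rng = np.random.default_rng(1)
# free parameters d, Ycal, X, Y drawn at random (any value is admissible)
d = rng.standard_normal(nx)
Ycal = rng.standard_normal((nx, nx))
X, Y = rng.standard_normal((nx, nx)), rng.standard_normal((nx, nx))

Lam = np.diag(np.exp(d))                   # Lambda = diag(e^d) > 0
Q = cayley(Ycal - Ycal.T)                  # orthogonal factor
N = X.T @ X + Y - Y.T + eps * np.eye(nx)
M = cayley(N)                              # satisfies M^T M < I
A = alpha * Q @ np.linalg.inv(Lam) @ M @ Lam @ Q.T   # (15)

Xcal = Q @ Lam @ Lam @ Q.T                 # certificate from the proof
cert = alpha**2 * Xcal - A.T @ Xcal @ A    # left-hand side of (10)
```

Every choice of the free parameters yields a contracting model (`cert` is positive definite and the spectral radius of `A` stays below \(\alpha\)), so a training loop can update them with plain gradient descent, cf. Remark 1.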
Similar results can be derived for the \(\gamma\)-Lipschitz property.
**Theorem 2**.: _The model (3) defined by coefficient function \(\psi_{\theta}\) satisfies (11), if and only if there exist \(d\in\mathbb{R}^{n_{\text{x}}}\), \(\mathcal{Y}\in\mathbb{R}^{n_{\text{x}}\times n_{\text{x}}}\) and a mapping \(\phi:p\mapsto(X,Y,Z)\) with \(X(p),Y(p)\in\mathbb{R}^{n\times n}\) and \(Z(p)\in\mathbb{R}^{n_{0}\times n}\), where \(n=n_{\text{x}}+\min(n_{\text{u}},n_{\text{y}})\) and \(n_{0}=|n_{\text{y}}-n_{\text{u}}|\), such that_
\[W(p)=\begin{bmatrix}Q\Lambda^{-1}&0\\ 0&I\end{bmatrix}M(p)\begin{bmatrix}\Lambda Q^{\top}&0\\ 0&\gamma I\end{bmatrix} \tag{19}\]
_with_
\[\begin{bmatrix}\operatorname{Cayley}(N(p))\\ -2Z(p)(I+N(p))^{-1}\end{bmatrix}=\begin{cases}M(p),&\text{if }n_{\text{y}}\geq n_{ \text{u}}\\ M^{\top}(p),&\text{if }n_{\text{y}}<n_{\text{u}}\end{cases} \tag{20}\]
_where \(N(p)=X(p)^{\top}X(p)+Y(p)-Y(p)^{\top}+Z(p)^{\top}Z(p)+\epsilon I\) with \(\epsilon\) as a small positive constant._
Proof.: We first rewrite (11) as follows
\[\tilde{\mathcal{X}}-W_{p}^{\top}\bar{\mathcal{X}}W_{p}\succ 0 \tag{21}\]
where \(\tilde{\mathcal{X}}=\operatorname{diag}(\mathcal{X},\gamma^{2}I)\) and \(\bar{\mathcal{X}}=\operatorname{diag}(\mathcal{X},I)\). By taking the SVD \(\mathcal{X}=Q\Sigma Q^{\top}\) and letting \(\Lambda=\Sigma^{1/2}\), we have \(I-M_{p}^{\top}M_{p}\succ 0\) where
\[M_{p}=\begin{bmatrix}\Lambda Q^{\top}&0\\ 0&I\end{bmatrix}W_{p}\begin{bmatrix}Q\Lambda^{-1}&0\\ 0&\gamma^{-1}I\end{bmatrix}. \tag{22}\]
Then, the techniques used in the proof of Theorem 1 can be directly applied to prove (11) \(\Leftrightarrow\) (19).
_Remark 2_.: The transformation in (20) can be considered as the Cayley transform for non-square matrices. When \(n_{\text{y}}=n_{\text{u}}\), the normal Cayley transform is recovered as in that case, \(Z(p)\) is an empty matrix.
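The same exercise for Theorem 2: draw the free parameters, form \(M(p)\) via the non-square Cayley transform (20), assemble \(W(p)\) via (19), and verify the Lipschitz certificate (11) with \(\mathcal{X}=Q\Lambda^{2}Q^{\top}\). Dimensions are illustrative choices of ours (here \(n_{\text{y}}<n_{\text{u}}\), so the transposed branch of (20) is used):

```python
import numpy as np

def cayley(M):
    I = np.eye(M.shape[0])
    return (I - M) @ np.linalg.inv(I + M)

nx, nu, ny, gamma, eps = 3, 2, 1, 1.0, 1e-4
n, n0 = nx + min(nu, ny), abs(ny - nu)

rng = np.random.default_rng(2)
X, Y = rng.standard_normal((n, n)), rng.standard_normal((n, n))
Z = rng.standard_normal((n0, n))
Ycal, d = rng.standard_normal((nx, nx)), rng.standard_normal(nx)

N = X.T @ X + Y - Y.T + Z.T @ Z + eps * np.eye(n)
Mtall = np.vstack([cayley(N), -2.0 * Z @ np.linalg.inv(np.eye(n) + N)])
M = Mtall.T if ny < nu else Mtall          # (20)

Lam, Q = np.diag(np.exp(d)), cayley(Ycal - Ycal.T)
L = np.block([[Q @ np.linalg.inv(Lam), np.zeros((nx, ny))],
              [np.zeros((ny, nx)), np.eye(ny)]])
R = np.block([[Lam @ Q.T, np.zeros((nx, nu))],
              [np.zeros((nu, nx)), gamma * np.eye(nu)]])
W = L @ M @ R                              # (19)

Xcal = Q @ Lam @ Lam @ Q.T                 # certificate for (11)
lhs = np.block([[Xcal, np.zeros((nx, nu))],
                [np.zeros((nu, nx)), gamma**2 * np.eye(nu)]])
lhs -= W.T @ np.block([[Xcal, np.zeros((nx, ny))],
                       [np.zeros((ny, nx)), np.eye(ny)]]) @ W
```

For any random draw, `lhs` is positive definite, so the assembled \(W(p)\) always certifies the \(\gamma\)-Lipschitz bound.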
## IV Example
With the following example1, we aim to demonstrate the effectiveness of the proposed robust and stable LPV-SS parametrization for deep learning based identification by comparing the training results with these models with training results using a general LPV model structure.
Footnote 1: The data-sets and code used for this example can be found at [https://tinyurl.com/robstablpv](https://tinyurl.com/robstablpv).
### _Data-generation_
The data-generating system is considered in an LPV-SS form with output noise:
\[x_{t+1} =A^{\text{d}}(p_{t})x_{t}+B^{\text{d}}(p_{t})u_{t}, \tag{23a}\] \[\tilde{y}_{t} =C^{\text{d}}(p_{t})x_{t}+D^{\text{d}}(p_{t})u_{t}+e_{t}, \tag{23b}\]
where \(u_{t}\in\mathbb{R}\) is the input, \(p_{t}\in\mathbb{R}^{3}\) is the scheduling, \(x_{t}\in\mathbb{R}^{3}\) is the state, \(\tilde{y}_{t}\in\mathbb{R}\) is the output that is disturbed by an i.i.d. white noise signal \(e_{t}\sim\mathcal{N}(0,0.08)\). The matrices \(A^{\text{d}},\ldots,D^{\text{d}}\) have static affine dependence on \(p_{t}\), i.e., \(A^{\text{d}}(p_{t}),\ldots,D^{\text{d}}(p_{t})\) are of the form \(X(p_{t})=X_{0}+\sum_{i=1}^{n_{\text{y}}}X_{i}p_{i,t}\) with
\[A^{\text{d}}_{0} =\begin{bmatrix}-0.3885&-0.1912&0.1631\\ -0.3261&-0.2583&-0.9150\\ -0.1664&-0.1384&0.0768\end{bmatrix},\qquad B^{\text{d}}_{0} =\begin{bmatrix}-3.4269\\ -0.3316\\ -2.10066\end{bmatrix}\] \[A^{\text{d}}_{1} =\begin{bmatrix}0.2650&-0.2214&-0.1866\\ 0.1747&0.1687&-0.5876\end{bmatrix},\qquad B^{\text{d}}_{1} =\begin{bmatrix}-1.1096\\ -0.8456\\ -0.57277\end{bmatrix}\] \[A^{\text{d}}_{2} =\begin{bmatrix}0.1476&0.1390&0.0901\\ 0.04122&0.1903&0.4027\end{bmatrix},\qquad B^{\text{d}}_{2} =\begin{bmatrix}-0.5587\\ -0.1784\\ -0.1969\end{bmatrix},\] \[A^{\text{d}}_{3} =\begin{bmatrix}0.1613&-0.0909&-0.1652\\ 0.0098&-0.0529&0.0591\end{bmatrix},\qquad B^{\text{d}}_{3} =0_{3\times 1},\] \[C^{\text{d}}_{0} =\begin{bmatrix}-0.2097&0.0607&0.1421\end{bmatrix},\qquad C^{ \text{d}}_{1} =C^{\text{d}}_{2} =C^{\text{d}}_{3}=0_{1\times 3},\] \[D^{\text{d}}_{0} =0.3,\quad D^{\text{d}}_{1}=0.01,\quad D^{\text{d}}_{2}=0,\quad D ^{\text{d}}_{3}=0.04.\]
For this system, \(A^{\text{d}}(p_{t})\) satisfies that the spectral radius of \(A^{\text{d}}(p_{t})\) is less than 1 for \(p_{t}\in[-1,1]\times[0,4]\times[-2,2]=:\mathbb{P}\), which is considered as the scheduling range.
From (23), four data-sets are obtained: a Training and a Validation data-set and two test-sets, Test-a and Test-b. We generated these sets by applying an input to (23) that is constructed as a white-noise signal with variance 0.05 added to a multi-sine. The multi-sine signal contains 10 sinusoidal components evenly distributed over the full normalized frequency spectrum. The scheduling signal is taken as white noise with a uniform distribution over \(\mathbb{P}\). The data-sets are composed of \(N_{\text{b}}\) trajectories, each of length \(T\). The generated data-sets and their individual length-\(T\) trajectories are uncorrelated. The specific details for the generated data-sets are listed in Table I. Hence, data-set Test-b is excited by and scheduled with an input and scheduling that are _outside_ the range represented in the Training and Validation data-sets. The generated data-sets are shown in Figs. 2-5. We want to highlight that with the aforementioned specification on the output noise \(e_{t}\), the _signal-to-noise ratio_ (SNR) for the Training, Validation and Test-a data-sets is 12 dB. This implies that the lowest possible _normalized root-mean-square error_ (NRMSE) that we can achieve when simulating the trained models is approximately 25%.
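For reproduction purposes, the excitation described above (a 10-component multi-sine plus white noise of variance 0.05, with scheduling uniform over \(\mathbb{P}\)) can be sketched as follows; the amplitudes, phases, and exact frequency grid are our assumptions since the text does not fix them:

```python
import numpy as np

def make_excitation(T, seed=0, n_sines=10, noise_var=0.05):
    rng = np.random.default_rng(seed)
    t = np.arange(T)
    # 10 sinusoids evenly distributed over the normalized frequency band (0, pi)
    freqs = np.pi * np.arange(1, n_sines + 1) / (n_sines + 1)
    phases = rng.uniform(0.0, 2 * np.pi, n_sines)
    multisine = sum(np.sin(f * t + ph) for f, ph in zip(freqs, phases)) / n_sines
    u = multisine + rng.normal(0.0, np.sqrt(noise_var), T)
    # scheduling: uniform white noise over P = [-1,1] x [0,4] x [-2,2]
    lo, hi = np.array([-1.0, 0.0, -2.0]), np.array([1.0, 4.0, 2.0])
    p = rng.uniform(lo, hi, (T, 3))
    return u, p

u, p = make_excitation(200)   # one length-200 trajectory, as in Table I
```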
### _Considered model structures_
To identify (23), we consider the \(\gamma\)-Lipschitz LPV-SS model parametrization of Theorem 2 with the following hyperparameters: the state dimension of the \(\gamma\)-Lipschitz LPV-SS model is chosen as \(n_{\text{x}}=3\). The mapping \(\phi_{\tilde{\theta}}:p\mapsto(X,Y,b)\) according to Theorem 2 is chosen as a feedforward neural network for each component with 2 hidden layers, each with 50 neurons, and linear in- and output layers (note that \(Z\) is empty). The value of \(\gamma\) is set to 1, such that the model is ensured to have a Lipschitz constant bound of 1.
The results of the identification with the \(\gamma\)-Lipschitz LPV-SS model are compared to estimation of an LPV model given
TABLE I: Specs of the generated data-sets

| Item \ Data-set | Training | Validation | Test-a | Test-b |
| --- | --- | --- | --- | --- |
| Range \(u_{t}\) | \([-1,1]\) | \([-1,1]\) | \([-1,1]\) | \([-20,20]\) |
| Range \(p_{t}\) | \(0.3\,\mathbb{P}\) | \(0.3\,\mathbb{P}\) | \(0.3\,\mathbb{P}\) | \(\mathbb{P}\) |
| \(T\) | 200 | 200 | 200 | 6000 |
| \(N_{\text{b}}\) | 3200 | 1280 | 30 | 1 |
by the following _linear fractional representation_ (LFR):
\[\begin{bmatrix}x_{t+1}\\ z_{t}\\ y_{t}\end{bmatrix}=\begin{bmatrix}A(p_{t})&B_{\mathrm{w}}(p_{t})&B(p_{t})\\ C_{\mathrm{z}}(p_{t})&0&D_{\mathrm{zu}}(p_{t})\\ C(p_{t})&D_{\mathrm{yw}}(p_{t})&D(p_{t})\end{bmatrix}\begin{bmatrix}x_{t}\\ w_{t}\\ u_{t}\end{bmatrix}, \tag{24}\]
_global_ stability and performance (in terms of \(\gamma\)-Lipschitz) properties of the to-be-trained model. The proposed model parametrizations are highly flexible and require no further constraints or optimization based stability checks compared to alternative solutions. The strength of having these guaranteed properties is demonstrated in an example that considers an LPV system-identification problem.
## Appendix A Appendix
The proofs of Theorems 1 and 2 make use of the following lemmas.
**Lemma 1**.: _Let \(M\in\mathbb{R}^{n\times m}\) with \(n\geq m\). Then, \(M^{\top}M\prec I\) if and only if there exist \(X,Y\in\mathbb{R}^{m\times m}\) and \(Z\in\mathbb{R}^{(n-m)\times m}\) such that_
\[M=\begin{bmatrix}\mathrm{Cayley}(N)\\ -2Z(I+N)^{-1}\end{bmatrix} \tag{25}\]
_where \(N=X^{\top}X+Y-Y^{\top}+Z^{\top}Z+\epsilon I\)._
Proof.: **Sufficiency.** Both \(I+N\) and \(I+N^{\top}\) are invertible as \(N^{\top}+N=2(\epsilon I+X^{\top}X+Z^{\top}Z)\succ 0\). Therefore, \(M\) is well-defined and satisfies
\[(I+N^{\top})(I+N)-(I+N^{\top})M^{\top}M(I+N)\] \[\quad=(I+N^{\top})(I+N)-(I-N^{\top})(I-N)-4Z^{\top}Z\] \[\quad=2(N^{\top}+N)-4Z^{\top}Z=4(\epsilon I+X^{\top}X)\succ 0, \tag{26}\]
which implies that \(M^{\top}M\prec I\).
**Necessity.** First, we partition \(M\) by \(M^{\top}=\begin{bmatrix}M_{1}^{\top}&M_{2}^{\top}\end{bmatrix}\). Then, \(I+M_{1}\) is invertible since \(M_{1}^{\top}M_{1}+M_{2}^{\top}M_{2}\prec I\). From (25), we have
\[N=\mathrm{Cayley}(M_{1}),\quad Z=-\tfrac{1}{2}M_{2}(I+N). \tag{27}\]
Let \(H:=\tfrac{1}{2}(N^{\top}+N)-Z^{\top}Z\). We can further obtain that
\[H=(I+M_{1}^{\top})^{-1}\big(I-M_{1}^{\top}M_{1}-M_{2}^{\top}M_{2}\big)(I+M_{1})^{-1}\succ 0,\]
where positive definiteness follows from \(M^{\top}M\prec I\). Hence, \(H-\epsilon I\succ 0\) for a sufficiently small \(\epsilon>0\), and by taking the eigendecomposition \(H-\epsilon I=U^{\top}\Sigma U\), we can construct \(X,Y\) as follows
\[X=\Sigma^{\frac{1}{2}}U,\quad Y=\tfrac{1}{2}N. \tag{28}\]
Substituting \(X,Y,Z\) into (26) recovers the matrix \(M\).
**Lemma 2**.: _Let \(M\) be a square matrix that does not have an eigenvalue of \(-1\). Then, \(M^{\top}M=I\) if and only if there exists a square matrix \(Y\) such that \(M=\operatorname{Cayley}(Y-Y^{\top})\)._
Proof.: **Sufficiency.** By defining \(N:=Y-Y^{\top}\) we have
\[M^{\top}M =(I+N^{\top})^{-1}(I-N^{\top})(I-N)(I+N)^{-1}\] \[=(I+N^{\top})^{-1}(I-N^{\top}-N+N^{\top}N)(I+N)^{-1}\] \[=(I+N^{\top})^{-1}(I+N^{\top}+N+N^{\top}N)(I+N)^{-1}\] \[=(I+N^{\top})^{-1}(I+N^{\top})(I+N)(I+N)^{-1}=I.\]
**Necessity.** Since \(-1\) is not an eigenvalue of \(M\), we have that \(I+M\) is invertible and thus \(N=\operatorname{Cayley}(M)\) is well-defined. Then, we can verify that \(N\) is skew-symmetric as
\[N^{\top}+N =(I+M^{\top})^{-1}(I-M^{\top})+(I-M)(I+M)^{-1}\] \[=2(I+M^{\top})^{-1}(I-M^{\top}M)(I+M)^{-1}=0.\]
By taking \(Y=\operatorname{tril}(N)\), we have \(N=Y-Y^{\top}\).
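Lemma 2 is easy to check numerically. Below is a quick sketch (assuming the convention \(\operatorname{Cayley}(N)=(I-N)(I+N)^{-1}\) used in the proof above); it also verifies that the Cayley transform is an involution, which is what the necessity direction relies on.

```python
import numpy as np

def cayley(n):
    """Cayley transform: Cayley(N) = (I - N)(I + N)^{-1}."""
    eye = np.eye(n.shape[0])
    return (eye - n) @ np.linalg.inv(eye + n)

rng = np.random.default_rng(0)
y = rng.standard_normal((4, 4))
n = y - y.T                      # N = Y - Y^T is skew-symmetric,
m = cayley(n)                    # so I + N is invertible

# Lemma 2 (sufficiency): M is orthogonal.
assert np.allclose(m.T @ m, np.eye(4))
# The transform is an involution: Cayley(Cayley(N)) = N.
assert np.allclose(cayley(m), n)
```

Since skew-symmetric matrices have purely imaginary eigenvalues, \(-1\) is never an eigenvalue of \(N\) and the inverse always exists.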
2310.14404 | Be Selfish, But Wisely: Investigating the Impact of Agent Personality in Mixed-Motive Human-Agent Interactions | Kushal Chawla, Ian Wu, Yu Rong, Gale M. Lucas, Jonathan Gratch | 2023-10-22T20:31:35Z | http://arxiv.org/abs/2310.14404v1

# Be Selfish, But Wisely: Investigating the Impact of Agent Personality in Mixed-Motive Human-Agent Interactions
###### Abstract
A natural way to design a negotiation dialogue system is via self-play RL: train an agent that learns to maximize its performance by interacting with a simulated user that has been designed to imitate human-human dialogue data. Although this procedure has been adopted in prior work, we find that it results in a fundamentally flawed system that fails to learn the value of compromise in a negotiation, which can often lead to no agreements (i.e., _the partner walking away without a deal_), ultimately hurting the model's overall performance. We investigate this observation in the context of the DealOrNoDeal task, a _multi-issue negotiation_ over _books_, _hats_, and _balls_. Grounded in negotiation theory from Economics, we modify the training procedure in two novel ways to design agents with diverse personalities and analyze their performance with human partners. We find that although both techniques show promise, a selfish agent, which maximizes its own performance while also avoiding walkaways, performs superior to other variants by implicitly learning to generate value for both itself and the negotiation partner. We discuss the implications of our findings for what it means to be a successful negotiation dialogue system and how these systems should be designed in the future.
## 1 Introduction
_"Firms [Agents], in the pursuit of profits, are led, as if by an invisible hand, to do what is best for the world." - Adam Smith: The Father of Modern Economics_
Negotiation is a crucial social influence interaction (Chawla et al., 2023), ubiquitous in everyday scenarios, from deciding who performs household chores to high-stakes business deals and legal proceedings. Consequently, negotiation dialogue systems find numerous applications in advancing conversational AI assistants (Leviathan and Matias, 2018), by advising human decision-making (Zhou et al., 2019), and in pedagogy, by making social skills training more effective (Johnson et al., 2019).
Negotiation is a complex _mixed-motive interaction_, involving motivations for both self-serving as well as cooperative and socialistic behaviors. A successful negotiator must not only learn to _extract concessions_ from the partner but also to _make concessions_ in order to reach an agreement. Maintaining this balance between self-interest and the interests of negotiation partners makes it a challenging task for automated dialogue agents. If an agent tries to take too much without any compromise, this can push the partner to walk away without an agreement, hurting the outcomes for both players.
One natural way to design such a system is through Self-play Reinforcement Learning (RL). **Step I:** Train a model \(S\) that imitates human-human dialogue data in a supervised manner. **Step II:** Create two copies of \(S\): \(S_{RL}\), which will be used
\begin{table}
\begin{tabular}{l|l} \hline \multicolumn{2}{c}{**Context (Alice: RL-Based, Bob: Supervised)**} \\ \hline \hline Counts & Book = 2, Hat = 1, Ball = 3 \\ Alice Values & Book = 1, Hat = 2, Ball = 2 \\ Bob Values & Book = 0, Hat = 7, Ball = 1 \\ \hline \hline \multicolumn{2}{c}{**Dialogue**} \\ \hline Alice & i would like the balls and hat and a book \\ Bob & you can have the balls and one book \\ Alice & i will take the balls and hat \\ Bob & deal \\ Alice & \(<\)dealselection\(>\) \\ \hline \hline \multicolumn{2}{c}{**Output**} \\ \hline Alice & Book = 0, Hat = 1, Ball = 3 \\ Bob & Book = 2, Hat = 0, Ball = 0 \\ \hline \hline \multicolumn{2}{c}{**Reward**} \\ \hline \hline Alice & \(8/10\) \\ Bob & \(0/10\) \\ \hline \end{tabular}
\end{table}
Table 1: A sample problematic negotiation dialogue between the standard RL agent (Alice) and a supervised model (Bob), based on Lewis et al. (2017). The task here is to divide the available books, hats, and balls between the two players. In this case, Bob accepts a deal even though it is very unfavorable, resulting in a high score for Alice.
for the RL agent, and \(S_{US}\), which acts as a _fixed simulated user_. **Step III:** Update \(S_{RL}\) to maximize its performance using an online RL algorithm by making it interact with \(S_{US}\) (_bot-bot interactions_) and recording the final performance achieved by the model (the _reward_).
Although adopted in prior work Lewis et al. (2017); He et al. (2018), we argue that this procedure leads to a fundamentally flawed system that fails to learn the value of compromise in a negotiation. **Arguments: 1)** The available human-human negotiation data mainly contains dialogues that end in agreements (\(\approx 80\)% in the DealOrNoDeal dataset Lewis et al. (2017)), instead of walkaways or no agreements, leading to a highly prosocial simulated user \(S_{US}\) that tends to show agreement, regardless of how favorable the deal is. Hence, when training the RL agent \(S_{RL}\) to maximize its own performance against \(S_{US}\), \(S_{RL}\) becomes highly self-interested without learning to make any concessions, since that leads to a high reward for \(S_{RL}\). We show one such problematic conversation between these two models in Table 1. **2)** Another piece of evidence comes from prior work Lewis et al. (2017). Even though such an RL model seems to perform well in automated evaluations (against the simulated user), it performs much worse against human partners, who often prefer to walk away with no agreement and \(0\) points earned for both parties rather than agreeing to an uncompromising partner. **3)** Finally, one can look at what happens if \(S_{RL}\) is made to play with another copy of \(S_{RL}\). In this case, we find that the agents simply get stuck: both continuously ask for what they want without looking for a compromise (refer to Appendix A for a sample conversation).
This failure hurts the practical utility of the system, both from the perspective of being a successful negotiator in conversational AI use cases and for providing social skills training in pedagogy. The key challenge here is to somehow teach the model to be a _mixed-motive_ negotiator instead of only self-interested, with a better understanding of the concept of walkaways in a negotiation, even though the collected dialogue data primarily consists of dialogues ending in agreements. To address this, we investigate two modifications to the training procedure, resulting in systems that exhibit diverse personalities1: 1) We vary the RL reward directly so that the model is forced to take the partner's interests into account. This corresponds to manipulating the _motives_ of the dialogue agent, a psychological concept that has received significant attention in the literature Murphy and Ackermann (2014). For this purpose, we rely on _a measure of utility_ from negotiation theory in Economics Fehr and Schmidt (1999), which helps us to control selfish vs. fair behavior explicitly. 2) We vary the _personality of the simulated user_ that the RL agent is trained with. This approach essentially manipulates the interaction experience that the agent receives so that the agent is itself allowed to discover the value of making concessions by being better exposed to walkaways during training. We now summarize our contributions:
Footnote 1: By personality, we simply refer to the consistent behavior portrayed by the trained agent ([https://www.apa.org/topics/personality](https://www.apa.org/topics/personality))
1. We provide evidence that the standard self-play RL training procedure fails to develop sophisticated negotiation dialogue systems useful in practical scenarios (Section 1).
2. To address this issue, we devise novel ways to modify the training procedure, grounded in negotiation theory from Economics, so as to design systems that exhibit diverse personalities and better understand the concept of walkaways (Section 3).
3. Through a comprehensive automated and human evaluation, we investigate what model variation allows for superior performance. Our key finding is that a selfish agent, which maximizes its own performance while also avoiding walkaways, achieves superior performance to other variants by learning to generate value for both itself and the negotiation partner (Section 5).
4. We discuss the implications of our findings for designing and evaluating negotiation dialogue systems in the future (Section 6).
## 2 Related Work
Historically, negotiation has been studied across several disciplines, including Game Theory Nash (1950) and Psychology Adair et al. (2001). More recently, there has been an increasing interest in human-agent negotiations as well Baarslag et al. (2016); Gratch et al. (2015). Extensive research has examined the effects of both agent and human personality in negotiation and related decision-making
tasks (Bogaert et al., 2008; Mell et al., 2018; van Wissen et al., 2009). However, most prior efforts analyze interactions based on structured communication channels such as through a menu of options (Mell and Gratch, 2016). Instead, Beaunay et al. (2022) studied participants' extreme reactions to unfair offers by a selfish chatbot in an ultimatum game. We contribute to this line of research by exploring diverse dialogue agent personalities and studying their impact on negotiation performance.
Several dialogue datasets (Lewis et al., 2017; Chawla et al., 2021; He et al., 2018; Yamaguchi et al., 2021) have fueled research into designing negotiation dialogue systems. RL has been a popular technique of choice in this space (Zhang et al., 2020; Yang et al., 2021). Yang et al. (2021) modeled the personality of the partners by a one-step dialogue-act look ahead in a buyer-seller negotiation domain and found that it leads to a higher agreement rate. Complementary to this, our work investigates the impact of diverse agent personalities by modifying both the underlying reward and the partner personality for RL training. In addition, we focus on using selfplay RL directly at the utterance level, which does not need additional annotations or separate parser and generator modules that are relatively difficult to design for general multi-issue negotiation tasks.
Other recent work has also explored the incorporation of additional annotations such as dialogue acts and strategy labels (Joshi et al., 2020). Nevertheless, our paper focuses on designing agents for mixed-motive interactions, which is fundamental to any underlying negotiation context and model architecture.
## 3 Methodology
We focus on bilateral multi-issue negotiations which involve a fixed set of issues (e.g., _books_, _balls_, and _hats_ in the DealOrNoDeal dataset (Lewis et al., 2017)). Each issue has a predefined quantity along with a random value (potentially different) assigned for every player. The players engage in a dialogue to reach an agreement - a possible division of all the available items in which they try to maximize the total value of the items that they get.
Our goal here is to develop techniques so that the trained dialogue models learn to make concessions (e.g., by offering deals that help the partner) for their partners apart from just learning to extract concessions from them. As discussed earlier, this mixed-motive behavior is a fundamental expectation from a practical negotiation dialogue system. To achieve this, we propose two complementary techniques - first, where we _explicitly_ incorporate the partner's performance into the reward function of the RL agent, and second, where the model _implicitly_ learns to make concessions by interacting with a specific partner during training. We start by describing our base RL framework and then discuss the two proposed techniques.
### Self-play RL for Negotiation Dialogue
We use the self-play RL framework introduced by Lewis et al. (2017) for training negotiation dialogue systems. Their pipeline consists of first training a supervised agent to mimic the collected human-human dialogue data and then using self-play RL to further optimize the model. As Lewis et al. (2017) note, training a supervised agent to mimic human actions is a scalable and domain-agnostic starting point. However, this model by itself is unable to engage in the strategic actions necessary for effective negotiation. By then having the supervised model negotiate with a fixed copy of itself (the simulated user) and fine-tuning the model using an online RL algorithm, the model can be optimized towards a given reward function (in this case, the points scored by the agent in the negotiation).
The framework relies on a sequence-to-sequence model based on an ensemble of Gated Recurrent Units or GRUs (Cho et al., 2014). The model consists of one unidirectional GRU for encoding the input goals of the agent, another to encode the utterances from both the agent and the human partner, and one bidirectional GRU to generate the output deal once the negotiation is over.2
Footnote 2: Although the exact choice of the model architecture is irrelevant to our analysis, we choose this lightweight architecture to enable our analysis with different kinds of agent personalities.
In the supervised stage, the model is trained on a combined cross-entropy loss that jointly optimizes both the next-token prediction and the output deal prediction. The RL agent is trained with the REINFORCE method (Williams, 1992).
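The REINFORCE update can be sketched in a few lines. The snippet below is a minimal, self-contained illustration with a flat categorical policy standing in for the GRU dialogue model (the discount factor matches the \(\gamma=0.95\) used in Section 4); it is not the paper's implementation, only the shape of the update.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def reinforce_update(theta, actions, reward, lr=0.1, gamma=0.95):
    """One REINFORCE step for a categorical policy pi = softmax(theta):
    each sampled action's grad-log-prob is scaled by the discounted
    episode reward, and theta moves in the gradient-ascent direction."""
    grad = np.zeros_like(theta)
    T = len(actions)
    for t, a in enumerate(actions):
        ret = reward * gamma ** (T - 1 - t)   # discounted return at step t
        g = -softmax(theta)                   # d log pi(a) / d theta ...
        g[a] += 1.0                           # ... = e_a - softmax(theta)
        grad += ret * g
    return theta + lr * grad

theta = np.zeros(3)
theta = reinforce_update(theta, actions=[2, 2, 1], reward=8.0)
assert theta[2] > theta[0]   # actions that led to reward become more likely
```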
### Proposed techniques
#### 3.2.1 Varying the reward function
The key idea here is to incorporate the partner's performance into the reward function used for training the RL agent. Intuitively, this would make the agent
more prone to offering deals or accepting deals that help the partner as well.
To approach this systematically, we leverage a _measure of utility_ defined in negotiation theory in Economics by Fehr and Schmidt (1999). The utility function \(U_{i}(x)\) is defined as follows:
\[U_{i}(x)=x_{i}-a\cdot\max(0,x_{j}-x_{i})-b\cdot\max(0,x_{i}-x_{j}) \tag{1}\]
where \(b\leq a,0\leq b<1\). \(i\) and \(j\) denote the two players in the negotiation. \(x=(x_{i},x_{j})\) denotes the points scored by the corresponding players. \(U_{i}(x)\) essentially captures the utility gained by the player \(i\) from the negotiation, given the points scored by all the players (\(x\)).
Fehr and Schmidt (1999) defined this utility measure to model diverse behaviors in human-human negotiations, noting that merely assuming that all players are selfish does not explain the data. Hence, to capture the diversity in human behaviors, the equation includes additional terms that capture the _advantage_ and the _disadvantage_ of player \(i\) with respect to player \(j\) in the negotiation. We repurpose this utility measure directly as the reward for the RL agent. By varying the coefficients \(a\) and \(b\), different reward functions that promote diverse personality behaviors can be generated. We demonstrate this in Table 2. For our analysis in this paper, we choose the _selfish_ and _fair_ configurations.
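In code, the reward computation amounts to a few lines. The sketch below uses the \(a,b\) values for the _selfish_ (\(a=b=0\)) and _fair_ (\(a=b=0.75\)) variants from Table 2:

```python
def fehr_schmidt(x_i, x_j, a=0.0, b=0.0):
    """Eq. 1: own points minus an `a`-weighted penalty for being behind
    the partner and a `b`-weighted penalty for being ahead."""
    return x_i - a * max(0, x_j - x_i) - b * max(0, x_i - x_j)

# Variants used in this work (Table 2)
selfish_reward = lambda x_i, x_j: fehr_schmidt(x_i, x_j, a=0.0, b=0.0)
fair_reward = lambda x_i, x_j: fehr_schmidt(x_i, x_j, a=0.75, b=0.75)

assert selfish_reward(8, 0) == 8   # partner's points don't matter
assert fair_reward(8, 0) == 2.0    # penalized for the lopsided 8-0 split
assert fair_reward(5, 5) == 5.0    # no penalty under an equal split
```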
#### 3.2.2 Varying the negotiation partner
While the above method, in some ways, explicitly pushes the agent to take the partner's performance into account, we now propose another technique to achieve this more implicitly.
Since the supervised model tends to show socialistic behaviors (Table 1), the RL agent fails to explore scenarios that do not lead to an agreement and, hence, cannot capture the notion of walkaways in the learned policy. However, if the agent were to interact with an uncompromising partner, this could be leveraged to simulate "walkaways" during model training, with the hope that the model discovers ways to avoid disagreements (while still optimizing on the reward), and thus implicitly learns about making concessions for the partner.
Hence, the key idea here is to vary the personality of the partner model. In addition, we define a length cut-off \(l\): if the conversation reaches \(l\) utterances, this is seen as a disagreement, and both agents receive \(0\) points from the negotiation. We explain how we design the diverse partner personalities for training later in Section 4.
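Schematically, an episode of self-play with this cut-off looks as follows. This is only a sketch with stand-in agents and an agreement marker; the real setup uses the dialogue models and deal parser described in Section 3.1.

```python
def negotiate(agent_a, agent_b, scores, max_turns=20):
    """Alternate utterances between two agents (callables over the
    dialogue so far). Reaching the length cut-off without agreement
    is treated as a walkaway: both players receive 0 points."""
    dialogue = []
    for turn in range(max_turns):
        speaker = agent_a if turn % 2 == 0 else agent_b
        dialogue.append(speaker(dialogue))
        if dialogue[-1] == "deal":        # stand-in agreement marker
            return scores                 # points under the agreed deal
    return (0, 0)                         # walkaway at the cut-off

stubborn = lambda dialogue: "i want everything"   # never concedes
assert negotiate(stubborn, stubborn, scores=(8, 2)) == (0, 0)
```

Training against a partner that sometimes forces this \((0,0)\) outcome is exactly what lets the RL agent discover the cost of never conceding.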
## 4 Experimental Design
We proposed two ways of training dialogue models that capture the mixed-motive nature of negotiations: 1) explicitly, by varying the reward function for the RL algorithm (Section 3.2.1), and 2) implicitly, by varying the partner with which the RL model is trained (Section 3.2.2). **The primary research question we aim to answer is what variation leads to superior performance with human partners.** We first describe the dataset and the study design, followed by results in Section 5.
**Dataset**: We use the DealOrNoDeal dataset (Lewis et al., 2017), which is based on the Multi-Issue Bargaining Task (Fershtman, 1990) design. The dataset uses a simplistic design involving \(3\) issues (_books_, _hats_, and _balls_), and has been a popular choice for research in negotiation dialogue systems. It comprises \(5808\) dialogues in English based on \(2236\) unique scenarios, where a scenario refers to the available items up for grabs and their corresponding values for the two players. In each scenario, there is a fixed quantity of each issue, and players are randomly assigned a point value before the negotiation for each of the \(3\) issues. The goal of the dialogue is to reach an agreement on the possible division of all the available items, where each player strives to maximize the total value of the items that they get. The maximum possible value for a player is \(10\). However, if no agreement is reached, then both players end up with \(0\) points. Nearly \(80\)% of the dialogues end in agreement, with an average of \(6.6\) turns per dialogue and \(7.6\) words per turn. We use the same splits as the original dataset paper to train our dialogue agents.
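The scoring rule is simple: a player's points are the value-weighted count of the items they receive, and a walkaway yields \(0\). For instance, Alice's \(8/10\) in Table 1 can be reproduced as:

```python
def points(share, values):
    """Total value of a player's share of the items."""
    return sum(share[item] * values[item] for item in share)

alice_values = {"book": 1, "hat": 2, "ball": 2}   # from Table 1
alice_share = {"book": 0, "hat": 1, "ball": 3}
assert points(alice_share, alice_values) == 8      # Alice's 8/10

bob_values = {"book": 0, "hat": 7, "ball": 1}
bob_share = {"book": 2, "hat": 0, "ball": 0}
assert points(bob_share, bob_values) == 0          # Bob's 0/10
```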
**Study Design**: We design a \(2\times 3\) study based on the strategies described in Section 3. We use a three-stage process to develop the \(6\) agent personalities: **Stage 1**: Develop a supervised likelihood model, following Lewis et al. (2017). **Stage 2**: Train two RL dialogue agents by varying the reward using the _selfish_ and _fair_ utility functions selected from Table 2. Note that the selfish configuration here is equivalent to the base RL model trained by Lewis et al. (2017). **Stage 3**: Train the remaining four RL agents by varying the reward function (_selfish_ vs. _fair_) and using either of the two models trained in Stage \(2\) as partners. We provide an overview of this process and describe our
notations in Figure 1.3
Footnote 3: Our implementation is based on [https://github.com/facebookresearch/end-to-end-negotiator](https://github.com/facebookresearch/end-to-end-negotiator).
**Hyperparameters**: We borrowed the hyperparameters from Lewis et al. (2017) and refer the readers to that paper for full details. The supervised model is trained for \(30\) epochs with a batch size of \(16\) using stochastic gradient descent. The initial learning rate is kept as \(1.0\), clipping gradients with \(L^{2}\) norm exceeding \(0.5\). This was followed by annealing of the learning rate by a factor of \(5\) per epoch. All the dialogue agents used in the experiments are initialized from this supervised model and trained for nearly \(16\)k agent-agent interactions with the partner model, using a learning rate of \(0.1\) and a discount factor of \(\gamma\)=\(0.95\). We use a length cut-off of \(20\) utterances to simulate walkaways: if a dialogue reaches \(20\) utterances, this is seen as a disagreement, and both players end up with \(0\) points.
**Human Evaluation**: We performed a human evaluation on the Prolific4 crowdsourcing platform. We collected nearly \(100\) agent-human conversations for each of the \(6\) dialogue models, where one human worker was allowed to participate only once. The workers were paid a base payment for their time, along with a lottery-based bonus that was dependent on their performance and effort. We provide more details in Appendix B, including statistics, worker qualifications, payments, and the design of the user interface.
Footnote 4: [https://www.prolific.co/](https://www.prolific.co/)
## 5 Results
Table 3 summarizes the human evaluation results. We analyze \(3\) key metrics: the points scored by the human, by the agent, and the total joint points, an indicator of the total value created in the negotiation. We also report the %age of walkaways (%age of dialogues that do not reach an agreement). We discuss the significant trends below.
To analyze the overall performance, we conducted \(2\) (reward \(r\): selfish vs. fair) x \(3\) (partner \(p\): supervised vs. selfish vs. fair) ANOVAs on the points earned in the negotiation. First, we found no significant differences in the points earned by the dialogue agents. However, the agent reward \(r\) significantly affected human points (F(\(1,577\)) =
\begin{table}
\begin{tabular}{|p{42.7pt}|p{42.7pt}|p{113.8pt}|p{113.8pt}|} \hline
**a** & **b** & **Utility (\(U_{i}(x)\))** & **Interpretation** \\ \hline
**0** & **0** & \(x_{i}\) & Selfish: partner points don’t matter. \\
1 & 0 & \(x_{i}-\max(0,x_{j}-x_{i})\) & Doesn’t like if the partner outperforms. \\
0 & -1 & \(x_{i}+\max(0,x_{i}-x_{j})\) & Selfish and Envious (desires poor partner performance). \\
**0.75** & **0.75** & \(x_{i}-0.75\max(0,x_{j}-x_{i})-0.75\max(0,x_{i}-x_{j})\) & Fair: Doesn’t like if the partner performs worse or better. \\ \hline
\end{tabular}
\end{table}
Table 2: Demonstration of reflected personalities by varying the parameters \(a\) and \(b\) from Equation 1. The variants used in this work are shown in bold.
Figure 1: The three-stage process used to design the \(6\) dialogue agents for our \(2\times 3\) study. \(r\): Reward that the RL agent is trained to maximize. \(p\): The partner with which the RL agent is trained. \(p\)=\(S\) corresponds to the model trained in Stage 1, while \(p\)=selfish and \(p\)=fair correspond to the respective models trained in Stage 2.
\(5.00\), p = \(.03\)), such that human partners playing with fair agents (\(r\)=fair) earned more points (M = \(5.73\); SE = \(0.18\)) than those playing with selfish ones (M = \(5.16\); SE = \(0.18\)). There was also a main effect of the partner \(p\) (F(2, \(577\)) = \(3.09\), p = \(.046\)), but both of these main effects were qualified by a significant interaction (F(2, \(577\)) = \(5.40\), p = \(.005\)). Consequently, this led to similar significant trends in the joint points earned (F(1, \(577\)) = \(5.21\), p = \(.02\)), such that fair agents (\(r\)=fair) earned more joint points with their partner (M = \(11.67\); SE = \(0.29\)) than selfish ones (M = \(10.72\); SE = \(0.29\)).
Interestingly, human partners earned more points with the \(M_{r=\text{selfish}}^{p=\text{selfish}}\) agent compared to other selfish agents, which also led to more joint points, bringing it on par with (or even better than) fair agents. A plausible explanation is that since the \(M_{r=\text{selfish}}^{p=\text{selfish}}\) agent is trained with an uncompromising partner (unlike other agents with \(r\)=selfish), it is better exposed to the repercussions of not making concessions for the partner, since the agent receives a \(0\) reward if there is no agreement (within \(20\) utterances). Thus, the agent learns to "_give in_" in order to avoid no agreements. Next, we test this explicitly by analyzing the %age of walkaways for each agent.
**%age of walkaways**: Indeed, a log-linear analysis reveals a significant interaction between the agent's reward \(r\) and partner \(p\) on the %age of walkaways (\(G^{2}\) = \(31.1\), p < \(.0001\)). Specifically, the effect of partner \(p\) only appears among the selfish agents (\(r\)=selfish), as revealed by a Chi-squared test (\(\chi^{2}\)(2) = \(7.63\), p = \(.02\)), where only \(9.7\)% of conversations with the \(M_{r=\text{selfish}}^{p=\text{selfish}}\) agent end without an agreement, much lower than with other selfish agents. This effect was not significant among the agents trained with a fair reward, in which case the %age of walkaways is observed to be low, regardless of the partner \(p\).
**Removing walkaways**: Once the instances that end up in walkaways are removed, we find that selfish agents (\(r\)=selfish) earn more points for themselves (M = \(6.79\); SE = \(0.17\)) than fair agents (M = \(6.28\); SE = \(0.16\); F(\(1\), \(510\)) = \(4.62\), p = \(.03\)). This means that the lack of significant effects above in agent points was due to walkaway instances that
\begin{table}
\begin{tabular}{l|l} \hline \multicolumn{2}{c}{**Context**} \\ \hline Counts & Book = 1, Hat = 3, Ball = 1 \\ Model Values & Book = 2, Hat = 1, Ball = 5 \\ Human Values & Book = 10, Hat = 0, Ball = 0 \\ \hline \multicolumn{2}{c}{**Dialogue**} \\ \hline Human & I would like the books \\ Model & can i have the ball and the book and you \\ & can have the hats \\ Human & you can have the ball and let’s split the \\ & books \\ Model & how about i get the ball and two hats and \\ & you get the rest? \\ Human & i am ok with that \\ Model & \textless{}dealselection\textgreater{} \\ & **Output** \\ \hline Model & Book = 0, Hat = 2, Ball = 1 \\ Human & Book = 1, Hat = 1, Ball = 0 \\ \hline \multicolumn{2}{c}{**Reward**} \\ \hline \hline Model & \(7/10\) \\ Human & \(10/10\) \\ \hline \end{tabular}
\end{table}
Table 4: Example conversation between the \(M_{r=\text{selfish}}^{p=\text{selfish}}\) agent and a human partner in our experimental study. The agent helps find a solution that leads to high performance for both players.
\begin{table}
\begin{tabular}{l|c c c|c c c|c} \hline \hline \multicolumn{1}{c|}{\multirow{2}{*}{**Model**}} & \multicolumn{3}{c|}{**Points Scored (Including walkaways) \(\uparrow\)**} & \multicolumn{3}{c|}{**Points Scored (Excluding walkaways) \(\uparrow\)**} & \multicolumn{1}{c}{**Walkaways \(\downarrow\)**} \\ \cline{2-7} & **Human** & **Agent** & **Joint** & **Human** & **Agent** & **Joint** & **(in \%)** \\ \hline
\(\mathbf{M_{r=\text{fair}}^{p=\text{S}}}\) & \(5.72\) (\(0.29\)) & \(5.99\) (\(0.29\)) & \(11.71\) (\(0.43\)) & \(6.03\) (\(0.28\)) & \(6.32\) (\(0.26\)) & \(12.35\) (\(0.34\)) & \(\mathbf{5.15}\) \\
\(\mathbf{M_{r=\text{fair}}^{p=\text{fair}}}\) & \(5.87\) (\(0.29\)) & \(\mathbf{6.04}\) (\(\mathbf{0.28}\)) & \(11.91\) (\(0.43\)) & \(6.24\) (\(0.26\)) & \(6.43\) (\(0.25\)) & \(12.67\) (\(0.33\)) & \(6.00\) \\
\(\mathbf{M_{r=\text{fair}}^{p=\text{selfish}}}\) & \(5.59\) (\(0.31\)) & \(5.80\) (\(0.32\)) & \(11.39\) (\(0.42\)) & \(5.89\) (\(0.30\)) & \(\mathbf{6.12}\) (\(\mathbf{0.30}\)) & \(\mathbf{12.01}\) (\(\mathbf{0.34}\)) & \(\mathbf{5.15}\) \\ \hline
\(\mathbf{M_{r=\text{selfish}}^{p=\text{S}}}\) & \(4.70\) (\(0.32\)) & \(5.58\) (\(0.39\)) & \(10.28\) (\(0.61\)) & \(\mathbf{5.86}\) (\(\mathbf{0.27}\)) & \(\mathbf{6.96}\) (\(\mathbf{0.33}\)) & \(12.82\) (\(0.38\)) & \(19.79\) \\
\(\mathbf{M_{r=\text{selfish}}^{p=\text{fair}}}\) & \(\mathbf{4.59}\) (\(\mathbf{0.35}\)) & \(\mathbf{5.20}\) (\(\mathbf{0.42}\)) & \(\mathbf{9.79}\) (\(\mathbf{0.67}\)) & \(6.07\) (\(0.29\)) & \(6.88\) (\(0.37\)) & \(12.96\) (\(0.41\)) & \(\mathbf{24.44}\) \\
\(\mathbf{M_{r=\text{selfish}}^{p=\text{selfish}}}\) & \(\mathbf{6.18}\) (\(\mathbf{0.30}\)) & \(5.90\) (\(0.28\)) & \(\mathbf{12.09}\) (\(\mathbf{0.48}\)) & \(\mathbf{6.85}\) (\(\mathbf{0.25}\)) & \(6.54\) (\(0.23\)) & \(\mathbf{13.39}\) (\(\mathbf{0.31}\)) & \(9.71\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Results from the human evaluation study. We report the Mean (Standard Error) wherever applicable. The **Joint** points are scored by computing the mean over the sum of the points scored by both players – an indicator of the joint value created in the negotiation. The maximum possible points for a player in a negotiation is \(10\). \(\uparrow\): Higher is better, \(\downarrow\): Lower is better. In each column, we highlight the worst and the best scores in **red** and **blue** respectively. We discuss the significant trends in Sections 5 and 6.
result in \(0\) points for the agent. Further, we note that even when walkaways are removed, the human partners earn more points with \(M_{r=\text{selfish}}^{p=\text{selfish}}\) agent than with other selfish agents. We observed similar trends for joint points as well, with maximum joint points for the \(M_{r=\text{selfish}}^{p=\text{selfish}}\) agent. This suggests that besides contributing to lesser walkaways, \(M_{r=\text{selfish}}^{p=\text{selfish}}\) agent further learns to discover creative solutions that help both the players. We show one such example in Table 4 and provide more examples from the human evaluation in Appendix C.
## 6 Discussion
Going beyond the typical reward formulations used in the literature, this is the first instance of leveraging prior Economics theories to explicitly incorporate the partner's performance within the reward of a self-play RL negotiation agent. Our formulation provides a systematic and general way to train mixed-motive agents with diverse personalities (Table 2). As shown in Figure 1, our multi-stage training process provides an automated way to simulate diverse partner behaviors as well, instead of the unscalable rule-based approaches followed in prior work (for instance, the price-based rules defined for buyer-seller negotiations in Yang et al. (2021)).
The overall points scored in Table 3 show that all fair agents (\(r\)=fair) and the \(M_{r=\text{selfish}}^{p=\text{selfish}}\) agent perform superior to the \(M_{r=\text{selfish}}^{p=\text{S}}\) agent, which is trained following the standard procedure used in prior work - in terms of the human points, agent points, and (consequently) the joint points. This suggests that both strategies of varying the reward and varying the partner during RL training show promise for teaching the mixed-motive nature of negotiations to the dialogue agents.
We especially note the superior performance of \(M_{r=\text{selfish}}^{p=\text{selfish}}\) agent. Trained with a simplistic reward that maximizes its own performance, \(M_{r=\text{selfish}}^{p=\text{selfish}}\) learns to make concessions implicitly by being better exposed to the repercussions of not doing so during training. This observation aligns with the philosophy of the '_Invisible Hand_' in Economics by _Adam Smith_(Grampp, 2000), which suggests that self-interested players are implicitly led (as if by an invisible hand) to cooperate and take actions that benefit others.
### Automated Evaluation
To gain additional insights into the behavioral diversity and the performance of the dialogue agents, we analyze the results from the agent-agent interactions. For this purpose, we gather \(388\) conversations for every pair of agents and observe the average points scored by both agents separately and jointly. We depict the agent performance using heatmaps in Figure 2. Self-interested agents that are less exposed to walkaways during training (\(M_{r=\text{selfish}}^{p=\text{S}}\) and \(M_{r=\text{selfish}}^{p=\text{fair}}\)) tend to exploit the agents trained with a fair reward. However, this behavior backfires when the partner model behaves similarly in a self-interested manner - both agents show uncompromising behavior that leads to higher disagreements (stuck in negotiation for \(>=20\) utterances) and ultimately, extremely low overall scores.
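The aggregation behind such heatmaps can be sketched as follows; the `simulate` stub and the variant names are placeholders for the actual agent-agent interaction code, which is not shown in the paper:

```python
# Sketch of the agent-agent evaluation: average the points each Alice
# variant scores against each Bob variant over many simulated dialogues.
from itertools import product
from statistics import mean

def score_matrix(alice_variants, bob_variants, simulate, n_dialogues=388):
    """Return {(alice, bob): mean Alice points} over n_dialogues runs.

    `simulate(alice, bob)` is assumed to run one negotiation and return
    the points (0-10) scored by the Alice variant.
    """
    matrix = {}
    for alice, bob in product(alice_variants, bob_variants):
        points = [simulate(alice, bob) for _ in range(n_dialogues)]
        matrix[(alice, bob)] = mean(points)
    return matrix
```

Each cell of Figure 2 corresponds to one entry of this matrix; the joint-points heatmap in Figure 3 can be obtained the same way by summing both players' points inside `simulate`.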
In general, we find the \(M_{r=\text{selfish}}^{p=\text{selfish}}\) agent to be superior, consistently achieving a high performance for itself (the last column) while also enabling a high performance for its partners (the last row). This trend is also evident from the corresponding
Figure 2: Heatmaps depicting the results from \(388\) agent-agent interactions. Each cell denotes the points scored (out of \(10\)) by the Alice variant (X-Axis) when it interacts with the corresponding Bob model (Y-Axis).
heatmaps for joint points shown in Figure 3.
### Subjective Assessment
Prior work has argued the importance of incorporating subjective measures in social influence tasks like negotiations (Aydogan et al., 2020). Although this is more relevant for repeated interactions between the same players (unlike in our case, which only involves one negotiation between an agent and a human partner), nevertheless, we present results on the subjective assessment of the human partners for completeness. Through a post-survey, we measured the human partners' satisfaction with the outcome and likeness towards the agent on a five-point scale (more details in Appendix B). We summarize the results in Figure 4.
Based on \(2\) x 3 ANOVAs, we find that human partners of the fair agents (\(r\)=fair) were significantly more satisfied (F(\(1\), \(576\)) = \(47.32\), p < \(.0001\)) as compared to the humans who interacted with the selfish ones, but this was qualified by a marginally significant interaction with the partner \(p\) (F(\(2\), \(576\)) = \(2.54\), p = \(.08\)). This can be attributed to the previously noted observation that human partners, on average, secured more points with fair agents.
We find similar trends with likeness towards the agent as well - human partners report higher likeness when playing with fair agents as compared to selfish ones (F(\(1\), \(577\)) = \(53.95\), p < \(.0001\)). Interestingly, among the selfish agents (\(r\)=selfish), the \(M_{r=\text{selfish}}^{p=\text{selfish}}\) agent achieved the highest subjective assessment from the human partners, bringing it close to the performance of the fair agents, even though it was trained with a selfish reward.
### Measuring Success
As discussed in prior work (Chawla et al., 2023), our analysis reflects upon the multi-faceted nature of the notion of success in negotiations, where observing a single dimension can be misleading. For example, when interacting with model \(S\), the
Figure 4: Subjective assessment by humans. Both metrics are measured on a scale of \(1\) to \(5\).
Figure 3: Heatmaps depicting the results from \(388\) agent-agent interaction. Each cell denotes the mean joint points scored by the corresponding Alice model variant (X-Axis) and the Bob variant (Y-Axis).
\(M_{r=\text{selfish}}^{p=\text{S}}\) agent seems to get high points for itself. However, our analysis shows that this is simply due to fewer walkaways, which occur far more often with other selfish agents or human partners. Thus, we stress the importance of a comprehensive evaluation of negotiation dialogue systems.
Perhaps the downstream application context can guide what metrics should be prioritized. From a pedagogical perspective, training agents that accurately reflect the diversity in human behavior (as in this work based on Equation 1) can itself be highly valuable for social skills training. Similarly, subjective assessment of the dialogue agents can be more important in scenarios involving relationships for long-term or repeated social influence interactions.
If the goal is to design a dialogue agent that performs the best for itself (regardless of partner performance), such as in a game context, perhaps the best strategy is to train it with a variety of partner personalities. The agent must develop a _theory-of-mind_ about the partner and learn to weigh _extracting concessions_ vs. _making concessions_ based on the personality of the specific partner in the interaction. We attempted to train such an agent, but unfortunately, not keeping the partner model fixed makes the training process unstable (also observed in Lewis et al. (2017)). One explanation for this is the relatively short conversations in DealOrNoDeal, which makes it hard to infer the partner's personality implicitly. Hence, there is value in extending our analysis to other negotiation dialogue datasets Yamaguchi et al. (2021); Chawla et al. (2021). In the future, we plan to integrate RL-based planning with Large Language Models (LLMs) for tackling these more complex scenarios, consisting of longer conversations and richer contexts.
## 7 Conclusion
We devised two variations of the standard self-play RL technique to inculcate the mixed-motive nature of negotiation into the dialogue agents. The first approach worked by varying the reward function and thereby, by explicitly pushing the model to take the partner's performance into account. In the second approach, we modified the personality of the partner agent during training, which allowed the RL agent to discover the mixed-motive nature of the task implicitly.
We find that both techniques hold promise, with an especially strong performance from the agent that is trained with a selfish reward and a self-interested partner. This agent not only improves on the agreement rate but also learns to discover offers that create value for its partner without hurting its own points significantly.
## 8 Broader Impact and Ethical Considerations
### Dataset Used
We used a publicly available version of the DealOrNoDeal dataset5. The dataset was completely anonymized prior to its release by the authors. Moreover, we verified the licensing details to ensure that the dataset was used only within its intended scope.
Footnote 5: [https://github.com/facebookresearch/end-to-end-negotiator](https://github.com/facebookresearch/end-to-end-negotiator)
### Human Evaluation
Our human evaluation experiment was approved by the relevant Institutional Review Board (IRB). Before the data collection, each participant signed an Informed Consent document, which outlined the study's objectives, warned about potential discomfort, and acknowledged the collection and future use of data. The participants were also informed of their right to withdraw from the study at any time. Furthermore, they were instructed to refrain from using offensive or discriminatory language during the experiment. The compensation provided to participants adhered to the guidelines established by our IRB approval process. Lastly, any mention of the personality of the human participants in this paper is based on the standard procedures of collecting personality metrics in the literature.
### Automatic Negotiation Systems
Negotiation has been actively studied in diverse research areas, including Economics, Psychology, and Affective Computing Carnevale and Pruitt (2003). More recently, it has been studied as a social influence dialogue task for automated systems Chawla et al. (2023).
Automated systems capable of negotiating via realistic modes of communication, such as natural language, hold a huge potential in making social skills training more scalable and effective Johnson et al. (2017). Personality-based variants of dialogue systems (such as the ones explored in this work) can also help to design experimental studies in Psychology to better understand human decision-making Gratch et al. (2015). Further, the
techniques developed can help to advance conversational AI, such as the Google Duplex Leviathan and Matias (2018), a system that engages in a simple form of negotiation to book a haircut appointment over the phone.
While these use cases are encouraging, these systems must be deployed in the wild by following proper ethical guidelines. Our primary recommendation is maintaining transparency - not only about the identity of the system but also about its capabilities, key design objectives, the data on which the model has been fine-tuned, along with any known discriminative or other undesirable behaviors. We encourage rigorous testing of the model behaviors pre-deployment and continuous monitoring post-deployment. We believe these recommendations should be followed for any human-centric AI models, including social influence dialogue systems and even Large Language Models.
## 9 Limitations
**Task Design**: The DealOrNoDeal task is based on a simplified abstraction of real-world negotiations, referred to as the Multi-Issue Bargaining Task or MIBT (Fershtman, 1990). MIBT assumes a fixed set of issues and predefined priorities for players before the negotiation begins. Although popular in NLP research and beyond, the MIBT framework does not capture several realistic negotiation scenarios, such as complex cases where an item can be split into more than one unit or cases where the priorities of the negotiators change during the interaction. Future work in data collection for negotiation tasks should consider such scenarios. Among the available datasets that use MIBT, more recent datasets capture richer negotiation contexts with relatively longer interactions, such as campsite negotiations in the CaSiNo dataset (Chawla et al., 2021) and salary negotiations in the JobInterview dataset (Yamaguchi et al., 2021) - We encourage future work to explore incorporating agent personalities for these datasets.
**Human Evaluation**: Following the design of the DealOrNoDeal dataset that contains dialogues in English, our human evaluation involved workers from a restricted demographic pool - nationality as USA and English as the native language. However, prior research has noted differences in negotiation behaviors across cultures (Andersen et al., 2018; Peng, 2008). Hence, it is unclear if our findings from the human evaluation would directly apply to workers from a different demographic. While this is out of the scope of our paper, this should be better explored in the future.
## Acknowledgments
We want to thank our colleagues at the University of Southern California for all the comments and helpful discussions that have shaped this project. We also thank the anonymous EMNLP \(2023\) reviewers for their valuable time and feedback. Our research was sponsored by the Army Research Office and was accomplished under Cooperative Agreement Number W911NF-20-2-0053. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Office or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes, notwithstanding any copyright notation herein.
|
2302.03369 | Decoding NGC 7252 as a blue elliptical galaxy | Elliptical galaxies with blue optical colours and significant star formation
are hypothesised to be major merger remnants of gas-rich spiral galaxies or
normal elliptical galaxies with a sudden burst of star formation. We present
here a scenario in which blue elliptical galaxies identified in shallow imaging
surveys may fail to recover faint features that are indicative of past merger
activity using a nearby major merger remnant. Based on deep optical imaging
data of the post-merger galaxy, NGC 7252, we demonstrate that the galaxy can
appear as an elliptical galaxy if it is observed at higher redshifts. The main
body and the low surface brightness merger features found at the outskirts of
the galaxy are blue in the optical g - r colour map. We argue that the
higher-redshift blue elliptical galaxies discovered in surveys as shallow as
the SDSS or DECaLS may be advanced mergers whose defining tidal features fall
below the detection limits of the surveys. This should be taken into
consideration during the morphological classification of these systems in
future and ongoing surveys. | Koshy George | 2023-02-07T10:24:13Z | http://arxiv.org/abs/2302.03369v2 | # Decoding NGC 7252 as a blue elliptical galaxy
###### Abstract
Elliptical galaxies with blue optical colours and significant star formation are hypothesised to be major merger remnants of gas-rich spiral galaxies or normal elliptical galaxies with a sudden burst of star formation. We present here a scenario in which blue elliptical galaxies identified in shallow imaging surveys may fail to recover faint features that are indicative of past merger activity using a nearby major merger remnant. Based on deep optical imaging data of the post-merger galaxy, NGC 7252, we demonstrate that the galaxy can appear as an elliptical galaxy if it is observed at higher redshifts. The main body and the low surface brightness merger features found at the outskirts of the galaxy are blue in the optical \(g-r\) colour map. We argue that the higher-redshift blue elliptical galaxies discovered in surveys as shallow as the SDSS or DECaLS may be advanced mergers whose defining tidal features fall below the detection limits of the surveys. This should be taken into consideration during the morphological classification of these systems in future and ongoing surveys.
## 1 Introduction
Massive galaxies in the local Universe are morphologically classified as elliptical (E), S0, and spirals. E/S0 galaxies are generally observed to be gas poor, without significant ongoing star formation. Spiral galaxies, on the other hand, tend to be gas rich and are actively forming stars. This is reflected as distinct bimodal regions in the optical colour-magnitude diagram and in diagrams of the stellar mass-star formation rate (Baldry et al. 2004; Brinchmann et al. 2004; Salim et al. 2007; Noeske et al. 2007; Elbaz et al. 2007; Daddi et al. 2007). The bimodal nature of star formation in galaxies could be explained by internal or external processes due to which star-forming galaxies cease to form new stars. This could be associated with a morphological change through which a star-forming spiral galaxy can become a non-star-forming E/S0 galaxy. This is possible through major mergers, in which two equal-mass star-forming spiral galaxies merge to form a massive elliptical galaxy. In an environmental process, star-forming spiral galaxies can also fall into galaxy clusters and groups without subsequent supply of gas, which halts star formation and transforms the morphology into an S0 galaxy. This is supported by the redshift evolution of stellar mass buildup in red-sequence (E/S0) galaxies that occurs at the expense of blue cloud (spiral) galaxies (Bell et al. 2004; Faber et al. 2007; Brown et al. 2007). However, the star formation rates of a small fraction of blue colour E/S0 galaxies are as significant as in spiral galaxies (Fukugita et al. 2004; Schawinski et al. 2009; Kannappan et al. 2009; Huertas-Company et al. 2010; McIntosh et al. 2014; Mahajan et al. 2018; Moffett et al. 2019; Dhiwar et al. 2022; Paspaliaris et al. 2023; Lazar et al. 2023). These galaxies are found to be in low-density regions with blue optical colours, and they occupy the main sequence of star-forming galaxies. The formation of blue elliptical galaxies is hypothesised as follows. 
Two gas-rich, equal-mass spiral galaxies can merge to form an elliptical galaxy with significant star formation (this is one of the channels for elliptical galaxy formation). The star formation will cease when the gas is exhausted, and the galaxy will change to a normal elliptical galaxy. The other scenario involves a rejuvenation process in which the normal elliptical galaxy acquires much gas that then collapses to form new stars.
We test the hypothesis of a major merger origin for blue elliptical galaxies using the known major merger remnant NGC 7252. The galaxy main body is a single-nucleus merger remnant and represents the final stages of the Toomre sequence of merging, where the merger remnant will eventually deplete the fuel for star formation and evolve into an elliptical galaxy (Toomre & Toomre 1972; Toomre 1977; Schweizer 1982). The advanced merger between two gas-rich spiral galaxies results in tidal tails, shells, and ripples around the main body in optical imaging (Schweizer 1982; Dupraz et al. 1990; Wang et al. 1992; Fritze-v. Alvensleben & Gerhard 1994; Hibbard et al. 1994). Significant star formation is detected in the main body and at the outskirts of the galaxy, with indications of a gaseous disk and possible active galactic nucleus (AGN) feedback at the centre (George et al. 2018a; Weaver et al. 2018; George et al. 2018b). The merger and the associated starburst in the galaxy are understood to have started 600-700 Myr ago (Hibbard & Mihos 1995; Chien & Barnes 2010). The surface brightness profile of the galaxy main body follows a de Vaucouleurs profile (typical of elliptical galaxies), in which optical spectroscopy reveals post-starburst features (Schweizer 1982; Hibbard & Yun 1999). The galaxy follows the scaling relations of normal elliptical galaxies, such as the Faber-Jackson and fundamental plane relation (Lake & Dressler 1986; Hibbard & Mihos 1995; Genzel et al. 2001; Rothberg & Joseph 2006). We used deep optical imaging data of NGC 7252 to investigate whether the galaxy shows properties similar to that of blue elliptical galaxies. Blue elliptical galaxies could have formed from a similar equal-mass spiral galaxy merger, in which the merger features reach beyond
the detection limit of the wide-field optical surveys (the Sloan Digital Sky Survey (SDSS) and the Dark Energy Camera Legacy Survey (DECaLS)) based on which these galaxies were originally classified. We explore the optical \(g-r\) colour of the low surface brightness features in the outskirts and compare the values against the main body of the galaxy. We place the galaxy at various redshifts, as it would be imaged at the surface brightness limit of the legacy survey imaging used in this work, by correcting for the angular size distance and applying cosmological surface brightness dimming. We adopt a flat Universe cosmology with \(H_{\rm o}=71\,{\rm km\,s^{-1}\,Mpc^{-1}}\), \(\Omega_{\rm M}=0.27\), \(\Omega_{\Lambda}=0.73\) (Komatsu et al. 2011).
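For the adopted cosmology, the arcsecond-to-kiloparsec conversion quoted in Figure 4 follows from the angular diameter distance in flat \(\Lambda\)CDM. A minimal numerical sketch (not code from the paper, and ignoring radiation and curvature terms, which are negligible here):

```python
# Physical scale subtended by 1 arcsec at redshift z for a flat LCDM
# cosmology with H0 = 71 km/s/Mpc, Omega_M = 0.27, Omega_Lambda = 0.73.
import math

C_KM_S = 299792.458          # speed of light in km/s
H0, OM, OL = 71.0, 0.27, 0.73

def angular_scale_kpc_per_arcsec(z: float, steps: int = 10000) -> float:
    """kpc per arcsec at redshift z (trapezoid-rule comoving distance)."""
    E = lambda zp: math.sqrt(OM * (1.0 + zp) ** 3 + OL)
    dz = z / steps
    # Comoving distance D_C = (c/H0) * integral_0^z dz'/E(z')
    integral = sum((1.0 / E(i * dz) + 1.0 / E((i + 1) * dz)) / 2.0 * dz
                   for i in range(steps))
    d_a_mpc = (C_KM_S / H0) * integral / (1.0 + z)   # angular diameter distance
    arcsec_rad = math.pi / (180.0 * 3600.0)
    return d_a_mpc * arcsec_rad * 1000.0             # Mpc -> kpc
```

At z \(\sim\) 0.5 this gives roughly 6 kpc per arcsec, consistent with the scales annotated in Figure 4.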
## 2 Data and analysis
NGC 7252 (RA: 22:20:44.7, Dec: \(-\)24:40:42) is a nearby major merger remnant with a spectroscopic redshift z = 0.0159 (Rothberg & Joseph 2006). The optical \(g,r,z\)-band imaging data of NGC 7252 were taken from Data Release 10 of the legacy survey DECaLS (Dey et al. 2019). DECaLS uses the Dark Energy Camera (DECam), a mosaic of 62 CCDs mounted at the prime focus of the 4 m Blanco telescope at the Cerro Tololo Inter-American Observatory. The
median full width at half maximum (FWHM) of the delivered image quality in the \(g,r,z\) bands is \(\sim\) 1.3, 1.2, and 1.1 arcsec, respectively. The photometric calibration was made using the Pan-STARRS1 DR1 photometry through the set of colour transformation equations given in Dey et al. (2019). The \(g,r,z\) band coadded images we used in our analysis were calibrated with pixel values stored in nanomaggies, which can be converted into magnitudes using the appropriate conversion stored in the header. DECaLS imaging reaches \(\sim\) 2 mag deeper than that of the SDSS and hence can detect low surface brightness features in the \(r\) band down to 28 mag arcsec\({}^{-2}\) (the corresponding limit for SDSS is 25 mag arcsec\({}^{-2}\)) (Driver et al. 2016; Hood et al. 2018).
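For reference, the standard nanomaggy-to-magnitude conversion uses a zero-point of 22.5; we assume here that it is equivalent to the conversion stored in the image headers:

```python
# Convert a flux in nanomaggies to an AB magnitude (zero-point 22.5).
import math

def nanomaggies_to_mag(flux_nmgy: float) -> float:
    """AB magnitude from a flux in nanomaggies: m = 22.5 - 2.5 log10(f)."""
    return 22.5 - 2.5 * math.log10(flux_nmgy)

# 1 nanomaggie corresponds to magnitude 22.5; 10 nanomaggies to 20.0.
```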
We used the \(grz\) imaging data to create a colour-composite image of NGC 7252 by assigning blue (\(g\)), green (\(r\)), and red (\(z\)) colours. Figure 1 shows the \(grz\) colour-composite image of NGC 7252. We note that the pixels from the central region of the galaxy main body are saturated in the \(r\)-band imaging data. The region covered by saturated pixels was masked and was not used for further analysis. Faint tidal features from the recent merger activity are detected around the galaxy. The morphological features around the galaxy were evaluated using the \(r\)-band imaging. The surface brightness map was created from the \(r\)-band imaging and is shown in Figure 2 with an inverted greyscale to detect faint low surface brightness features. We brought out the faint features by smoothing the pixel noise with a Gaussian of \(\sigma\)=1. We visually selected different regions, avoiding foreground stars (and likely stellar clusters) outside the galaxy, indicated them with coloured polygons, and marked the galaxy main body with a white contour. The optical \(g-r\) colour map of the galaxy was created from the flux-calibrated \(g,r\) coadded images (\(g-r\) = -2.5 \(\times\) log\({}_{10}\)(flux\({}_{g}\)/flux\({}_{r}\))) and is shown in Figure 3. The \(g,r\) surface brightness was computed for the marked regions of faint features in the outskirts and for the galaxy main body. The \(r\)-band surface brightness is plotted against the \(g-r\) colour of the regions in Figure 3. The selected merger features around the galaxy have integrated blue colours with a median \(g-r\) = 0.52. The main body of the galaxy has an early-type morphology with a \(g-r\) colour = 0.42. The colour value should be treated as a lower limit estimate, as the saturated pixels in the central region of the \(r\)-band imaging are masked and were not used to compute the colour.
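The per-pixel colour computation follows the formula quoted above. A minimal sketch, where the masking threshold is an illustrative assumption and not the value used for NGC 7252:

```python
# Pixel-by-pixel g-r colour map from calibrated g and r flux images,
# following g - r = -2.5 log10(flux_g / flux_r). Pixels with flux at or
# below `min_flux` in either band are masked (returned as None).
import math

def colour_map(flux_g, flux_r, min_flux=1e-3):
    """Return a 2-D list of g-r colours; None where a pixel is masked."""
    rows = []
    for g_row, r_row in zip(flux_g, flux_r):
        row = []
        for fg, fr in zip(g_row, r_row):
            if fg <= min_flux or fr <= min_flux:  # mask faint or bad pixels
                row.append(None)
            else:
                row.append(-2.5 * math.log10(fg / fr))
        rows.append(row)
    return rows
```

Because both bands share the same calibration, the zero-points cancel and the colour follows directly from the flux ratio.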
We now investigate the appearance of the galaxy at different redshifts in increments of 0.1 up to z = 1. We changed the galaxy size for the changing angular size distance with redshift and also took the effect of the cosmological surface brightness dimming (\(\mu+10\times\log_{10}(1+z)\)) on the surface brightness map (\(\mu\)) of the galaxy into account. The \(r\)-band surface brightness maps for different redshifts between \(0<z<1\) are shown in Figure 4. We limited the surface brightness to 28 mag arcsec\({}^{-2}\), the detection limit of the legacy survey, at every redshift. The surface brightness values fainter than this limit are not shown and would not be detected at that redshift. This is the expected \(r\)-band appearance of NGC 7252 at different redshifts when observed with a 4 m telescope. We note that this is a very simplified scenario that we put forward here for galaxies with a flat spectral energy distribution. In reality, the observed \(r\) band receives photons emitted in the rest-frame \(u\) band at z \(\sim\) 1.
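The dimming step applied to each redshifted map can be sketched as follows; the default limit of 28 mag arcsec\({}^{-2}\) reflects the DECaLS depth quoted earlier:

```python
# Apply cosmological surface brightness dimming, mu + 10 log10(1+z), to a
# rest-frame surface brightness and blank it if it falls below the survey
# detection limit (in mag/arcsec^2, where larger numbers are fainter).
import math

def redshifted_mu(mu_rest: float, z: float, limit: float = 28.0):
    """Observed surface brightness at z, or None if fainter than the limit."""
    mu_obs = mu_rest + 10.0 * math.log10(1.0 + z)
    return mu_obs if mu_obs <= limit else None

# A mu = 24 mag/arcsec^2 feature dims to ~27.0 at z = 1 (still detectable);
# a mu = 26 feature dims to ~29.0 and drops out of the map.
```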
Figure 5 shows the change in surface brightness as a function of redshift due to cosmological surface brightness dimming. The dotted black lines trace three different values of \(\mu\) from z = 0 to 1. The limiting surface brightnesses of the SDSS and DECaLS surveys are shown by the blue and green horizontal lines. The black point marks the measured surface brightness of the main body of NGC 7252, and the coloured points mark the surface brightnesses of the faint merger features at the outskirts, as defined in Figure 2.
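Inverting the dimming relation gives the redshift at which a feature of given rest-frame surface brightness crosses a survey limit. This sketch neglects bandpass shifting and resolution effects, consistent with the flat-spectrum assumption adopted for Figure 5:

```python
# Redshift at which a feature of rest-frame surface brightness mu_rest
# dims to a survey limit: solve mu_rest + 10 log10(1+z) = limit, giving
# z_max = 10**((limit - mu_rest) / 10) - 1.

def disappearance_redshift(mu_rest: float, limit: float) -> float:
    """Highest z at which the feature stays brighter than the limit."""
    return 10.0 ** ((limit - mu_rest) / 10.0) - 1.0

# A mu_rest = 24.5 mag/arcsec^2 feature drops out of SDSS-depth imaging
# (limit ~25) near z ~ 0.12, but survives far longer at DECaLS depth.
```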
## 3 Discussion
The \(\Lambda\) cold dark matter paradigm predicts that elliptical galaxies are formed through a hierarchical merging scenario (De Lucia et al. 2006). Multiple mergers involving different mass scales are possible, and the gas content dictates the star formation properties of the merger remnant. NGC 7252 is a nearby major merger remnant understood to have formed from a recent (\(<\) 700 Myr) equal-mass merger between two gas-rich spiral galaxies that created the main body around which tidal tails, shells, and ripples are formed, as shown in the optical colour-composite image in Figure 1. The main body of the galaxy shows properties typical of elliptical galaxies. The galaxy, however, resides in the blue cloud of the galaxy colour-magnitude diagram (Weaver et al. 2018), which is at odds with the normal elliptical galaxies. The stellar mass for NGC 7252 is computed to be \(\sim 10^{10.6}\) M\(\odot\) (Weaver et al. 2018).
Optical imaging data from wide-field surveys such as SDSS with an integration time \(\sim\) 54sec were used to classify blue E/S0 galaxies based on morphology (Fukugita et al. 2004; Schawinski et al. 2009). Schawinski et al. (2009) identified 204 blue E/S0 galaxies using the Galaxy Zoo classification,
Figure 2: Surface brightness map of NGC 7252 made from \(r\)-band imaging data. The integrated surface brightness is computed from selected regions along the low surface brightness merger features marked in differently coloured polygons. The galaxy main body is marked with a white outline.
which have blue \(u-r\) colours that are significantly bluer than the red sequence and are well within the blue cloud of the optical colour-magnitude diagram occupied by star-forming galaxies. The redshifts of these galaxies are \(0.02<z<0.05\), and the luminosities are greater than L\(\star\). They are found to be in lower-density environments than red sequence early-type galaxies and make up \(\sim 6\%\) of the low-redshift early-type galaxy population. Based on an analysis using emission line diagnostic diagrams, 25 % of these galaxies are actively star forming, 25 % host both star formation and an AGN, 12 % have an AGN, and 38 % show no strong emission lines that could be classified. With star formation rates ranging from 0.5 to 50 M\(\odot\)/yr, the star-forming blue E/S0 galaxies are found to host intense, spatially extended star formation. We are interested in understanding the formation of the star-forming population of blue elliptical galaxies. Blue S0 galaxies can have different formation scenarios.
We used deeper optical imaging data to investigate whether the star-forming blue elliptical galaxies share a common origin with NGC 7252. We created a \(g-r\) colour map and measured the \(g-r\) colours of selected merger features around the galaxy. We found them to be of blue colours with a median \(g-r=0.52\). The very central region of the galaxy has blue colours as well (\(g-r\)
Figure 4: Surface brightness map of NGC 7252 made from \(r\)-band imaging data as it could have appeared in different redshifts. The corresponding \(1\arcsec\) to kiloparsec conversion for each redshift is given inside the plots. A movie version of the plots is available online.
Figure 3: \(g-r\) color map of NGC 7252. The \(g-r\) colour is computed from selected regions along the low surface brightness merger features marked in differently coloured polygons. The galaxy main body is marked with a white outline. The \(r\)-band surface brightness from selected regions is plotted against the \(g-r\) colour. The colour scheme of points is the same as in the selected regions.
\(\approx\) 0.42) and is coincident with a star-forming disc revealed by HST (Whitmore et al. [15]) and UVIT far and near ultraviolet imaging (George et al. [14]). The main body of the galaxy has an elliptical morphology, which means that if it were detected without the tidal features, it would likely be classified as a blue elliptical galaxy. We note that significant neutral hydrogen (4.5 \(\times\) 10\({}^{9}\) M\(\odot\)) and molecular hydrogen (3.5 \(\times\) 10\({}^{9}\) M\(\odot\)) gas is detected, which indicates that a gas-rich wet merger scenario is responsible for the formation of the merger remnant (Hibbard et al. [15]; Wang et al. [16]). The blue elliptical galaxies discussed in Schawinski et al. ([14]) have \(>\) L\(\star\) luminosities and therefore stellar masses similar to that of NGC 7252.
We explored the appearance of NGC 7252 at higher redshift and the likely detection of the low surface brightness features from the merger. Figure 4 shows that by redshift 0.7, the galaxy would not have detectable merger features in shallow surveys. This suggests that a merger remnant like NGC 7252 residing at z \(\sim\) 0.7 would be morphologically classified as a blue elliptical galaxy. This is further demonstrated by the position of the galaxy main body on the grid of changing surface brightness for different redshifts between \(0<z<1\) in Figure 5. The main body of NGC 7252, shown with a black point, would be observed as an elliptical galaxy with a blue colour up to z \(\sim\) 1. The merger features seen at the outskirts, shown with coloured points, will disappear from shallow surveys such as the SDSS by z \(\sim\) 0.1, but will remain detectable in deeper surveys such as DECaLS up to z \(\sim\) 0.6. We note that this plot only applies to sources with a flat spectral energy distribution, for which bandpass shifting plays a negligible role.
We present here an idealised scenario for a merger remnant galaxy in the nearby Universe (z = 0.0159) as it would appear at redshifts up to z \(\sim\) 1. The galaxy would appear more compact at high redshifts in wide-field imaging, which favours its morphological classification as a blue elliptical. We did not consider the size reduction observed for elliptical galaxies at higher redshifts (Trujillo et al. [15]). This effect is more prominent for massive elliptical galaxies, and minor mergers are likely responsible for galaxies systematically increasing their sizes at low redshifts (Trujillo et al. [15]). We note that the major merger rate decreases towards lower redshifts, which can explain the low fraction of blue elliptical galaxies (Lotz et al. [14]).
Features around merger remnant galaxies can fade with time since the merger (Ji et al. [14]). The NGC 7252 merger features are seen very clearly, implying a recent (\(<\) 700 Myr) merger event. We note that almost all blue elliptical galaxies from Schawinski et al. ([14]), although at varying levels, host features indicative of recent mergers, revealed by structural analysis and deep-imaging data (George & Zingade [15]; George 2017, [16]). The blue elliptical galaxies reported in the shallow SDSS imaging may be observed at later times after the merger than is the case for NGC 7252. Ongoing and future wide-field deep optical surveys (DES, DECaLS, Euclid, and LSST) will likely detect more blue elliptical galaxies at higher redshifts, and will need adequate surface brightness sensitivity to reveal features that are indicative of recent merger activity.
## 4 Summary
Blue elliptical galaxies are interesting systems for understanding galaxy formation and evolution. We demonstrated based on deep-imaging data that the main body of the post-merger galaxy NGC 7252 can appear as a blue elliptical galaxy if it is observed at higher redshifts. With the evolution of the stellar population in the main body and the galaxy outskirts, the galaxy will most likely evolve into a normal elliptical galaxy with a red colour, hosting evolved stars. We argue that the blue elliptical galaxies found in shallow imaging surveys may be post-merger systems whose merger features fall below the detection limit of the surveys.
###### Acknowledgements.
The Legacy Surveys consist of three individual and complementary projects: the Dark Energy Camera Legacy Survey (DECaLS; Proposal ID 2014B-0404; PIs: David Schlegel and Arjun Dey), the Beijing-Arizona Sky Survey (BASS; NOAO Prop. 2015A-0801; PIs: Zhou Xu and Xiaohui Fan), and the Mayall z-band Legacy Survey (MzLS; Prop. ID 2016A-0453; PI: Arjun Dey). DECaLS, BASS and MzLS together include data obtained, respectively, at the Blanco telescope, Cerro Tololo Inter-American Observatory, NSF's NOIRLab; the Bok telescope, Steward Observatory, University of Arizona; and the Mayall telescope, Kitt Peak National Observatory, NOIRLab. Pipeline processing and analyses of the data were supported by NOIRLab and the Lawrence Berkeley National Laboratory (LBNL). The Legacy Surveys project is honored to be permitted to conduct astronomical research on Iolkam Du'ag (Kitt Peak), a mountain with particular significance to the Tohono O'odham Nation. NOIRLab is operated by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation. LBNL is managed by the Regents of the University of California under contract to the U.S. Department of Energy. This project used data obtained with the Dark Energy Camera (DECam), which was constructed by the Dark Energy Survey (DES) collaboration. Funding for the DES Projects has been provided by the U.S. Department of Energy, the U.S. 
National Science Foundation, the Ministry of Science and Education of Spain, the Science and Technology Facilities Council of the United Kingdom, the Higher Education Funding Council for England, the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign, the Kavli Institute for Cosmological Physics at the University of Chicago, Center for Cosmology and Astro-Particle Physics at the Ohio State University, the Mitchell Institute for Fundamental Physics and Astronomy at Texas A&M University, Financiadora de Estudos e Projetos, Fundacao Carlos Chagas Filho de Amparo a Pesquisa do Estado do Rio de Janeiro, Conselho Nacional de Desenvolvimento Cientifico e Tecnologico and the Ministerio da Ciencia, Tecnologia e Inovacao, the Deutsche Forschungsgemeinschaft and the collaborating Institutions in the Dark Energy Survey. The Collaborating Institutions are Argonne National Laboratory, the University of California at Santa Cruz,
Figure 5: Surface brightness variation with redshift due to the cosmological dimming (dotted black line). The surface brightness detection limits from the SDSS and DECaLS sky surveys are shown as green and blue lines. The integrated surface brightness of the main body of the galaxy is shown with a black point. The \(r\)-band surface brightness from selected regions at the outskirts is shown as coloured points, as in Figure 2.
the University of Cambridge, Centro de Investigaciones Energeticas, Medioambientales y Tecnologicas-Madrid, the University of Chicago, University College London, the DES-Brazil Consortium, the University of Edinburgh, the Eidgenossische Technische Hochschule (ETH) Zurich, Fermi National Accelerator Laboratory, the University of Illinois at Urbana-Champaign, the Institut de Ciencies de l'Espai (IEEC/CSIC), the Institut de Fisica d'Altes Energies, Lawrence Berkeley National Laboratory, the Ludwig Maximilians Universitat Munchen and the associated Excellence Cluster Universe, the University of Michigan, NSF's NOIRLab, the University of Nottingham, the Ohio State University, the University of Pennsylvania, the University of Portsmouth, SLAC National Accelerator Laboratory, Stanford University, the University of Sussex, and Texas A&M University. BASS is a key project of the Telescope Access Program (TAP), which has been funded by the National Astronomical Observatories of China, the Chinese Academy of Sciences (the Strategic Priority Research Program "The Emergence of Cosmological Structures" Grant XDB0900000), and the Special Fund for Astronomy from the Ministry of Finance. The BASS is also supported by the External Cooperation Program of Chinese Academy of Sciences (Grant 114A11KYSB20160057), and Chinese National Natural Science Foundation (Grant 11220101003, 11433005). The Legacy Survey team makes use of data products from the Near-Earth Object Wide-field Infrared Survey Explorer (NEOWISE), which is a project of the Jet Propulsion Laboratory/California Institute of Technology. NEOWISE is funded by the National Aeronautics and Space Administration. The Legacy Surveys imaging of the DESI footprint is supported by the Director, Office of Science, Office of High Energy Physics of the U.S. Department of Energy under Contract No. 
DE-AC02-05CH11231, by the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility under the same contract; and by the U.S. National Science Foundation, Division of Astronomical Sciences under Contract No. AST-0950945 to NOAO.
|
2302.04406 | Neural Architecture Search via Two Constant Shared Weights
Initialisations | In recent years, zero-cost metrics are gaining ground in neural architecture
search (NAS). These metrics allow finding the optimal neural network for a
given task faster and with a lower computational load than conventional NAS
methods. Equally important is that they also shed some light on the internal
workings of neural architectures. This paper presents a zero-cost metric that
highly correlates with the train set accuracy across the NAS-Bench-101,
NAS-Bench-201 and NAS-Bench-NLP benchmark datasets. We evaluate a neural
architecture's potential based on the outputs' statistics after two constant
shared weights initialisations. For this, we only use an unlabelled mini-batch
of data. We observe that the dispersion of the outputs between two
initialisations positively correlates with trained accuracy. The correlation
further improves when we normalise dispersion by average output magnitude. The
resulting metric, epsilon, does not require gradients computation and unbinds
the NAS procedure from training hyperparameters, loss metrics and
human-labelled data. Our method is easy to integrate within existing NAS
algorithms and takes a fraction of a second to evaluate a single network. The
code supporting this study can be found on GitHub at
https://github.com/egracheva/epsinas. | Ekaterina Gracheva | 2023-02-09T02:25:38Z | http://arxiv.org/abs/2302.04406v2 | # Light and Accurate: Neural Architecture Search via Two Constant Shared Weights Initialisations
###### Abstract
In recent years, zero-cost proxies are gaining ground in neural architecture search (NAS). These methods allow finding the optimal neural network for a given task faster and with a lower computational load than conventional NAS methods. Equally important is the fact that they also shed some light on the internal workings of neural architectures. This paper presents a zero-cost metric that highly correlates with the train set accuracy across the NAS-Bench-101, NAS-Bench-201 and NAS-Bench-NLP benchmark datasets. Architectures are initialised with two distinct constant shared weights, one at a time. Then, a fixed random mini-batch of data is passed forward through each initialisation. We observe that the dispersion of the outputs between two initialisations positively correlates with trained accuracy. The correlation further improves when we normalise dispersion by average output magnitude. Our metric, epsilon, does not require gradient computation or labels. It thus unbinds the NAS procedure from training hyperparameters, loss metrics and human-labelled data. Our method is easy to integrate within existing NAS algorithms and takes a fraction of a second to evaluate a single network.
Machine Learning, Neural Architecture Search, Zero-cost NAS, NAS-Bench-201, NAS-Bench-NLP
## 1 Introduction
The field of neural architecture search (NAS) emerged about a decade ago as an effort to automatise the process of neural geometry optimisation. At the early stages of NAS development, every candidate architecture used to be evaluated through the training process (reinforcement learning (Williams, 1992), evolutionary algorithms (Real et al., 2019), Bayesian optimisation (Falkner et al., 2018; White et al., 2021)).
One-shot algorithms adopting weight sharing dispense with training multiple architectures, reducing the search time drastically (efficient reinforcement learning (Pham et al., 2018), random search with parameter sharing (Li and Talwalkar, 2020), differentiable methods). Nevertheless, they require the training of a massive hypernet, which necessitates elaborate hyperparameter tuning. While these methods prove efficient, they do not systematically achieve satisfactory results (Dong and Yang, 2019). One of the best of them, DARTS- (Chu et al., 2020), shows significant uncertainty compared to evolutionary or reinforcement algorithms.
There are methods that estimate network performance without training on the dataset of interest, relying instead on an auxiliary predictive machine learning (ML) model built on a dataset of trained architectures (Istrate et al., 2019; Deng et al., 2017). These methods aim to accelerate the NAS process for image recognition but still rely on training and cannot apply to other ML problems.
Evaluating geometries through training brings multiple disadvantages. The most obvious is that training is a computationally expensive procedure, and large-scale geometry evaluation often cannot be carried out on massive datasets. Consequently, architectures are usually trained with a single random seed and a fixed set of hyperparameters. This fact raises the question of whether the chosen architecture is statistically reliable and might lead to selecting a sub-optimal
model, optimal only in the context of the fixed set of hyperparameters. Training also implies using hand-labelled data, which brings in human error - the ImageNet dataset, for instance, is known to have a label error of about \(6\,\%\) (Northcutt et al., 2021). Importantly, from the fundamental point of view, the above NAS methods do not explain why a given architecture is selected.
### Zero-cost NAS
To alleviate the process of architecture search, many researchers focus on developing methods that allow finding optimal architectures without model training - so-called zero-cost NAS methods. These methods evaluate networks via some trainless metric. They typically require the equivalent of one or a few training epochs, which is two to three orders of magnitude faster than other NAS methods.
**Weight agnostic neural networks.** One of the pioneering works in zero-shot NAS is presented by Gaier and Ha (2019). They demonstrate a constructor that builds up neural architectures based on the mean accuracy over several initialisations with constant shared weights and the number of parameters contained within the model. The resulting model achieves over \(90\%\) accuracy on MNIST data (LeCun et al., 2010) when the weights are fixed to the best-performing constants. While these results are very intriguing, the authors admit that such architectures do not perform particularly well upon training. Moreover, back in \(2019\), the benchmark databases of trained architectures, which are now routinely used to compare NAS metrics with each other, were yet to be released, which prevents comparison of this zero-shot method against the most recent ones.
**Jacobian covariance.** In \(2020\), Mellor et al. (2020) present the naswot metric, which exploits the rectified linear unit (ReLU, Agarap (2018)) activation function's property to yield distinct activation patterns for different architectures. Concretely, every image yields a binary activation vector upon passing through a network, forming a binary matrix for a mini-batch. The logarithm of the determinant of this matrix serves as a scoring metric. Authors show that larger naswot values are associated with better training performances, which leads to the conclusion that high-performing networks should be able to distinguish the inputs before training. Unfortunately, the method can only be implemented on networks with ReLU activation functions, which limits its applicability to convolutional architectures. In the first version of the paper released in June 2020, the authors presented another scoring method using Jacobian covariance (jacov) and achieved significantly different performances. Following Abdelfattah et al. (2021), we compare our results against jacov as well.
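To make the kernel construction concrete, the following is a minimal NumPy sketch of a naswot-style score. It assumes the binary ReLU activation codes (one row per mini-batch input, one column per ReLU unit) have already been collected, e.g. with forward hooks; the function name and shapes are our own illustration, not the authors' reference code.

```python
import numpy as np

def naswot_score(binary_codes):
    """naswot-style score from binary ReLU activation patterns.

    `binary_codes` is an (N, A) 0/1 matrix: N inputs, A ReLU units.
    K[i, j] counts the units on which inputs i and j agree, i.e.
    A minus their Hamming distance; the score is log|det K|.
    Larger scores mean the untrained net separates inputs better.
    """
    c = binary_codes.astype(np.float64)
    # agreements = co-active units + co-inactive units
    k = c @ c.T + (1.0 - c) @ (1.0 - c).T
    _, logdet = np.linalg.slogdet(k)
    return logdet
```

A batch whose inputs all produce the same activation pattern yields a singular kernel (score of negative infinity), while distinct patterns give a finite, larger score.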
Another work employing the abovementioned ReLU property is Chen et al. (2021). They combine the number of linear regions in the input space with the spectrum of the neural tangent kernel (NTK) to build the tenas metric. Instead of evaluating each network in the search space individually, they create a super-network built with all the available edges and operators and then prune it.
**Coefficient of variance.** Another early work on fully trainless NAS belongs to Gracheva (2021), which evaluates the stability of untrained scores over random weights initialisations. The author initialises the networks with multiple random seeds, and architectures are selected based on the coefficient of variance of the accuracy at initialisation, CV. While CV performance is associated with a high error rate, the author concludes that a good architecture should be stable against random weight fluctuations. While this method can, in theory, apply to any neural architecture type, it requires multiple initialisations and is relatively heavy compared to naswot and later methods. Furthermore, accuracy-based scoring metrics can only apply to classification problems, and it is unclear how to extend CV implementation to the regression tasks.
**Gradient sign.** The grad_sign metric is built to approximate the sample-wise optimisation landscape (Zhang and Jia, 2021). The authors argue that the closer local minima for various samples sit to each other, the higher the probability that the corresponding gradients will have the same sign. The number of samples yielding the same gradient sign approximates this probability. It allows one to evaluate the smoothness of the optimisation landscape and architecture trainability. The method requires labels and gradient computation.
**Pruning-at-initialisation proxies.** Several powerful zero-cost proxies have emerged as an adaptation of pruning-at-initialisation methods to NAS in the work by Abdelfattah et al. (2021): grad_norm (Wang et al., 2020), snip (Lee et al., 2018), synflow (Tanaka et al., 2020). These metrics are originally developed to evaluate the network's parameters' salience and prune away potentially meaningless synapses. They require a single forward-backwards pass to compute the loss. Then, the importance of parameters is computed as a multiplication of the weight value and gradient value. Abdelfattah et al. (2021) integrate the salience over all the parameters in the network to evaluate its potential upon training. What is particularly interesting about the synflow metric is that it evaluates the architectures without looking at the data by computing the loss as the product of all the weights' values (randomly initialised). synflow metric shows the most consistent performance among various search spaces and sets the state-of-the-art for the zero-cost NAS.
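As a rough illustration of the synflow idea, the sketch below computes per-parameter synaptic flow for a toy two-layer linear network in closed form: weights are replaced by their absolute values, an all-ones input is passed forward, the "loss" is the sum of the outputs, and each parameter's salience is \(|w \cdot \partial L/\partial w|\). The real metric runs autograd over the full architecture; the function, shapes and closed-form gradients here are illustrative assumptions for the linear case.

```python
import numpy as np

def synflow_salience(w1, w2, n_in):
    """Summed synaptic flow for a toy net y = |W2| @ |W1| @ x, x = 1.

    For a linear chain the gradients of L = sum(y) have closed form:
    dL/dA2[k, j] = h[j]  and  dL/dA1[j, i] = sum_k A2[k, j] * x[i],
    so no autograd is needed in this toy setting.
    """
    a1, a2 = np.abs(w1), np.abs(w2)        # (h, n_in), (n_out, h)
    x = np.ones(n_in)
    h = a1 @ x                             # hidden activations
    g2 = np.tile(h, (a2.shape[0], 1))      # dL/dA2
    g1 = np.outer(a2.sum(axis=0), x)       # dL/dA1
    # network score = sum over all parameters of |w * dL/dw|
    return float(np.sum(a1 * g1) + np.sum(a2 * g2))
```

Because the forward pass uses absolute weights and a constant input, the score depends only on the connectivity pattern and weight magnitudes, not on the training data, which is exactly the data-free property noted above.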
Both naswot and synflow do not depend on labels, which arguably reduces the effect of human error during data labelling. Moreover, naswot does not require gradient computation, which renders this method less memory-intensive.
The above results imply that neural networks might have some intrinsic property which defines their prediction potential before training. Such property should not depend on the values of trainable parameters (weights) but only on the network's topology. In the present work, we combine the takeaways from the existing trainless NAS implementations to present a new metric which significantly outperforms existing zero-cost NAS methods.
### NAS benchmarks
To guarantee our metric's reproducibility and compare its performance against other NAS algorithms, we evaluate it on the three widely used NAS benchmark datasets.
**NAS-Bench-101**: The first and one of the largest NAS benchmark sets of trained architectures. It consists of \(423{,}624\) cell-based convolutional neural networks (Ying et al., 2019). The architectures consist of three stacks of cells, each followed by max-pooling layers. Cells may have up to \(7\) vertices and \(9\) edges, with \(3\) possible operations. This benchmark is trained multiple times on a single dataset, CIFAR-10 (Krizhevsky et al., 2009), with a fixed set of hyperparameters for \(108\) epochs.
**NAS-Bench-201**: It is a set of architectures with a fixed skeleton consisting of a convolution layer and three stacks of cells connected by a residual block (Dong and Yang, 2020). Each cell is a densely connected directed acyclic graph with \(4\) nodes, \(5\) possible operations and no limits on the number of edges, providing a total of \(15{,}625\) possible architectures. Architectures are trained on three major datasets: CIFAR-10, CIFAR-100 (Krizhevsky et al., 2009) and a downsampled version of ImageNet (Chrabaszcz et al., 2017). Training hyperparameters are fixed, and the training spans \(200\) epochs.
**NAS-Bench-NLP**: As the name suggests, this benchmark consists of architectures suitable for natural language processing (Klyuchnikov et al., 2022). Concretely, it consists of randomly-generated recurrent neural networks. Recurrent cells comprise \(24\) nodes, \(3\) hidden states and \(3\) input vectors at most, with \(7\) allowed operations. Here, we only consider models trained and evaluated on Penn Treebank (PTB, Marcinkiewicz (1994)) dataset: \(14{,}322\) random networks with a single seed and \(4{,}114\) with three seeds. The training spans \(50\) epochs and is conducted with fixed hyperparameters.
## 2 Epsilon metric
Two existing NAS methods inspire the metric that we share in the present work: CV (Gracheva, 2021) and weight agnostic neural networks (Gaier and Ha, 2019). Both metrics aim to exclude individual weight values from consideration when evaluating networks: the former cancels out the individual weights via multiple random initialisations, while the latter sets them to the same value across the network. It is very intriguing to see that a network can be characterised purely by its topology.
As mentioned above, the CV metric has two principal disadvantages. While it shows a fairly consistent trend with trained accuracy, it suffers from high uncertainty. It can be, to some degree, explained by the fact that random weight initialisations bring in some noise. Our idea is that replacing random initialisations with single shared weight initialisations, similarly to Gaier and Ha (2019), should improve the method's performance.
The second weak point is that CV is developed for classification problems and relies on accuracy. Therefore, it is unclear how to apply this metric to regression problems. The coefficient of variation is a ratio of standard deviation to mean, and Gracheva (2021) shows that CV correlates negatively with train accuracy. It implies that the mean untrained accuracy should be maximised. On the other hand, for regression tasks, performance is typically computed as some error, which is sought to be minimised. It is not apparent whether the division by mean untrained _error_ would result in the same trend for the CV metric.
To address this issue, we decided to consider raw outputs. This modification renders the method applicable to any neural architecture. However, it comes with a difference: accuracy returns a single value per batch of data, while raw outputs are \([N_{\text{BS}}\times L]\) matrices, where \(N_{\text{BS}}\) is the batch size and \(L\) is the length of a single output. In our work, we flatten these matrices to obtain a single vector \(\mathbf{v}\) of length \(L_{v}=N_{\text{BS}}\times L\) per initialisation. We then stack both initialisations into a single output matrix \(\mathbf{V}\).
Before proceeding to statistics computation over initialisations, we also must normalise the output vectors: in the case of constant shared weights, outputs scale with weight values. In order to compare initialisations on par with each other, we use min-max normalisation:
\[\mathbf{V}^{\prime}_{i}=\frac{\mathbf{V}_{i}-\min(\mathbf{V}_{i})}{\max( \mathbf{V}_{i})-\min(\mathbf{V}_{i})}, \tag{1}\]
where \(i\) is the index for initialisations, \(i\in\{1,2\}\).
We noticed that two distinct weights are sufficient to grasp the difference between initialisations. Accordingly, instead of standard deviation, we use mean absolute error between the normalised outputs of two initialisations:
\[\delta=\frac{1}{L_{v}}\sum_{j=1}^{L_{v}}|\mathbf{V}^{\prime}_{1,j}-\mathbf{V}^{\prime}_{2,j}|. \tag{2}\]
The mean is computed over the outputs of both initialisations as follows:
\[\mu=\frac{1}{L_{v}}\sum_{j=1}^{L_{v}}\frac{\mathbf{V}^{\prime}_{1,j}+\mathbf{V}^{\prime}_{2,j}}{2}=\frac{1}{2L_{v}}\sum_{i=1}^{2}\sum_{j=1}^{L_{v}}\mathbf{V}^{\prime}_{i,j} \tag{3}\]
Finally, the metric is computed as the ratio of \(\delta\) and \(\mu\):
\[\varepsilon=\frac{\delta}{\mu}. \tag{4}\]
We refer to our metric as epsilon, as a tribute to the \(\varepsilon\) symbol used in mathematics to denote error bounds. Algorithm 1 details the epsilon metric computation.
```
Select a batch of data from the train set
for arch in search space do
    Initialise empty output matrix
    for weight in [val1, val2] do
        Initialise arch with constant shared weight
        Forward pass the batch through arch
        Get and flatten outputs
        Min-max normalise outputs (Eq. 1)
        Append outputs to the output matrix
    end for
    Compute difference between the rows of the output matrix (Eq. 2)
    Compute mean over the output matrix (Eq. 3)
    Compute epsilon metric (Eq. 4)
end for
```
**Algorithm 1** Algorithm for epsilon metric computation
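Algorithm 1 condenses into a short NumPy sketch. The `forward` callable and the two constant weight values below are illustrative placeholders (the paper's reference implementation is on GitHub and may differ in details).

```python
import numpy as np

def epsilon_metric(forward, batch, weights=(0.1, 1.0)):
    """Zero-cost epsilon score of one architecture (Eqs. 1-4).

    `forward(batch, w)` must run the network with every trainable
    parameter set to the constant `w` and return the raw outputs
    (batch_size x output_len). `weights` holds the two shared-weight
    constants; the specific values here are assumptions.
    """
    outs = []
    for w in weights:
        v = np.asarray(forward(batch, w), dtype=np.float64).ravel()
        rng = v.max() - v.min()
        v = (v - v.min()) / (rng if rng > 0 else 1.0)  # Eq. 1
        outs.append(v)
    v1, v2 = outs
    delta = np.mean(np.abs(v1 - v2))   # Eq. 2
    mu = np.mean((v1 + v2) / 2.0)      # Eq. 3
    return delta / mu                  # Eq. 4
```

Note that a network whose output is an affine function of its shared weight produces identical normalised vectors for both constants, so its epsilon is zero; only architectures whose outputs respond nonlinearly to the weight scale score above zero.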
## 3 Results
### Empirical evaluation
Here we evaluate the performance of epsilon and compare it to the results for zero-cost NAS metrics reported in Abdelfattah et al. (2021). We use the following evaluation scores (computed with NaN values omitted):
* Spearman \(\rho\) (global): Spearman rank correlation \(\rho\) evaluated on the entire dataset.
* Spearman \(\rho\) (top-\(10\%\)): Spearman rank correlation \(\rho\) for the top-\(10\%\) performing architectures.
* Kendall \(\tau\) (global): Kendall rank correlation coefficient \(\tau\) evaluated on the entire dataset.
* Kendall \(\tau\) (top-\(10\%\)): Kendall rank correlation coefficient \(\tau\) for the top-\(10\%\) performing architectures.
* Top-\(10\%\)/top-\(10\%\): fraction of top-\(10\%\) performing models within the top-\(10\%\) models ranked by zero-cost scoring metric (\(\%\)).
* Top-64/top-\(5\%\): number of top-64 models ranked by the zero-cost scoring metric that fall within the top-\(5\%\) performing models.
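For reference, the rank-based scores above can be computed with `scipy.stats`; the scores and accuracies below are made-up numbers for eight hypothetical architectures (with so few networks, the overlap is illustrated on the top-25% rather than the top-10%).

```python
import numpy as np
from scipy.stats import kendalltau, spearmanr

# Made-up zero-cost scores and trained accuracies for 8 architectures
scores = np.array([0.31, 0.74, 0.52, 0.90, 0.15, 0.66, 0.48, 0.81])
accs = np.array([70.2, 88.1, 80.5, 93.4, 65.0, 85.7, 79.9, 91.2])

rho, _ = spearmanr(scores, accs)   # global Spearman rank correlation
tau, _ = kendalltau(scores, accs)  # global Kendall rank correlation

# Overlap-style score: fraction of the truly best architectures that
# the metric also ranks best
k = len(scores) // 4
best_by_metric = set(np.argsort(scores)[-k:])
best_by_accuracy = set(np.argsort(accs)[-k:])
overlap = len(best_by_metric & best_by_accuracy) / k
```

The top-\(10\%\) variants of \(\rho\) and \(\tau\) are obtained the same way after first restricting both arrays to the best-performing decile of architectures.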
#### 3.1.1 NAS-Bench-201
The results for overall epsilon performance on NAS-Bench-201 are given in Table 1 along with other zero-cost NAS metrics. The Kendall \(\tau\) score is not reported in Abdelfattah et al. (2021), but it is considered more robust than Spearman \(\rho\) and is increasingly used for NAS metric evaluation. We use the data provided by Abdelfattah et al. (2021) to evaluate their Kendall \(\tau\). Note that our results differ from the original paper for some evaluation scores. In such cases, we indicate the original values between brackets. In particular, there is a discrepancy in computing the values in the last column, _Top-64/top-\(5\%\)_, while the rest of the results are consistent. Figure 6 in Appendix suggests that our calculations are correct.
For NAS-Bench-201, we also report average performance when selecting one architecture from a pool of \(N\) random architectures. The statistics are reported over \(500\) runs. Table 2 compares epsilon to other trainless metrics. Note that tenas starts with a super-network composed of all the edges and operators available within the space. In this case, \(N\) is not applicable, and the performance cannot be improved. In principle, other methods' performance should improve with higher \(N\) values.
\begin{table}
\begin{tabular}{l c c c c c c c c c} \hline \hline \multirow{2}{*}{Metric} & \multicolumn{3}{c}{Spearman \(\rho\)} & \multicolumn{2}{c}{Kendall \(\tau\)} & \multicolumn{2}{c}{Top-10\%/} & \multicolumn{2}{c}{Top-64/} \\ \cline{2-10} & global & top-10\% & global & top-10\% & top-10\% & top-5\% \\ \hline \multicolumn{10}{c}{CIFAR-10} \\ \hline grad\_sign & 0.77 & & & & & & & & \\ synflow & 0.74 & & 0.18 & & 0.54 & 0.12 & 45.75 & (46) & 29 & (44) \\ grad\_norm & 0.59 & (0.58) & -0.36 & (-0.38) & 0.43 & -0.21 & 30.26 & (30) & 1 & (0) \\ grasp & 0.51 & (0.48) & -0.35 & (-0.37) & 0.36 & -0.21 & 30.77 & (30) & 3 & (0) \\ snip & 0.60 & (0.58) & -0.36 & (-0.38) & 0.44 & -0.21 & 30.65 & (31) & 1 & (0) \\ fisher & 0.36 & & -0.38 & & 0.26 & -0.24 & 4.99 & ( 5) & 0 & (0) \\ jacov & -0.73 & (0.73) & 0.15 & (0.17) & 0.55 & -0.10 & 24.72 & (25) & 11 & (15) \\ epsilon & **0.87** & & **0.55** & & **0.70** & **0.40** & **67.39** & & **59** & \\ \hline \multicolumn{10}{c}{CIFAR-100} \\ \hline grad\_sign & 0.79 & & & & & & & & & \\ synflow & 0.76 & & 0.42 & & 0.57 & 0.29 & 49.71 & (50) & 45 & (54) \\ grad\_norm & 0.64 & & -0.09 & & 0.47 & -0.05 & 35.00 & (35) & 0 & (4) \\ grasp & 0.55 & (0.54) & -0.10 & (-0.11) & 0.39 & -0.06 & 35.32 & (34) & 3 & (4) \\ snip & 0.64 & (0.63) & -0.08 & (-0.09) & 0.47 & -0.05 & 35.25 & (36) & 0 & (4) \\ fisher & 0.39 & & -0.15 & (-0.16) & 0.28 & -0.10 & 4.22 & ( 4) & 0 & (0) \\ jacov & -0.70 & (0.71) & 0.07 & (0.08) & 0.54 & 0.05 & 22.11 & (24) & 7 & (15) \\ epsilon & **0.90** & & **0.59** & & **0.72** & **0.43** & **81.24** & & **62** & \\ \hline \multicolumn{10}{c}{ImageNet16-120} \\ \hline grad\_sign & 0.78 & & & & & & & & & \\ synflow & 0.75 & & **0.55** & & 0.56 & **0.39** & 43.57 & (44) & 26 & (56) \\ grad\_norm & 0.58 & & 0.12 & (0.13) & 0.43 & 0.09 & 31.29 & (31) & 0 & (13) \\ grasp & 0.55 & (0.56) & 0.10 & & 0.39 & 0.07 & 31.61 & (32) & 2 & (14) \\ snip & 0.58 & & 0.13 & & 0.43 & 0.09 & 31.16 & (31) & 0 & (13) \\ fisher & 0.33 & & 0.02 & & 0.25 & 
0.01 & 4.61 & ( 5) & 0 & (0) \\ jacov & 0.70 & (0.71) & 0.08 & (0.05) & 0.53 & 0.05 & 29.63 & (44) & 10 & (15) \\ epsilon & **0.85** & & 0.53 & & **0.67** & 0.37 & **71.51** & & **59** & \\ \hline \hline \end{tabular}
\end{table}
Table 1: Zero-cost metrics performance for the NAS-Bench-201 search space with its three datasets: CIFAR-10, CIFAR-100 and ImageNet16-120. We give the original values from Abdelfattah et al. (2021) for reference between brackets. We highlight the best-performing metrics in bold.
Comparing the results for epsilon with other zero-cost NAS metrics, we can see that it outperforms them by a good margin. Figure 1 further confirms the applicability of the method to the NAS-Bench-201 search space (similar figures for other methods can be found in Appendix, Figure 6). However, NAS-Bench-201 is a relatively compact search space; furthermore, it has been used for epsilon development.
#### 3.1.2 NAS-Bench-101
We use NAS-Bench-101 space to confirm that the success of epsilon metric in the previous section is not due to overfitting the NAS-Bench-201 search space and to see how it applies to a vaster search space. Table 3 together with Figure 2 confirm that it performs reasonably well on NAS-Bench-101, too.
#### 3.1.3 NAS-Bench-NLP
Both NAS-Bench-201 and NAS-Bench-101 are created to facilitate NAS in image recognition. They operate convolutional networks of very similar constitutions. To truly probe the generalisability of the epsilon metric, we test it on NAS-Bench-NLP. Both input data format and architecture type differ from the first two search spaces.
Unfortunately, Abdelfattah et al. (2021) provides no data for NAS-Bench-NLP, preventing us from using their results for our calculations. Therefore, in Table 4, we give only values provided in the paper together with our epsilon metric (data for fisher is absent). We want to note that, unlike accuracy, perplexity used for language-related ML problems should be minimised. Therefore, the signs of correlations with scoring metrics should be reversed, which is not the case for numbers given in Abdelfattah et al. (2021).
\begin{table}
\begin{tabular}{l l l l l l l} \hline \hline \multirow{2}{*}{Method} & CIFAR-10 & \multicolumn{3}{c}{CIFAR-100} & \multicolumn{3}{c}{ImageNet16-120} \\ \cline{2-7} & validation & test & validation & test & validation & test \\ \hline \hline \multirow{4}{*}{REA} & \multirow{4}{*}{\(91.19\pm 0.31\)} & \multirow{4}{*}{\(93.92\pm 0.3\)} & State-of-the-art & & & \\ & & & \(71.81\pm 1.12\) & \(71.84\pm 0.99\) & \(45.15\pm 0.89\) & \(45.54\pm 1.03\) \\ \cline{1-1} \cline{2-7} Random Search & \(90.93\pm 0.36\) & \(93.92\pm 0.31\) & \(70.93\pm 1.09\) & \(71.04\pm 1.07\) & \(44.45\pm 1.1\) & \(44.57\pm 1.25\) \\ \cline{1-1} \cline{2-7} REINFORCE & \(91.09\pm 0.37\) & \(93.92\pm 0.32\) & \(71.61\pm 1.12\) & \(71.71\pm 1.09\) & \(45.05\pm 1.02\) & \(45.24\pm 1.18\) \\ \cline{1-1} \cline{2-7} BOHB & \(90.82\pm 0.53\) & \(93.92\pm 0.33\) & \(70.74\pm 1.29\) & \(70.85\pm 1.28\) & \(44.26\pm 1.36\) & \(44.42\pm 1.49\) \\ \hline \hline \multirow{4}{*}{Optimal} & \multicolumn{4}{c}{Baselines (N=1000)} & \multirow{4}{*}{\(91.34\pm 0.18\)} & \multirow{4}{*}{\(94.20\pm 0.13\)} & \multirow{4}{*}{\(72.53\pm 0.53\)} & \multirow{4}{*}{\(72.84\pm 0.41\)} & \multirow{4}{*}{\(45.93\pm 0.51\)} & \multirow{4}{*}{\(46.59\pm 0.34\)} \\ \cline{1-1} \cline{2-7} Random & \(84.11\pm 11.71\) & \(87.40\pm 11.94\) & & \(61.57\pm 11.305\) & \(61.67\pm 11.35\) & \(33.97\pm 8.68\) & \(33.67\pm 8.98\) \\ \hline \hline \multirow{4}{*}{naswot} & \multicolumn{4}{c}{Trainless (N=1000)} & \multirow{4}{*}{\(89.69\pm 0.73\)} & \multirow{4}{*}{\(92.96\pm 0.81\)} & \multirow{4}{*}{\(69.86\pm 1.21\)} & \multirow{4}{*}{\(69.98\pm 1.22\)} & \multirow{4}{*}{\(43.95\pm 2.05\)} & \multirow{4}{*}{\(44.44\pm 2.10\)} \\ \cline{1-1} \cline{2-7} synflow & \(89.91\pm 0.83\) & & \(90.12\pm 0.78\) & & \(70.35\pm 2.25\) & & \(70.37\pm 2.08\) & \(41.73\pm 3.91\) & \(42.11\pm 4.02\) \\ \cline{1-1} \cline{2-7} grad\_norm & \(88.13\pm 2.35\) & & \(88.42\pm 2.28\) & & \(66.35\pm 5.45\) & \(66.48\pm 5.32\) & \(33.88\pm 11.46\) & 
\(33.90\pm 11.74\) \\ \cline{1-1} \cline{2-7} grasp & \(87.85\pm 2.12\) & & \(88.17\pm 2.04\) & & \(65.36\pm 5.57\) & \(65.45\pm 5.48\) & \(32.23\pm 10.95\) & \(32.20\pm 11.23\) \\ \cline{1-1} \cline{2-7} snip & \(87.47\pm 2.19\) & & \(87.81\pm 2.12\) & & \(64.61\pm 5.52\) & \(64.74\pm 5.43\) & \(30.65\pm 11.32\) & \(30.55\pm 11.55\) \\ \cline{1-1} \cline{2-7} fisher & \(87.01\pm 2.31\) & & \(87.36\pm 2.23\) & & \(63.54\pm 5.69\) & \(63.67\pm 5.62\) & \(26.70\pm 10.83\) & \(29.56\pm 10.83\) \\ \cline{1-1} \cline{2-7} jacov & \(88.17\pm 1.67\) & & \(88.45\pm 1.69\) & & \(67.73\pm 2.69\) & & \(67.90\pm 2.77\) & \(31.58\pm 10.65\) & \(31.44\pm 10.83\) \\ \cline{1-1} epsilon & \(\mathbf{91.03\pm 0.42}\) & \(\mathbf{93.86\pm 0.43}\) & \(\mathbf{71.76\pm 0.90}\) & \(\mathbf{71.79\pm 0.86}\) & \(\mathbf{45.11\pm 0.99}\) & \(\mathbf{45.42\pm 1.21}\) \\ \hline \hline \multirow{4}{*}{grad\_sign} & \multirow{4}{*}{\(89.84\pm 0.61\)} & \multirow{4}{*}{\(93.31\pm 0.47\)} & \multirow{4}{*}{\(70.22\pm 1.32\)} & \multirow{4}{*}{\(70.33\pm 1.28\)} & \multirow{4}{*}{\(42.07\pm 2.78\)} & \multirow{4}{*}{\(42.42\pm 2.81\)} \\ \cline{1-1} \cline{2-7} epsilon & \(\mathbf{90.44\pm 0.97}\) & \(\mathbf{93.39\pm 0.82}\) & \(\mathbf{70.85\pm 1.30}\) & \(\mathbf{71.00\pm 1.26}\) & \(\mathbf{44.03\pm 2.02}\) & \(\mathbf{44.20\pm 2.04}\) \\ \hline \hline \multirow{4}{*}{tenas} & \multicolumn{4}{c}{Trainless (N/A)} & \multirow{4}{*}{\(71.24\pm 0.56\)} & \multirow{4}{*}{\(42.38\pm 0.46\)} \\ \cline{1-1} \cline{2-7} tenas & & \(93.9\pm 0.47\) & & & \(71.24\pm 0.56\) & & \(42.38\pm 0.46\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Comparison of the trainless metrics' performance against existing NAS algorithms on the CIFAR-10, CIFAR-100 and ImageNet16-120 datasets. On the top, we list the best-performing methods that require training (REA Real et al. (2019), random search, REINFORCE Williams (1992), BOHB Falkner et al. (2018)). We report the average best-achieved test accuracy over \(500\) runs, with \(1{,}000\) architectures (\(100\) for grad_sign) sampled from the search space at random. For tenas, the results are reported for \(4\) random seeds. Random and optimal performances are given as baselines.
The performance of the epsilon metric on the NAS-Bench-NLP space is not exceptional. While there is a trend towards better architectures with increasing metric value, the noise level is unacceptably high. This might stem from characteristics of the benchmark itself (factors like the relatively small sample of networks given the vast space, the chosen hyperparameters, dropout rates, and others may distort NAS performance). Nonetheless, the trend is visible enough to conclude that the epsilon metric can be applied to recurrent-type architectures.
Figure 1: Zero-cost NAS epsilon metric performance illustration for NAS-Bench-201 search space evaluated on CIFAR-10, CIFAR-100 and ImageNet16-120 datasets. The horizontal axis shows test accuracy upon training. Each dot corresponds to an architecture; the darker the colour, the more parameters it contains. The figure represents the search space of \(15{,}625\) networks (excluding architectures with NaN scores).
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline
\multirow{2}{*}{Metric} & \multicolumn{2}{c}{Spearman \(\rho\)} & \multicolumn{2}{c}{Kendall \(\tau\)} & Top-10\%/ & Top-64/ \\ \cline{2-5}
 & global & top-10\% & global & top-10\% & top-10\% & top-5\% \\ \hline
\multicolumn{7}{c}{CIFAR-10} \\ \hline
grad\_sign & 0.45 & & & & & \\
synflow & 0.37 & \(\mathbf{0.14}\) & 0.25 & \(\mathbf{0.10}\) & 22.67 (23) & 4 (12) \\
grad\_norm & -0.20 & -0.05 (0.05) & -0.14 & -0.03 & 1.98 (2) & 0 (0) \\
grasp & 0.45 & -0.01 & 0.31 & -0.01 & 25.60 (26) & 0 (6) \\
snip & -0.16 & 0.01 (-0.01) & -0.11 & 0.00 & 3.43 (3) & 0 (0) \\
fisher & -0.26 & -0.07 (0.07) & -0.18 & -0.05 & 2.65 (3) & 0 (0) \\
jacov & 0.38 (0.38) & -0.08 (0.08) & -0.05 & 0.05 & 1.66 (2) & 0 (0) \\
epsilon & \(\mathbf{0.62}\) & 0.12 & \(\mathbf{0.44}\) & 0.08 & \(\mathbf{40.33}\) & \(\mathbf{10}\) \\ \hline \hline
\end{tabular}
\end{table}
Table 3: Zero-cost metrics performance evaluated on the NAS-Bench-101 search space, CIFAR-10 dataset. Values from Abdelfattah et al. (2021) are given for reference in brackets. We highlight the best-performing metrics in bold.
Figure 2: Zero-cost NAS epsilon metric performance illustration for NAS-Bench-101 search space, CIFAR-10 dataset and NAS-Bench-NLP search space, PTB dataset. The horizontal axis shows test accuracy upon training. Each dot corresponds to an architecture; the darker the colour, the more parameters it contains. The figure shows \(423{,}624\) and \(14{,}322\) networks for NAS-Bench-101 and NAS-Bench-NLP, respectively (excluding architectures with NaN scores).
### Integration with other NAS methods
While it is possible to utilise zero-cost metrics independently, they are often implemented within other NAS algorithms. Here we provide examples of the random search and ageing evolution algorithms when used in tandem with epsilon.
Similarly to Abdelfattah et al. (2021), we compare random search performance with and without warm-up. First, we create a warm-up pool of \(3{,}000\) architectures. Then, during the first \(64\) steps of random search, the algorithm picks networks from the pool based on the highest trainless score and updates the best test performance accordingly.
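The warm-up procedure just described can be sketched as follows; `trainless_score` and `accuracy` are placeholders standing in for the epsilon evaluation and the benchmark lookup, and the pool size is illustrative:

```python
import random

def warmup_random_search(pool, trainless_score, accuracy, n_warmup=64, n_total=300):
    """Random search whose first n_warmup picks come from a warm-up pool,
    ranked by a trainless score, instead of being drawn at random."""
    ranked = sorted(pool, key=trainless_score, reverse=True)  # best score first
    history, best = [], float("-inf")
    for step in range(n_total):
        arch = ranked[step] if step < n_warmup else random.choice(pool)
        best = max(best, accuracy(arch))  # "train" the pick, track the best so far
        history.append(best)
    return history
```

Only the order of the first picks changes compared to plain random search, which is why the warm-up costs no extra training.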
For ageing evolution, the same principle applies. In addition, we report the results of an implementation where every next parent is selected based on the highest epsilon score ("move" mode). In other words, the trainless scoring metric replaces validation accuracy in move mode. The child is then created by parent mutation (within an edit distance of \(1\)) and added to the pool, while the oldest network is removed from the pool.
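The "move" variant of ageing evolution can be sketched as below; `mutate`, `score` and `accuracy` are placeholders for the search-space mutation, the epsilon evaluation and the benchmark lookup:

```python
from collections import deque

def ageing_evolution_move(init_pool, mutate, score, accuracy, n_total=300):
    """Ageing evolution in "move" mode: the parent is the pool member with
    the highest trainless score rather than the best validation accuracy."""
    pool = deque(init_pool)                # oldest member sits at the left
    best = max(accuracy(a) for a in pool)
    for _ in range(n_total):
        parent = max(pool, key=score)      # trainless score replaces val. accuracy
        child = mutate(parent)             # within an edit distance of 1
        best = max(best, accuracy(child))  # only the child needs training
        pool.append(child)                 # add the child to the pool ...
        pool.popleft()                     # ... and retire the oldest network
    return best
```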
For both described algorithms, we run the procedure until the number of trained architectures reaches \(300\) and perform \(100\) random rounds. Figure 3 shows that epsilon metric leads to considerable improvements in terms of time and precision. The best performance is achieved in combination with a warm-up. Figure 5 assembles warm-up performances for several trainless metrics.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline
\multirow{2}{*}{Metric} & \multicolumn{2}{c}{Spearman \(\rho\)} & \multicolumn{2}{c}{Kendall \(\tau\)} & Top-10\%/ & Top-64/ \\ \cline{2-5}
 & global & top-10\% & global & top-10\% & top-10\% & top-5\% \\ \hline
\multicolumn{7}{c}{PTB} \\ \hline
synflow & 0.34 & 0.10 & — & — & 22 & — \\
grad\_norm & -0.21 & 0.03 & — & — & 10 & — \\
grasp & 0.16 & \(\mathbf{0.55}\) & — & — & 4 & — \\
snip & -0.19 & -0.02 & — & — & 10 & — \\
jacov & \(\mathbf{0.38}\) & 0.04 & — & — & \(\mathbf{38}\) & — \\
epsilon & -0.34 & -0.12 & -0.23 & -0.08 & 24.87 & 11 \\ \hline \hline
\end{tabular}
\end{table}
Table 4: Zero-cost metrics performance evaluated on NAS-Bench-NLP search space, PTB dataset. We highlight the best-performing metrics in bold.
Figure 3: epsilon integration within ageing evolution (top) and random search (bottom) NAS algorithms for three datasets from NAS-Bench-201 search space.
## 4 Discussion
While epsilon metric shows solid empirical performance, the underlying reasons for this are unclear.
There are several hints towards its understanding. First, mathematically, epsilon represents the difference in the output distribution shapes between initialisations. The shape of the output is affected by layer widths, activation functions, batch normalisation, skip connections and other factors, which we generally refer to as network geometry. With constant shared weights, one can probe the effects of the geometry without being obstructed by the randomness of initialisation.
Second, during the weight ablation studies (Section A.1), we noticed that the best performance is achieved when the weights are set to the lowest and highest values that do not lead to excessive output explosion or vanishing. Therefore, epsilon measures the amplitude of the change in the outputs' distribution shape due to geometry.
Finally, during the synthetic data studies, we see that grey-scale solid images work reasonably well as inputs. The distribution over the input samples is uniform, which makes it easier to track the changes in its shape as the signal propagates through the network.
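Taken together, these hints suggest a minimal epsilon-style score: run the same batch under a low and a high constant shared-weight initialisation and compare the shapes of the two output distributions. The sketch below is our own illustration of this recipe, not the paper's exact definition of epsilon; the toy network, the constants `w_low`/`w_high`, and the spread-based shape statistic are all assumptions made for illustration:

```python
import numpy as np

def tiny_mlp(x, w):
    """Two-layer ReLU net with every weight set to the same constant w."""
    h = np.maximum(w * x, 0.0)   # hidden layer
    return w * h                 # output layer

def epsilon_like_score(forward, x, w_low=0.1, w_high=2.0):
    """Illustrative epsilon-style score: forward the same batch under a low
    and a high constant shared-weight initialisation and compare the shapes
    of the two output distributions (here via their log-spreads)."""
    out_low, out_high = forward(x, w_low), forward(x, w_high)
    return abs(np.log(out_high.std() + 1e-12) - np.log(out_low.std() + 1e-12))
```

With constant shared weights, the score depends only on how the network's geometry reshapes the output distribution, not on any randomness of initialisation.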
That said, a coherent theoretical foundation of epsilon is missing and should be developed in future work.
## 5 Conclusions
This work presents a novel zero-cost NAS scoring metric, epsilon. It consists of two network initialisations followed by two forward passes. The value of epsilon reflects how the distribution of a neural network's outputs changes between low and high constant shared-weight initialisations. We show that the higher this difference, the better the network will perform upon training.
The metric does not require labels or gradient computation and is fast and lightweight. Evaluation takes \(0.1\sim 1\) seconds per architecture on a single GPU (depending on the size of the architecture and the batch size) and can also be run on a CPU. epsilon can be applied to virtually any ML problem (care should be taken with embedding initialisation, as explained in Section A.1.4).
This work evaluates epsilon on three staple NAS search spaces: NAS-Bench-201, NAS-Bench-101 and NAS-Bench-NLP. It shows good, stable performance on each of them, regardless of the dataset. It also significantly improves the performance of random and evolutionary NAS algorithms (see Section 3.2).
The only significant disadvantage of the method is that it requires choosing the constant weight values used at initialisation. Our tests show that these must be set individually for each search space. We plan to automate the weight selection process in future work.
#### Acknowledgments
The authors would like to thank Dr. Ayako Nakata and Dr. Guillaume Lambard for their continuous support and advice.
|
2308.11736 | Smooth min-entropy lower bounds for approximation chains | For a state $\rho_{A_1^n B}$, we call a sequence of states $(\sigma_{A_1^k
B}^{(k)})_{k=1}^n$ an approximation chain if for every $1 \leq k \leq n$,
$\rho_{A_1^k B} \approx_\epsilon \sigma_{A_1^k B}^{(k)}$. In general, it is not
possible to lower bound the smooth min-entropy of such a $\rho_{A_1^n B}$, in
terms of the entropies of $\sigma_{A_1^k B}^{(k)}$ without incurring very large
penalty factors. In this paper, we study such approximation chains under
additional assumptions. We begin by proving a simple entropic triangle
inequality, which allows us to bound the smooth min-entropy of a state in terms
of the R\'enyi entropy of an arbitrary auxiliary state while taking into
account the smooth max-relative entropy between the two. Using this triangle
inequality, we create lower bounds for the smooth min-entropy of a state in
terms of the entropies of its approximation chain in various scenarios. In
particular, utilising this approach, we prove approximate versions of the
asymptotic equipartition property and entropy accumulation. In our companion
paper, we show that the techniques developed in this paper can be used to prove
the security of quantum key distribution in the presence of source
correlations. | Ashutosh Marwah, Frédéric Dupuis | 2023-08-22T18:55:16Z | http://arxiv.org/abs/2308.11736v2 | # Smooth min-entropy lower bounds for approximation chains
###### Abstract
For a state \(\rho_{A_{1}^{n}B}\), we call a sequence of states \((\sigma_{A_{1}^{k}B}^{(k)})_{k=1}^{n}\) an approximation chain if for every \(1\leq k\leq n\), \(\rho_{A_{1}^{k}B}\approx_{\epsilon}\sigma_{A_{1}^{k}B}^{(k)}\). In general, it is not possible to lower bound the smooth min-entropy of such a \(\rho_{A_{1}^{n}B}\), in terms of the entropies of \(\sigma_{A_{1}^{k}B}^{(k)}\) without incurring very large penalty factors. In this paper, we study such approximation chains under additional assumptions. We begin by proving a simple entropic triangle inequality, which allows us to bound the smooth min-entropy of a state in terms of the Renyi entropy of an arbitrary auxiliary state while taking into account the smooth max-relative entropy between the two. Using this triangle inequality, we create lower bounds for the smooth min-entropy of a state in terms of the entropies of its approximation chain in various scenarios. In particular, utilising this approach, we prove an approximate version of entropy accumulation and also provide a solution to the source correlation problem in quantum key distribution.
###### Contents
* 1 Introduction
* 2 Background and Notation
* 3 Triangle inequality for the smooth min-entropy
* 4 Approximately independent registers
* 4.1 Weak approximate asymptotic equipartition
* 4.2 Simple security proof for sequential device independent quantum key distribution
* 5 Approximate entropy accumulation
* 5.1 Divergence bound for approximately equal states
* 5.2 Bounding the channel divergence for two channels close to each other
* 5.3 Proof of the approximate entropy accumulation theorem
* 5.4 Limitations and further improvements
* 6 Source Correlations
* 6.1 Security proof for BB84 with source correlations
* 6.2 Imperfect measurements
* 6.3 Discussion and future work
* A Entropic triangle inequalities cannot be improved much
* B Bounds for \(D_{\alpha}^{\#}\) of the form in Lemma 5.3 necessarily diverge in the limit \(\alpha=1\)
* C Transforming lemmas for EAT from \(\tilde{H}_{\alpha}^{\downarrow}\) to \(\tilde{H}_{\alpha}^{\uparrow}\)
* D Dimension bounds for conditional Renyi entropies
* E Bounds on the size of the side information are necessary for the approximate entropy accumulation theorem
* F Classical approximate entropy accumulation
* G Proof of Theorem 6.3
## 1 Introduction
One-shot information theory investigates the behaviour of tasks in communication and cryptography under general unstructured processes, as opposed to independent and identically distributed (i.i.d) processes, where the states or the tasks themselves have a certain tensor product structure. This is crucial for information theoretically secure cryptography, where one cannot place any kind of assumption on the actions of the adversary (see, for example, [12, 13]). To prove security for such protocols, a common strategy is to show that some smooth min-entropy is sufficiently large. For this reason, the smooth min-entropy [14, 15] is one of the most important quantities in one-shot information theory.
The smooth min-entropy \(H^{\epsilon}_{\min}(K|E)_{\rho}\) for the classical-quantum state \(\rho=\sum_{k}p(k)\ket{k}\bra{k}\otimes\rho_{E|k}\) characterises the amount of randomness one can extract from the classical register \(K\) independent of the adversary's register \(E\)[16]. It behaves very differently from the von Neumann conditional entropy, which characterises tasks in the i.i.d setting, and the difference between the two can be very large. Roughly speaking, the smooth min-entropy places a much higher weight on the worst possible scenario of the conditioning register, whereas the von Neumann entropy places an equal weight on all possible scenarios.
An important and interesting argument, which works with the von Neumann conditional entropy but fails with the smooth min-entropy, is that of proving lower bounds on the entropy using an _approximation chain_. We call a sequence of states1\((\sigma^{(k)}_{A^{k}_{1}B})_{k=1}^{n}\) an \(\epsilon\)-approximation chain for the state \(\rho_{A^{n}_{1}B}\) if for every \(k\), we can approximate the partial state \(\rho_{A^{k}_{1}B}\) as \(\|\rho_{A^{k}_{1}B}-\sigma^{(k)}_{A^{k}_{1}B}\|_{1}\leq\epsilon\). If one can further prove that these states satisfy \(H(A_{k}|A^{k-1}_{1}B)_{\sigma^{(k)}}\geq c\) for some \(c>0\) sufficiently large, then the following simple argument shows that \(H(A^{n}_{1}|B)_{\rho}\) is large:
Footnote 1: For \(n\) quantum registers \((X_{1},X_{2},\cdots,X_{n})\), the notation \(X_{i}^{j}\) refers to the set of registers \((X_{i},X_{i+1},\cdots,X_{j})\).
\[H(A^{n}_{1}|B)_{\rho} =\sum_{k=1}^{n}H(A_{k}|A^{k-1}_{1}B)_{\rho}\] \[\geq\sum_{k=1}^{n}\left(H(A_{k}|A^{k-1}_{1}B)_{\sigma^{(k)}}-g( \epsilon)\right)\] \[\geq n(c-g(\epsilon))\]
where we used continuity of the von Neumann conditional entropy in the second line (\(g(\epsilon)=O(\epsilon\log\frac{|A|}{\epsilon})\) is a "small" function of \(\epsilon\)). It is well known that a similar argument is not possible with the smooth min-entropy. Consequently, identities for the smooth min-entropy, like the chain rules [17], are much more restrictive. Tools like entropy accumulation [14, 15] also seem quite rigid, in the sense that they cannot be applied unless certain (Markov chain or non-signalling) conditions apply. It is also not clear how one could relax the conditions for such tools. In this paper, we consider scenarios consisting of approximation chains, similar to the above, along with additional conditions and prove lower bounds on the appropriate smooth min-entropies.
We begin by considering the scenario of _approximately independent registers_, that is, a state \(\rho_{A^{n}_{1}B}\), which for every \(1\leq k\leq n\) satisfies
\[\frac{1}{2}\left\|\rho_{A^{k}_{1}B}-\rho_{A_{k}}\otimes\rho_{A^{k-1}_{1}B} \right\|_{1}\leq\epsilon. \tag{1}\]
for some small \(\epsilon>0\) and arbitrarily large \(n\) (in particular \(n\gg\frac{1}{\epsilon}\)). That is, for every \(k\), the system \(A_{k}\) is almost independent of the system \(B\) and everything else, which came before
it. For simplicity, let us further assume that for all \(k\) the state \(\rho_{A_{k}}=\rho_{A_{1}}\). Intuitively, one expects that the smooth min-entropy (with the smoothing parameter depending on \(\epsilon\) and not on \(n\))(2) for such a state will be large and close to \(\approx n(H(A_{1})-g^{\prime}(\epsilon))\) (for some small function \(g^{\prime}(\epsilon)\)). However, it is not possible to prove this result using techniques, which rely only on the triangle inequality and smoothing. The triangle inequality, in general, can only be used to bound the difference between \(\rho_{A_{1}^{n}B}\) and \(\otimes_{k=1}^{n}\rho_{A_{k}}\otimes\rho_{B}\) by \(n\epsilon\), which will result in a trivial bound when \(n\gg\frac{1}{\epsilon}\) (3). In this paper, we show how one can instead use a bound on the smooth max-relative entropy between these two states to prove a lower bound for the smooth min-entropy in this scenario.
Footnote (2): The smoothing parameter must depend on \(\epsilon\) in such a scenario. This can be seen by considering the probability distribution \(P_{A_{1}^{n}B}\) such that \(B\) is \(0\) with probability \(\epsilon\) and \(1\) otherwise and \(A_{1}^{n}\) is a random \(n\)-bit string if \(B=1\) and constant if \(B=0\).
While an upper bound of \(n\epsilon\) is trivial and meaningless for the trace distance for large \(n\), it is still a meaningful bound for the relative entropy between two states, which is unbounded in general. We can show that the above approximation conditions (Eq. 1) also imply that the relative entropy distance between \(\rho_{A_{1}^{n}B}\) and \(\otimes_{k=1}^{n}\rho_{A_{k}}\otimes\rho_{B}\) is \(nf(\epsilon)\) for some small function \(f(\epsilon)\). The substate theorem [10] allows us to transform this relative entropy bound into a smooth max-relative entropy bound. For two general states \(\rho_{AB}\) and \(\eta_{AB}\), such that \(d:=D_{\max}^{\delta}(\rho_{AB}\|\eta_{AB})\), we can easily bound the smooth min-entropy of \(\rho\) in terms of the min-entropy of \(\eta\) by observing that
\[\rho_{AB}\approx_{\delta}\tilde{\rho}_{AB}\leq 2^{d}\eta_{AB}\leq 2^{-(H_{\min }(A|B)_{\eta}-d)}\,\mathds{1}_{A}\otimes\sigma_{B}\]
for some state \(\sigma_{B}\), which satisfies \(D_{\max}(\eta_{AB}||\,\mathds{1}_{A}\otimes\sigma_{B})=-H_{\min}(A|B)_{\eta}\). This implies that
\[H_{\min}^{\delta}(A|B)_{\rho}\geq H_{\min}(A|B)_{\eta}-D_{\max}^{\delta}(\rho_ {AB}||\eta_{AB}).\]
We call this a _triangle inequality_, since it is based on the triangle inequality property of \(D_{\max}\). We can further improve this smooth min-entropy triangle inequality to (Lemma 3.5)
\[H_{\min}^{\epsilon+\delta}(A|B)_{\rho}\geq\tilde{H}_{\alpha}^{\uparrow}(A|B)_{\eta}-\frac{\alpha}{\alpha-1}D_{\max}^{\epsilon}(\rho_{AB}\|\eta_{AB})-\frac{g_{1}(\delta,\epsilon)}{\alpha-1} \tag{2}\]
for some function \(g_{1}\), \(\epsilon+\delta<1\) and \(1<\alpha\leq 2\). Our general strategy for the scenarios considered in this paper is to first bound the "one-shot information theoretic" distance (the smooth max-relative entropy distance) between the real state \(\rho\) (\(\rho_{A_{1}^{n}B}\) in the above
scenario) and a virtual, but _nicer_ state, \(\eta\) (\(\otimes_{k=1}^{n}\rho_{A_{k}}\otimes\rho_{B}\) above) by \(nf(\epsilon)\) for some small \(f(\epsilon)\). Then, we use Eq. 2 above to reduce the problem of bounding the smooth min-entropy on state \(\rho\) to that of bounding a \(\alpha\)-Renyi entropy on the state \(\eta\). Using this strategy, in Corollary 4.4, we prove that for states satisfying the approximately independent registers assumptions, we have for \(\delta=O\left(\epsilon\log\frac{|A|}{\epsilon}\right)\) that
\[H_{\min}^{\delta^{1/4}}(A_{1}^{n}|B)_{\rho}\geq n\left(H(A_{1})_{\rho}-O(\delta^{1/4})\right)-O\left(\frac{1}{\delta^{3/4}}\right). \tag{3}\]
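For commuting (classical) states, the non-smoothed (\(\delta=0\)) version of the triangle inequality above, \(H_{\min}(A|B)_{\rho}\geq H_{\min}(A|B)_{\eta}-D_{\max}(\rho_{AB}\|\eta_{AB})\), can be checked directly: for a classical joint distribution \(P(a,b)\) one has \(H_{\min}(A|B)_{P}=-\log\sum_{b}\max_{a}P(a,b)\), and \(D_{\max}(P\|Q)\) is the logarithm of the largest pointwise ratio \(P/Q\). A numerical sketch with random joint distributions:

```python
import numpy as np

def h_min_cond(P):
    """H_min(A|B) of a classical joint distribution P[a, b] (base-2 log)."""
    return -np.log2(P.max(axis=0).sum())

def d_max_classical(P, Q):
    """D_max(P || Q) for commuting states: largest pointwise log-ratio."""
    return np.log2((P / Q).max())

# check H_min(A|B)_P >= H_min(A|B)_Q - D_max(P || Q) on random distributions
rng = np.random.default_rng(1)
for _ in range(100):
    P = rng.random((4, 4)); P /= P.sum()
    Q = rng.random((4, 4)); Q /= Q.sum()
    assert h_min_cond(P) >= h_min_cond(Q) - d_max_classical(P, Q) - 1e-9
```

The inequality holds because \(P\leq 2^{d}Q\) pointwise implies \(\max_{a}P(a,b)\leq 2^{d}\max_{a}Q(a,b)\) for every \(b\).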
In the second scenario, we consider approximate entropy accumulation. In the setting for entropy accumulation, a sequence of channels \(\mathcal{M}_{k}:R_{k-1}\to A_{k}B_{k}R_{k}\) for \(1\leq k\leq n\) sequentially act on a state \(\rho_{R_{0}E}\) to produce the state \(\rho_{A_{1}^{n}B_{1}^{n}E}=\mathcal{M}_{n}\circ\cdots\circ\mathcal{M}_{1}(\rho_{R_{0}E})\). It is assumed that the channels \(\mathcal{M}_{k}\) are such that the Markov chain \(A_{1}^{k-1}\leftrightarrow B_{1}^{k-1}E\leftrightarrow B_{k}\) is satisfied for every \(k\). This ensures that the register \(B_{k}\) does not reveal any additional information about \(A_{1}^{k-1}\) beyond what was previously revealed by \(B_{1}^{k-1}E\). The entropy accumulation theorem [1] then provides a tight lower bound for the smooth min-entropy \(H_{\min}^{\delta}(A_{1}^{n}|B_{1}^{n}E)\). We consider an approximate version of the above setting, where the channels \(\mathcal{M}_{k}\) themselves do not necessarily satisfy the Markov chain condition, but can be \(\epsilon\)-approximated by a sequence of channels \(\mathcal{M}_{k}^{\prime}\), which satisfies certain Markov chain conditions. Such relaxations are important for understanding the behaviour of cryptographic protocols, like device-independent quantum key distribution [1, 1], which are implemented with imperfect devices [1, 2]. Once again, we can model this scenario as an approximation chain: for every \(1\leq k\leq n\), the state produced in the \(k\)th step satisfies
\[\rho_{A_{1}^{k}B_{1}^{k}ER_{k}}=\mathcal{M}_{k}(\rho_{A_{1}^{k-1}B_{1}^{k-1}ER _{k-1}})\approx_{\epsilon}\mathcal{M}_{k}^{\prime}(\rho_{A_{1}^{k-1}B_{1}^{k-1 }ER_{k-1}})\coloneqq\sigma_{A_{1}^{k}B_{1}^{k}ER_{k}}^{(k)}. \tag{4}\]
Moreover, the assumptions on the channel \(\mathcal{M}_{k}^{\prime}\) guarantee that the state \(\sigma_{A_{1}^{k}B_{1}^{k}ER_{k}}^{(k)}\) satisfies the Markov chain condition \(A_{1}^{k-1}\leftrightarrow B_{1}^{k-1}E\leftrightarrow B_{k}\), and so the chain rules and bounds used for entropy accumulation apply for it too. Roughly speaking, we use the chain rules for divergences [11] to show that the divergence distance between the states \(\rho_{A_{1}^{n}B_{1}^{n}E}=\mathcal{M}_{n}\circ\cdots\circ\mathcal{M}_{1}( \rho_{R_{0}E})\) and the virtual state \(\sigma_{A_{1}^{n}B_{1}^{n}E}=\mathcal{M}_{n}^{\prime}\circ\cdots\circ\mathcal{ M}_{1}^{\prime}(\rho_{R_{0}E})\) is relatively small, and then reduce the problem of lower bounding the smooth min-entropy of \(\rho_{A_{1}^{n}B_{1}^{n}E}\) to that of lower bounding an \(\alpha\)-Renyi entropy of \(\sigma_{A_{1}^{n}B_{1}^{n}E}\), which can be done by using the chain rules developed for entropy accumulation4. In Theorem 5.1, we show the following smooth min-entropy lower bound for the state \(\rho_{A_{1}^{n}B_{1}^{n}E}\) for sufficiently small \(\epsilon\) and an arbitrary \(\delta>0\)
Footnote 4: The channel divergence bounds we are able to prove are too weak for this idea to work as stated here. The actual proof is more complicated. However, this idea works in the classical case.
\[H_{\min}^{\delta}(A_{1}^{n}|B_{1}^{n}E)_{\rho}\geq\sum_{k=1}^{n}\inf_{\omega_{ R_{k}\tilde{R}_{k}}}H(A_{k}|B_{k}\tilde{R}_{k})_{\mathcal{M}_{k}^{\prime}( \omega_{R_{k}\tilde{R}_{k}})}-nO(\epsilon^{\frac{1}{24}})-O\left(\frac{1}{ \epsilon^{\frac{1}{24}}}\right) \tag{5}\]
where the infimum is over all possible input states \(\omega_{R_{k}\tilde{R}_{k}}\), and the dimensions \(|A|\) and \(|B|\) are assumed constant while using the asymptotic notation.
We also use the techniques developed above to provide a solution for the source correlation problem in quantum key distribution (QKD) [12]. For security proofs of QKD protocols, it is assumed that the states produced by Alice's source are independent in each round. However, in practical implementations this is not entirely true, since physical devices have an internal quantum memory, which may cause the states across multiple rounds to be correlated with each other. The challenge is to prove security for QKD with such an imperfect and correlated source. We show that it is possible to securely implement QKD by simply measuring the output of the source in the preparation basis for a small random set of indices and conditioning on the relative deviation of the observed output being less than some small threshold \(\epsilon\) from the expected output. Using the results of [1], this source test guarantees with high probability that the relative frequency of errors, or the average error per round, in the conditioned state is \(\approx\epsilon\). We can once again show that the final state of the QKD protocol implemented on this state is only \(nf(\epsilon)\) (for some small function \(f(\epsilon)\)) far in smooth max-relative entropy distance from the final state of the protocol if it were conducted on perfect states. This allows us to reduce the security proof under a correlated source to that of the QKD protocol which uses perfect states. In Theorem 6.3, we show that a BB84 protocol, which tests its source in the above manner is secure and the error loss due to source correlations is \(O(\sqrt{h(\epsilon)})\) (where \(h\) is the binary entropy) per round. We also consider the source test with imperfect measurements and demonstrate how these may be taken into account in the analysis.
Lastly, we note that the sections on approximate entropy accumulation (Sec. 5) and source correlations (Sec. 6) are independent of each other and can be read as such.
## 2 Background and Notation
For \(n\) quantum registers \((X_{1},X_{2},\cdots,X_{n})\), the notation \(X_{i}^{j}\) refers to the set of registers \((X_{i},X_{i+1},\cdots,X_{j})\). We use the notation \([n]\) to denote the set \(\{1,2,\cdots,n\}\). For a register \(A\), \(|A|\) represents the dimension of the underlying Hilbert space. If \(X\) and \(Y\) are Hermitian operators, then the operator inequality \(X\geq Y\) denotes the fact that \(X-Y\) is a positive semidefinite operator and \(X>Y\) denotes that \(X-Y\) is a strictly positive operator. A quantum state refers to a positive semidefinite operator with unit trace. We will denote the set of registers a quantum state describes (equivalently, its Hilbert space) using a subscript. For example, a quantum state on the registers \(A\) and \(B\) will be written as \(\rho_{AB}\), and its partial states on registers \(A\) and \(B\) will be denoted as \(\rho_{A}\) and \(\rho_{B}\). The identity operator on register \(A\) is denoted using \(\mathds{1}_{A}\). A classical-quantum state on registers \(X\) and \(B\) is given by \(\rho_{XB}=\sum_{x}p(x)\ket{x}\bra{x}\otimes\rho_{B|x}\), where \(\rho_{B|x}\) are normalized quantum states on
register \(B\).
The term "channel" is used for completely positive trace preserving (CPTP) linear maps between two spaces of Hermitian operators. A channel \(\mathcal{N}\) mapping registers \(A\) to \(B\) will be denoted by \(\mathcal{N}_{A\to B}\). We write \(\operatorname{supp}(X)\) to denote the support of the Hermitian operator \(X\) and use \(X\ll Y\) to denote that \(\operatorname{supp}(X)\subseteq\operatorname{supp}(Y)\).
The trace norm is defined as \(\left\|X\right\|_{1}:=\operatorname{tr}\big{(}\left(X^{\dagger}X\right)^{ \frac{1}{2}}\big{)}\). The fidelity between two positive operators \(P\) and \(Q\) is defined as \(F(P,Q)=\left\|\sqrt{P}\sqrt{Q}\right\|_{1}^{2}\). The generalised fidelity between two subnormalised states \(\rho\) and \(\sigma\) is defined as
\[F_{*}(\rho,\sigma):=\left(\left\|\sqrt{\rho}\sqrt{\sigma}\right\|_{1}+\sqrt{( 1-\operatorname{tr}\rho)(1-\operatorname{tr}\sigma)}\right)^{2}. \tag{6}\]
The purified distance between two subnormalised states \(\rho\) and \(\sigma\) is defined as
\[P(\rho,\sigma)=\sqrt{1-F_{*}(\rho,\sigma)}. \tag{7}\]
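The generalised fidelity and purified distance of Eqs. 6 and 7 can be computed numerically, using the fact that \(\left\|\sqrt{\rho}\sqrt{\sigma}\right\|_{1}\) equals the sum of the singular values of \(\sqrt{\rho}\sqrt{\sigma}\). A sketch for subnormalised states given as matrices:

```python
import numpy as np

def sqrtm_psd(M):
    """Square root of a positive semidefinite Hermitian matrix."""
    w, V = np.linalg.eigh(M)
    return (V * np.sqrt(np.clip(w, 0.0, None))) @ V.conj().T

def generalised_fidelity(rho, sigma):
    """F_*(rho, sigma) of Eq. (6) for subnormalised states rho, sigma."""
    root_fid = np.linalg.svd(sqrtm_psd(rho) @ sqrtm_psd(sigma), compute_uv=False).sum()
    slack = max(1.0 - rho.trace().real, 0.0) * max(1.0 - sigma.trace().real, 0.0)
    return (root_fid + np.sqrt(slack)) ** 2

def purified_distance(rho, sigma):
    """P(rho, sigma) of Eq. (7)."""
    return np.sqrt(max(1.0 - generalised_fidelity(rho, sigma), 0.0))
```

For normalised states the second term in Eq. 6 vanishes and \(F_{*}\) reduces to the ordinary fidelity.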
We will also use the diamond norm distance as a measure of the distance between two channels. For a linear transform \(\mathcal{N}_{A\to B}\) from operators on register \(A\) to operators on register \(B\), the diamond norm distance is defined as
\[\left\|\mathcal{N}_{A\to B}\right\|_{\diamond}:=\max_{X_{AR}:\left\|X_{AR} \right\|_{1}\leq 1}\left\|\mathcal{N}_{A\to B}(X_{AR})\right\|_{1} \tag{8}\]
where the supremum is over all Hilbert spaces \(R\) (fixing \(|R|=|A|\) is sufficient) and operators \(X_{AR}\) such that \(\left\|X_{AR}\right\|_{1}\leq 1\).
Throughout this paper, we use base 2 for both the functions \(\log\) and \(\exp\). We follow the notation in Tomamichel's book [16] for Renyi entropies. For \(\alpha\in(0,1)\cup(1,2)\), the Petz \(\alpha\)-Renyi relative entropy between the positive operators \(P\) and \(Q\) is defined as
\[\tilde{D}_{\alpha}(P\|Q)=\begin{cases}\frac{1}{\alpha-1}\log\operatorname{tr} \frac{\left(P^{\alpha}Q^{1-\alpha}\right)}{\operatorname{tr}(P)}&\text{ if }(\alpha<1\text{ and }P\not\lhd Q)\text{ or }(P\ll Q)\\ \infty&\text{ else.}\end{cases} \tag{9}\]
The sandwiched \(\alpha\)-Renyi relative entropy for \(\alpha\in(0,1)\cup(1,\infty)\) between the positive operators \(P\) and \(Q\) is defined as

\[\tilde{D}_{\alpha}(P\|Q)=\begin{cases}\frac{1}{\alpha-1}\log\frac{\operatorname{tr}\left(\left(Q^{-\frac{\alpha^{\prime}}{2}}PQ^{-\frac{\alpha^{\prime}}{2}}\right)^{\alpha}\right)}{\operatorname{tr}(P)}&\text{ if }(\alpha<1\text{ and }P\not\perp Q)\text{ or }(P\ll Q)\\ \infty&\text{ else.}\end{cases} \tag{10}\]
where \(\alpha^{\prime}=\frac{\alpha-1}{\alpha}\). In the limit \(\alpha\to\infty\), the sandwiched divergence becomes equal to the max-relative entropy, \(D_{\max}\), which is defined as
\[D_{\max}(P\|Q):=\inf\left\{\lambda\in\mathbb{R}:P\leq 2^{\lambda}Q\right\}. \tag{11}\]
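For a positive definite \(Q\) (so that \(P\ll Q\) automatically), the infimum in the definition of \(D_{\max}\) is attained at \(\lambda=\log\lambda_{\max}(Q^{-1/2}PQ^{-1/2})\), which yields a direct numerical recipe; the full-rank assumption on \(Q\) is ours, made to keep the sketch simple:

```python
import numpy as np

def d_max(P, Q):
    """D_max(P || Q) = log2 of the largest eigenvalue of Q^{-1/2} P Q^{-1/2};
    assumes Q is positive definite, so that supp(P) lies inside supp(Q)."""
    w, V = np.linalg.eigh(Q)
    Q_inv_sqrt = (V / np.sqrt(w)) @ V.conj().T
    return np.log2(np.linalg.eigvalsh(Q_inv_sqrt @ P @ Q_inv_sqrt).max())
```

For commuting \(P\) and \(Q\) this reduces to the largest log-ratio of eigenvalues, matching the classical case.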
In the limit of \(\alpha\to 1\), both the Petz and the sandwiched relative entropies equal the quantum relative entropy, \(D(P\|Q)\), which is defined as
\[D(P\|Q):=\begin{cases}\frac{\operatorname{tr}(P\log P-P\log Q)}{ \operatorname{tr}(P)}&\text{ if }(P\ll Q)\\ \infty&\text{ else.}\end{cases} \tag{12}\]
Given any divergence \(\mathbb{D}\), we can define the (stabilised) channel divergence based on \(\mathbb{D}\) between two channels \(\mathcal{N}_{A\to B}\) and \(\mathcal{M}_{A\to B}\) as [1, 1]
\[\mathbb{D}(\mathcal{N}\|\,\mathcal{M}):=\sup_{\rho_{AR}}\mathbb{D }(\mathcal{N}_{A\to B}(\rho_{AR})\|\,\mathcal{M}_{A\to B}(\rho_{AR})) \tag{13}\]
where \(R\) is reference register of arbitrary size (\(|R|=|A|\) can be chosen when \(\mathbb{D}\) satisfies the data processing inequality).
We can use the divergences defined above to define the following conditional entropies for the subnormalized state \(\rho_{AB}\):
\[\bar{H}^{\uparrow}_{\alpha}(A|B)_{\rho} :=\sup_{\sigma_{B}}-\bar{D}_{\alpha}(\rho_{AB}\|\,\mathds{1}_{A}\otimes\sigma_{B})\] \[\tilde{H}^{\uparrow}_{\alpha}(A|B)_{\rho} :=\sup_{\sigma_{B}}-\tilde{D}_{\alpha}(\rho_{AB}\|\,\mathds{1}_{A}\otimes\sigma_{B})\] \[\bar{H}^{\downarrow}_{\alpha}(A|B)_{\rho} :=-\bar{D}_{\alpha}(\rho_{AB}\|\,\mathds{1}_{A}\otimes\rho_{B})\] \[\tilde{H}^{\downarrow}_{\alpha}(A|B)_{\rho} :=-\tilde{D}_{\alpha}(\rho_{AB}\|\,\mathds{1}_{A}\otimes\rho_{B})\]
for appropriate \(\alpha\) in the domain of the divergences. The supremum in the definition for \(\bar{H}^{\uparrow}_{\alpha}\) and \(\tilde{H}^{\uparrow}_{\alpha}\) is over all quantum states \(\sigma_{B}\) on register \(B\).
For \(\alpha\to 1\), all these conditional entropies are equal to the von Neumann conditional entropy \(H(A|B)\). \(\tilde{H}^{\uparrow}_{\infty}(A|B)_{\rho}\) is called the min-entropy and is usually denoted as \(H_{\min}(A|B)_{\rho}\); for a subnormalised state, it can also be defined as
\[H_{\min}(A|B)_{\rho}:=\sup\left\{\lambda\in\mathbb{R}:\text{ there exists state }\sigma_{B}\text{ such that }\rho_{AB}\leq 2^{-\lambda}\,\mathds{1}_{A}\otimes\sigma_{B}\right\}. \tag{14}\]
For the purpose of smoothing, define the \(\epsilon\)-ball around the subnormalised state \(\rho\) as the set
\[B_{\epsilon}(\rho)=\{\tilde{\rho}\geq 0:P(\rho,\tilde{\rho})\leq \epsilon\text{ and }\operatorname{tr}\tilde{\rho}\leq 1\}. \tag{15}\]
We define the smooth max-relative entropy as
\[D^{\epsilon}_{\max}(\rho\|\sigma)=\min_{\tilde{\rho}\in B_{ \epsilon}(\rho)}D_{\max}(\tilde{\rho}\|\sigma) \tag{16}\]
The smooth min-entropy of \(\rho_{AB}\) is defined as
\[H^{\epsilon}_{\min}(A|B)_{\rho}=\max_{\tilde{\rho}\in B_{ \epsilon}(\rho)}H_{\min}(A|B)_{\tilde{\rho}}. \tag{17}\]
## 3 Triangle inequality for the smooth min-entropy
In this section, we derive a simple triangle inequality (Lemma 3.5) for the smooth min-entropy of the form in Eq. 2. This Lemma is a direct consequence of the following triangle inequality for \(\tilde{D}_{\alpha}\).
**Lemma 3.1**.: _Let \(\rho\) and \(\eta\) be subnormalised states and \(Q\) be a positive operator, then for \(\alpha>1\), we have_
\[\tilde{D}_{\alpha}(\rho\|Q)\leq\tilde{D}_{\alpha}(\eta\|Q)+\frac{\alpha}{ \alpha-1}D_{\max}(\rho\|\eta)+\frac{1}{\alpha-1}\log\frac{\operatorname{tr}( \eta)}{\operatorname{tr}(\rho)}\]
_and for \(\alpha<1\) if one of \(\tilde{D}_{\alpha}(\eta\|Q)\) and \(D_{\max}(\rho\|\eta)\) is finite (otherwise we cannot define their difference), we have_
\[\tilde{D}_{\alpha}(\rho\|Q)\geq\tilde{D}_{\alpha}(\eta\|Q)-\frac{\alpha}{1- \alpha}D_{\max}(\rho\|\eta)-\frac{1}{1-\alpha}\log\frac{\operatorname{tr}( \eta)}{\operatorname{tr}(\rho)}.\]
Proof.: If \(D_{\max}(\rho\|\eta)=\infty\), then both statements are trivially true. Otherwise, we have \(\rho\leq 2^{D_{\max}(\rho\|\eta)}\eta\) and also \(\rho\ll\eta\). Now, if \(\rho\not\ll Q\), then \(\eta\not\ll Q\). Hence, for \(\alpha>1\), if \(\tilde{D}_{\alpha}(\rho\|Q)=\infty\), then \(\tilde{D}_{\alpha}(\eta\|Q)=\infty\), so the Lemma holds in this case as well. For \(\alpha<1\), if \(\tilde{D}_{\alpha}(\rho\|Q)=\infty\), then the Lemma is trivially satisfied. For the remaining cases, we have
\[2^{(\alpha-1)\tilde{D}_{\alpha}(\rho\|Q)} =\frac{\operatorname{tr}\left(Q^{-\frac{\alpha-1}{2\alpha}}\rho Q ^{-\frac{\alpha-1}{2\alpha}}\right)^{\alpha}}{\operatorname{tr}(\rho)}\] \[\leq\frac{\operatorname{tr}\left(Q^{-\frac{\alpha-1}{2\alpha}}2^ {D_{\max}(\rho\|\eta)}\eta Q^{-\frac{\alpha-1}{2\alpha}}\right)^{\alpha}}{ \operatorname{tr}(\rho)}\] \[=\frac{\operatorname{tr}(\eta)}{\operatorname{tr}(\rho)}2^{ \alpha D_{\max}(\rho\|\eta)}2^{(\alpha-1)\tilde{D}_{\alpha}(\eta\|Q)}\]
where we used the fact that \(\operatorname{tr}(f(X))\) is monotone increasing in \(X\) if the function \(f\) is monotone increasing. Taking logarithms and dividing by \(\alpha-1\) (which reverses the inequality when \(\alpha<1\)) now gives the result.
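Lemma 3.1 is easy to spot-check numerically. The sketch below (my own illustration; function names are assumptions) tests the \(\alpha>1\) inequality for random normalised states, where the \(\operatorname{tr}(\eta)/\operatorname{tr}(\rho)\) term vanishes, computing the sandwiched divergence and \(D_{\max}\) by eigendecomposition.

```python
import numpy as np

rng = np.random.default_rng(1)

def rand_state(d):
    g = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    r = g @ g.conj().T
    return r / np.trace(r).real

def mpow(h, p):
    # fractional power of a Hermitian PSD matrix via eigendecomposition
    w, v = np.linalg.eigh(h)
    w = np.clip(w, 1e-15, None)
    return (v * w**p) @ v.conj().T

def D_sandwiched(rho, q, alpha):
    # sandwiched Renyi divergence, normalised by tr(rho) as in the proof
    e = (1 - alpha) / (2 * alpha)
    inner = mpow(q, e) @ rho @ mpow(q, e)
    val = np.trace(mpow(inner, alpha)).real / np.trace(rho).real
    return np.log2(val) / (alpha - 1)

def D_max(rho, eta):
    m = mpow(eta, -0.5) @ rho @ mpow(eta, -0.5)
    return np.log2(np.linalg.eigvalsh(m).max())

alpha = 1.5
ok = True
for _ in range(20):
    rho, eta, q = rand_state(3), rand_state(3), rand_state(3)
    lhs = D_sandwiched(rho, q, alpha)
    rhs = D_sandwiched(eta, q, alpha) + alpha / (alpha - 1) * D_max(rho, eta)
    ok = ok and (lhs <= rhs + 1e-8)
assert ok
```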
We define smooth \(\alpha\)-Renyi conditional entropy as follows to help us amplify the above inequality.
**Definition 3.2** (\(\epsilon\)-smooth \(\alpha\)-Renyi conditional entropy).: _For \(\alpha\in(1,\infty]\) and \(\epsilon\in[0,1]\), we define the \(\epsilon\)-smooth \(\alpha\)-Renyi conditional entropy as_
\[\tilde{H}_{\alpha,\epsilon}^{\uparrow}(A|B)_{\rho}:=\max_{\tilde{\rho}_{AB}\in B _{\epsilon}(\rho_{AB})}\tilde{H}_{\alpha}^{\uparrow}(A|B)_{\tilde{\rho}}. \tag{18}\]
**Lemma 3.3**.: _For \(\alpha\in(1,\infty\,]\) and \(\epsilon\in[0,1)\), and states \(\rho_{AB}\) and \(\eta_{AB}\) we have_
\[\tilde{H}^{\uparrow}_{\alpha,\epsilon}(A|B)_{\rho}\geq\tilde{H}^{ \uparrow}_{\alpha}(A|B)_{\eta}-\frac{\alpha}{\alpha-1}D^{\epsilon}_{\max}(\rho_{ AB}\|\eta_{AB})-\frac{1}{\alpha-1}\log\frac{1}{1-\epsilon^{2}}.\]
Proof.: Let \(\tilde{\rho}_{AB}\in B_{\epsilon}(\rho_{AB})\) be a subnormalised state such that \(D_{\max}(\tilde{\rho}_{AB}\|\eta_{AB})=D^{\epsilon}_{\max}(\rho_{AB}\|\eta_{ AB})\). Using Lemma 3.1 for \(\alpha>1\), for every state \(\sigma_{B}\) we have
\[\tilde{D}_{\alpha}(\tilde{\rho}_{AB}\|\,\mathds{1}_{A}\otimes \sigma_{B})\leq\tilde{D}_{\alpha}(\eta_{AB}\|\,\mathds{1}_{A}\otimes\sigma_{B })+\frac{\alpha}{\alpha-1}D^{\epsilon}_{\max}(\rho_{AB}\|\eta_{AB})+\frac{1}{ \alpha-1}\log\frac{1}{1-\epsilon^{2}} \tag{19}\]
where we used the fact that \(\tilde{\rho}_{AB}\in B_{\epsilon}(\rho_{AB})\) which implies that \(\operatorname{tr}(\tilde{\rho}_{AB})\geq 1-\epsilon^{2}\). Since, the above bound is true for arbitrary states \(\sigma_{B}\), we can multiply it by \(-1\) and take the supremum to derive
\[\tilde{H}^{\uparrow}_{\alpha}(A|B)_{\tilde{\rho}}\geq\tilde{H}^{ \uparrow}_{\alpha}(A|B)_{\eta}-\frac{\alpha}{\alpha-1}D^{\epsilon}_{\max}(\rho_ {AB}\|\eta_{AB})-\frac{1}{\alpha-1}\log\frac{1}{1-\epsilon^{2}}.\]
The desired bound follows by using the fact that \(\tilde{H}^{\uparrow}_{\alpha,\epsilon}(A|B)_{\rho}\geq\tilde{H}^{\uparrow}_{ \alpha}(A|B)_{\tilde{\rho}}\).
**Lemma 3.4**.: _For a state \(\rho_{AB}\), \(\epsilon\in[0,1)\), and \(\delta\in(0,1)\) such that \(\epsilon+\delta<1\) and \(\alpha\in(1,2]\), we have_
\[H^{\epsilon+\delta}_{\min}(A|B)_{\rho}\geq\tilde{H}^{\uparrow}_{ \alpha,\epsilon}(A|B)_{\rho}-\frac{g_{0}(\delta)}{\alpha-1}\]
_where \(g_{0}(x):=-\log(1-\sqrt{1-x^{2}})\)._
Proof.: First, note that
\[H^{\epsilon+\delta}_{\min}(A|B)_{\rho}\geq\sup_{\tilde{\rho}\in B _{\epsilon}(\rho_{AB})}H^{\delta}_{\min}(A|B)_{\tilde{\rho}}. \tag{20}\]
To prove this, consider a \(\tilde{\rho}_{AB}\in B_{\epsilon}(\rho_{AB})\) and \(\rho^{\prime}_{AB}\in B_{\delta}(\tilde{\rho}_{AB})\) such that \(H_{\min}(A|B)_{\rho^{\prime}}=H^{\delta}_{\min}(A|B)_{\tilde{\rho}}\). Then, using the triangle inequality for the purified distance, we have
\[P(\rho_{AB},\rho^{\prime}_{AB}) \leq P(\rho_{AB},\tilde{\rho}_{AB})+P(\tilde{\rho}_{AB},\rho^{ \prime}_{AB})\] \[\leq\epsilon+\delta\]
which implies that \(H^{\epsilon+\delta}_{\min}(A|B)_{\rho}\geq H_{\min}(A|B)_{\rho^{\prime}}=H^{ \delta}_{\min}(A|B)_{\tilde{\rho}}\). Since this is true for all \(\tilde{\rho}\in B_{\epsilon}(\rho_{AB})\), the bound in Eq. 20 follows.
Using this, we have
\[H^{\epsilon+\delta}_{\min}(A|B)_{\rho} \geq\sup_{\tilde{\rho}\in B_{\epsilon}(\rho_{AB})}H^{\delta}_{ \min}(A|B)_{\tilde{\rho}}\] \[\geq\sup_{\tilde{\rho}\in B_{\epsilon}(\rho_{AB})}\left\{\tilde{H }^{\uparrow}_{\alpha}(A|B)_{\tilde{\rho}}-\frac{g_{0}(\delta)}{\alpha-1}\right\}\] \[=\tilde{H}^{\uparrow}_{\alpha,\epsilon}(A|B)_{\rho}-\frac{g_{0}( \delta)}{\alpha-1}\]
where we have used [13, Lemma B.10]5 (originally proven in [14]) in the second step.
Footnote 5: This Lemma is also valid for subnormalised states as long as \(\delta\in(0,\sqrt{2\operatorname{tr}(\tilde{\rho})-\operatorname{tr}(\tilde{\rho}) ^{2}})\) according to [13, Lemma B.4].
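The proof of Eq. 20 above rests on the triangle inequality for the purified distance \(P\). As a quick numerical illustration (added here, not part of the source; helper names are mine), the following sketch checks the inequality for random normalised states, using \(P(\rho,\sigma)=\sqrt{1-F(\rho,\sigma)^{2}}\) with \(F\) the Uhlmann fidelity.

```python
import numpy as np

rng = np.random.default_rng(2)

def rand_state(d):
    g = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    r = g @ g.conj().T
    return r / np.trace(r).real

def mpow(h, p):
    # fractional power of a Hermitian PSD matrix
    w, v = np.linalg.eigh(h)
    w = np.clip(w, 0.0, None)
    return (v * w**p) @ v.conj().T

def fidelity(r, s):
    # Uhlmann fidelity F = tr sqrt(sqrt(s) r sqrt(s))
    sq = mpow(s, 0.5)
    return np.trace(mpow(sq @ r @ sq, 0.5)).real

def pdist(r, s):
    # purified distance for normalised states
    return np.sqrt(max(0.0, 1.0 - min(1.0, fidelity(r, s)) ** 2))

for _ in range(20):
    a, b, c = rand_state(3), rand_state(3), rand_state(3)
    # triangle inequality P(a,c) <= P(a,b) + P(b,c)
    assert pdist(a, c) <= pdist(a, b) + pdist(b, c) + 1e-8
```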
We can combine these two lemmas to derive the following result.
**Lemma 3.5**.: _For \(\alpha\in(1,2]\), \(\epsilon\in[0,1)\), and \(\delta\in(0,1)\) such that \(\epsilon+\delta<1\) and two states \(\rho\) and \(\eta\), we have_
\[H_{\min}^{\epsilon+\delta}(A|B)_{\rho}\geq\tilde{H}_{\alpha}^{ \uparrow}(A|B)_{\eta}-\frac{\alpha}{\alpha-1}D_{\max}^{\epsilon}(\rho_{AB}\| \eta_{AB})-\frac{g_{1}(\delta,\epsilon)}{\alpha-1} \tag{21}\]
_where \(g_{1}(x,y)\coloneqq-\log(1-\sqrt{1-x^{2}})-\log(1-y^{2})\)._
Proof.: We can combine Lemmas 3.3 and 3.4 as follows to derive the bound in the Lemma:
\[H_{\min}^{\epsilon+\delta}(A|B)_{\rho} \geq\tilde{H}_{\alpha,\epsilon}^{\uparrow}(A|B)_{\rho}-\frac{g_{ 0}(\delta)}{\alpha-1}\] \[\geq\tilde{H}_{\alpha}^{\uparrow}(A|B)_{\eta}-\frac{\alpha}{ \alpha-1}D_{\max}^{\epsilon}(\rho_{AB}\|\eta_{AB})-\frac{1}{\alpha-1}\left(g_ {0}(\delta)+\log\frac{1}{1-\epsilon^{2}}\right).\]
We can use the asymptotic equipartition theorem for smooth min-entropy and max-relative entropy [14, 15, 16] to derive the following novel triangle inequality for the von Neumann conditional entropy. Although we do not use this inequality in this paper, we believe it is interesting and may prove useful in the future.
**Corollary 3.6**.: _For \(\alpha\in(1,2]\) and states \(\rho_{AB}\) and \(\eta_{AB}\), we have that_
\[H(A|B)_{\rho}\geq\tilde{H}_{\alpha}^{\uparrow}(A|B)_{\eta}-\frac {\alpha}{\alpha-1}D(\rho_{AB}\|\eta_{AB}). \tag{22}\]
Proof.: Using Lemma 3.5 with \(\alpha\in(1,2]\), the states \(\rho_{AB}^{\otimes n}\), and \(\eta_{AB}^{\otimes n}\) and any \(\epsilon>0\) and \(\delta>0\) satisfying the conditions for the Lemma, we get
\[H_{\min}^{\epsilon+\delta}(A_{1}^{n}|B_{1}^{n})_{\rho^{\otimes n }}\geq\tilde{H}_{\alpha}^{\uparrow}(A_{1}^{n}|B_{1}^{n})_{\eta^{\otimes n}}- \frac{\alpha}{\alpha-1}D_{\max}^{\epsilon}(\rho_{AB}^{\otimes n}\|\eta_{AB}^ {\otimes n})-\frac{g_{1}(\delta,\epsilon)}{\alpha-1}\] \[\Rightarrow \frac{1}{n}H_{\min}^{\epsilon+\delta}(A_{1}^{n}|B_{1}^{n})_{\rho ^{\otimes n}}\geq\tilde{H}_{\alpha}^{\uparrow}(A|B)_{\eta}-\frac{\alpha}{ \alpha-1}\frac{1}{n}D_{\max}^{\epsilon}(\rho_{AB}^{\otimes n}\|\eta_{AB}^{ \otimes n})-\frac{1}{n}\frac{g_{1}(\delta,\epsilon)}{\alpha-1}.\]
Taking the limit of the above for \(n\to\infty\), we get
\[\lim_{n\to\infty}\frac{1}{n}H_{\min}^{\epsilon+\delta}(A_{1}^{n}| B_{1}^{n})_{\rho^{\otimes n}}\geq\tilde{H}_{\alpha}^{\uparrow}(A|B)_{\eta}- \lim_{n\to\infty}\frac{\alpha}{\alpha-1}\frac{1}{n}D_{\max}^{\epsilon}(\rho_{ AB}^{\otimes n}\|\eta_{AB}^{\otimes n})-\lim_{n\to\infty}\frac{1}{n}\frac{g_{1}( \delta,\epsilon)}{\alpha-1}\] \[\Rightarrow H(A|B)_{\rho}\geq\tilde{H}_{\alpha}^{\uparrow}(A|B)_{\eta}-\frac{ \alpha}{\alpha-1}D(\rho_{AB}\|\eta_{AB})\]
which proves the claim.
## 4 Approximately independent registers
In this section, we introduce our technique for using the smooth min-entropy triangle inequality to handle approximations. We study a state \(\rho_{A_{1}^{n}B}\) such that for every \(k\in[n]\)
\[\left\|\rho_{A_{1}^{k}B}-\rho_{A_{k}}\otimes\rho_{A_{1}^{k-1}B} \right\|_{1}\leq\epsilon. \tag{23}\]
We assume that the registers \(A_{k}\) all have the same dimension, equal to \(|A|\). One should think of the registers \(A_{k}\) as the secret information produced during some protocol, which also provides the register \(B\) to an adversary. We would like to prove that \(H_{\min}^{f(\epsilon)}(A_{1}^{n}|B)\) is large (lower bounded by \(\Omega(n)\)) under the above _approximate independence conditions_ for some reasonably small function \(f\) of \(\epsilon\), and close to \(nH(A_{1})\) if we assume the states \(\rho_{A_{k}}\) are identical. Let us first examine the case when the states above are completely classical. To show that in this case the smooth min-entropy is high, we will show that the set where the conditional probability \(\rho(a_{1}^{n}|b):=\frac{\rho(a_{1}^{n}b)}{\rho(b)}\) is large has small probability, using the Markov inequality. We will use the following lemma for this purpose.
**Lemma 4.1**.: _Suppose \(p,q\) are probability distributions on \(\mathcal{X}\) such that \(\frac{1}{2}\left\|p-q\right\|_{1}\leq\epsilon\), then \(S\subseteq\mathcal{X}\) defined as \(S:=\{x\in\mathcal{X}:p(x)\leq(1+\epsilon^{1/2})q(x)\}\) is such that \(q(S)\geq 1-\epsilon^{1/2}\) and \(p(S)\geq 1-\epsilon^{1/2}-\epsilon\)._
Proof.: For \(S^{c}:=\mathcal{X}\setminus S\), where \(S\) is the set defined above we have that
\[\epsilon\geq\frac{1}{2}\left\|p-q\right\|_{1} =\max_{H\subseteq\mathcal{X}}\left|p(H)-q(H)\right|\] \[\geq q(S^{c})\left|\frac{p(S^{c})}{q(S^{c})}-1\right|\] \[\geq q(S^{c})\left(\frac{p(S^{c})}{q(S^{c})}-1\right)\] \[=q(S^{c})\left(\frac{\sum_{x\in S^{c}}p(x)}{\sum_{x\in S^{c}}q(x) }-1\right)\] \[\geq q(S^{c})\left(\frac{\sum_{x\in S^{c}}(1+\epsilon^{\frac{1}{2 }})q(x)}{\sum_{x\in S^{c}}q(x)}-1\right)\] \[\geq q(S^{c})\epsilon^{\frac{1}{2}}\]
which implies that \(q(S^{c})\leq\epsilon^{\frac{1}{2}}\). Now, the statement of the Lemma follows.
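Since Lemma 4.1 is purely classical, it can be checked directly on random distributions. The following sketch (an added illustration; names are mine) computes the set \(S\) and verifies both mass bounds.

```python
import numpy as np

rng = np.random.default_rng(3)

def rand_dist(d):
    p = rng.random(d)
    return p / p.sum()

for _ in range(100):
    p, q = rand_dist(8), rand_dist(8)
    eps = 0.5 * np.abs(p - q).sum()       # total variation distance
    S = p <= (1 + np.sqrt(eps)) * q       # the set S from Lemma 4.1
    assert q[S].sum() >= 1 - np.sqrt(eps) - 1e-12
    assert p[S].sum() >= 1 - np.sqrt(eps) - eps - 1e-12
```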
We will also assume, for the sake of simplicity, that the states \(\rho_{A_{k}}\) are identical for all \(k\in[n]\).
Using the Lemma above, for every \(k\in[n]\), we know that the set
\[B_{k}: =\left\{(a_{1}^{n},b):\rho(a_{1}^{k},b)>(1+\sqrt{\epsilon})\rho(a_{1 }^{k-1},b)\rho(a_{k})\right\}\] \[=\left\{(a_{1}^{n},b):\rho(a_{k}|a_{1}^{k-1},b)>(1+\sqrt{\epsilon} )\rho(a_{k})\right\}\]
satisfies \(\Pr_{\rho}(B_{k})\leq 2\sqrt{\epsilon}\). We can now define \(L=\sum_{k=1}^{n}\chi_{B_{k}}\), which is a random variable that simply counts the number of bad sets \(B_{k}\) an element \((a_{1}^{n},b)\) belongs to. Using the Markov inequality, we have
\[\Pr_{\rho}\left[L>n\epsilon^{\frac{1}{4}}\right]\leq\frac{\mathbb{E}_{\rho}[L] }{n\epsilon^{\frac{1}{4}}}\leq 2\epsilon^{\frac{1}{4}}.\]
We can define the bad set \(\mathcal{B}:=\left\{(a_{1}^{n},b):L(a_{1}^{n},b)>n\epsilon^{\frac{1}{4}}\right\}\), then we can define the subnormalised distribution \(\tilde{\rho}_{A_{1}^{n}B}\) as
\[\tilde{\rho}_{A_{1}^{n}B}(a_{1}^{n},b)=\begin{cases}\rho_{A_{1}^{n}B}(a_{1}^{n },b)&(a_{1}^{n},b)\not\in\mathcal{B}\\ 0&\text{else}\end{cases}.\]
We have \(P(\tilde{\rho}_{A_{1}^{n}B},\rho_{A_{1}^{n}B})\leq\sqrt{2}\epsilon^{1/8}\). Further, note that for every \((a_{1}^{n},b)\not\in\mathcal{B}\), we have
\[\rho(a_{1}^{n}|b) =\prod_{k=1}^{n}\rho(a_{k}|a_{1}^{k-1},b)\] \[=\prod_{k:(a_{1}^{n},b)\not\in B_{k}}\rho(a_{k}|a_{1}^{k-1},b) \prod_{k:(a_{1}^{n},b)\in B_{k}}\rho(a_{k}|a_{1}^{k-1},b)\] \[\leq(1+\sqrt{\epsilon})^{n}\prod_{k:(a_{1}^{n},b)\not\in B_{k}} \rho_{A_{k}}(a_{k})\] \[\leq(1+\sqrt{\epsilon})^{n}2^{-n(1-\epsilon^{\frac{1}{4}})H_{ \min}(A_{1})}\]
where in the third line we have used the fact that if \((a_{1}^{n},b)\not\in B_{k}\), then \(\rho(a_{k}|a_{1}^{k-1}b)\leq(1+\sqrt{\epsilon})\rho_{A_{k}}(a_{k})\) and in the last line we have used the fact that for \((a_{1}^{n},b)\not\in\mathcal{B}\), we have \(|\{k\in[n]:(a_{1}^{n},b)\not\in B_{k}\}|=n-L(a_{1}^{n},b)\geq n(1-\epsilon^{ \frac{1}{4}})\), that all the states \(\rho_{A_{k}}\) are identical and \(2^{-H_{\min}(A_{k})}=\max_{a_{k}}\rho_{A_{k}}(a_{k})\). Note that we have essentially proven and used a \(D_{\max}\) bound above. This proves the following lower bound for the smooth min-entropy of \(\rho\)
\[H_{\min}^{\sqrt{2}\epsilon^{\frac{1}{8}}}(A_{1}^{n}|B)\geq n(1-\epsilon^{ \frac{1}{4}})H_{\min}(A_{1})-n\log(1+\sqrt{\epsilon}). \tag{24}\]
The right-hand side above can be improved to get the Shannon entropy \(H\) instead of the min-entropy \(H_{\min}\). However, we will not pursue this here, since this bound is sufficient for the purpose of our discussion.
Although we are unable to generalise the classical argument above to the quantum case, it provides a great deal of insight into the approximately independent registers problem. Two important examples of distributions which satisfy the approximate independence conditions above were mentioned in Footnotes (2) and (3) earlier. To create the first distribution, we flip a biased coin \(B\), which is \(0\) with probability \(\epsilon\) and \(1\) otherwise. If \(B=0\), then \(A_{1}^{n}\) is set to the constant all-zero string; otherwise it is sampled randomly and independently. For this distribution, once the bad event (\(B=0\)) is removed, the new distribution has a high min-entropy. On the other hand, for the second distribution, \(Q_{A_{1}^{2n}B_{1}^{2n}}\), the random bits \(B_{i}\) are chosen independently, with each being equal to \(0\) with probability \(\epsilon\) and \(1\) otherwise. If the bit \(B_{i}\) is \(0\), then \(A_{i}\) is set equal to \(A_{i-1}\); otherwise it is sampled independently. In this case, there is no small-probability event (small as a function of \(\epsilon\)) that one can simply remove so that the distribution becomes i.i.d. However, we expect that with high probability the number of indices with \(B_{i}=0\) is close to \(2n\epsilon\). Given that the distribution samples all the other \(A_{i}\) independently, the smooth min-entropy of the distribution should be close to \(2n(1-\epsilon)H(A_{1})\). The above argument shows that any distribution satisfying the approximate independence conditions in Eq. 23 can be handled by combining the methods used for these two example distributions, that is, deleting the bad part of the distribution and recognising that the probability of every element in the rest of the space behaves independently on average.
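The first example distribution can be made concrete. The sketch below (my own illustration, taking the \(A_{i}\) to be uniform bits for concreteness, with a hypothetical bias parameter `eps0`) enumerates the joint distribution for small \(n\) and verifies that the approximate-independence condition of Eq. 23 holds with \(\epsilon\leq 2\,\)`eps0` for every \(k\).

```python
from itertools import product

eps0, n = 0.05, 4
# joint distribution p(a_1..a_n, b) for the biased-coin example:
# b = 0 w.p. eps0 (then A is the all-zero string), b = 1 w.p. 1 - eps0 (A uniform)
p = {}
for a in product([0, 1], repeat=n):
    p[a + (0,)] = eps0 if a == (0,) * n else 0.0
    p[a + (1,)] = (1 - eps0) / 2**n

def marg(dist, keep):
    # marginal distribution on the coordinate positions listed in `keep`
    out = {}
    for key, v in dist.items():
        sub = tuple(key[i] for i in keep)
        out[sub] = out.get(sub, 0.0) + v
    return out

for k in range(1, n + 1):
    # 1-norm distance between p_{A_1^k B} and p_{A_k} ⊗ p_{A_1^{k-1} B}
    joint = marg(p, list(range(k)) + [n])
    pak = marg(p, [k - 1])
    rest = marg(p, list(range(k - 1)) + [n])
    tv = sum(abs(joint[key] - pak[(key[k - 1],)] * rest[key[:k - 1] + (key[-1],)])
             for key in joint)
    assert tv <= 2 * eps0 + 1e-12
```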
The above classical argument is difficult to generalise to quantum states primarily because the quantum equivalents of Lemma 4.1 are not as nice and simple. Furthermore, quantum conditional probabilities themselves are also difficult to use. Fortunately, the substate theorem serves as the perfect tool for developing a smooth max-relative entropy bound, which we can then use with the min-entropy triangle inequality. The quantum substate theorem [11, 2] provides an upper bound on the smooth max relative entropy \(D_{\max}^{\epsilon}(\rho\|\sigma)\) between two states in terms of their relative entropy \(D(\rho\|\sigma)\).
**Theorem 4.2** (Quantum substate theorem [2]).: _Let \(\rho\) and \(\sigma\) be two states on the same Hilbert space. Then for any \(\epsilon\in(0,1)\), we have_
\[D_{\max}^{\sqrt{\epsilon}}(\rho\|\sigma)\leq\frac{D(\rho\|\sigma)+1}{\epsilon }+\log\frac{1}{1-\epsilon}. \tag{25}\]
In this section, we will also frequently use the multipartite mutual information [10, 11, 12]. For a state \(\rho_{X_{1}^{n}}\), the multipartite mutual information between the registers \((X_{1},X_{2},\cdots,X_{n})\) is defined as
\[I(X_{1}:X_{2}:\cdots:X_{n})_{\rho}:=D(\rho_{X_{1}^{n}}\|\rho_{X_{1}}\otimes \rho_{X_{2}}\otimes\cdots\otimes\rho_{X_{n}}). \tag{26}\]
In other words, it is the relative entropy between \(\rho_{X_{1}^{n}}\) and \(\rho_{X_{1}}\otimes\rho_{X_{2}}\otimes\cdots\otimes\rho_{X_{n}}\). It can
easily be shown that the multipartite mutual information satisfies the following identities:
\[I(X_{1}:X_{2}:\cdots:X_{n})_{\rho} =\sum_{k=1}^{n}H(X_{k})_{\rho}-H(X_{1}\cdots\,X_{n})_{\rho} \tag{27}\] \[=\sum_{k=2}^{n}I(X_{k}:X_{1}^{k-1}). \tag{28}\]
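Both identities are straightforward to verify numerically for classical distributions, where all the entropies are Shannon entropies of marginals. A sketch (added illustration; helper names are mine) for three registers:

```python
import numpy as np

rng = np.random.default_rng(4)

def H(p):
    # Shannon entropy in bits of a (possibly multi-dimensional) probability array
    p = p[p > 1e-15]
    return -(p * np.log2(p)).sum()

# random joint distribution of three classical registers X1, X2, X3
p = rng.random((2, 3, 4))
p /= p.sum()

H1 = H(p.sum(axis=(1, 2)))
H2 = H(p.sum(axis=(0, 2)))
H3 = H(p.sum(axis=(0, 1)))
H12 = H(p.sum(axis=2))
H123 = H(p)

mmi = H1 + H2 + H3 - H123                     # Eq. 27
chain = (H1 + H2 - H12) + (H3 + H12 - H123)   # I(X2:X1) + I(X3:X1X2), Eq. 28
assert abs(mmi - chain) < 1e-10
```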
Going back to proving a bound for the quantum approximately independent registers problem, note that using the Alicki-Fannes-Winter (AFW) bound [1, 16] for mutual information [16, Theorem 11.10.4], Eq. 23 implies that for every \(k\in[n]\)
\[I(A_{k}:A_{1}^{k-1}B)_{\rho}\leq\epsilon\log|A|+g_{2}\left(\frac{\epsilon}{2}\right) \tag{29}\]
where \(g_{2}(x)\coloneqq(x+1)\log(x+1)-x\log(x)\). With this in mind, we can now focus our efforts on proving the following theorem.
**Theorem 4.3**.: _Let registers \(A_{k}\) have dimension \(|A|\) for all \(k\in[n]\). Suppose a quantum state \(\rho_{A_{1}^{n}B}\) is such that for every \(k\in[n],\) we have_
\[I(A_{k}:A_{1}^{k-1}B)_{\rho}\leq\epsilon \tag{30}\]
_for some \(0<\epsilon<1\). Then, we have that_
\[H_{\min}^{\epsilon^{\frac{1}{4}}+\epsilon}(A_{1}^{n}|B)_{\rho} \geq\sum_{k=1}^{n}H(A_{k})_{\rho}-3n\epsilon^{\frac{1}{4}}\log(1+2|A|)\] \[\qquad-\frac{2\log(1+2|A|)}{\epsilon^{3/4}}-\frac{2\log(1+2|A|)} {\epsilon^{1/4}}\left(\log\frac{1}{1-\sqrt{\epsilon}}+g_{1}(\epsilon,\epsilon^{\frac{ 1}{4}})\right) \tag{31}\]
_where \(g_{1}(x,y)\coloneqq-\log(1-\sqrt{1-x^{2}})-\log(1-y^{2})\). In particular, when all the states \(\rho_{A_{k}}\) are identical, we have_
\[H_{\min}^{\epsilon^{\frac{1}{4}}+\epsilon}(A_{1}^{n}|B)_{\rho} \geq n\left(H(A_{1})_{\rho}-3\epsilon^{\frac{1}{4}}\log(1+2|A|)\right)\] \[\qquad-\frac{2\log(1+2|A|)}{\epsilon^{3/4}}-\frac{2\log(1+2|A|)} {\epsilon^{1/4}}\left(\log\frac{1}{1-\sqrt{\epsilon}}+g_{1}(\epsilon,\epsilon^{\frac{ 1}{4}})\right). \tag{32}\]
Proof.: First note that we have,
\[I(A_{1}:A_{2}:\cdots:A_{n}:B) =D(\rho_{A_{1}^{n}B}||\bigotimes_{k=1}^{n}\rho_{A_{k}}\otimes \rho_{B})\] \[=\sum_{k=1}^{n}I(A_{k}:A_{1}^{k-1}B)\] \[\leq n\epsilon.\]
Using the substate theorem, we now have
\[D_{\max}^{\epsilon^{\frac{1}{4}}}\left(\rho_{A_{1}^{n}B}\middle\| \bigotimes_{k=1}^{n}\rho_{A_{k}}\otimes\rho_{B}\right) \leq\frac{D(\rho_{A_{1}^{n}B}\|\bigotimes_{k=1}^{n}\rho_{A_{k}} \otimes\rho_{B})+1}{\sqrt{\epsilon}}-\log(1-\sqrt{\epsilon})\] \[\leq n\sqrt{\epsilon}+\frac{1}{\sqrt{\epsilon}}-\log(1-\sqrt{ \epsilon}). \tag{33}\]
We now define the auxiliary state \(\eta_{A_{1}^{n}B}:=\bigotimes_{k=1}^{n}\rho_{A_{k}}\otimes\rho_{B}\). Using Lemma 3.5, for \(\alpha\in(1,2)\), we can transform the smooth min-entropy into an \(\alpha\)-Renyi entropy on the auxiliary product state \(\eta_{A_{1}^{n}B}\) as follows:
\[H_{\min}^{\epsilon^{\frac{1}{4}}+\epsilon}(A_{1}^{n}|B)_{\rho}\] \[\geq\tilde{H}_{\alpha}^{\uparrow}(A_{1}^{n}|B)_{\eta}-\frac{ \alpha}{\alpha-1}D_{\max}^{\epsilon^{\frac{1}{4}}}(\rho_{A_{1}^{n}B}\|\eta_{A_{1 }^{n}B})-\frac{g_{1}(\epsilon,\epsilon^{\frac{1}{4}})}{\alpha-1}\] \[=\sum_{k=1}^{n}\tilde{H}_{\alpha}^{\uparrow}(A_{k})_{\rho}-\frac{ \alpha}{\alpha-1}D_{\max}^{\epsilon^{\frac{1}{4}}}(\rho_{A_{1}^{n}B}\|\eta_{A_{1 }^{n}B})-\frac{g_{1}(\epsilon,\epsilon^{\frac{1}{4}})}{\alpha-1}\] \[\geq\sum_{k=1}^{n}H(A_{k})_{\rho}-n(\alpha-1)\log^{2}(1+2|A|)- \frac{\alpha}{\alpha-1}D_{\max}^{\epsilon^{\frac{1}{4}}}(\rho_{A_{1}^{n}B}\|\eta_ {A_{1}^{n}B})-\frac{g_{1}(\epsilon,\epsilon^{\frac{1}{4}})}{\alpha-1}\] \[\geq\sum_{k=1}^{n}H(A_{k})_{\rho}-n(\alpha-1)\log^{2}(1+2|A|)- \frac{\alpha}{\alpha-1}n\sqrt{\epsilon}-\frac{\alpha}{\alpha-1}\frac{1}{\sqrt {\epsilon}}-\frac{\alpha}{\alpha-1}\log\frac{1}{1-\sqrt{\epsilon}}-\frac{g_{1}( \epsilon,\epsilon^{\frac{1}{4}})}{\alpha-1}.\]
In the third line above, we have used [1, Lemma B.9] (which is an improvement of [1, Lemma 8]), which is valid as long as \(\alpha<1+\frac{1}{\log(1+2|A|)}\). We will select \(\alpha=1+\frac{\epsilon^{1/4}}{\log(1+2|A|)}\), for which the above bound on \(\alpha\) is satisfied. This gives us
\[H_{\min}^{\epsilon^{\frac{1}{4}}+\epsilon}(A_{1}^{n}|B)_{\rho} \geq\sum_{k=1}^{n}H(A_{k})_{\rho}-3n\epsilon^{\frac{1}{4}}\log(1+2|A|)- \frac{2\log(1+2|A|)}{\epsilon^{3/4}}\] \[\qquad-\frac{2\log(1+2|A|)}{\epsilon^{1/4}}\left(\log\frac{1}{1-\sqrt{ \epsilon}}+g_{1}(\epsilon,\epsilon^{\frac{1}{4}})\right).\]
We can now plug in the bound from Eq. 29 to derive the following Corollary.
**Corollary 4.4**.: _Let registers \(A_{k}\) have dimension \(|A|\) for all \(k\in[n]\). Suppose a quantum state \(\rho_{A_{1}^{n}B}\) is such that for every \(k\in[n],\) we have_
\[\left\|\rho_{A_{1}^{k}B}-\rho_{A_{k}}\otimes\rho_{A_{1}^{k-1}B} \right\|_{1}\leq\epsilon. \tag{34}\]
_Then, we have that for \(\delta=\epsilon\log|A|+g_{2}\left(\frac{\epsilon}{2}\right)\) such that \(0<\delta<1\),_
\[H_{\min}^{\delta^{\frac{1}{4}}+\delta}(A_{1}^{n}|B)_{\rho}\geq \sum_{k=1}^{n}H(A_{k})_{\rho}-3n\delta^{\frac{1}{4}}\log(1+2|A|)\] \[\qquad-\frac{2\log(1+2|A|)}{\delta^{3/4}}-\frac{2\log(1+2|A|)}{ \delta^{1/4}}\left(\log\frac{1}{1-\sqrt{\delta}}+g_{1}(\delta,\delta^{\frac{1}{4}})\right) \tag{35}\]
_where \(g_{1}(x,y)=-\log(1-\sqrt{1-x^{2}})-\log(1-y^{2})\) and \(g_{2}(x)=(x+1)\log(x+1)-x\log(x)\). In particular, when all the states \(\rho_{A_{k}}\) are identical, we have_
\[H_{\min}^{\delta^{\frac{1}{4}}+\delta}(A_{1}^{n}|B)_{\rho}\geq n \left(H(A_{1})_{\rho}-3\delta^{\frac{1}{4}}\log(1+2|A|)\right)\] \[\qquad-\frac{2\log(1+2|A|)}{\delta^{3/4}}-\frac{2\log(1+2|A|)}{ \delta^{1/4}}\left(\log\frac{1}{1-\sqrt{\delta}}+g_{1}(\delta,\delta^{\frac{1}{4}}) \right). \tag{36}\]
### Weak approximate asymptotic equipartition
We can modify the proof of Theorem 4.3 to prove a _weak_ approximate asymptotic equipartition property (AEP).
**Theorem 4.5**.: _Let registers \(A_{k}\) have dimension \(|A|\) for all \(k\in[n]\) and the registers \(B_{k}\) have dimension \(|B|\) for all \(k\in[n]\). Suppose a quantum state \(\rho_{A_{1}^{n}B_{1}^{n}E}\) is such that for every \(k\in[n],\) we have_
\[\left\|\rho_{A_{1}^{k}B_{1}^{k}E}-\rho_{A_{k}B_{k}}\otimes\rho_{A_{1}^{k-1}B_ {1}^{k-1}E}\right\|_{1}\leq\epsilon. \tag{37}\]
_Then, we have that for \(\delta=\epsilon\log\left(|A||B|\right)+g_{2}\left(\frac{\epsilon}{2}\right)\) such that \(0<\delta<1\),_
\[H_{\min}^{\delta^{\frac{1}{4}}+\delta}(A_{1}^{n}|B_{1}^{n}E)_{ \rho}\geq \sum_{k=1}^{n}H(A_{k}|B_{k})_{\rho}-3n\delta^{\frac{1}{4}}\log(1 +2|A|)\] \[\qquad-\frac{2\log(1+2|A|)}{\delta^{3/4}}-\frac{2\log(1+2|A|)}{ \delta^{1/4}}\left(\log\frac{1}{1-\sqrt{\delta}}+g_{1}(\delta,\delta^{\frac{1}{4}})\right) \tag{38}\]
_where \(g_{1}(x,y)=-\log(1-\sqrt{1-x^{2}})-\log(1-y^{2})\) and \(g_{2}(x)=(x+1)\log(x+1)-x\log(x)\). In particular, when all the states \(\rho_{A_{k}B_{k}}\) are identical, we have_
\[H_{\min}^{\delta^{\frac{1}{4}}+\delta}(A_{1}^{n}|B_{1}^{n}E)_{ \rho}\geq n\left(H(A_{1}|B_{1})_{\rho}-3\delta^{\frac{1}{4}}\log(1+2|A|)\right)\] \[\qquad-\frac{2\log(1+2|A|)}{\delta^{3/4}}-\frac{2\log(1+2|A|)}{ \delta^{1/4}}\left(\log\frac{1}{1-\sqrt{\delta}}+g_{1}(\delta,\delta^{\frac{1}{4}}) \right). \tag{39}\]
Proof.: To prove this, we use the auxiliary state \(\eta_{A_{1}^{n}B_{1}^{n}E}:=\bigotimes_{k=1}^{n}\rho_{A_{k}B_{k}}\otimes\rho_{E}\). Then, we have
\[D(\rho_{A_{1}^{n}B_{1}^{n}E}\|\eta_{A_{1}^{n}B_{1}^{n}E}) =I(A_{1}B_{1}:A_{2}B_{2}:\cdots:A_{n}B_{n}:E)_{\rho}\] \[=\sum_{k=1}^{n}I(A_{k}B_{k}:A_{1}^{k-1}B_{1}^{k-1}E)_{\rho}\] \[\leq n\left(\epsilon\log\left(|A||B|\right)+g_{2}\left(\frac{\epsilon }{2}\right)\right)=n\delta\]
where we used the AFW bound for mutual information in the last line [20, Theorem 11.10.4]. The rest of the proof follows the proof of Theorem 4.3, the only difference being that now we have \(\tilde{H}_{\alpha}^{\uparrow}(A_{1}^{n}|B_{1}^{n}E)_{\eta}=\sum_{k=1}^{n} \tilde{H}_{\alpha}^{\uparrow}(A_{k}|B_{k})_{\rho}\).
We call this generalisation _weak_ because the smoothing term (\(\delta\)) depends on the size of the side information \(|B|\). In Sec. E, we show that under the assumptions of the theorem, some bound on the dimension of the registers \(B\) is necessary; otherwise one cannot have a non-trivial bound on the smooth min-entropy.
### Simple security proof for sequential device-independent quantum key distribution
The approximately independent register scenario and the associated min-entropy lower bound can be used to provide simple "proof of concept" security proofs for cryptographic protocols. In this section, we will briefly sketch a proof for sequential device-independent quantum key distribution (DIQKD) to demonstrate this idea. The protocol for sequential DIQKD used in [1] is presented as Protocol 1.
We consider a simple model for DIQKD, where Eve (the adversary) distributes a state \(\rho_{E_{A}E_{B}E}^{(0)}\) between Alice and Bob at the beginning of the protocol. Alice and Bob then use their states sequentially as given in Protocol 1. The \(k\)th round of the protocol produces the questions \(X_{k},Y_{k}\) and \(T_{k}\), the answers \(A_{k}\) and \(B_{k}\) and transforms the shared state from \(\rho_{E_{A}E_{B}E}^{(k-1)}\) to \(\rho_{E_{A}E_{B}E}^{(k)}\).
Given the questions and answers of the previous rounds, the state shared between Alice and Bob and their devices in each round can be viewed as a device for playing the CHSH game. Suppose that in the \(k\)th round, the random variables produced in the previous \(k-1\) rounds are \(r_{k-1}:=x_{1}^{k-1},y_{1}^{k-1},t_{1}^{k-1},a_{1}^{k-1},b_{1}^{k-1}\) and that the state shared between Alice and Bob is \(\rho_{E_{A}E_{B}E|r_{k-1}}^{(k-1)}\). We can then define \(\Pr[W_{k}|r_{k-1}]\) to be the winning probability of the CHSH game played by Alice and Bob using the state and their devices in the \(k\)th round. Note that Alice's device cannot distinguish whether the CHSH game is played in a round or whether the round is used for key generation.

**Sequential DIQKD protocol**

**Parameters:**

* \(\omega_{\rm exp}\) is the expected winning probability for the honest implementation of the device
* \(n\geq 1\) is the number of rounds in the protocol
* \(\gamma\in(0,1]\) is the fraction of test rounds

**Protocol:**

1. For every \(1\leq i\leq n\) perform the following steps:
    1. Alice chooses a random \(T_{i}\in\{0,1\}\) with \(\Pr[T_{i}=1]=\gamma\).
    2. Alice sends \(T_{i}\) to Bob.
    3. If \(T_{i}=0\), Alice and Bob set the questions \((X_{i},Y_{i})=(0,2)\); otherwise they sample \((X_{i},Y_{i})\) uniformly at random from \(\{0,1\}\times\{0,1\}\).
    4. Alice and Bob use their device with the questions \((X_{i},Y_{i})\) and obtain the outputs \(A_{i},B_{i}\).
2. Alice announces her questions \(X_{1}^{n}\) to Bob.
3. **Error correction:** Alice and Bob use an error correction procedure, which lets Bob obtain the raw key \(\tilde{A}_{1}^{n}\) (if the error correction protocol succeeds, then \(A_{1}^{n}=\tilde{A}_{1}^{n}\)). In case the error correction protocol aborts, they abort the QKD protocol too.
4. **Parameter Estimation:** Bob uses \(B_{1}^{n}\) and \(\tilde{A}_{1}^{n}\) to compute the average winning probability \(\omega_{\rm avg}\) on the test rounds. He aborts if \(\omega_{\rm avg}<\omega_{\rm exp}\).
5. **Privacy Amplification:** Alice and Bob use a privacy amplification protocol to create a secret key \(K\) from \(A_{1}^{n}\) (using \(\tilde{A}_{1}^{n}\) for Bob).

Protocol 1

We can further take an average over the random variables of all the previous rounds to derive the probability of winning the \(k\)th game
\[\Pr[W_{k}]=\mathbb{E}_{r_{k-1}}\left[\Pr[W_{k}|r_{k-1}]\right]. \tag{40}\]
Alice and Bob randomly sample a subset of the rounds (using the random variable \(T_{k}\)) and play the CHSH game on this subset. If the average winning probability of the CHSH game on this subset is small, they abort the protocol. For simplicity and brevity, we will assume here that the state \(\rho_{E_{A}E_{B}E}^{(0)}\) distributed between Alice and Bob at the start of the protocol by Eve has an average winning probability of at least \(\omega_{\mathrm{exp}}-\delta\), that is,
\[\frac{1}{n}\sum_{k=1}^{n}\Pr[W_{k}]\geq\omega_{\mathrm{exp}}-\delta \tag{41}\]
for some small \(\delta>0\). Using standard sampling arguments it can be argued that either this is true or the protocol aborts with high probability.
For any shared state \(\sigma_{E_{A}E_{B}E}\) (where \(E_{A}\) is held by Alice, \(E_{B}\) is held by Bob and \(E\) is held by the adversary) and local measurement devices, if Alice and Bob win the CHSH game with a probability \(\omega\in\left(\frac{3}{4},\frac{2+\sqrt{2}}{4}\right]\), then Alice's answer \(A\) to the game is random given the questions \(X,Y\) and the register \(E\) held by the adversary. This is quantified by the following entropic bound [1] (see [1, Lemma 5.3] for the following form)
\[H(A|XYE)\geq f(\omega)=\begin{cases}1-h\left(\frac{1}{2}+\frac{1}{2}\sqrt{16 \omega(\omega-1)+3}\right)&\text{if }\omega\in[\frac{3}{4},\frac{2+\sqrt{2}}{4}]\\ 0&\text{if }\omega\in[0,\frac{3}{4})\end{cases} \tag{42}\]
where \(h(\cdot)\) is the binary entropy. The function \(f\) is convex over the interval \(\left[0,\frac{2+\sqrt{2}}{4}\right]\). We plot it in the interval \([\frac{3}{4},\frac{2+\sqrt{2}}{4}]\) in Figure 1.
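The bound of Eq. 42 is easy to evaluate numerically. The sketch below (an added illustration; the helper names `h` and `f` are mine) implements it and checks the two endpoint values: zero certified randomness at the classical bound \(\omega=3/4\) and one full bit at the Tsirelson bound \(\omega=\frac{2+\sqrt{2}}{4}\).

```python
import numpy as np

def h(x):
    # binary entropy, with the conventions h(0) = h(1) = 0
    x = float(np.clip(x, 1e-300, 1 - 1e-16))
    return -x * np.log2(x) - (1 - x) * np.log2(1 - x)

def f(w):
    # randomness-rate bound of Eq. 42
    if w < 0.75:
        return 0.0
    return 1.0 - h(0.5 + 0.5 * np.sqrt(max(0.0, 16 * w * (w - 1) + 3)))

assert abs(f(0.75)) < 1e-9                      # zero rate at the classical bound
assert abs(f((2 + np.sqrt(2)) / 4) - 1) < 1e-9  # one bit at the Tsirelson bound
assert 0.0 < f(0.8) < 1.0                       # strictly between the endpoints
```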
For \(\epsilon>0\), we choose the parameter \(\omega_{\mathrm{exp}}\in[\frac{3}{4}+\delta,\frac{2+\sqrt{2}}{4}]\) to be large enough so that
\[1-f(\omega_{\mathrm{exp}}-\delta)=h\left(\frac{1}{2}+\frac{1}{2}\sqrt{16( \omega_{\mathrm{exp}}-\delta)(\omega_{\mathrm{exp}}-\delta-1)+3}\right)\leq \epsilon^{4}. \tag{43}\]
We will now use Eq. 42 to bound the von Neumann entropy of the answers given Eve's
information for the sequential DIQKD protocol. We have
\[H(A_{1}^{n}|X_{1}^{n}Y_{1}^{n}T_{1}^{n}E) =\sum_{k=1}^{n}H(A_{k}|A_{1}^{k-1}X_{1}^{n}Y_{1}^{n}T_{1}^{n}E)\] \[\overset{(1)}{=}\sum_{k=1}^{n}H(A_{k}|A_{1}^{k-1}X_{1}^{k}Y_{1}^{k }T_{1}^{k}E)\] \[\overset{(2)}{=}\sum_{k=1}^{n}H(A_{k}|X_{k}Y_{k}R_{k-1}E)\] \[=\sum_{k=1}^{n}\mathbb{E}_{r_{k-1}\sim R_{k-1}}\bigg{[}H(A_{k}|X_ {k}Y_{k}E)_{\rho_{|r_{k-1}}^{(k)}}\bigg{]}\] \[\overset{(3)}{\geq}\sum_{k=1}^{n}\mathbb{E}_{r_{k-1}\sim R_{k-1} }\left[f\left(\Pr[W_{k}|r_{k-1}]\right)\right]\] \[\geq nf\left(\frac{1}{n}\sum_{k=1}^{n}\Pr[W_{k}]\right)\] \[\geq nf(\omega_{\exp}-\delta)\geq n(1-\epsilon^{4})\]
where in (1) we have used the fact that the questions sampled in the rounds after the \(k\)th round are independent of the random variables in the previous rounds, in (2) we use the
fact that Alice's answers are independent of the random variable \(T_{k}\) given the question \(X_{k}\) and we also grouped the random variables generated in the previous round into the random variable \(R_{k-1}\coloneqq A_{1}^{k-1}B_{1}^{k-1}X_{1}^{k-1}Y_{1}^{k-1}T_{1}^{k-1}\), in (3) we use the bound in Eq. 42, and in the next two steps we use convexity of \(f\). If instead of the von Neumann entropy on the left-hand side above we had the smooth min-entropy, then the bound above would be sufficient to prove the security of DIQKD. However, this argument cannot be easily generalised to the smooth min-entropy because a chain rule like the one used in the first step does not exist for the smooth min-entropy (entropy accumulation [10, 11] generalises exactly such an argument). We can use the argument used for the approximately independent register case to transform this von Neumann entropy bound to a smooth min-entropy bound.
This entropy bound results in the following bound on the multipartite mutual information
\[I(A_{1}:\cdots:A_{n}:X_{1}^{n}Y_{1}^{n}T_{1}^{n}E) =\sum_{k=1}^{n}H(A_{k})+H(X_{1}^{n}Y_{1}^{n}T_{1}^{n}E)-H(A_{1}^ {n}X_{1}^{n}Y_{1}^{n}T_{1}^{n}E)\] \[=\sum_{k=1}^{n}H(A_{k})-H(A_{1}^{n}|X_{1}^{n}Y_{1}^{n}T_{1}^{n}E)\] \[\leq n-n(1-\epsilon^{4})=n\epsilon^{4}\]
where we have used the dimension bound \(H(A_{k})\leq 1\) for every \(k\in[n]\). This is the same multipartite mutual information bound that we derived while analysing approximately independent registers in Theorem 4.3, so we can simply reuse the smooth min-entropy bound derived there. This gives us the bound
\[H_{\min}^{2\epsilon}(A_{1}^{n}|X_{1}^{n}Y_{1}^{n}T_{1}^{n}E) \geq\sum_{k=1}^{n}H(A_{k})-3n\epsilon\log 5-O\left(\frac{1}{ \epsilon^{3}}\right)\] \[=n\big{(}1-3\epsilon\log 5\big{)}-O\left(\frac{1}{\epsilon^{3}}\right) \tag{44}\]
where we have used the fact that the answers \(A_{k}\) can always be assumed to be uniformly distributed [1, 1]. For every \(\epsilon>0\), we can choose a sufficiently large \(n\) so that this bound is large and positive.
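To get a feel for the scaling, the following sketch treats the \(O(1/\epsilon^{3})\) term in Eq. 44 as exactly \(1/\epsilon^{3}\) (an assumption made purely for illustration; the actual constant is not specified here) and computes how large \(n\) must be before the bound becomes positive:

```python
import numpy as np

def min_entropy_lower_bound(n, eps):
    # Eq. 44 with the O(1/eps^3) term modeled as exactly 1/eps^3 (an assumption).
    return n * (1 - 3 * eps * np.log2(5)) - 1 / eps**3

def threshold_n(eps):
    # Smallest n for which the modeled bound is positive.
    rate = 1 - 3 * eps * np.log2(5)
    assert rate > 0, "eps too large: per-round rate is non-positive"
    return int(np.ceil(1 / (eps**3 * rate))) + 1

for eps in (0.1, 0.05, 0.01):
    n = threshold_n(eps)
    assert min_entropy_lower_bound(n, eps) > 0
    print(f"eps={eps}: bound positive from n ~ {n:.2e}")
```

The \(1/\epsilon^{3}\) term forces \(n\) to grow rapidly as \(\epsilon\) shrinks, which illustrates the "existence type" nature of this argument.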
We note that this method is only able to provide "proof of concept" or existence type security proofs. This proof method couples the value of the security parameter for privacy amplification \(\epsilon\) with the average winning probability, which is not desirable. The parameter \(\epsilon\) is chosen according to the security requirements of the protocol and is typically very small. For such values of \(\epsilon\), the average winning probability of the protocol will have to be extremely close to the maximum and we cannot realistically expect practical implementations to achieve such high winning probabilities. However, we do expect that this method will make it easier to create "proof of concept" type proofs for new cryptographic protocols in the future.
## 5 Approximate entropy accumulation
In general, it is very difficult to estimate the smooth min-entropy of states produced during cryptographic protocols. The entropy accumulation theorem (EAT) [1] provides a tight and simple lower bound for the smooth min-entropy \(H^{\epsilon}_{\min}(A^{n}_{1}|B^{n}_{1}E)_{\rho}\) of sequential processes, under certain Markov chain conditions. The state \(\rho_{A^{n}_{1}B^{n}_{1}E}\) in the setting for EAT is produced by a sequential process of the form shown in Figure 2. The parties implementing the protocol begin with the registers \(R_{0}\) and \(E\). In the context of a cryptographic protocol, the register \(R_{0}\) is usually held by the honest parties, whereas the register \(E\) is held by the adversary. Then, in each round \(k\in[n]\) of the process, a channel \(\mathcal{M}_{k}:R_{k-1}\to A_{k}B_{k}R_{k}\) is applied on the register \(R_{k-1}\) to produce the registers \(A_{k},B_{k}\) and \(R_{k}\). The registers \(A^{n}_{1}\) usually contain a partially secret raw key and the registers \(B^{n}_{1}\) contain the side information about \(A^{n}_{1}\) revealed to the adversary during the protocol. EAT requires that for every \(k\in[n]\), the side information \(B_{k}\) satisfies the Markov chain \(A^{k-1}_{1}\leftrightarrow B^{k-1}_{1}E\leftrightarrow B_{k}\), that is, the side information revealed in the \(k\)th round does not reveal anything more about the secret registers of the previous rounds than was already known to the adversary through \(B^{k-1}_{1}E\). Under this assumption, EAT provides the following lower bound for the smooth min-entropy
\[H^{\epsilon}_{\min}(A^{n}_{1}|B^{n}_{1}E)_{\rho}\geq\sum_{k=1}^{n}\inf_{\omega _{R_{k-1}R}}H(A_{k}|B_{k}R)_{\mathcal{M}_{k}(\omega_{R_{k-1}R})}-c\sqrt{n} \tag{45}\]
where the infimum is taken over all input states to the channels \(\mathcal{M}_{k}\) and \(c>0\) is a constant depending only on \(|A|\) (the size of the registers \(A_{k}\)) and \(\epsilon\). We will state and prove an approximate version of EAT. Consider the sequential process in Figure 2 again. Now, suppose that the channels \(\mathcal{M}_{k}\) do not necessarily satisfy the Markov chain conditions mentioned above, but each channel \(\mathcal{M}_{k}\) can be \(\epsilon\)-approximated by a channel \(\mathcal{M}^{\prime}_{k}\) which satisfies the Markov chain \(A^{k-1}_{1}\leftrightarrow B^{k-1}_{1}E\leftrightarrow B_{k}\) for a certain collection of inputs. The approximate entropy accumulation theorem below provides a lower bound on the smooth min-entropy in such a setting.

Figure 2: The setting for entropy accumulation and Theorem 5.1. For \(k\in[n]\), the channels \(\mathcal{M}_{k}\) are repeatedly applied to the registers \(R_{k-1}\) to produce the “secret” information \(A_{k}\) and the side information \(B_{k}\).

The proof of this theorem again uses the technique based on the smooth min-entropy triangle inequality developed in the previous section. In this setting too, we have a chain of approximations. For each \(k\in[n]\), we have
\[\rho_{A_{1}^{k}B_{1}^{k}E}=\operatorname{tr}_{R_{k}}\circ\mathcal{M}_{k}(\rho_{A _{1}^{k-1}B_{1}^{k-1}E})\approx_{\epsilon}\operatorname{tr}_{R_{k}}\circ \mathcal{M}_{k}^{\prime}(\rho_{A_{1}^{k-1}B_{1}^{k-1}E})=:\sigma_{A_{1}^{k}B_{ 1}^{k}E}^{(k)}.\]
According to the Markov chain assumption for the channels \(\mathcal{M}_{k}^{\prime}\), the state \(\sigma_{A_{1}^{k}B_{1}^{k}E}^{(k)}\), satisfies the Markov chain \(A_{1}^{k-1}\leftrightarrow B_{1}^{k-1}E\leftrightarrow B_{k}\). Therefore, we expect that the register \(A_{k}\) adds some entropy to the smooth min-entropy \(H_{\min}^{\epsilon^{\prime}}(A_{1}^{n}|B_{1}^{n}E)_{\rho}\) and that the information leaked through \(B_{1}^{n}\) is not too large. We show that this is indeed the case in the approximate entropy accumulation theorem.
The approximate entropy accumulation theorem can be used to analyse and prove the security of cryptographic protocols under certain imperfections. For example, the entropy accumulation theorem can be used to prove the security of sequential device independent quantum key distribution (DIQKD) protocols [1]. In these protocols, the side information \(B_{k}\) produced during each of the rounds are the questions used during the round to play a non-local game, like the CHSH game. In the ideal case, these questions are sampled independently of everything which came before. As an example of an imperfection, we can imagine that some physical effect between the memory storing the secret bits \(A_{1}^{k-1}\) and the device producing the questions may lead to a small correlation between the side information produced during the \(k\)th round and the secret bits \(A_{1}^{k-1}\) (also see [11, 12]). The approximate entropy accumulation theorem below can be used to prove security of DIQKD under such imperfections. We do not, however, pursue this example here and leave the applications of this theorem for future work.
**Theorem 5.1**.: _For \(k\in[n]\), let the registers \(A_{k}\) and \(B_{k}\) be such that \(|A_{k}|=|A|\) and \(|B_{k}|=|B|\). For \(k\in[n]\), let \(\mathcal{M}_{k}\) be channels from \(R_{k-1}\to R_{k}A_{k}B_{k}\) and_
\[\rho_{A_{1}^{n}B_{1}^{n}E}=\operatorname{tr}_{R_{n}}\circ\mathcal{M}_{n} \circ\cdots\circ\mathcal{M}_{1}(\rho_{R_{0}E}) \tag{46}\]
_be the state produced by applying these maps sequentially. Suppose the channels \(\mathcal{M}_{k}\) are such that for every \(k\in[n]\), there exists a channel \(\mathcal{M}_{k}^{\prime}\) from \(R_{k-1}\to R_{k}A_{k}B_{k}\) such that_
1. \(\mathcal{M}_{k}^{\prime}\)__\(\epsilon\)_-approximates_ \(\mathcal{M}_{k}\) _in the diamond norm:_ \[\frac{1}{2}\left\|\mathcal{M}_{k}-\mathcal{M}_{k}^{\prime}\right\|_{\diamond}\leq\epsilon\] (47)
2. _For every choice of a sequence of channels_ \(\mathcal{N}_{i}\in\{\mathcal{M}_{i},\mathcal{M}_{i}^{\prime}\}\) _for_ \(i\in[k-1]\)_, the state_ \(\mathcal{M}_{k}^{\prime}\circ\mathcal{N}_{k-1}\circ\cdots\circ\mathcal{N}_{1} (\rho_{R_{0}E})\) _satisfies the Markov chain_ \[A_{1}^{k-1}\leftrightarrow B_{1}^{k-1}E\leftrightarrow B_{k}.\] (48)
_Then, for \(0<\delta,\epsilon_{1},\epsilon_{2}<1\) such that \(\epsilon_{1}+\epsilon_{2}<1\), \(\alpha\in\left(1,1+\frac{1}{\log(1+2|A|)}\right)\) and \(\beta>1\), we have_
\[H_{\min}^{\epsilon_{1}+\epsilon_{2}}(A_{1}^{n}|B_{1}^{n}E)_{\rho}\geq\sum_{k=1}^{n}\inf_{\omega_{R_{k-1}\tilde{R}_{k-1}}}H(A_{k}|B_{k}\tilde{R}_{k-1})_{\mathcal{M}_{k}^{\prime}(\omega_{R_{k-1}\tilde{R}_{k-1}})}-n(\alpha-1)\log^{2}(1+2|A|)\] \[-\frac{\alpha}{\alpha-1}n\log\left(1+\delta\left(2^{\frac{\alpha-1}{\alpha}2\log(|A||B|)}-1\right)\right)\] \[-\frac{\alpha}{\alpha-1}nz_{\beta}(\epsilon,\delta)-\frac{1}{\alpha-1}\left(g_{1}(\epsilon_{2},\epsilon_{1})+\frac{\alpha g_{0}(\epsilon_{1})}{\beta-1}\right). \tag{49}\]
_where_
\[z_{\beta}(\epsilon,\delta):=\frac{\beta+1}{\beta-1}\log\left( \left(1+\sqrt{(1-\delta)\epsilon}\right)^{\frac{\beta}{\beta+1}}+\left(\frac{ \sqrt{(1-\delta)\epsilon}}{\delta^{\beta}}\right)^{\frac{1}{\beta+1}}\right) \tag{50}\]
_and \(g_{1}(x,y)=-\log(1-\sqrt{1-x^{2}})-\log(1-y^{2})\) and the infimum in Eq. 49 is taken over all input states to the channels \(\mathcal{M}_{k}^{\prime}\)._
For the choice of \(\beta=2\), \(\delta=\epsilon^{\frac{1}{8}}\), we have
\[z_{2}(\epsilon,\delta)\leq 3\log\left(\left(1+\epsilon^{\frac{1}{2}}\right)^{ \frac{2}{3}}+\epsilon^{\frac{1}{12}}\right).\]
We also have that
\[\log\left(1+\delta 2^{\frac{\alpha-1}{\alpha}2\log(|A||B|)}\right)\leq(|A||B |)^{2}\epsilon^{\frac{1}{8}}.\]
Finally, if we define \(\epsilon_{r}:=(|A||B|)^{2}\epsilon^{\frac{1}{8}}+3\log\left(\left(1+\epsilon^{\frac{1}{2}}\right)^{\frac{2}{3}}+\epsilon^{\frac{1}{12}}\right)\), and choose \(\alpha=1+\sqrt{\epsilon_{r}}\), we get the bound
\[H_{\min}^{\epsilon_{1}+\epsilon_{2}}(A_{1}^{n}|B_{1}^{n}E)_{\rho}\geq\sum_{k=1}^{n}\inf_{\omega_{R_{k-1}\tilde{R}_{k-1}}}H(A_{k}|B_{k}\tilde{R}_{k-1})_{\mathcal{M}_{k}^{\prime}(\omega_{R_{k-1}\tilde{R}_{k-1}})}\] \[-n\sqrt{\epsilon_{r}}(\log^{2}(1+2|A|)+2)-\frac{1}{\sqrt{\epsilon_{r}}}\left(g_{1}(\epsilon_{2},\epsilon_{1})+2g_{0}(\epsilon_{1})\right) \tag{51}\]
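The simplification of \(z_{\beta}\) used above for \(\beta=2\) and \(\delta=\epsilon^{1/8}\) can be sanity-checked numerically (a minimal sketch; `z` implements Eq. 50 directly):

```python
import numpy as np

def z(beta, eps, delta):
    # z_beta(eps, delta) as defined in Eq. 50.
    e = np.sqrt((1 - delta) * eps)
    a = (1 + e) ** (beta / (beta + 1))
    b = (e / delta**beta) ** (1 / (beta + 1))
    return (beta + 1) / (beta - 1) * np.log2(a + b)

def z2_simplified(eps):
    # Claimed upper bound for beta = 2 with delta = eps**(1/8).
    return 3 * np.log2((1 + eps**0.5) ** (2 / 3) + eps ** (1 / 12))

# The simplified expression upper-bounds z_2 across many orders of magnitude of eps.
for eps in np.logspace(-12, -1, 60):
    assert z(2, eps, eps ** (1 / 8)) <= z2_simplified(eps) + 1e-12
```

Both expressions tend to zero as \(\epsilon\to 0\), but only at the slow \(\epsilon^{1/12}\) rate discussed below.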
The entropy loss per round in the above bound behaves as \(\sim\epsilon^{\frac{1}{24}}\). This dependence on \(\epsilon\) is indeed very poor. In comparison, we can carry out a similar proof argument for classical probability distributions to get a dependence of \(O(\sqrt{\epsilon})\) (Theorem F.1). The exponent of \(\epsilon\) in our bound seems to be almost a factor of \(12\) off from the best possible bound. Roughly speaking, while carrying out the proof classically, we can bound the relevant channel divergences in the proof by \(O\left(\epsilon\right)\), whereas in Eq. 51, we were only able to bound the channel
divergence by \(\sim\epsilon^{1/12}\). This leads to the deterioration of performance we see here as compared to the classical case. We will discuss this further in Sec. 5.4.
In order to prove this theorem, we will use a channel divergence based chain rule. Recently proven chain rules for \(\alpha\)-Renyi relative entropy [13, Corollary 5.2] state that for \(\alpha>1\) and states \(\rho_{A}\) and \(\sigma_{A}\), and channels \(\mathcal{E}_{A\to B}\) and \(\mathcal{F}_{A\to B}\), we have
\[\tilde{D}_{\alpha}(\mathcal{E}_{A\to B}(\rho_{A})\|\mathcal{F}_{A \to B}(\sigma_{A}))\leq\tilde{D}_{\alpha}(\rho_{A}\|\sigma_{A})+\tilde{D}_{ \alpha}^{\rm reg}(\mathcal{E}_{A\to B}\|\mathcal{F}_{A\to B}) \tag{52}\]
where \(\tilde{D}_{\alpha}^{\rm reg}(\mathcal{E}_{A\to B}\|\mathcal{F}_{A \to B}):=\lim_{n\to\infty}\frac{1}{n}\tilde{D}_{\alpha}(\mathcal{E}_{A\to B}^{ \otimes n}\|\mathcal{F}_{A\to B}^{\otimes n})\) and \(\tilde{D}_{\alpha}(\cdot\|\cdot)\) is the channel divergence.
Now suppose for a moment that the maps in Theorem 5.1 were guaranteed to satisfy \(\tilde{D}_{\alpha}^{\rm reg}(\mathcal{M}_{k}\,\|\,\mathcal{M}_{k}^{\prime})\leq\epsilon\) for every \(k\) and some \(\alpha>1\). Then, we could use the chain rule in Eq. 52 as follows
\[\tilde{D}_{\alpha}(\mathcal{M}_{n} \circ\cdots\circ\mathcal{M}_{1}(\rho_{R_{0}E})\|\,\mathcal{M}_{n}^ {\prime}\circ\cdots\circ\mathcal{M}_{1}^{\prime}(\rho_{R_{0}E}))\] \[\leq\tilde{D}_{\alpha}(\mathcal{M}_{n-1}\circ\cdots\circ\mathcal{ M}_{1}(\rho_{R_{0}E})\|\,\mathcal{M}_{n-1}^{\prime}\circ\cdots\circ\mathcal{M}_{1}^ {\prime}(\rho_{R_{0}E}))+\tilde{D}_{\alpha}^{\rm reg}(\mathcal{M}_{n}\,\|\, \mathcal{M}_{n}^{\prime})\] \[\leq\cdots\] \[\leq\tilde{D}_{\alpha}(\rho_{R_{0}E}\|\rho_{R_{0}E})+\sum_{k=1}^{ n}\tilde{D}_{\alpha}^{\rm reg}(\mathcal{M}_{k}\,\|\,\mathcal{M}_{k}^{\prime})\] \[\leq n\epsilon.\]
Once we have the above result we can simply use the well known relation between smooth max-relative entropy and \(\alpha\)-Renyi relative entropy [14, Proposition 6.5] to get the bound
\[D_{\max}^{\epsilon^{\prime}}(\mathcal{M}_{n} \circ\cdots\circ\mathcal{M}_{1}(\rho_{R_{0}E})\|\,\mathcal{M}_{n}^ {\prime}\circ\cdots\circ\mathcal{M}_{1}^{\prime}(\rho_{R_{0}E}))\] \[\leq\tilde{D}_{\alpha}(\mathcal{M}_{n}\circ\cdots\circ\mathcal{M} _{1}(\rho_{R_{0}E})\|\,\mathcal{M}_{n}^{\prime}\circ\cdots\circ\mathcal{M}_{1} ^{\prime}(\rho_{R_{0}E}))+\frac{g_{0}(\epsilon^{\prime})}{\alpha-1}\] \[\leq n\epsilon+O(1).\]
This bound can subsequently be used in Lemma 3.5 to relate the smooth min-entropy of the real state \(\mathcal{M}_{n}\circ\cdots\circ\mathcal{M}_{1}(\rho_{R_{0}E})\) with the \(\alpha\)-Renyi conditional entropy of the auxiliary state \(\mathcal{M}_{n}^{\prime}\circ\cdots\circ\mathcal{M}_{1}^{\prime}(\rho_{R_{0}E})\), for which we can use the original entropy accumulation theorem.
In order to prove Theorem 5.1, we broadly follow this idea. However, the condition \(\|\mathcal{M}_{k}-\mathcal{M}_{k}^{\prime}\|_{\diamond}\leq\epsilon\) does not lead to any kind of bound on \(\tilde{D}_{\alpha}^{\rm reg}\) or any other channel divergence. We will get around this issue by instead using mixed channels \(\mathcal{M}_{k}^{\delta}:=(1-\delta)\,\mathcal{M}_{k}^{\prime}+\delta\,\mathcal{ M}_{k}\). Also, instead of trying to bound channel divergence in terms of \(D_{\alpha}^{\rm reg}\), we will bound the \(D_{\alpha}^{\#}\) (defined in the next section) channel divergence and use its chain rule. We develop the relevant \(\alpha\)-Renyi divergence bounds for this divergence in the next two subsections and then prove the theorem above in Sec 5.3.
### Divergence bound for approximately equal states
We will use the sharp Renyi divergence \(D_{\alpha}^{\#}\) defined in Ref. [11] (see [1] for the following equivalent definition) in this section. For \(\alpha>1\) and two positive operators \(P\) and \(Q\), it is defined
\[D_{\alpha}^{\#}(P\|Q)\coloneqq\min_{A\geq P}\hat{D}_{\alpha}(A\|Q) \tag{53}\]
where \(\hat{D}_{\alpha}(A\|Q)\) is the \(\alpha\)-Renyi geometric divergence [10]. For \(\alpha>1\), it is defined as
\[\hat{D}_{\alpha}(A\|Q)=\begin{cases}\frac{1}{\alpha-1}\log\operatorname{tr} \left(Q\left(Q^{-\frac{1}{2}}AQ^{-\frac{1}{2}}\right)^{\alpha}\right)&\text{if }A \ll Q\\ \infty&\text{otherwise.}\end{cases} \tag{54}\]
The optimisation above ranges over all operators \(A\geq P\); in general, such an \(A\) is unnormalised. We will prove a bound on \(D_{\alpha}^{\#}\) between two states in terms of the distance between them and their max-relative entropy. To prove this bound, we require the following simple generalisation of the pinching inequality (see for example [14, Section 2.6.3]).
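For intuition, \(\hat{D}_{\alpha}\) is straightforward to evaluate numerically when \(Q\) is full rank (a sketch using an eigendecomposition for the fractional matrix powers; the example states below are arbitrary choices, not taken from the text):

```python
import numpy as np

def mat_pow(H, p):
    # Fractional power of a Hermitian matrix via its eigendecomposition.
    w, v = np.linalg.eigh(H)
    return (v * w**p) @ v.conj().T

def geometric_renyi(A, Q, alpha):
    # \hat D_alpha(A||Q) = 1/(alpha-1) * log2 tr( Q (Q^{-1/2} A Q^{-1/2})^alpha ),
    # assuming Q is invertible (so the support condition A << Q holds trivially).
    Q_inv_half = mat_pow(Q, -0.5)
    inner = mat_pow(Q_inv_half @ A @ Q_inv_half, alpha)
    return float(np.log2(np.trace(Q @ inner).real) / (alpha - 1))

rho = np.array([[0.75, 0.2], [0.2, 0.25]])  # an arbitrary qubit state
sigma = np.eye(2) / 2                       # the maximally mixed state
print(geometric_renyi(rho, sigma, 2.0))     # strictly positive for distinct states
print(geometric_renyi(sigma, sigma, 2.0))   # zero when A equals Q
```

For normalized inputs this divergence is non-negative and vanishes exactly when the two states coincide.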
**Lemma 5.2** (Asymmetric pinching).: _For \(t>0\), a positive semidefinite operator \(X\geq 0\) and orthogonal projections \(\Pi\) and \(\Pi_{\perp}=\mathds{1}-\Pi\), we have that_
\[X\leq(1+t)\Pi X\Pi+\left(1+\frac{1}{t}\right)\Pi_{\perp}X\Pi_{\perp}. \tag{55}\]
Proof.: We will write the positive matrix \(X\) as the block matrix
\[X=\begin{pmatrix}X_{1}&X_{2}\\ X_{2}^{*}&X_{3}\end{pmatrix}\]
where the blocks are partitioned according to the direct sum \(\operatorname{im}(\Pi)\oplus\operatorname{im}(\Pi_{\perp})\). Then, the statement in the Lemma is equivalent to proving that
\[\begin{pmatrix}X_{1}&X_{2}\\ X_{2}^{*}&X_{3}\end{pmatrix}\leq\begin{pmatrix}(1+t)X_{1}&0\\ 0&0\end{pmatrix}+\begin{pmatrix}0&0\\ 0&\left(1+\frac{1}{t}\right)X_{3}\end{pmatrix}\]
which is equivalent to proving that
\[0\leq\begin{pmatrix}tX_{1}&-X_{2}\\ -X_{2}^{*}&\frac{1}{t}X_{3}\end{pmatrix}.\]
This is true because
\[\begin{pmatrix}tX_{1}&-X_{2}\\ -X_{2}^{*}&\frac{1}{t}X_{3}\end{pmatrix}=\begin{pmatrix}-t^{1/2}&0\\ 0&t^{-1/2}\end{pmatrix}\begin{pmatrix}X_{1}&X_{2}\\ X_{2}^{*}&X_{3}\end{pmatrix}\begin{pmatrix}-t^{1/2}&0\\ 0&t^{-1/2}\end{pmatrix}\geq 0\]
since \(X\geq 0\).
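Lemma 5.2 is easy to spot-check numerically on random positive semidefinite matrices and random projections:

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 6, 3  # ambient dimension and rank of the projection

def random_psd(d):
    B = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return B @ B.conj().T

for _ in range(25):
    X = random_psd(d)
    # Random orthogonal projection Pi of rank r and its complement.
    Q, _unused = np.linalg.qr(rng.normal(size=(d, r)) + 1j * rng.normal(size=(d, r)))
    Pi = Q @ Q.conj().T
    Pi_perp = np.eye(d) - Pi
    for t in (0.1, 1.0, 7.5):
        rhs = (1 + t) * (Pi @ X @ Pi) + (1 + 1 / t) * (Pi_perp @ X @ Pi_perp)
        # X <= rhs  iff  rhs - X is positive semidefinite (up to numerical error).
        assert np.linalg.eigvalsh(rhs - X).min() >= -1e-8
print("asymmetric pinching verified on random instances")
```

Note the trade-off the lemma encodes: a small \(t\) keeps the \(\Pi\)-block close to \(X\) at the cost of inflating the \(\Pi_{\perp}\)-block, which is exactly how it is used in the proof of Lemma 5.3.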
**Lemma 5.3**.: _Let \(\epsilon>0\) and \(\alpha\in(1,\infty)\), \(\rho\) and \(\sigma\) be two normalized quantum states on the Hilbert space \(\mathbb{C}^{n}\) such that \(\frac{1}{2}\left\|\rho-\sigma\right\|_{1}\leq\epsilon\) and also \(D_{\max}(\rho\|\sigma)\leq d<\infty\), then we have the bound_
\[D_{\alpha}^{\#}(\rho\|\sigma)\leq\frac{\alpha+1}{\alpha-1}\log \left(\left(1+\sqrt{\epsilon}\right)^{\frac{\alpha}{\alpha+1}}+\left(2^{ \alpha d}\sqrt{\epsilon}\right)^{\frac{1}{\alpha+1}}\right). \tag{56}\]
**Note:** For a fixed \(\alpha\in(1,\infty)\), this upper bound tends to zero as \(\epsilon\to 0\). On the other hand, for a fixed \(\epsilon\in(0,1)\), the upper bound tends to infinity as \(\alpha\to 1\) (that is, the bound becomes trivial). In Appendix B, we show that a bound of this form for \(D_{\alpha}^{\#}\) necessarily diverges for \(\epsilon>0\) as \(\alpha\to 1\).
Proof.: Since, \(D_{\max}(\rho\|\sigma)<\infty\), we have that \(\rho\ll\sigma\). We can assume that \(\sigma\) is invertible. If it was not, then we could always restrict our vector space to the subspace \(\operatorname{supp}(\sigma)\).
Let \(\rho-\sigma=P-Q\), where \(P\geq 0\) is the positive part of the matrix \(\rho-\sigma\) and \(Q\geq 0\) is its negative part. We then have that \(\operatorname{tr}(P)=\operatorname{tr}(Q)\leq\epsilon\).
Further, let
\[\sigma^{-\frac{1}{2}}P\sigma^{-\frac{1}{2}}=\sum_{i=1}^{n}\lambda _{i}\left|x_{i}\right\rangle\left\langle x_{i}\right| \tag{57}\]
be the eigenvalue decomposition of \(\sigma^{-\frac{1}{2}}P\sigma^{-\frac{1}{2}}\). Define the real vector \(q\in\mathbb{R}^{n}\) as
\[q(i):=\left\langle x_{i}\right|\sigma\left|x_{i}\right\rangle.\]
Note that \(q\) is a probability distribution: the vectors \(\left|x_{i}\right\rangle\) form an orthonormal basis, so \(\sum_{i=1}^{n}q(i)=\operatorname{tr}(\sigma)=1\). Observe that
\[\mathbb{E}_{I\sim q}\left[\lambda_{I}\right] =\sum_{i=1}^{n}\lambda_{i}\left\langle x_{i}\right|\sigma\left|x_ {i}\right\rangle\] \[=\operatorname{tr}\left(\sigma\sum_{i=1}^{n}\lambda_{i}\left|x_ {i}\right\rangle\left\langle x_{i}\right|\right)\] \[=\operatorname{tr}\left(\sigma\sigma^{-\frac{1}{2}}P\sigma^{- \frac{1}{2}}\right)\] \[=\operatorname{tr}(P)\] \[\leq\epsilon.\]
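The chain of equalities above, \(\mathbb{E}_{I\sim q}[\lambda_{I}]=\operatorname{tr}(P)\), is a basis-independent linear-algebra identity; a quick numerical check (with arbitrary random \(\sigma\) and \(P\)):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5

def random_psd(n):
    B = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return B @ B.conj().T

sigma = random_psd(n)
sigma /= np.trace(sigma).real  # a full-rank state
P = 0.01 * random_psd(n)       # stands in for the positive part of rho - sigma

w, v = np.linalg.eigh(sigma)
sigma_inv_half = (v * w**-0.5) @ v.conj().T
lam, x = np.linalg.eigh(sigma_inv_half @ P @ sigma_inv_half)
q = np.einsum('ij,jk,ki->i', x.conj().T, sigma, x).real  # q(i) = <x_i|sigma|x_i>

assert np.isclose(q.sum(), 1.0)                        # q is a probability distribution
assert np.isclose((q * lam).sum(), np.trace(P).real)   # E_q[lambda_I] = tr(P)
```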
Also, observe that \(\lambda_{i}\geq 0\) for all \(i\in[n]\) because \(\sigma^{-\frac{1}{2}}P\sigma^{-\frac{1}{2}}\geq 0\). Let's define
\[S:=\{i\in[n]:\lambda_{i}\leq\sqrt{\epsilon}\}. \tag{58}\]
Since, \(\lambda_{i}\geq 0\) for all \(i\in[n]\), we can use the Markov inequality to show:
\[\Pr_{q}(I\in S^{c}) =\Pr_{q}(\lambda_{I}>\sqrt{\epsilon})\] \[\leq\frac{\mathbb{E}_{I\sim q}\left[\lambda_{I}\right]}{\sqrt{ \epsilon}}\] \[\leq\sqrt{\epsilon}.\]
Thus, if we define the projectors \(\Pi:=\sum_{i\in S}\left|x_{i}\right\rangle\left\langle x_{i}\right|\) and \(\Pi_{\perp}:=\sum_{i\in S^{c}}\left|x_{i}\right\rangle\left\langle x_{i}\right| =\mathds{1}-\Pi\), we have
\[\operatorname{tr}(\sigma\Pi_{\perp}) =\sum_{i\in S^{c}}\left\langle x_{i}\right|\sigma\left|x_{i}\right\rangle\] \[=\Pr_{q}(I\in S^{c})\] \[\leq\sqrt{\epsilon}. \tag{59}\]
Moreover, by the definition of set \(S\) (Eq. 58) we have
\[\Pi\sigma^{-\frac{1}{2}}P\sigma^{-\frac{1}{2}}\Pi =\sum_{i\in S}\lambda_{i}\left|x_{i}\right\rangle\left\langle x_{i}\right|\] \[\leq\sqrt{\epsilon}\Pi \tag{60}\]
and using \(D_{\max}(\rho\|\sigma)\leq d\), we have that
\[\sigma^{-\frac{1}{2}}\rho\sigma^{-\frac{1}{2}}\leq 2^{d}\,\mathds{1}\,. \tag{61}\]
Now, observe that since \(\sigma^{-\frac{1}{2}}\rho\sigma^{-\frac{1}{2}}\geq 0\), for an arbitrary \(t>0\), using Lemma 5.2 we have
\[\sigma^{-\frac{1}{2}}\rho\sigma^{-\frac{1}{2}} \leq(1+t)\Pi\sigma^{-\frac{1}{2}}\rho\sigma^{-\frac{1}{2}}\Pi+ \left(1+\frac{1}{t}\right)\Pi_{\perp}\sigma^{-\frac{1}{2}}\rho\sigma^{-\frac{ 1}{2}}\Pi_{\perp}\] \[\leq(1+t)\Pi\left(\mathds{1}+\sigma^{-\frac{1}{2}}P\sigma^{-\frac {1}{2}}\right)\Pi+\left(1+\frac{1}{t}\right)2^{d}\Pi_{\perp}\] \[\leq(1+t)(1+\sqrt{\epsilon})\Pi+\left(1+\frac{1}{t}\right)2^{d}\Pi _{\perp}\]
where we have used \(\rho\leq\sigma+P\) to bound the first term and Eq. 61 to bound the second term in the second line, and Eq. 60 to bound \(\Pi\sigma^{-\frac{1}{2}}P\sigma^{-\frac{1}{2}}\Pi\) in the last step.
We will define \(A_{t}:=(1+t)(1+\sqrt{\epsilon})\sigma^{\frac{1}{2}}\Pi\sigma^{\frac{1}{2}}+ \left(1+\frac{1}{t}\right)2^{d}\sigma^{\frac{1}{2}}\Pi_{\perp}\sigma^{\frac{1 }{2}}\). Above, we have shown that \(A_{t}\geq\rho\) for every \(t>0\). Therefore, for each \(t>0\), \(D_{\alpha}^{\#}(\rho\|\sigma)\leq\hat{D}_{\alpha}(A_{t}\|\sigma)\). We will now
bound \(\hat{D}_{\alpha}(A_{t}\|\sigma)\) for \(\alpha\in(1,\infty)\) as:
\[\hat{D}_{\alpha}(A_{t}\|\sigma) =\frac{1}{\alpha-1}\log\operatorname{tr}\left(\sigma\left(\sigma^{- \frac{1}{2}}A_{t}\sigma^{-\frac{1}{2}}\right)^{\alpha}\right)\] \[=\frac{1}{\alpha-1}\log\operatorname{tr}\left(\sigma\left((1+t)( 1+\sqrt{\epsilon})\Pi+\left(1+\frac{1}{t}\right)2^{d}\Pi_{\perp}\right)^{ \alpha}\right)\] \[=\frac{1}{\alpha-1}\log\operatorname{tr}\left(\sigma\left((1+t)^ {\alpha}(1+\sqrt{\epsilon})^{\alpha}\Pi+\left(1+\frac{1}{t}\right)^{\alpha}2^{ d\alpha}\Pi_{\perp}\right)\right)\] \[=\frac{1}{\alpha-1}\log\left((1+t)^{\alpha}(1+\sqrt{\epsilon})^{ \alpha}\operatorname{tr}\left(\sigma\Pi\right)+\left(1+\frac{1}{t}\right)^{ \alpha}2^{d\alpha}\operatorname{tr}\left(\sigma\Pi_{\perp}\right)\right)\] \[\leq\frac{1}{\alpha-1}\log\left((1+t)^{\alpha}(1+\sqrt{\epsilon} )^{\alpha}+\left(1+\frac{1}{t}\right)^{\alpha}2^{d\alpha}\sqrt{\epsilon}\right)\]
where in the last line we use \(\operatorname{tr}(\sigma\Pi)\leq 1\) and \(\operatorname{tr}(\sigma\Pi_{\perp})\leq\sqrt{\epsilon}\) (Eq. 59). Finally, since \(t>0\) was arbitrary, we can choose the \(t>0\) which minimizes the right-hand side. For this choice of \(t_{\min}=\left(\frac{2^{\alpha d}\sqrt{\epsilon}}{\left(1+\sqrt{\epsilon} \right)^{\alpha}}\right)^{\frac{1}{\alpha+1}}\), we get
\[\hat{D}_{\alpha}(A_{t_{\min}}\|\sigma)\leq\frac{\alpha+1}{\alpha-1}\log\left(\left(1+\sqrt{\epsilon}\right)^{\frac{\alpha}{\alpha+1}}+\left(2^{\alpha d}\sqrt{\epsilon}\right)^{\frac{1}{\alpha+1}}\right)\]
which proves the required bound.
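The closed form for \(t_{\min}\) comes from minimising \(t\mapsto a(1+t)^{\alpha}+b(1+1/t)^{\alpha}\) with \(a=(1+\sqrt{\epsilon})^{\alpha}\) and \(b=2^{\alpha d}\sqrt{\epsilon}\); the minimum value is \(\left(a^{1/(\alpha+1)}+b^{1/(\alpha+1)}\right)^{\alpha+1}\), which can be confirmed numerically (a sketch with arbitrary example values for \(\alpha\), \(\epsilon\) and \(d\)):

```python
import numpy as np

def g(t, a, b, alpha):
    # The function of t minimised in the proof of Lemma 5.3.
    return a * (1 + t) ** alpha + b * (1 + 1 / t) ** alpha

alpha, eps, d = 1.5, 1e-4, 2.0  # arbitrary example values
a = (1 + np.sqrt(eps)) ** alpha
b = 2 ** (alpha * d) * np.sqrt(eps)

t_min = (b / a) ** (1 / (alpha + 1))
closed_form = (a ** (1 / (alpha + 1)) + b ** (1 / (alpha + 1))) ** (alpha + 1)

assert np.isclose(g(t_min, a, b, alpha), closed_form)
ts = np.logspace(-4, 4, 2001)  # t_min beats a wide grid of alternatives
assert g(t_min, a, b, alpha) <= g(ts, a, b, alpha).min() + 1e-12
```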
### Bounding the channel divergence for two channels close to each other
Suppose there are two channels \(\mathcal{N}\) and \(\mathcal{M}\) mapping registers from the space \(A\) to \(B\) such that \(\frac{1}{2}\left\|\mathcal{N}-\mathcal{M}\right\|_{\diamond}\leq\epsilon\). In general, the channel divergence between two such channels can be infinite, because there may be states \(\rho\) such that \(\mathcal{N}(\rho)\not\ll\mathcal{M}(\rho)\). In order to get around this issue, we will use the \(\delta\)-mixed channel \(\mathcal{M}_{\delta}\). For \(\delta\in(0,1)\), we define \(\mathcal{M}_{\delta}\) as
\[\mathcal{M}_{\delta}:=(1-\delta)\mathcal{M}+\delta\mathcal{N}.\]
This guarantees that \(D_{\max}(\mathcal{N}\left\|\mathcal{M}_{\delta})\leq\log\frac{1}{\delta}\), which is enough to ensure that the divergences we are interested in are finite. Moreover, by mixing \(\mathcal{M}\) with \(\mathcal{N}\), we only decrease the distance:
\[\frac{1}{2}\left\|\mathcal{M}_{\delta}-\mathcal{N}\right\|_{\diamond} =\frac{1}{2}\left\|(1-\delta)\mathcal{M}+\delta\mathcal{N}-\mathcal{ N}\right\|_{\diamond}\] \[=(1-\delta)\frac{1}{2}\left\|\mathcal{M}-\mathcal{N}\right\|_{\diamond}\] \[\leq(1-\delta)\epsilon. \tag{62}\]
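Both facts — the bound \(D_{\max}(\mathcal{N}\,\|\,\mathcal{M}_{\delta})\leq\log\frac{1}{\delta}\) and the \((1-\delta)\) contraction of the distance — can be illustrated on a fixed input, with random density matrices standing in for the channel outputs \(\mathcal{N}(\rho)\) and \(\mathcal{M}(\rho)\) (a sketch on states, not the channels themselves):

```python
import numpy as np

rng = np.random.default_rng(2)

def random_state(d):
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = A @ A.conj().T
    return rho / np.trace(rho).real

def trace_dist(r, s):
    return 0.5 * np.abs(np.linalg.eigvalsh(r - s)).sum()

def d_max(r, s):
    # D_max(r||s) = log2 lambda_max(s^{-1/2} r s^{-1/2}), for full-rank s.
    w, v = np.linalg.eigh(s)
    s_inv_half = (v * w**-0.5) @ v.conj().T
    return np.log2(np.linalg.eigvalsh(s_inv_half @ r @ s_inv_half).max())

N_out, M_out = random_state(2), random_state(2)  # stand-ins for N(rho), M(rho)
delta = 0.05
M_delta_out = (1 - delta) * M_out + delta * N_out

assert d_max(N_out, M_delta_out) <= np.log2(1 / delta) + 1e-9
assert np.isclose(trace_dist(M_delta_out, N_out),
                  (1 - delta) * trace_dist(M_out, N_out))
```

The \(D_{\max}\) bound holds because \(\mathcal{M}_{\delta}(\rho)\geq\delta\,\mathcal{N}(\rho)\), while the contraction of the trace distance is the exact identity \(\mathcal{M}_{\delta}(\rho)-\mathcal{N}(\rho)=(1-\delta)(\mathcal{M}(\rho)-\mathcal{N}(\rho))\).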
We will now show that \(D_{\alpha}^{\#}(\mathcal{N}\left\|\mathcal{M}_{\delta})\) is small for an appropriately chosen \(\delta\). By the definition of channel divergence, we have that
\[D_{\alpha}^{\#}(\mathcal{N}\left\|\mathcal{M}_{\delta})=\sup_{\rho_{AR}}D_{ \alpha}^{\#}(\mathcal{N}(\rho_{AR})\|\mathcal{M}_{\delta}(\rho_{AR}))\]
where \(R\) is an arbitrary reference system \((\mathcal{N},\mathcal{M}_{\delta}\) map register \(A\) to register \(B\)). We will show that for every \(\rho_{AR}\), \(D_{\alpha}^{\#}(\mathcal{N}(\rho_{AR})\|\,\mathcal{M}_{\delta}(\rho_{AR}))\) is small. Note that
\[\mathcal{M}_{\delta}(\rho_{AR}) =(1-\delta)\,\mathcal{M}(\rho_{AR})+\delta\,\mathcal{N}(\rho_{AR})\] \[\geq\delta\,\mathcal{N}(\rho_{AR})\]
which implies that \(D_{\max}(\mathcal{N}(\rho_{AR})\|\,\mathcal{M}_{\delta}(\rho_{AR}))\leq\log\frac{1}{\delta}\). Also, using Eq. 62 we have that
\[\frac{1}{2}\left\|\mathcal{M}_{\delta}(\rho_{AR})-\mathcal{N}(\rho_{AR}) \right\|_{1}\leq(1-\delta)\epsilon.\]
Using Lemma 5.3, we have for every \(\alpha\in(1,\infty)\)
\[D_{\alpha}^{\#}(\mathcal{N}(\rho_{AR})\|\,\mathcal{M}_{\delta}(\rho_{AR})) \leq\frac{\alpha+1}{\alpha-1}\log\left(\left(1+\sqrt{(1-\delta)\epsilon} \right)^{\frac{\alpha}{\alpha+1}}+\left(\frac{\sqrt{(1-\delta)\epsilon}}{ \delta^{\alpha}}\right)^{\frac{1}{\alpha+1}}\right).\]
Since, this is true for all \(\rho_{AR}\), for every \(\alpha\in(1,\infty)\) we have
\[D_{\alpha}^{\#}(\mathcal{N}\|\,\mathcal{M}_{\delta})\leq\frac{\alpha+1}{ \alpha-1}\log\left(\left(1+\sqrt{(1-\delta)\epsilon}\right)^{\frac{\alpha}{ \alpha+1}}+\left(\frac{\sqrt{(1-\delta)\epsilon}}{\delta^{\alpha}}\right)^{ \frac{1}{\alpha+1}}\right).\]
Note that since \(\delta\) was arbitrary, we can choose it appropriately to make sure that the above bound is small, for example by choosing \(\delta=\epsilon^{\frac{1}{4\alpha}}\), we get the bound
\[D_{\alpha}^{\#}(\mathcal{N}\|\,\mathcal{M}_{\delta})\leq\frac{\alpha+1}{ \alpha-1}\log\left(\left(1+\sqrt{\epsilon}\right)^{\frac{\alpha}{\alpha+1}}+ \epsilon^{\frac{1}{4(\alpha+1)}}\right)\]
which is a small function of \(\epsilon\) in the sense that it tends to \(0\) as \(\epsilon\to 0\). We summarise the bound derived above in the following lemma.
**Lemma 5.4**.: _Let \(\epsilon>0\). Suppose channels \(\mathcal{N}\) and \(\mathcal{M}\) from register \(A\) to \(B\) are such that \(\frac{1}{2}\left\|\mathcal{N}-\mathcal{M}\right\|_{\diamond}\leq\epsilon\). For \(\delta\in(0,1)\), we can define the mixed channel \(\mathcal{M}_{\delta}:=(1-\delta)\mathcal{M}+\delta\mathcal{N}\). Then, for every \(\alpha\in(1,\infty)\), we have the following bound on the channel divergence_
\[D_{\alpha}^{\#}(\mathcal{N}\|\,\mathcal{M}_{\delta})\leq\frac{\alpha+1}{ \alpha-1}\log\left(\left(1+\sqrt{(1-\delta)\epsilon}\right)^{\frac{\alpha}{ \alpha+1}}+\left(\frac{\sqrt{(1-\delta)\epsilon}}{\delta^{\alpha}}\right)^{ \frac{1}{\alpha+1}}\right). \tag{63}\]
### Proof of the approximate entropy accumulation theorem
We use the mixed channels defined in the previous section to define the auxiliary state \(\mathcal{M}_{n}^{\delta}\circ\cdots\circ\mathcal{M}_{1}^{\delta}(\rho_{R_{0}E})\) for our proof. It is easy to show, using the divergence bounds in Sec. 5.2 and the chain rule for \(D_{\alpha}^{\#}\), that the relative entropy distance between the real state and this choice of the auxiliary state is small. However, the state \(\mathcal{M}_{n}^{\delta}\circ\cdots\circ\mathcal{M}_{1}^{\delta}(\rho_{R_{0}E})\) does not necessarily satisfy the Markov chain conditions required for entropy accumulation. Thus, we also need to reprove the entropy lower bound on this state by modifying the approach used in the proof of the original entropy accumulation theorem.
Proof of Theorem 5.1.: Using Lemma 5.4, for every \(\delta\in(0,1)\) and for each \(k\in[n]\) we have that for every \(\beta>1\), the mixed maps \(\mathcal{M}_{k}^{\delta}:=(1-\delta)\,\mathcal{M}_{k}^{\prime}+\delta\, \mathcal{M}_{k}\) satisfy
\[D_{\beta}^{\#}(\mathcal{M}_{k}\,\|\,\mathcal{M}_{k}^{\delta}) \leq\frac{\beta+1}{\beta-1}\log\left(\left(1+\sqrt{(1-\delta) \epsilon}\right)^{\frac{\beta}{\beta+1}}+\left(\frac{\sqrt{(1-\delta)\epsilon }}{\delta^{\beta}}\right)^{\frac{1}{\beta+1}}\right)\] \[:=z_{\beta}(\epsilon,\delta) \tag{64}\]
where we defined the right-hand side above as \(z_{\beta}(\epsilon,\delta)\). This can be made "small" by choosing \(\delta=\epsilon^{\frac{1}{4\beta}}\) as was shown in the previous section. We use these maps to define the auxiliary state as
\[\sigma_{A_{1}^{n}B_{1}^{n}E}:=\mathcal{M}_{n}^{\delta}\circ\cdots \circ\mathcal{M}_{1}^{\delta}(\rho_{R_{0}E}). \tag{65}\]
Now, we have that for \(\beta>1\) and \(\epsilon_{1}>0\)
\[D_{\max}^{\epsilon_{1}} (\rho_{A_{1}^{n}B_{1}^{n}E}\|\sigma_{A_{1}^{n}B_{1}^{n}E})\] \[\leq\tilde{D}_{\beta}(\rho_{A_{1}^{n}B_{1}^{n}E}\|\sigma_{A_{1}^{ n}B_{1}^{n}E})+\frac{g_{0}(\epsilon_{1})}{\beta-1}\] \[\leq D_{\beta}^{\#}(\rho_{A_{1}^{n}B_{1}^{n}E}\|\sigma_{A_{1}^{n} B_{1}^{n}E})+\frac{g_{0}(\epsilon_{1})}{\beta-1}\] \[=D_{\beta}^{\#}(\mathcal{M}_{n}\circ\cdots\circ\mathcal{M}_{1}( \rho_{R_{0}E})\|\,\mathcal{M}_{n}^{\delta}\circ\cdots\circ\mathcal{M}_{1}^{ \delta}(\rho_{R_{0}E}))+\frac{g_{0}(\epsilon_{1})}{\beta-1}\] \[\leq D_{\beta}^{\#}(\mathcal{M}_{n-1}\circ\cdots\circ\mathcal{M} _{1}(\rho_{R_{0}E})\|\,\mathcal{M}_{n-1}^{\delta}\circ\cdots\circ\mathcal{M}_{ 1}^{\delta}(\rho_{R_{0}E}))+D_{\beta}^{\#}(\mathcal{M}_{n}\,\|\,\mathcal{M}_{n }^{\delta})+\frac{g_{0}(\epsilon_{1})}{\beta-1}\] \[\leq\cdots\] \[\leq\sum_{k=1}^{n}D_{\beta}^{\#}(\mathcal{M}_{k}\,\|\,\mathcal{M} _{k}^{\delta})+\frac{g_{0}(\epsilon_{1})}{\beta-1}\] \[\leq nz_{\beta}(\epsilon,\delta)+\frac{g_{0}(\epsilon_{1})}{\beta -1} \tag{66}\]
where the first line follows from [16, Proposition 6.5], the second line follows from [17, Proposition 3.4], the fourth line follows from the chain rule for \(D_{\beta}^{\#}\) [17, Proposition 4.5], and the last line follows from Eq. 64.
For \(\epsilon_{2}>0\) and \(\alpha\in(1,1+\frac{1}{\log(1+2|A|)})\), we can plug the above in the bound provided by Lemma 3.5 to get
\[H_{\min}^{\epsilon_{1}+\epsilon_{2}}(A_{1}^{n}|B_{1}^{n}E)_{\rho} \geq\tilde{H}_{\alpha}^{\dagger}(A_{1}^{n}|B_{1}^{n}E)_{\sigma}- \frac{\alpha}{\alpha-1}nz_{\beta}(\epsilon,\delta)\] \[\qquad-\frac{1}{\alpha-1}\left(g_{1}(\epsilon_{2},\epsilon_{1})+ \frac{\alpha g_{0}(\epsilon_{1})}{\beta-1}\right). \tag{67}\]
We have now reduced our problem to lower bounding \(\tilde{H}_{\alpha}^{\dagger}(A_{1}^{n}|B_{1}^{n}E)_{\sigma}\). Note that we cannot directly use entropy accumulation here: the mixed maps \(\mathcal{M}_{k}^{\delta}=(1-\delta)\,\mathcal{M}_{k}^{\prime}+\delta\,\mathcal{M}_{k}\) apply the map \(\mathcal{M}_{k}\) with probability \(\delta\), so the register \(B_{k}\) may be correlated with \(A_{1}^{k-1}\) even given \(B_{1}^{k-1}E\), and the Markov chain required for entropy accumulation need not hold.
The application of the maps \(\mathcal{M}_{k}^{\delta}\) can be viewed as applying the channel \(\mathcal{M}_{k}^{\prime}\) with probability \(1-\delta\) and the channel \(\mathcal{M}_{k}\) with probability \(\delta\). We can define the channels \(\mathcal{N}_{k}\) which map the registers \(R_{k-1}\) to \(R_{k}A_{k}B_{k}C_{k}\), where \(C_{k}\) is a binary register. The action of \(\mathcal{N}_{k}\) can be defined as:
1. Sample the classical random variable \(C_{k}\in\{0,1\}\) independently. \(C_{k}=1\) with probability \(1-\delta\) and \(0\) otherwise.
2. If \(C_{k}=1\) apply the map \(\mathcal{M}_{k}^{\prime}\) on \(R_{k-1}\), else apply \(\mathcal{M}_{k}\) on \(R_{k-1}\).
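For classical channels (stochastic matrices), the flag construction can be written out explicitly: tracing out the flag \(C_{k}\) recovers the mixed channel exactly. This is a toy sketch with randomly generated stochastic matrices standing in for \(\mathcal{M}_{k}^{\prime}\) and \(\mathcal{M}_{k}\), not the quantum maps of the theorem:

```python
import numpy as np

rng = np.random.default_rng(3)
d_in, d_out, delta = 3, 4, 0.1

def random_stochastic(d_out, d_in):
    M = rng.random((d_out, d_in))
    return M / M.sum(axis=0)  # each column is a conditional distribution

M_good = random_stochastic(d_out, d_in)  # stands in for M'_k
M_bad = random_stochastic(d_out, d_in)   # stands in for M_k

def flagged(p_in):
    # N_k on input distribution p_in: joint distribution over (output, C_k),
    # with C_k = 1 marking the branch where the "good" map was applied.
    joint = np.zeros((d_out, 2))
    joint[:, 1] = (1 - delta) * (M_good @ p_in)
    joint[:, 0] = delta * (M_bad @ p_in)
    return joint

p = rng.random(d_in); p /= p.sum()
joint = flagged(p)
mixed = ((1 - delta) * M_good + delta * M_bad) @ p

assert np.allclose(joint.sum(axis=1), mixed)  # tracing out C_k gives M_k^delta
assert np.isclose(joint.sum(), 1.0)
```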
Let us call \(\theta_{A_{1}^{n}B_{1}^{n}C_{1}^{n}E}=\mathcal{N}_{n}\circ\cdots\circ\mathcal{ N}_{1}(\rho_{R_{0}E})\). Clearly \(\operatorname{tr}_{C_{1}^{n}}\left(\theta_{A_{1}^{n}B_{1}^{n}C_{1}^{n}E} \right)=\sigma_{A_{1}^{n}B_{1}^{n}E}\). Thus, we have
\[\tilde{H}_{\alpha}^{\dagger}(A_{1}^{n}|B_{1}^{n}E)_{\sigma} =\tilde{H}_{\alpha}^{\dagger}(A_{1}^{n}|B_{1}^{n}E)_{\theta}\] \[\geq\tilde{H}_{\alpha}^{\dagger}(A_{1}^{n}|B_{1}^{n}C_{1}^{n}E)_{ \theta}. \tag{68}\]
We will now focus on lower bounding \(\tilde{H}_{\alpha}^{\dagger}(A_{1}^{n}|B_{1}^{n}C_{1}^{n}E)_{\theta}\). Using [16, Proposition 5.1], we have that
\[\tilde{H}_{\alpha}^{\dagger}(A_{1}^{n}|B_{1}^{n}C_{1}^{n}E)_{\theta}=\frac{ \alpha}{1-\alpha}\log\sum_{c_{1}^{n}}\theta(c_{1}^{n})\exp\left(\frac{1-\alpha }{\alpha}\tilde{H}_{\alpha}^{\dagger}(A_{1}^{n}|B_{1}^{n}E)_{\theta_{|c_{1}^{n }}}\right).\]
We will show that for a given \(c_{1}^{n}\), the conditional entropy \(\tilde{H}_{\alpha}^{\dagger}(A_{1}^{n}|B_{1}^{n}E)_{\theta_{|c_{1}^{n}}}\) accumulates whenever the "good" map \(\mathcal{M}_{k}^{\prime}\) is used and loses some entropy for the rounds where the "bad" map \(\mathcal{M}_{k}\) is used. The fact that \(c_{1}^{n}\) contains far more 1s than 0s with a large probability then allows us to prove a lower bound on \(\tilde{H}_{\alpha}^{\dagger}(A_{1}^{n}|B_{1}^{n}C_{1}^{n}E)_{\theta}\).
**Claim 5.5**.: _Define \(h_{k}:=\inf_{\omega_{R_{k}\tilde{R}_{k}}}\tilde{H}^{\downarrow}_{\alpha}(A_{k}|B_{k}\tilde{R}_{k})_{\mathcal{M}^{\prime}_{k}(\omega_{R_{k}\tilde{R}_{k}})}\) where the infimum is over all states \(\omega_{R_{k}\tilde{R}_{k}}\) for a register \(\tilde{R}_{k}\), which is isomorphic to \(R_{k}\), and \(s:=\log(|A||B|^{2})\). Then, we have_
\[\tilde{H}^{\dagger}_{\alpha}(A_{1}^{n}|B_{1}^{n}E)_{\theta_{|c_{1}^{n}}}\geq \sum_{k=1}^{n}\left(\delta(c_{k},1)h_{k}-\delta(c_{k},0)s\right) \tag{69}\]
_where \(\delta(x,y)\) is the Kronecker delta function (\(\delta(x,y)=1\) if \(x=y\) and \(0\) otherwise)._
Proof.: We will prove the statement
\[\tilde{H}^{\dagger}_{\alpha}(A_{1}^{k}|B_{1}^{k}E)_{\theta_{|c_{1}^{k}}}\geq \tilde{H}^{\dagger}_{\alpha}(A_{1}^{k-1}|B_{1}^{k-1}E)_{\theta_{|c_{1}^{k-1}} }+(\delta(c_{k},1)h_{k}-\delta(c_{k},0)s)\]
then the claim will follow inductively. We will consider two cases: when \(c_{k}=0\) and when \(c_{k}=1\). First suppose, \(c_{k}=0\) then \(\theta_{A_{1}^{k}B_{1}^{k}E|c_{1}^{k}}=\operatorname{tr}_{R_{k}}\circ\mathcal{ M}_{k}^{R_{k-1}\to R_{k}A_{k}B_{k}}\left(\theta_{R_{k-1}A_{1}^{k-1}B_{1}^{k-1}E|c_{1} ^{k}}\right)\). In this case, we have
\[\tilde{H}^{\dagger}_{\alpha}(A_{1}^{k}|B_{1}^{k}E)_{\theta_{|c_{1 }^{k}}} \geq\tilde{H}^{\dagger}_{\alpha}(A_{1}^{k-1}|B_{1}^{k}E)_{\theta_{ |c_{1}^{k}}}-\log|A|\] \[\geq\tilde{H}^{\dagger}_{\alpha}(A_{1}^{k-1}|B_{1}^{k-1}E)_{ \theta_{|c_{1}^{k}}}-\log\left(|A||B|^{2}\right)\] \[=\tilde{H}^{\dagger}_{\alpha}(A_{1}^{k-1}|B_{1}^{k-1}E)_{\theta_{ |c_{1}^{k-1}}}-s\]
where in the first line we have used the dimension bound in Lemma D.1, in the second line we have used the dimension bound in Lemma D.3 and in the last line we have used \(\theta_{A_{1}^{k-1}B_{1}^{k-1}E|c_{1}^{k}}=\theta_{A_{1}^{k-1}B_{1}^{k-1}E|c_ {1}^{k-1}}\).
Now, suppose that \(c_{k}=1\). In this case, we have that \(\theta_{A_{1}^{k}B_{1}^{k}E|c_{1}^{k}}=\operatorname{tr}_{R_{k}}\circ\mathcal{ M}_{k}^{\prime}\left(\theta_{R_{k-1}A_{1}^{k-1}B_{1}^{k-1}E|c_{1}^{k}}\right)\) and since \(\theta_{R_{k-1}A_{1}^{k-1}B_{1}^{k-1}E|c_{1}^{k}}=\Phi_{k-1}\circ\Phi_{k-2} \cdots\circ\Phi_{1}(\rho_{R_{0}E})\) where each of the \(\Phi_{i}\in\{\mathcal{M}_{i},\mathcal{M}_{i}^{\prime}\}\), using the hypothesis of the theorem we have that the state \(\theta_{A_{1}^{k}B_{1}^{k}E|c_{1}^{k}}=\mathcal{M}_{k}^{\prime}\left(\theta_{R _{k-1}A_{1}^{k-1}B_{1}^{k-1}E|c_{1}^{k}}\right)\) satisfies the Markov chain
\[A_{1}^{k-1}\leftrightarrow B_{1}^{k-1}E\leftrightarrow B_{k}.\]
Now, using Corollary C.5 (the \(\tilde{H}^{\dagger}_{\alpha}\) counterpart for [13, Corollary 3.5], which is the main chain rule used for proving entropy accumulation), we have
\[\tilde{H}^{\dagger}_{\alpha}(A_{1}^{k}|B_{1}^{k}E)_{\theta_{|c_{1 }^{k}}} \geq\tilde{H}^{\dagger}_{\alpha}(A_{1}^{k-1}|B_{1}^{k-1}E)_{\theta _{|c_{1}^{k}}}+\inf_{\omega_{R_{k}\tilde{R}_{k}}}\tilde{H}^{\downarrow}_{\alpha}(A _{k}|B_{k}\tilde{R}_{k})_{\mathcal{M}_{k}^{\prime}(\omega_{R_{k}\tilde{R}_{k}})}\] \[=\tilde{H}^{\dagger}_{\alpha}(A_{1}^{k-1}|B_{1}^{k-1}E)_{\theta_{ |c_{1}^{k-1}}}+h_{k}\]
where in the last line we have again used \(\theta_{A_{1}^{k-1}B_{1}^{k-1}E|c_{1}^{k}}=\theta_{A_{1}^{k-1}B_{1}^{k-1}E|c_{1}^{ k-1}}\). Combining these two cases, we have
\[\tilde{H}_{\alpha}^{\dagger}(A_{1}^{k}|B_{1}^{k}E)_{\theta_{|c_{1}^{k}}}\geq \tilde{H}_{\alpha}^{\dagger}(A_{1}^{k-1}|B_{1}^{k-1}E)_{\theta_{|c_{1}^{k-1}}} +\left(\delta(c_{k},1)h_{k}-\delta(c_{k},0)s\right). \tag{70}\]
Using this bound \(n\) times starting with \(\tilde{H}_{\alpha}^{\dagger}(A_{1}^{n}|B_{1}^{n}E)_{\theta_{|c_{1}^{n}}}\) gives us the bound required in the claim.
For the sake of clarity let \(l_{k}(c_{k}):=(\delta(c_{k},1)h_{k}-\delta(c_{k},0)s)\). We will now evaluate
\[\sum_{c_{1}^{n}}\theta(c_{1}^{n})\exp\left(\frac{1-\alpha}{\alpha }\tilde{H}_{\alpha}^{\dagger}(A_{1}^{n}|B_{1}^{n}E)_{\theta_{|c_{1}^{n}}}\right) \leq\sum_{c_{1}^{n}}\theta(c_{1}^{n})\exp\left(\frac{1-\alpha}{ \alpha}\sum_{k=1}^{n}l_{k}(c_{k})\right)\] \[=\sum_{c_{1}^{n}}\prod_{k=1}^{n}\theta(c_{k})2^{\frac{1-\alpha}{ \alpha}l_{k}(c_{k})}\] \[=\prod_{k=1}^{n}\sum_{c_{k}}\theta(c_{k})2^{\frac{1-\alpha}{ \alpha}l_{k}(c_{k})}. \tag{71}\]
Then, we have
\[\tilde{H}_{\alpha}^{\dagger}(A_{1}^{n}|B_{1}^{n}C_{1}^{n}E)_{\theta} =\frac{\alpha}{1-\alpha}\log\sum_{c_{1}^{n}}\theta(c_{1}^{n})\exp _{2}\left(\frac{1-\alpha}{\alpha}\tilde{H}_{\alpha}^{\dagger}(A_{1}^{n}|B_{1}^ {n}E)_{\theta_{|c_{1}^{n}}}\right).\] \[\geq\frac{\alpha}{1-\alpha}\sum_{k=1}^{n}\log\sum_{c_{k}}\theta( c_{k})2^{\frac{1-\alpha}{\alpha}l_{k}(c_{k})}\] \[=\frac{\alpha}{1-\alpha}\sum_{k=1}^{n}\log\left((1-\delta)2^{ \frac{1-\alpha}{\alpha}h_{k}}+\delta 2^{-\frac{1-\alpha}{\alpha}s}\right)\] \[=\sum_{k=1}^{n}h_{k}-\frac{\alpha}{\alpha-1}\sum_{k=1}^{n}\log \left(1-\delta+\delta 2^{\frac{\alpha-1}{\alpha}(s+h_{k})}\right)\] \[\geq\sum_{k=1}^{n}h_{k}-\frac{\alpha}{\alpha-1}n\log\left(1+ \delta\left(2^{\frac{\alpha-1}{\alpha}(s+\log|A|)}-1\right)\right)\]
where in the second line we have used Eq. 71 and in the last line we have used the fact that \(h_{k}\leq\log|A|\) for all \(k\in[n]\).
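The last step can be spot-checked numerically: for fixed \(\alpha>1\), the per-round penalty \(h\mapsto\log\left(1-\delta+\delta 2^{\frac{\alpha-1}{\alpha}(s+h)}\right)\) is increasing in \(h\), so \(h_{k}\leq\log|A|\) yields the stated bound (dimensions and parameters below are illustrative):

```python
import math

# Spot-check: the per-round penalty log(1 - delta + delta * 2^{c(s + h)})
# with c = (alpha-1)/alpha > 0 is maximised at h = log|A|.
dim_A, dim_B = 2, 2
alpha, delta = 1.1, 0.01
s = math.log2(dim_A * dim_B**2)   # s = log(|A||B|^2)
log_A = math.log2(dim_A)
c = (alpha - 1) / alpha

def penalty(h):
    return math.log2(1 - delta + delta * 2 ** (c * (s + h)))

worst = math.log2(1 + delta * (2 ** (c * (s + log_A)) - 1))
# h_k is a conditional entropy, upper bounded by log|A|.
for h in [-0.5, 0.0, 0.5, log_A]:
    assert penalty(h) <= worst + 1e-12
```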
We restricted the choice of \(\alpha\) to the region \(\left(1,1+\frac{1}{\log(1+2|A|)}\right)\) in the theorem, so that we can now use [1, Lemma B.9] to transform the above to
\[\tilde{H}_{\alpha}^{\dagger}(A_{1}^{n}|B_{1}^{n}C_{1}^{n}E)_{ \theta} \geq\sum_{k=1}^{n}\inf_{\omega_{R_{k}\tilde{R}_{k}}}H(A_{k}|B_{k} \tilde{R}_{k})_{\mathcal{M}_{k}^{\prime}(\omega_{R_{k}\tilde{R}_{k}})}-n( \alpha-1)\log^{2}(1+2|A|)\] \[\qquad\qquad\qquad-\frac{\alpha}{\alpha-1}n\log\left(1+\delta \left(2^{\frac{\alpha-1}{\alpha}2\log(|A||B|)}-1\right)\right). \tag{72}\]
Putting Eq. 67, Eq. 68, and Eq. 72 together, we have
\[H_{\min}^{\epsilon_{1}+\epsilon_{2}}(A_{1}^{n}|B_{1}^{n}E)_{\rho} \geq\sum_{k=1}^{n}\inf_{\omega_{R_{k}\tilde{R}_{k}}}H(A_{k}|B_{k} \tilde{R}_{k})_{\mathcal{M}_{k}^{\prime}(\omega_{R_{k}\tilde{R}_{k}})}-n( \alpha-1)\log^{2}(1+2|A|)\] \[\qquad\qquad-\frac{\alpha}{\alpha-1}n\log\left(1+\delta\left(2^{ \frac{\alpha-1}{\alpha}2\log(|A||B|)}-1\right)\right)\] \[\qquad\qquad-\frac{\alpha}{\alpha-1}nz_{\beta}(\epsilon,\delta) \frac{1}{\alpha-1}\left(g_{1}(\epsilon_{2},\epsilon_{1})+\frac{\alpha g_{0}( \epsilon_{1})}{\beta-1}\right).\]
### Limitations and further improvements
As we pointed out previously, the dependence of the entropy loss per round on \(\epsilon\) is very poor (behaves as \(\sim\epsilon^{1/24}\)) in this theorem. The classical version of this theorem has a much better dependence of \(O(\sqrt{\epsilon})\) on \(\epsilon\) (see Theorem F.1). The reason for the poor performance of the quantum version is that our bound on the channel divergence (Lemma 5.4) is very weak compared to the bound we can use classically. It should be noted, however, that if Lemma 5.4 were to be improved in the future, one could simply plug the new bound into our proof and derive an improvement for Theorem 5.1.
A better bound on the channel divergence would have an additional benefit. It could simplify the proof and the Markov chain assumption in our theorem. In particular, it would be much easier to carry out the proof if the mixed channels \(\mathcal{M}_{k}^{\delta}\) were defined as \((1-\delta)\,\mathcal{M}_{k}^{\prime}+\delta\tau_{A_{k}B_{k}}\otimes\mathrm{tr }_{A_{k}B_{k}}\circ\mathcal{M}_{k}\) (which is what is done classically), where \(\tau_{A_{k}B_{k}}\) is the completely mixed state on registers \(A_{k}B_{k}\). Here, instead of mixing the channel \(\mathcal{M}_{k}^{\prime}\) with \(\mathcal{M}_{k}\), we mix it with \(\tau_{A_{k}B_{k}}\otimes\mathrm{tr}_{A_{k}B_{k}}\circ\mathcal{M}_{k}\), which also keeps \(D_{\max}(\mathcal{M}_{k}\,\|\,\mathcal{M}_{k}^{\delta})\) small enough. Moreover, this definition ensures that the registers \(B_{k}\) produced by the map \(\mathcal{M}_{k}^{\delta}\) always satisfy the Markov chain conditions. If it were possible to show that the divergence between the real state \(\mathcal{M}_{n}\circ\cdots\circ\mathcal{M}_{1}(\rho_{R_{0}E})\) and the auxiliary state \(\mathcal{M}_{n}^{\delta}\circ\cdots\circ\mathcal{M}_{1}^{\delta}(\rho_{R_{0}E})\) is small for this definition of \(\mathcal{M}_{k}^{\delta}\), then one could directly use the entropy accumulation theorem for lower bounding the entropy for the auxiliary state. We cannot do this in our proof as this definition of the mixed channel \(\mathcal{M}_{k}^{\delta}\) also increases the distance from the original channel \(\mathcal{M}_{k}\) to \(\epsilon+2\delta\) and this makes the upper bound in Lemma 5.3 large (finite even in the limit \(\epsilon\to 0\)).
It seems that it should be possible to weaken the assumptions for approximate entropy accumulation. The classical equivalent of this theorem (Theorem F.1) for instance can be proven very easily and requires a much weaker approximation assumption. It would be interesting if one could remove the "memory" registers \(R_{k}\) from the assumptions required for approximate entropy accumulation, since these are not typically accessible to the users in applications.
Another troubling feature of the approximate entropy accumulation theorem seems to be that it assumes that the size of the side information registers \(B_{k}\) is constant. One might wonder if this is necessary, since continuity bounds like the Alicki-Fannes-Winter (AFW) inequality do not depend on the size of the side information. It turns out that a bound on the side information size is indeed necessary in this case. We show a simple classical example to demonstrate this in Appendix E.
## 6 Source Correlations
Protocols in quantum cryptography often require an honest party to produce multiple independent quantum states. As an example, quantum key distribution (QKD) protocols [1, 13] and bit commitment protocols [10, 14] all require the honest participant, Alice, to produce an independently chosen quantum state from a set of states in every round of the protocol. The security proofs for these protocols also rely on the fact that the quantum state produced in each round of the protocol is independent of the other rounds. However, this is a difficult property to enforce practically. All physical devices have an internal memory, which is difficult to characterise and control. This memory can cause the quantum states produced in different rounds to be correlated with one another. For example, when implementing BB84 states using the polarisation of light, if the polariser is in the horizontal polarisation (\(|0\rangle\)) for round \(k\), and it is switched to the \(\pi/4\)-diagonal polarisation (\(|+\rangle\)) in the \((k+1)\)th round, then it is plausible that the state produced in the \((k+1)\)th round is "tilted" towards the horizontal (that is, has a larger component along \(|0\rangle\) than \(|1\rangle\)) simply due to the inertia of switching the polariser. Such correlations between different rounds caused by an imperfect source are called _source correlations_. Security proofs for cryptographic protocols need to consider such correlations in order to be relevant in the real world.
We will consider the BB84-QKD protocol, which uses such an imperfect source, here. An extensive line of research has led to techniques for proving the security of such a QKD protocol [11, 13, 12, 13]. However, almost all of these techniques rely on _source purification_(6): the fact that the security of this protocol is equivalent to one where Alice sends out one half of a Bell state in each round and randomly measures her half. When the states produced by Alice's source are correlated across different rounds, this equivalence step fails and one can no longer use the above methods. In this section, we will use the triangle inequality of Lemma 3.5 to reduce the security of the BB84-QKD protocol with source correlations to that of the BB84 protocol with a perfect source. Then, one can simply use the security analysis methods developed for such protocols to complete
the security proof. Although we focus on the BB84-QKD protocol here, our technique is quite general, and we believe it can also be applied to other cryptographic protocols.
Our method relies on randomly testing the output of the source on a small sample of the rounds by measuring it in the preparation basis and conditioning on the relative deviation of the observed output being less than some small threshold \(\epsilon\) from the expected output. We do not have to place any assumptions on the source, except that it passes this source test with a non-negligible probability. We also demonstrate how our analysis can be modified to incorporate imperfect measurements. In comparison, [20], which is one of the most comprehensive treatments of source imperfections and source correlation, makes multiple complex assumptions about Alice's source (also see [21]). Among these, it assumes that the state produced by Alice in the \(k\)th round can only be correlated to the states produced in the \(\ell_{c}\) rounds preceding it, where \(\ell_{c}\) is some known constant. Moreover, it also assumes that Alice's quantum states are not entangled across different rounds. These are both, as noted by the authors in [20], very strong assumptions, which cannot be guaranteed in practical setups. Importantly, it is also not possible to estimate the parameter \(\ell_{c}\) experimentally. In contrast, we provide a simpler and more general technique, which reduces the security proof to that of a noisy version of the underlying protocol(7), and itself only requires assumptions for the measurements employed for testing. Moreover, these measurements are used at a far smaller rate than the source itself during the protocol.
Footnote (7): Following this reduction, any security proof technique for QKD which can bound \(\tilde{H}_{\alpha}^{\dagger}\) of Alice’s raw key given Eve’s side information can be used to complete the proof. The assumptions for the security of the protocol will be a combination of the assumptions required for this security proof and the assumptions used during the testing procedure presented in Protocol 3.
### Security proof for BB84 with source correlations
The BB84 QKD protocol has been described in Protocol 2. In Table 1, we list all the variables we use in our proof and their definitions.
At the beginning of every round of the QKD protocol, Alice prepares the classical registers \(X_{i}\) and \(\Theta_{i}\), and a corresponding qubit in the register \(A_{i}\). If Alice's quantum source were perfect, she would produce the following state during each round of the protocol
\[\hat{\rho}_{X\Theta A}:=\sum_{x\in\mathcal{X},\theta\in\Theta}p(x,\theta)\left| x,\theta\right\rangle\left\langle x,\theta\right|_{X\Theta}\otimes H^{\theta} \left|x\right\rangle\left\langle x\right|_{A}H^{\theta} \tag{73}\]
where \(H\) is the Hadamard gate and
\[p(x,\theta)=\begin{cases}\frac{1-\mu}{|\mathcal{X}|}&\text{if }\theta=0\\ \frac{\mu}{|\mathcal{X}|}&\text{if }\theta=1.\end{cases}\]
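The state \(\hat{\rho}_{X\Theta A}\) of Eq. 73 can be built explicitly as a block-diagonal matrix, which makes the normalisation and the encoding transparent (a sketch in Python; \(\mu=0.25\) is an illustrative value):

```python
import numpy as np

# Build the perfect-source cq-state of Eq. 73 for X = {0,1}.
mu = 0.25
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

def p(x, theta):
    return (mu if theta == 1 else 1 - mu) / 2  # |X| = 2

def ket(i, d=2):
    v = np.zeros(d)
    v[i] = 1.0
    return v

# One block per classical value (x, theta): p(x, theta) * H^theta |x><x| H^theta.
blocks = {}
for x in (0, 1):
    for theta in (0, 1):
        U = np.linalg.matrix_power(H, theta)  # H^theta
        psi = U @ ket(x)                      # encoded BB84 qubit
        blocks[(x, theta)] = p(x, theta) * np.outer(psi, psi)

# The classical-quantum state is block diagonal; its total trace must be 1.
total_trace = sum(np.trace(B) for B in blocks.values())
assert np.isclose(total_trace, 1.0)

# theta = 1, x = 0 encodes |+> = (|0> + |1>)/sqrt(2).
plus = np.array([1, 1]) / np.sqrt(2)
assert np.allclose(blocks[(0, 1)] / p(0, 1), np.outer(plus, plus))
```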
**BB84 QKD protocol**
**Parameters:**
* \(n\) is the number of qubits sent by Alice.
* \(\mu\in\left(0,1\right)\) is the probability of encoding and measurement in the \(X\) basis \(\left\{\left|+\right\rangle,\left|-\right\rangle\right\}\).
* \(e\in\left(0,\frac{1}{2}\right)\) is the maximum error tolerated.
* \(r\in\left(0,1\right)\) is the key rate of the protocol.
**Protocol:**
1. For every \(1\leq i\leq n\) perform the following steps: 1. Alice chooses a random bit \(X_{i}\in_{R}\left\{0,1\right\}\) and with probability \(1-\mu\) encodes it in the \(Z\) basis and with probability \(\mu\) in the \(X\) basis. 2. Alice sends her encoded qubit to Bob. 3. Bob measures the qubit in the \(Z\) basis with probability \(1-\mu\) and \(X\) basis with probability \(\mu\). He records the output as \(Y_{i}\).
2. **Sifting:** Alice and Bob share their choice of bases for all the rounds and discard the rounds where their choices are different. We denote the remaining rounds by the set \(S\).
3. **Error correction:** Alice and Bob use an error correction procedure, which lets Bob obtain a guess \(\hat{X}_{S}\) for Alice's raw key \(X_{S}\). In case the error correction protocol aborts, they abort the QKD protocol too.
4. **Parameter estimation:** Let \(S_{X}\) be the set of rounds where Alice prepared the qubit in the \(X\) basis and Bob measured the qubit in the \(X\) basis. Bob computes \(\hat{e}=\frac{1}{\left|S_{X}\right|}\left|\{i\in S_{X}:\hat{X}_{i}\neq Y_{i}\}\right|\). They abort if \(\hat{e}>e\).
5. **Privacy Amplification:** Alice chooses a random function \(F\) from a set of two-universal hash functions from \(\left|S\right|\) bits to \(\lfloor rn\rfloor\) bits and announces it to Bob. Alice and Bob compute the final key as \(F(X_{S})\) and \(F(\hat{X}_{S})\) respectively.
Protocol 2
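The round structure, sifting, and parameter estimation of Protocol 2 can be sketched in a toy noiseless simulation (no eavesdropper, ideal devices; Bob's outcome simply equals Alice's bit whenever the bases match, and error correction is trivial so \(\hat{X}=X\)):

```python
import random

random.seed(0)
n, mu = 1000, 0.25

# Step 1: Alice's bits/bases and Bob's bases. In this noiseless toy model,
# Bob's outcome equals Alice's bit when the bases match, else it is random.
X = [random.randint(0, 1) for _ in range(n)]
Theta = [1 if random.random() < mu else 0 for _ in range(n)]
Theta_hat = [1 if random.random() < mu else 0 for _ in range(n)]
Y = [X[i] if Theta[i] == Theta_hat[i] else random.randint(0, 1)
     for i in range(n)]

# Step 2 (sifting): keep rounds where the basis choices agree.
S = [i for i in range(n) if Theta[i] == Theta_hat[i]]

# Step 4 (parameter estimation) on the X-basis rounds.
S_X = [i for i in S if Theta[i] == 1]
e_hat = sum(X[i] != Y[i] for i in S_X) / len(S_X)
assert e_hat == 0.0  # no noise, no eavesdropper
```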
\begin{table}
\begin{tabular}{|l|l|} \hline Variable & Definition \\ \hline \hline \(\mathcal{X}\) & The set \(\{0,1\}\); alphabet for Alice’s random string. \\ \hline \(\Theta\) & The set \(\{0,1\}\); alphabet for the basis string. \\ \hline \(X_{1}^{n}\) & The random string chosen by Alice at the beginning of the protocol \\ \hline \(\Theta_{1}^{n}\) & Alice’s choice of randomly chosen basis. \(\Theta_{i}=0\) if Alice chooses \(Z\) basis and \(\Theta_{i}=1\) if she chooses \(X\) basis \\ \hline \(A_{1}^{n}\) & The quantum registers sent by Alice to Bob \\ \hline \(\hat{\Theta}_{1}^{n}\) & Bob’s choice of randomly chosen basis. \(\hat{\Theta}_{i}=0\) if Bob chooses \(Z\) basis and \(\hat{\Theta}_{i}=1\) if he chooses \(X\) basis \\ \hline \(Y_{1}^{n}\) & Bob’s outcomes of measuring \(A_{1}^{n}\) in \(\hat{\Theta}_{1}^{n}\) basis \\ \hline \(S\) & The set \(\{i\in[n]:\hat{\Theta}_{i}=\Theta_{i}\}\) \\ \hline \(\hat{X}_{S}\) & Bob’s guess of \(X_{S}\), produced at the end of the error correction step. \\ \hline \(T\) & Transcript for error correction \\ \hline \(\bar{X}_{1}^{n}\) & For \(i\in[n]\), \(\bar{X}_{i}=X_{i}\) if \(\Theta_{i}=\hat{\Theta}_{i}\) else \(\bar{X}_{i}=\perp\) \\ \hline \(\bar{Y}_{1}^{n}\) & For \(i\in[n]\), \(\bar{Y}_{i}=Y_{i}\) if \(\Theta_{i}=\hat{\Theta}_{i}=1\) else \(\bar{Y}_{i}=\perp\) \\ \hline \(C_{1}^{n}\) & For \(i\in[n]\), \(C_{i}=X_{i}\oplus Y_{i}\) if \(\Theta_{i}=\hat{\Theta}_{i}=1\) else \(C_{i}=\perp\) \\ \hline \(\bar{C}_{1}^{n}\) & For \(i\in[n]\), \(\bar{C}_{i}=\hat{X}_{i}\oplus Y_{i}\) if \(\Theta_{i}=\hat{\Theta}_{i}=1\) else \(\bar{C}_{i}=\perp\) \\ \hline \(E\) & Eve’s register created after Eve processes and forwards the states \(A_{1}^{n}\) to Bob \\ \hline \(\Upsilon\) & The event that the protocol does not abort, i.e., \(\text{freq}(\bar{C}_{1}^{n})(1)\leq e\) and \(\text{hash}(X_{S})=\text{hash}(\hat{X}_{S})\). 
\\ \hline \(\Upsilon^{\prime}\) & The event that \(X_{S}=\hat{X}_{S}\) \\ \hline \(\Upsilon^{\prime\prime}\) & The event that \(\text{freq}(C_{1}^{n})(1)\leq e\). \\ \hline \end{tabular}
\end{table}
Table 1: Definition of variables for QKD
Consider the case where Alice only has access to an imperfect quantum source to prepare qubits for the QKD protocol above. We will assume here that the classical randomness used by Alice is perfect. Suppose Alice and Bob use \(n\) rounds for the BB84 protocol. In order to perform the QKD protocol with the imperfect source, we require that Alice first uses her source to perform the source testing protocol given in Protocol 3 with \(n+m\) total rounds. This source testing protocol randomly selects \(m\) rounds of the source output, measures the qubit \(A_{i}\) in the basis given by \(\Theta_{i}\) and compares the result with the encoded bit \(X_{i}\) for these rounds. The source passes the test if the fraction of errors is less than \(\epsilon\), which is a source error threshold chosen by Alice. Subsequently, Alice uses the \(n\) remaining rounds produced by the source for the BB84 protocol, provided the source test does not abort. The complete protocol is depicted in Figure 4. It should be noted that Alice can actually run Protocol 3 concurrently with the BB84 protocol. She does not need to create all \(n+m\) rounds at once and store them in a memory in order to carry out this protocol. She can classically sample a random set \(\Gamma\) of size \(m\) at the start of the BB84 protocol and, for every round \(i\), use the round as a source-test round if \(i\in\Gamma\) or forward the state produced to Bob if \(i\not\in\Gamma\). For theoretical purposes, this concurrent approach is equivalent to one where Alice begins by using her source to produce all \(n+m\) rounds, and for our arguments we assume this is the case.
Let \(\rho_{X_{1}^{n+m}\Theta_{1}^{n+m}A_{1}^{n+m}}\) be the state produced by the imperfect source, let \(\Omega\) denote the event that the source test (Protocol 3) does not abort, and let the output of the source test protocol conditioned on \(\Omega\) be the state \(\bar{\rho}_{X_{1}^{n}\Theta_{1}^{n}A_{1}^{n}|\Omega}\) (or the subnormalised state \(\bar{\rho}_{X_{1}^{n}\Theta_{1}^{n}A_{1}^{n}\wedge\Omega}\) depending on the context).
We will use the analysis of quantum sampling techniques in [1] to examine the source test. The main result of that paper is summarised in the theorem below.
**Theorem 6.1** (Quantum sampling [1]).: _The relative weight of a string \(x_{1}^{k}\) is defined as \(\omega(x_{1}^{k})\coloneqq\frac{1}{k}|\{i\in[k]:x_{i}\neq 0\}|\). Let \(\Psi\) be a sampling strategy which takes a string \(a_{1}^{n}\), selects a random subset \(\Gamma\subseteq[n]\) with probability \(p_{\Gamma}\), a random seed \(K\) with probability \(p_{K}\) and produces an estimate \(f(\Gamma,a_{\Gamma},K)\) for the relative weight of the rest of the string \(a_{\bar{\Gamma}}\). We can define the set of strings for which this strategy provides a \(\delta\)-correct estimate for
Figure 3: To perform the BB84 protocol with a perfect source, Alice simply uses her source to produce the quantum state required and uses it for the protocol.
**Source testing protocol**
**Parameters:**
* \(\epsilon\) is the source error tolerated.
* \(m\) is the number of rounds on which the source is tested.
* \(n\) is the number of rounds produced by the source for use in subsequent protocols.
**Protocol:**
1. The source produces the state \(\rho_{X_{1}^{n+m}\Theta_{1}^{n+m}A_{1}^{n+m}}\).
2. Choose a random subset \(\Gamma\subseteq[n+m]\) of size \(m\), measure the quantum registers \(A_{i}\) in the basis given by \(\Theta_{i}\) and let the result be \(\hat{X}_{i}\).
3. Abort the protocol (and any subsequent protocols) if the observed error \(\frac{1}{m}|\{k\in\Gamma:\hat{X}_{k}\neq X_{k}\}|>\epsilon\).
4. Output the registers corresponding to the remaining rounds.
Protocol 3
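The sampling logic of Protocol 3 can be sketched with a toy model in which the source's per-round error is i.i.d. (a simplification for illustration only; the real protocol makes no such assumption, and the flip probability `q` below is an invented parameter):

```python
import random

random.seed(1)
n, m, eps = 1000, 200, 0.05
q = 0.01  # illustrative per-round error probability of the faulty source

# Model the source-test measurement outcomes: X_hat_k != X_k w.p. q.
errors = [random.random() < q for _ in range(n + m)]

# Step 2: choose a random test subset Gamma of size m and observe its errors.
Gamma = set(random.sample(range(n + m), m))
observed = sum(errors[k] for k in Gamma) / m

# Step 3: abort if the observed error fraction exceeds eps.
abort = observed > eps

# Step 4: the remaining n rounds are handed to the BB84 protocol.
remaining = [k for k in range(n + m) if k not in Gamma]
assert len(remaining) == n
```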
Figure 4: To perform the BB84 protocol with an imperfect source, Alice first uses Protocol 3 to test the output of her source and then uses the output of this protocol as the source for the BB84 protocol.
\(\delta>0\) given the choices \(\Gamma=\gamma\) and \(K=\kappa\) as_
\[B^{\delta}_{\gamma\kappa}(\Psi):=\left\{a_{1}^{n}:\left|\omega(a_{\bar{\gamma}}) -f(\gamma,a_{\gamma},\kappa)\right|<\delta\right\}. \tag{74}\]
_The classical maximum error probability for this strategy \(\Psi\) is defined as_
\[\epsilon_{cl}^{\delta}:=\max_{a_{1}^{n}}\Pr_{\Gamma K}[a_{1}^{n}\notin B_{\Gamma K}^{\delta}(\Psi)]. \tag{75}\]
_Define the projectors \(\Pi_{A_{1}^{n}}^{\delta|\gamma\kappa}:=\sum_{a_{1}^{n}\in B_{\gamma\kappa}^{\delta}(\Psi)}\left|a_{1}^{n}\right\rangle\left\langle a_{1}^{n}\right|_{A_{1}^{n}}\). Then, for a quantum state \(\rho_{A_{1}^{n}E}\), we have that the state_
\[\tilde{\rho}_{\Gamma KA_{1}^{n}E}\coloneqq\sum_{\gamma\kappa}p\!\left(\gamma \kappa\right)\left|\gamma\kappa\right\rangle\left\langle\gamma\kappa\right|_{ \Gamma K}\otimes\frac{\Pi_{A_{1}^{n}}^{\delta|\gamma\kappa}\rho_{A_{1}^{n}E} \Pi_{A_{1}^{n}}^{\delta|\gamma\kappa}}{\operatorname{tr}\left(\Pi_{A_{1}^{n}}^ {\delta|\gamma\kappa}\rho_{A_{1}^{n}E}\right)} \tag{76}\]
_is \(\epsilon_{qu}^{\delta}=\sqrt{\epsilon_{cl}^{\delta}}\) close to the state \(\rho_{\Gamma KA_{1}^{n}E}\coloneqq\sum_{\gamma\kappa}p(\gamma\kappa)\left|\gamma\kappa\right\rangle\left\langle\gamma\kappa\right|_{\Gamma K}\otimes\rho_{A_{1}^{n}E}\) in trace distance._
Using the above theorem, we prove that \(\bar{\rho}_{X_{1}^{n}\Theta_{1}^{n}A_{1}^{n}|\Omega}\) has a relatively small smooth max-relative entropy with a depolarised version of the perfect source. The entropic triangle inequality then allows us to prove security of the BB84 protocol, which uses this state as its source state (Figure 4).
We begin by creating the smooth max-relative entropy bound described above. Define the unitaries,
\[V_{A}^{x,\theta} :=H^{\theta}X^{x} \tag{77}\] \[V_{X\Theta A} :=\sum_{x,\theta}\left|x,\theta\right\rangle\left\langle x,\theta \right|_{X\Theta}\otimes V_{A}^{x,\theta} \tag{78}\]
so that \(V_{X\Theta A}\left|x,\theta\right\rangle\left|0\right\rangle\) gives the perfect encoding of the BB84 state given \(x\) and \(\theta\). We also define the state
\[\nu_{X_{1}^{n+m}\Theta_{1}^{n+m}A_{1}^{n+m}}:=\bigotimes_{i=1}^{n+m}V_{X_{i} \Theta_{i}A_{i}}^{\dagger}\ \rho_{X_{1}^{n+m}\Theta_{1}^{n+m}A_{1}^{n+m}}\ \bigotimes_{i=1}^{n+m}V_{X_{i}\Theta_{i}A_{i}}. \tag{79}\]
Note that if \(\rho\) were perfectly encoded, then \(\nu\) would be the state \(\rho_{X_{1}^{n+m}\Theta_{1}^{n+m}}\otimes(\left|0\right\rangle\left\langle 0 \right|)^{\otimes(n+m)}\). Let the register \(\Gamma\) represent the choice of the random subset for sampling following the notation in Theorem 6.1. The state produced by measuring the subset \(\gamma\) of the \(A\) registers of \(\nu\) in the computational \((\left\{\left|0\right\rangle,\left|1\right\rangle\right\})\) basis can equivalently be produced by measuring the subset \(\gamma\) of the \(A\) registers of \(\rho\) in the basis given by the corresponding \(\Theta\) registers, adding \((\text{mod }2)\) the corresponding \(X\) register to the result and applying the unitaries \(V_{X\Theta A}\) on the remaining indices. Conditioning on the sampled qubits of \(\rho\) being incorrectly encoded at most an \(\epsilon\) fraction of the rounds is equivalent to measuring the corresponding random
subset of the qubits of \(\nu\) in the computational basis and conditioning on the relative weight of the result being less than \(\epsilon\) (up to unitaries on the remaining registers; formal expression is given in Eq. 89). Given this equivalence, we can simply work with the state \(\nu\) and transform the results back to the state \(\rho\) at the end.
Using Theorem 6.1, we have that for every \(x_{1}^{n+m},\theta_{1}^{n+m}\) there exists \(\eta_{\Gamma A_{1}^{n+m}|x_{1}^{n+m},\theta_{1}^{n+m}}\) such that
\[\frac{1}{2}\left\|\nu_{\Gamma A_{1}^{n+m}|x_{1}^{n+m},\theta_{1}^{n+m}}-\eta_{ \Gamma A_{1}^{n+m}|x_{1}^{n+m},\theta_{1}^{n+m}}\right\|_{1}\leq\epsilon_{\rm qu }^{\delta} \tag{80}\]
and
\[\eta_{\Gamma A_{1}^{n+m}|x_{1}^{n+m},\theta_{1}^{n+m}}:=\sum_{ \gamma}p(\gamma)\left|\gamma\right\rangle\left\langle\gamma\right|\otimes \eta_{A_{1}^{n+m}|x_{1}^{n+m},\theta_{1}^{n+m}}^{(\gamma)} \tag{81}\]
where \(p(\gamma)\) is the uniform distribution over all size \(m\) subsets of \([n+m]\), and the state \(\eta_{A_{1}^{n+m}|x_{1}^{n+m},\theta_{1}^{n+m}}^{(\gamma)}\) satisfies
\[\eta_{A_{1}^{n+m}|x_{1}^{n+m},\theta_{1}^{n+m}}^{(\gamma)}=\Pi_{A_{1}^{n+m}}^{ \delta|\gamma}\eta_{A_{1}^{n+m}|x_{1}^{n+m},\theta_{1}^{n+m}}^{(\gamma)}\Pi_{ A_{1}^{n+m}}^{\delta|\gamma} \tag{82}\]
for the projectors \(\Pi_{A_{1}^{n+m}}^{\delta|\gamma}\) defined as in Theorem 6.1 (our sampling procedure does not require a random seed \(\kappa\), so we omit it in our analysis). Note that using Hoeffding's bound the classical error probability for our sampling strategy is \(2\exp(-\frac{n\delta^{2}}{n+2}m)\), which implies that \(\epsilon_{\rm qu}^{\delta}=\sqrt{2}\exp(-\frac{n\delta^{2}}{2(n+2)}m)\). We can also define the extended state \(\eta_{\Gamma X_{1}^{n+m}\Theta_{1}^{n+m}A_{1}^{n+m}}\) as
\[\eta_{\Gamma X_{1}^{n+m}\Theta_{1}^{n+m}A_{1}^{n+m}}:=\sum_{\gamma,x_{1}^{n+m},\theta_{1}^{n+m}}p(\gamma)p(x_{1}^{n+m},\theta_{1}^{n+m})\] \[\left|\gamma,x_{1}^{n+m},\theta_{1}^{n+m}\right\rangle\left\langle \gamma,x_{1}^{n+m},\theta_{1}^{n+m}\right|\otimes\eta_{A_{1}^{n+m}|x_{1}^{n+m },\theta_{1}^{n+m}}^{(\gamma)} \tag{83}\]
where \(p(x_{1}^{n+m},\theta_{1}^{n+m})=\prod_{i=1}^{n+m}p(x_{i},\theta_{i})\). Since, \(\nu_{\Gamma X_{1}^{n+m}\Theta_{1}^{n+m}A_{1}^{n+m}}\) and \(\eta_{\Gamma X_{1}^{n+m}\Theta_{1}^{n+m}A_{1}^{n+m}}\) have the same distributions on \(X_{1}^{n+m}\) and \(\Theta_{1}^{n+m}\), we also have that
\[\frac{1}{2}\left\|\nu_{\Gamma X_{1}^{n+m}\Theta_{1}^{n+m}A_{1}^{n+m}}-\eta_{ \Gamma X_{1}^{n+m}\Theta_{1}^{n+m}A_{1}^{n+m}}\right\|\leq\epsilon_{\rm qu}^{ \delta}. \tag{84}\]
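For concreteness, the expression \(\epsilon_{\rm qu}^{\delta}=\sqrt{2}\exp(-\frac{n\delta^{2}}{2(n+2)}m)\) stated above can be inverted to estimate how many test rounds \(m\) suffice for a target failure probability (parameter values below are illustrative):

```python
import math

# Invert eps_qu = sqrt(2) * exp(-(n * delta^2 / (2(n+2))) * m) for m.
n, delta = 10**6, 0.01
eps_target = 1e-9

m = math.ceil(2 * (n + 2) / (n * delta**2)
              * math.log(math.sqrt(2) / eps_target))

# Check that m test rounds indeed push eps_qu below the target.
eps_qu = math.sqrt(2) * math.exp(-(n * delta**2 / (2 * (n + 2))) * m)
assert eps_qu <= eps_target
```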
Define \(\Omega^{\prime}\) to be the event that the result produced by measuring the subset of registers \(A_{\gamma}\) in the computational basis, where \(\gamma\) is given by the \(\Gamma\) register, has a relative weight less than \(\epsilon\). Let \(\bar{\nu}_{\Gamma X_{1}^{n}\Theta_{1}^{n}A_{1}^{n}\wedge\Omega^{\prime}}\) be the state produced when the relative weight of the registers \(A_{\gamma}\) of \(\nu_{\Gamma X_{1}^{n+m}\Theta_{1}^{n+m}A_{1}^{n+m}}\) is measured and conditioned on \(\Omega^{\prime}\), the registers \(X_{\gamma}\) and \(\Theta_{\gamma}\) are traced over, and the remaining \(X,\Theta\) and \(A\) registers are relabelled between \(1\) and \(n\). Also, let \(\bar{\eta}_{\Gamma X_{1}^{n}\Theta_{1}^{n}A_{1}^{n}\wedge\Omega^{\prime}}\) be the state produced when this same subnormalised channel is instead
applied to \(\eta_{\Gamma X_{1}^{n+m}\Theta_{1}^{n+m}A_{1}^{n+m}}\). Let us consider the action of this map on a general state \(\ket{\gamma}\bra{\gamma}\otimes\sigma_{A_{1}^{n+m}}^{(\gamma)}\), which satisfies the condition \(\sigma_{A_{1}^{n+m}}^{(\gamma)}=\Pi_{A_{1}^{n+m}}^{\delta\ket{\gamma}}\sigma_{ A_{1}^{n+m}}^{(\gamma)}\Pi_{A_{1}^{n+m}}^{\delta\ket{\gamma}}\). For such a state, we have
\[\sigma_{A_{1}^{n+m}}^{(\gamma)}=\sum_{a_{1}^{n+m},\bar{a}_{1}^{n+m}\in B_{ \gamma}^{\delta}}\sigma^{(\gamma)}(a_{1}^{n+m},\bar{a}_{1}^{n+m})\ket{a_{1}^{n +m}}\bra{\bar{a}_{1}^{n+m}}.\]
Let \(\hat{P}_{A_{1}^{m}}:=\sum_{a_{1}^{m}:\,\omega(a_{1}^{m})\leq\epsilon}\ket{a_{1}^{m}}\bra{a_{1}^{m}}\) be the (perfect) measurement operator for conditioning on the event \(\Omega^{\prime}\). Then, the state after applying the measurement and conditioning on \(\Omega^{\prime}\) is
\[\operatorname{tr}_{A_{\gamma}}\left(\hat{P}_{A_{\gamma}}\sigma_{A_{1}^{n+m}}^ {(\gamma)}\right)=\sum_{a_{\gamma}:\,\omega(a_{\gamma})\leq\epsilon}\ \sum_{ \begin{subarray}{c}a_{\bar{\gamma}}^{\prime},\bar{a}_{\bar{\gamma}}\in\{x_{1}^{n}: \\ |\omega(x_{1}^{n})-\omega(a_{\gamma})|<\delta\}\end{subarray}}\sigma^{( \gamma)}(a_{\gamma}a_{\bar{\gamma}}^{\prime},a_{\gamma}\bar{a}_{\bar{\gamma}} )\ket{a_{\bar{\gamma}}^{\prime}}\bra{\bar{a}_{\bar{\gamma}}}.\]
We can relabel the remaining registers to get the state \(\bar{\sigma}_{A_{1}^{n}\wedge\Omega^{\prime}}^{(\gamma)}\) which can be put into the form
\[\bar{\sigma}_{A_{1}^{n}\wedge\Omega^{\prime}}^{(\gamma)}=\sum_{a_{1}^{n},\bar {a}_{1}^{n}\in\left\{x_{1}^{n}:\ \omega(x_{1}^{n})<\epsilon+\delta\right\}}\bar{\sigma}^{(\gamma)}(a_{1}^{n}, \bar{a}_{1}^{n})\ket{a_{1}^{n}}\bra{\bar{a}_{1}^{n}}. \tag{85}\]
Let \(Q_{A_{1}^{n}}^{w}\) be the projector onto \(\operatorname{span}\{\ket{x_{1}^{n}}:\ \text{for}\ x_{1}^{n}\ \text{such that}\ \omega(x_{1}^{n})<w\}\) (note that these vectors are orthonormal). Then, we have that
\[\bar{\sigma}_{A_{1}^{n}\wedge\Omega^{\prime}}^{(\gamma)}=Q_{A_{1}^{n}}^{ \epsilon+\delta}\bar{\sigma}_{A_{1}^{n}\wedge\Omega^{\prime}}^{(\gamma)}Q_{A_{ 1}^{n}}^{\epsilon+\delta} \tag{86}\]
which implies that \(\bar{\sigma}_{A_{1}^{n}\wedge\Omega^{\prime}}^{(\gamma)}\leq Q_{A_{1}^{n}}^{ \epsilon+\delta}\), since \(\bar{\sigma}_{A_{1}^{n}\wedge\Omega^{\prime}}^{(\gamma)}\) is subnormalised.
By considering \(\sigma_{A_{1}^{n+m}}^{(\gamma)}=\eta_{A_{1}^{n+m}\ket{x_{1}^{n+m}\theta_{1}^{n+ m}}}^{(\gamma)}\), we see that \(\bar{\eta}\) satisfies
\[\bar{\eta}_{\Gamma X_{1}^{n}\Theta_{1}^{n}A_{1}^{n}\wedge\Omega^{\prime}} =\sum_{\gamma,x_{1}^{n},\theta_{1}^{n}}p(\gamma)p(x_{1}^{n}\theta_{1}^{n})\ket{\gamma x_{1}^{n}\theta_{1}^{n}}\bra{\gamma x_{1}^{n}\theta_{1}^{n}}\otimes\bar{\eta}_{A_{1}^{n}|x_{1}^{n}\theta_{1}^{n}\wedge\Omega^{\prime}}^{(\gamma)}\] \[\leq\sum_{\gamma,x_{1}^{n},\theta_{1}^{n}}p(\gamma)p(x_{1}^{n}\theta_{1}^{n})\ket{\gamma x_{1}^{n}\theta_{1}^{n}}\bra{\gamma x_{1}^{n}\theta_{1}^{n}}\otimes Q_{A_{1}^{n}}^{\epsilon+\delta}\] \[=\rho_{\Gamma}\otimes\rho_{X\Theta}^{\otimes n}\otimes Q_{A_{1}^{n}}^{\epsilon+\delta}.\]
Using the data processing inequality, we also have that
\[\frac{1}{2}\left\|\bar{\nu}_{\Gamma X_{1}^{n}\Theta_{1}^{n}A_{1}^{n}\wedge \Omega^{\prime}}-\bar{\eta}_{\Gamma X_{1}^{n}\Theta_{1}^{n}A_{1}^{n}\wedge \Omega^{\prime}}\right\|_{1}\leq\epsilon_{\text{qu}}^{\delta} \tag{87}\]
Let \(\hat{\eta}_{A}^{(\epsilon+\delta)}:=\left(1-\epsilon-\delta\right)\left|0\right>\left<0\right|+\left(\epsilon+\delta\right)\left|1\right>\left<1\right|\); equivalently, \(\hat{\eta}_{A}^{(\epsilon+\delta)}\) is the classical probability distribution over \(\{0,1\}\) which equals \(1\) with probability \(\epsilon+\delta\). For this distribution, a simple calculation shows that
\[\min_{z_{1}^{n}:\,\omega\left(z_{1}^{n}\right)<\epsilon+\delta}\left<z_{1}^{n}\right|\left(\hat{\eta}_{A}^{(\epsilon+\delta)}\right)^{\otimes n}\left|z_{1}^{n}\right>\geq 2^{-nh\left(\epsilon+\delta\right)}\]
which implies that
\[Q_{A_{1}^{n}}^{\epsilon+\delta}\leq 2^{nh\left(\epsilon+\delta\right)}(\hat{ \eta}_{A}^{(\epsilon+\delta)})^{\otimes n}.\]
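This lower bound is easy to verify numerically for the classical marginal: an i.i.d. Bernoulli\((\epsilon+\delta)\) distribution assigns every string of relative weight at most \(\epsilon+\delta\) probability at least \(2^{-nh(\epsilon+\delta)}\) (for \(\epsilon+\delta\leq 1/2\)). A minimal Python sketch, separate from the proof:

```python
import math

def h(p):
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def check_weight_bound(n, w):
    """For every k with k/n <= w (and w <= 1/2), the probability that an
    i.i.d. Bernoulli(w) source emits a fixed string with k ones is at
    least 2^{-n h(w)}."""
    lower = 2.0 ** (-n * h(w))
    for k in range(n + 1):
        if k / n <= w:
            prob = (w ** k) * ((1 - w) ** (n - k))
            assert prob >= lower - 1e-15, (k, prob, lower)
    return lower

check_weight_bound(50, 0.11)   # no assertion fires
```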
Thus, we have
\[\bar{\eta}_{X_{1}^{n}\Theta_{1}^{n}A_{1}^{n}\wedge\Omega^{\prime}}\leq 2^{nh \left(\epsilon+\delta\right)}\left(\rho_{X\Theta}\otimes\hat{\eta}_{A}^{( \epsilon+\delta)}\right)^{\otimes n}. \tag{88}\]
As noted earlier, the state produced by measuring the registers \(A_{\gamma}\) of \(\nu\) in the computational basis is the same as the state produced by measuring the same registers on the real state \(\rho\) in the basis given by \(\Theta_{i}\), adding \(X_{i}\) to the result (mod 2), and transforming the remaining registers with \(\bigotimes_{k=1}^{n}V_{X_{i}\Theta_{i}A_{i}}^{\dagger}\). Under this correspondence, we have that the state produced by the source test satisfies
\[\bar{\rho}_{X_{1}^{n}\Theta_{1}^{n}A_{1}^{n}\wedge\Omega}=\bigotimes_{i=1}^{n} V_{X_{i}\Theta_{i}A_{i}}\bar{\nu}_{X_{1}^{n}\Theta_{1}^{n}A_{1}^{n}\wedge \Omega^{\prime}}\bigotimes_{i=1}^{n}V_{X_{i}\Theta_{i}A_{i}}^{\dagger}. \tag{89}\]
Further, for the state defined as
\[\bar{\bar{\rho}}_{X_{1}^{n}\Theta_{1}^{n}A_{1}^{n}\wedge\Omega} :=\bigotimes_{i=1}^{n}V_{X_{i}\Theta_{i}A_{i}}\bar{\eta}_{X_{1}^ {n}\Theta_{1}^{n}A_{1}^{n}\wedge\Omega^{\prime}}\bigotimes_{i=1}^{n}V_{X_{i} \Theta_{i}A_{i}}^{\dagger}\] \[\leq 2^{nh\left(\epsilon+\delta\right)}\bigotimes_{i=1}^{n} \left(V_{X_{i}\Theta_{i}A_{i}}\ \rho_{X_{i}\Theta_{i}}\otimes\hat{\eta}_{A_{i}}^{(\epsilon+\delta)}V_{X_{i} \Theta_{i}A_{i}}^{\dagger}\right)\] \[=2^{nh\left(\epsilon+\delta\right)}\left(\hat{\rho}_{X\Theta A}^ {(\epsilon+\delta)}\right)^{\otimes n} \tag{90}\]
where \(\hat{\rho}_{X\Theta A}^{(\epsilon+\delta)}:=\left(1-2(\epsilon+\delta)\right)\hat{\rho}_{X\Theta A}+2(\epsilon+\delta)\hat{\rho}_{X\Theta}\otimes\tau_{A}\) for the maximally mixed state \(\tau_{A}\) on register \(A\). Using Eq. 87, we also have
\[\frac{1}{2}\left\|\bar{\rho}_{X_{1}^{n}\Theta_{1}^{n}A_{1}^{n} \wedge\Omega}-\bar{\bar{\rho}}_{X_{1}^{n}\Theta_{1}^{n}A_{1}^{n}\wedge\Omega }\right\|_{1}\leq\epsilon_{\text{qu}}^{\delta}. \tag{91}\]
The following lemma helps us relate distances between states after conditioning.
**Lemma 6.2**.: _Suppose \(\rho_{XA}=\sum_{x\in\mathcal{X}}p(x)\left|x\right>\left<x\right|\otimes\rho_{ A|x}\) and \(\tilde{\rho}_{XA}=\sum_{x\in\mathcal{X}}\tilde{p}(x)\left|x\right>\left<x\right| \otimes\tilde{\rho}_{A|x}\) are classical-quantum states such that \(\frac{1}{2}\left\|\rho_{XA}-\tilde{\rho}_{XA}\right\|_{1}\leq\epsilon\). Then, for \(x\in\mathcal{X}\) such that \(p(x)>0\), we have_
\[\frac{1}{2}\left\|\rho_{A|x}-\tilde{\rho}_{A|x}\right\|_{1}\leq\frac{2\epsilon }{p(x)} \tag{92}\]
Proof.: \[\frac{1}{2}\left\|\rho_{XA}-\tilde{\rho}_{XA}\right\|_{1}=\frac{1}{2} \sum_{x\in\mathcal{X}}\left\|p(x)\rho_{A|x}-\tilde{p}(x)\tilde{\rho}_{A|x} \right\|_{1}\leq\epsilon\]
This implies that for \(x\in\mathcal{X}\)
\[\frac{1}{2}\left\|p(x)\rho_{A|x}-\tilde{p}(x)\tilde{\rho}_{A|x} \right\|_{1}\leq\epsilon\]
and
\[\frac{1}{2}|p(x)-\tilde{p}(x)|\leq\epsilon.\]
Using these inequalities, we have
\[\frac{1}{2}\left\|\rho_{A|x}-\tilde{\rho}_{A|x}\right\|_{1} \leq\frac{1}{2}\left\|\rho_{A|x}-\frac{\tilde{p}(x)}{p(x)}\tilde{ \rho}_{A|x}\right\|_{1}+\frac{1}{2}\left|1-\frac{\tilde{p}(x)}{p(x)}\right| \left\|\tilde{\rho}_{A|x}\right\|_{1}\] \[=\frac{1}{p(x)}\frac{1}{2}\left\|p(x)\rho_{A|x}-\tilde{p}(x) \tilde{\rho}_{A|x}\right\|_{1}+\frac{1}{p(x)}\frac{1}{2}\left|p(x)-\tilde{p}( x)\right|\] \[\leq\frac{2\epsilon}{p(x)}.\]
Using the Lemma above, we have
\[\frac{1}{2}\left\|\bar{\rho}_{X_{1}^{n}\Theta_{1}^{n}A_{1}^{n}| \Omega}-\bar{\bar{\rho}}_{X_{1}^{n}\Theta_{1}^{n}A_{1}^{n}|\Omega}\right\|_{1} \leq\frac{2\epsilon_{\mathrm{qu}}^{\delta}}{\Pr_{\rho}(\Omega)} \tag{93}\]
where \(\Pr_{\rho}(\Omega):=\mathrm{tr}\left(\bar{\rho}_{X_{1}^{n}\Theta_{1}^{n}A_{1 }^{n}\wedge\Omega}\right)\) is the probability of the event \(\Omega\) when the testing procedure is applied to the state \(\rho\), and
\[\bar{\bar{\rho}}_{X_{1}^{n}\Theta_{1}^{n}A_{1}^{n}|\Omega} \leq\frac{2^{nh(\epsilon+\delta)}}{\Pr_{\tilde{\rho}}(\Omega)} \left(\hat{\rho}_{X\Theta A}^{(\epsilon+\delta)}\right)^{\otimes n}\] \[\leq\frac{2^{nh(\epsilon+\delta)}}{\Pr_{\rho}(\Omega)-\epsilon_{ \mathrm{qu}}^{\delta}}\left(\hat{\rho}_{X\Theta A}^{(\epsilon+\delta)}\right) ^{\otimes n} \tag{94}\]
where \(\Pr_{\tilde{\rho}}(\Omega):=\mathrm{tr}\left(\bar{\tilde{\rho}}_{X_{1}^{n}\Theta_{1}^{n}A_{1}^{n}\wedge\Omega}\right)\) is defined similarly to \(\Pr_{\rho}(\Omega)\). Together, these imply that
\[D_{\max}^{\epsilon_{f}}(\bar{\rho}_{X_{1}^{n}\Theta_{1}^{n}A_{1 }^{n}|\Omega}\|\left(\hat{\rho}_{X\Theta A}^{(\epsilon+\delta)}\right)^{ \otimes n})\leq nh(\epsilon+\delta)+\log\frac{1}{\Pr_{\rho}(\Omega)-\epsilon_ {\mathrm{qu}}^{\delta}} \tag{95}\]
where \(\epsilon_{f}=2\sqrt{\frac{\epsilon_{\text{qu}}^{\delta}}{\Pr_{\rho}(\Omega)}}\).
We now give an outline for bounding the smooth min-entropy for a BB84-QKD protocol, which uses an imperfect source. We give a complete formal proof in Section G. Let \(\Phi_{\text{QKD}}\) be the CPTP map denoting the action of the entire QKD protocol on the source states produced by Alice. In order to prove security for QKD, informally speaking, it is sufficient to prove a linear lower bound for8
Footnote 8: We also need to condition on the QKD protocol not aborting. We do this in Section G
\[H_{\min}^{\epsilon_{f}+\epsilon^{\prime}}(X_{S}|ET\Theta_{1}^{n}\hat{\Theta}_{ 1}^{n})_{\Phi_{\text{QKD}}(\bar{\rho}_{|\Omega})}.\]
Let us define the virtual state \(\sigma_{X_{1}^{n}\Theta_{1}^{n}A_{1}^{n}}\coloneqq\left(\hat{\rho}_{X\Theta A}^{(\epsilon+\delta)}\right)^{\otimes n}\). This state can be viewed as the state produced when each of the qubits produced by Alice is passed through a depolarising channel. Using Lemma 3.5, for an arbitrary \(\epsilon^{\prime}>0\), we have
\[H_{\min}^{\epsilon_{f}+\epsilon^{\prime}}(X_{S}|ET\Theta_{1}^{n }\hat{\Theta}_{1}^{n})_{\Phi_{\text{QKD}}(\bar{\rho}_{|\Omega})} \geq\tilde{H}_{\alpha}^{\dagger}(X_{S}|ET\Theta_{1}^{n}\hat{\Theta}_{1}^{n}) _{\Phi_{\text{QKD}}(\sigma)}\] \[\qquad\qquad\qquad-\frac{\alpha}{\alpha-1}D_{\max}^{\epsilon_{f}} (\Phi_{\text{QKD}}(\bar{\rho}_{|\Omega})\|\Phi_{\text{QKD}}(\sigma))-\frac{g_{ 1}(\epsilon^{\prime},\epsilon_{f})}{\alpha-1}\] \[\geq\tilde{H}_{\alpha}^{\dagger}(X_{S}|ET\Theta_{1}^{n}\hat{ \Theta}_{1}^{n})_{\Phi_{\text{QKD}}(\sigma)}-\frac{\alpha}{\alpha-1}nh( \epsilon+\delta)-\frac{O(1)}{\alpha-1}.\]
Thus, it is sufficient to bound the \(\alpha\)-Renyi conditional entropy \(\tilde{H}_{\alpha}^{\dagger}(X_{S}|ET\Theta_{1}^{n}\hat{\Theta}_{1}^{n})\) for the QKD protocol running on a noisy version of the perfect source. We can now simply use standard techniques developed for the security proofs of QKD to show a linear lower bound for this conditional entropy. In particular, source purification can be used for the source \(\sigma\). In Sec. G, we show how one can modify the security proof for BB84-QKD based on entropy accumulation to get the following bound.
**Theorem 6.3**.: _Suppose Alice uses the output of Protocol 3 (with error threshold \(\epsilon\)) as her source for the BB84 QKD protocol. Let \(\delta>0\) and assume that \(h(\epsilon+\delta)<\frac{1}{\sqrt{2}}\). Then, for_
\[\epsilon_{qu}^{\delta} =\sqrt{2}\exp\left(-\frac{n\delta^{2}}{2(n+2)}m\right) \tag{96}\] \[\epsilon_{pa} =2\left(\frac{2\epsilon_{qu}^{\delta}}{P_{\bar{\rho}}(\Omega \wedge\Upsilon^{\prime\prime})}\right)^{1/2} \tag{97}\]
_and \(\epsilon^{\prime}>0\), we have the following lower bound on the smooth min-entropy for the raw key produced during the BB84 protocol_
\[H_{\min}^{\epsilon_{pa}+\epsilon^{\prime}}(X_{S}|E\Theta_{1}^{n} \hat{\Theta}_{1}^{n}T)_{\Phi_{QKD}(\bar{\rho})_{|\Omega\Lambda\Upsilon^{\prime \prime}}}\] \[\quad\geq n(1-2\mu-h(e)-V\sqrt{2h(\epsilon+\delta)})-\sqrt{n} \left(\mu^{2}\ln(2)+2\log\frac{1}{\Pr_{\bar{\rho}}(\Omega\wedge\Upsilon^{ \prime\prime})}+g_{0}\left(\frac{\epsilon^{\prime}}{8}\right)\right)\] \[\quad-\frac{V}{\sqrt{2h(\epsilon+\delta)}}\left(\log\frac{1}{\Pr_ {\bar{\rho}}(\Omega\wedge\Upsilon^{\prime\prime})-2\epsilon_{qu}^{\delta}}+1 \right)-\frac{g_{1}(\frac{\epsilon^{\prime}}{2},\epsilon_{pa})}{2\sqrt{2h( \epsilon+\delta)}}V-\log|T|-3g_{0}\left(\frac{\epsilon^{\prime}}{8}\right) \tag{98}\]
_where \(V:=\frac{2}{\mu^{2}}\log\frac{1-e}{e}+2\log(1+2|\mathcal{X}|^{2})\), \(\Pr_{\bar{\rho}}(\Omega\wedge\Upsilon^{\prime\prime})\) is the probability of the event \(\Omega\wedge\Upsilon^{\prime\prime}\) for the state \(\Phi_{QKD}(\bar{\rho})\) and it is assumed that \(\Pr_{\bar{\rho}}(\Omega\wedge\Upsilon^{\prime\prime})>2\epsilon_{qu}^{\delta}\), \(g_{0}(x)=-\log(1-\sqrt{1-x^{2}})\) and \(g_{1}(x,y)=-\log(1-\sqrt{1-x^{2}})-\log(1-y^{2})\)._
We see above that the asymptotic key rate for the BB84 protocol using an imperfect source is \(V\sqrt{2h(\epsilon+\delta)}\) less than that of a protocol using a perfect source.
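To get a sense of the size of this penalty, the sketch below evaluates \(V\sqrt{2h(\epsilon+\delta)}\) together with the leading rate terms \(1-2\mu-h(e)\) from Theorem 6.3. The parameter values chosen here (\(\mu\), \(e\), \(\epsilon\), \(\delta\)) are purely illustrative and not taken from the theorem:

```python
import math

def h(p):
    """Binary entropy in bits (0 < p < 1)."""
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def asymptotic_rate(eps, delta, mu, e, X=2):
    """Leading-order key rate 1 - 2*mu - h(e) - V*sqrt(2 h(eps+delta)),
    with V as defined in Theorem 6.3 (logs base 2, |X| = X)."""
    V = (2 / mu**2) * math.log2((1 - e) / e) + 2 * math.log2(1 + 2 * X**2)
    penalty = V * math.sqrt(2 * h(eps + delta))
    return 1 - 2 * mu - h(e) - penalty, penalty

rate, penalty = asymptotic_rate(eps=1e-9, delta=1e-9, mu=0.1, e=0.02)
print(rate, penalty)   # even a tiny source error costs a visible rate fraction
```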
### Imperfect measurements
In our analysis above, we assumed that the measurements used in the source testing procedure are perfect. It should be noted that if the source produces states at a rate \(r_{s}\), then the measurement device is only used at an average rate \(\frac{m}{n+m}r_{s}\), which is much smaller than \(r_{s}\). So, the measurement devices have a much longer relaxation time than the source. As such, it should be easier to create almost "perfect" measurement devices than it is to create perfect sources.
In this section, we will show how measurement imperfections can also be incorporated in our analysis. Let \(\Lambda\left(\leq\epsilon|\gamma,x_{\gamma},\theta_{\gamma}\right)_{A_{\gamma}}\) be the POVM element associated with the source test passing, i.e., with measuring a relative weight less than \(\epsilon\) with respect to the encoded random bits given the choice of random subset \(\gamma\), encoded random bits \(x_{\gamma}\), and basis choice \(\theta_{\gamma}\). Informally speaking, in this subsection, we assume that this measurement measures the relative weight with an error at most \(\epsilon_{m}\) with high probability. To formally state our assumption, define
\[\hat{P}_{A_{\gamma}}^{x_{\gamma},\theta_{\gamma}}:=\bigotimes_{i\in\gamma}V_{A_{i}}^{x_{i},\theta_{i}}\left(\sum_{a_{\gamma}:\,\omega(a_{\gamma})\leq\epsilon+\epsilon_{m}}|a_{\gamma}\rangle\left\langle a_{\gamma}\right|_{A_{\gamma}}\right)\bigotimes_{j\in\gamma}\left(V_{A_{j}}^{x_{j},\theta_{j}}\right)^{\dagger} \tag{99}\] \[\hat{P}_{A_{\gamma}}^{\perp|x_{\gamma},\theta_{\gamma}}:=\mathds{1}_{A_{\gamma}}-\hat{P}_{A_{\gamma}}^{x_{\gamma},\theta_{\gamma}} \tag{100}\]
to be the projectors onto the subspaces with relative weight at most \(\epsilon+\epsilon_{m}\), and greater than \(\epsilon+\epsilon_{m}\), respectively, with respect to \(x_{\gamma}\) in the basis \(\theta_{\gamma}\). Here the parameter \(\epsilon\) is the same as the source error threshold in the previous section and \(\epsilon_{m}>0\) is a small parameter quantifying the measurement device error. The projector \(\hat{P}_{A_{\gamma}}^{x_{\gamma},\theta_{\gamma}}\) is the rotated version of the projector \(\hat{P}\) that was used for the measurement map in the previous section. In this section, we need to use the rotated version because the real measurements in an implementation will depend on the inputs \(\gamma,x_{\gamma}\) and \(\theta_{\gamma}\).
We assume that for some fixed small \(\xi>0\) the measurement elements \(\left\{\Lambda\left(\leq\epsilon|\gamma,x_{\gamma},\theta_{\gamma}\right)_{A_{ \gamma}}\right\}_{\gamma,x_{\gamma},\theta_{\gamma}}\) satisfy the following for every collection of states \(\left\{\sigma_{A_{\gamma}|x_{\gamma},\theta_{\gamma}}^{(\gamma)}\right\}_{ \gamma,x_{\gamma},\theta_{\gamma}}\):
\[\sum_{\gamma}p(\gamma)\sum_{x_{\gamma},\theta_{\gamma}}p(x_{\gamma},\theta_{ \gamma})\operatorname{tr}\left(\Lambda\left(\leq\epsilon|\gamma,x_{\gamma}, \theta_{\gamma}\right)_{A_{\gamma}}\hat{P}_{A_{\gamma}}^{\perp|x_{\gamma}, \theta_{\gamma}}\sigma_{A_{\gamma}|x_{\gamma}\theta_{\gamma}}^{(\gamma)}\hat{P }_{A_{\gamma}}^{\perp|x_{\gamma},\theta_{\gamma}}\right)\leq\xi. \tag{101}\]
Stated in words, we require that for any collection of states \(\left\{\sigma_{A_{\gamma}|x_{\gamma},\theta_{\gamma}}^{(\gamma)}\right\}_{\gamma,x_{\gamma},\theta_{\gamma}}\) with a relative weight larger than \(\epsilon+\epsilon_{m}\) (lying in the subspace corresponding to the projector \(\hat{P}_{A_{\gamma}}^{\perp|x_{\gamma},\theta_{\gamma}}\)), the probability that a weight less than \(\epsilon\) is measured is smaller than \(\xi\) when averaged over the choice of the random set \(\gamma\) and \(x_{\gamma},\theta_{\gamma}\). Using this assumption on the measurements, we will again derive a smooth max-relative entropy bound similar to Eq. 95. The smoothing parameter of this relative entropy, however, will depend on \(\xi\), which in turn implies that the privacy amplification error of the subsequent QKD protocol will be lower bounded by a function of \(\xi\). It does not seem that this dependence of the smoothing parameter on \(\xi\) can be avoided. For example, if the measurements measure a small weight for a set of large-weight states and the source emits those states, then they can be exploited by Eve to extract additional information during the QKD protocol. It also seems that we cannot use some kind of joint test for the source and measurement device (similar to Protocol 3) without an additional assumption to ensure that the weight measured by the measurement device is almost correct, since the source can always embed its information using an arbitrary unitary and the measurement can always decode that information using the same unitary.
I.i.d. measurements with error \(\epsilon_{m}^{\prime}\), or more generally measurements that are guaranteed to measure each input qubit correctly with probability at least \((1-\epsilon_{m}^{\prime})\) independently of the previous rounds, satisfy the above assumption for the choice of some \(\delta^{\prime}>0\), \(\epsilon_{m}=\epsilon_{m}^{\prime}+\delta^{\prime}\) and \(\xi=e^{-2m\delta^{\prime 2}}\) (using the Chernoff-Hoeffding bound). Both these examples consider measurements that measure the qubits \(A_{\gamma}\) in the provided basis \(\theta_{\gamma}\) to produce the results \(\hat{x}_{\gamma}\) and then use these results to test whether \(\omega(x_{\gamma}\oplus\hat{x}_{\gamma})\leq\epsilon\) or not. Additionally, since we average over the random set \(\gamma\) as well, it is possible to guarantee with high probability that for most test measurements the relaxation time of the measurement device is large. We leave the details for the specific measurement model for future work.
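As a quick illustration of the Chernoff-Hoeffding step for the i.i.d. error model, the Monte Carlo sketch below compares the empirical tail probability against \(e^{-2m\delta^{\prime 2}}\); the numbers (\(m\), \(\epsilon_{m}^{\prime}\), \(\delta^{\prime}\)) are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

def tail_vs_hoeffding(m, eps_m, delta, trials=20_000):
    """Estimate Pr[empirical error weight > eps_m + delta] for i.i.d.
    per-round errors of probability eps_m, and the Hoeffding bound."""
    errors = rng.random((trials, m)) < eps_m
    empirical = float((errors.mean(axis=1) > eps_m + delta).mean())
    bound = float(np.exp(-2 * m * delta**2))
    return empirical, bound

emp, bound = tail_vs_hoeffding(m=500, eps_m=0.05, delta=0.04)
assert emp <= bound   # xi = exp(-2 m delta'^2) upper-bounds the tail
print(emp, bound)
```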
For every \(x_{1}^{n+m}\) and \(\theta_{1}^{n+m}\), we define the following appropriately rotated versions of the projector \(\Pi_{A_{1}^{n+m}}^{\delta|\gamma}\) given by Theorem 6.1, so that we can compare the relative weight with
the string \(x_{1}^{n+m}\) in the basis given by \(\theta_{1}^{n+m}\).
\[\bar{\Pi}_{A_{1}^{n+m}}^{\delta|\gamma,x_{1}^{n+m},\theta_{1}^{n+m}}\coloneqq\bigotimes_{i=1}^{n+m}V_{A_{i}}^{x_{i},\theta_{i}}\Pi_{A_{1}^{n+m}}^{\delta|\gamma}\bigotimes_{j=1}^{n+m}(V_{A_{j}}^{x_{j},\theta_{j}})^{\dagger} \tag{102}\]
We use the state \(\eta\) defined in the previous section to define the state
\[\tilde{\rho}_{\Gamma X_{1}^{n+m}\Theta_{1}^{n+m}A_{1}^{n+m}}\coloneqq\bigotimes _{i=1}^{n+m}V_{X_{i}\Theta_{i}A_{i}}\eta_{\Gamma X_{1}^{n+m}\Theta_{1}^{n+m}A_ {1}^{n+m}}\bigotimes_{i=1}^{n+m}V_{X_{i}\Theta_{i}A_{i}}^{\dagger}. \tag{103}\]
Using the distance bound proven in Eq. 84 and the definition of \(\nu\) in Eq. 79, we have
\[\frac{1}{2}\left\|\rho_{\Gamma X_{1}^{n+m}\Theta_{1}^{n+m}A_{1}^{n+m}}-\tilde {\rho}_{\Gamma X_{1}^{n+m}\Theta_{1}^{n+m}A_{1}^{n+m}}\right\|_{1}\leq\epsilon _{\rm qu}^{\delta}\]
The conditional states \(\tilde{\rho}_{A_{1}^{n+m}|x_{1}^{n+m}\theta_{1}^{n+m}}^{(\gamma)}\) satisfy
\[\bar{\Pi}_{A_{1}^{n+m}}^{\delta|\gamma,x_{1}^{n+m},\theta_{1}^{n+m}}\tilde{\rho}_{A_{1}^{n+m}|x_{1}^{n+m}\theta_{1}^{n+m}}^{(\gamma)}\bar{\Pi}_{A_{1}^{n+m}}^{\delta|\gamma,x_{1}^{n+m},\theta_{1}^{n+m}}\] \[\quad=\bigotimes_{i=1}^{n+m}V_{A_{i}}^{x_{i},\theta_{i}}\Pi_{A_{1}^{n+m}}^{\delta|\gamma}\bigotimes_{j=1}^{n+m}(V_{A_{j}}^{x_{j},\theta_{j}})^{\dagger}\tilde{\rho}_{A_{1}^{n+m}|x_{1}^{n+m}\theta_{1}^{n+m}}^{(\gamma)}\bigotimes_{i=1}^{n+m}V_{A_{i}}^{x_{i},\theta_{i}}\Pi_{A_{1}^{n+m}}^{\delta|\gamma}\bigotimes_{j=1}^{n+m}(V_{A_{j}}^{x_{j},\theta_{j}})^{\dagger}\] \[\quad=\bigotimes_{i=1}^{n+m}V_{A_{i}}^{x_{i},\theta_{i}}\Pi_{A_{1}^{n+m}}^{\delta|\gamma}\eta_{A_{1}^{n+m}|x_{1}^{n+m}\theta_{1}^{n+m}}^{(\gamma)}\Pi_{A_{1}^{n+m}}^{\delta|\gamma}\bigotimes_{j=1}^{n+m}(V_{A_{j}}^{x_{j},\theta_{j}})^{\dagger}\] \[\quad=\bigotimes_{i=1}^{n+m}V_{A_{i}}^{x_{i},\theta_{i}}\eta_{A_{1}^{n+m}|x_{1}^{n+m}\theta_{1}^{n+m}}^{(\gamma)}\bigotimes_{j=1}^{n+m}(V_{A_{j}}^{x_{j},\theta_{j}})^{\dagger}\] \[\quad=\tilde{\rho}_{A_{1}^{n+m}|x_{1}^{n+m}\theta_{1}^{n+m}}^{(\gamma)}\]
where we have used the definition of \(\tilde{\rho}_{A_{1}^{n+m}|x_{1}^{n+m}\theta_{1}^{n+m}}^{(\gamma)}\) (Eq. 103) in the second equality, and Eq. 82 for the fourth line.
Let the event that the imperfect measurements measure a relative weight less than \(\epsilon\) be denoted by \(\Omega_{\rm im}\). We call the state produced after performing the (imperfect) measurements on the states \(\rho_{\Gamma X_{1}^{n+m}\Theta_{1}^{n+m}A_{1}^{n+m}}\), conditioning on the event \(\Omega_{\rm im}\) and tracing over the registers \(X_{\Gamma}\) and \(\Theta_{\Gamma}\) as \(\rho_{\Gamma X_{\Gamma}\Theta_{\Gamma}A_{\Gamma}\wedge\Omega_{\rm im}}\). Similarly, we let \(\tilde{\rho}_{\Gamma X_{\Gamma}\Theta_{\Gamma}A_{\Gamma}\wedge\Omega_{\rm im}}^{\prime}\) denote the state produced
when this subnormalised map is applied to \(\hat{\rho}_{\Gamma X_{1}^{n+m}\Theta_{1}^{n+m}A_{1}^{n+m}}\). We have that
\[\hat{\rho}_{\Gamma X_{\Gamma}\Theta_{\Gamma}A_{\Gamma\Lambda\Omega_ {\mathrm{im}}}}^{\prime} =\sum_{\gamma,x_{\bar{\gamma}},\theta_{\bar{\gamma}}}p(\gamma)p(x_{ \bar{\gamma}},\theta_{\bar{\gamma}})\ket{\gamma,x_{\bar{\gamma}},\theta_{\bar {\gamma}}}\bra{\gamma,x_{\bar{\gamma}},\theta_{\bar{\gamma}}}\otimes\] \[\sum_{x_{\gamma},\theta_{\gamma}}p(x_{\gamma},\theta_{\gamma}) \operatorname{tr}_{A_{\gamma}}\left(\Lambda\left(\leq\epsilon|\gamma,x_{\gamma },\theta_{\gamma}\right)_{A_{\gamma}}\hat{P}_{A_{\gamma}}^{\left(\gamma\right)} \hat{P}_{A_{\gamma}A_{\bar{\gamma}}|x_{1}^{n+m}\theta_{1}^{n+m}}^{\left(\gamma \right)}\right)\] \[\leq 2\sum_{\gamma,x_{\bar{\gamma}},\theta_{\bar{\gamma}}}p( \gamma)p(x_{\bar{\gamma}},\theta_{\bar{\gamma}})\ket{\gamma,x_{\bar{\gamma}}, \theta_{\bar{\gamma}}}\bra{\gamma,x_{\bar{\gamma}},\theta_{\bar{\gamma}}}\otimes\] \[\qquad\qquad\qquad\left[\sum_{x_{\gamma},\theta_{\gamma}}p(x_{ \gamma},\theta_{\gamma})\operatorname{tr}_{A_{\gamma}}\left(\Lambda\left(\leq \epsilon|\gamma,x_{\gamma},\theta_{\gamma}\right)_{A_{\gamma}}\hat{P}_{A_{ \gamma}}^{x_{\gamma},\theta_{\gamma}}\hat{\rho}_{A_{\gamma}A_{\bar{\gamma}}|x _{1}^{n+m}\theta_{1}^{n+m}}^{\left(\gamma\right)}\hat{P}_{A_{\gamma}}^{x_{ \gamma},\theta_{\gamma}}\right)\right.\] \[\qquad\qquad\qquad+\sum_{x_{\gamma},\theta_{\gamma}}p(x_{\gamma}, \theta_{\gamma})\operatorname{tr}_{A_{\gamma}}\left(\Lambda\left(\leq\epsilon| \gamma,x_{\gamma},\theta_{\gamma}\right)_{A_{\gamma}}\hat{P}_{A_{\gamma}}^{ \left|x_{\gamma},\theta_{\gamma}\right)}\tilde{\rho}_{A_{\gamma}A_{\bar{\gamma }}|x_{1}^{n+m}\theta_{1}^{n+m}}^{\left(\gamma\right)}\hat{P}_{A_{\gamma}}^{ \lambda|x_{\gamma},\theta_{\gamma}}\right)\right]\] \[\leq 2\sum_{\gamma,x_{\bar{\gamma}},\theta_{\gamma}}p(\gamma)p(x_{ \bar{\gamma}},\theta_{\bar{\gamma}})\ket{\gamma,x_{\bar{\gamma}},\theta_{\bar {\gamma}}}\bra{\gamma,x_{\bar{\gamma}},\theta_{\bar{\gamma}}}\otimes\] \[\qquad\qquad\sum_{x_{\gamma},\theta_{\gamma}}p(x_{\gamma}, 
\theta_{\gamma})\operatorname{tr}_{A_{\gamma}}\left(\Lambda\left(\leq\epsilon| \gamma,x_{\gamma},\theta_{\gamma}\right)_{A_{\gamma}}\hat{P}_{A_{\gamma}}^{x_{ \gamma},\theta_{\gamma}}\tilde{\rho}_{A_{\gamma}A_{\bar{\gamma}}|x_{1}^{n+m} \theta_{1}^{n+m}}^{\left(\gamma\right)}\hat{P}_{A_{\gamma}}^{x_{\gamma}, \theta_{\gamma}}\right)\] \[\qquad\qquad\qquad+2\xi\mu_{\Gamma X_{\Gamma}\Theta_{\Gamma}A_{ \Gamma}}\]
where we have used the pinching inequality (Lemma 5.2 with \(t=1\)) in the second line, defined the state \(\mu_{\Gamma X_{\Gamma}\Theta_{\Gamma}A_{\Gamma}}\) as the normalization of the state
\[\sum_{\gamma,x_{\bar{\gamma}},\theta_{\bar{\gamma}}}p(\gamma)p(x_{ \bar{\gamma}},\theta_{\bar{\gamma}})\ket{\gamma,x_{\bar{\gamma}},\theta_{\bar {\gamma}}}\bra{\gamma,x_{\bar{\gamma}},\theta_{\bar{\gamma}}}\otimes\] \[\sum_{x_{\gamma},\theta_{\gamma}}p(x_{\gamma},\theta_{\gamma}) \operatorname{tr}_{A_{\gamma}}\left(\Lambda\left(\leq\epsilon|\gamma,x_{\gamma}, \theta_{\gamma}\right)_{A_{\gamma}}\hat{P}_{A_{\gamma}}^{\perp|x_{\gamma}, \theta_{\gamma}}\hat{\rho}_{A_{\gamma}A_{\bar{\gamma}}|x_{1}^{n+m}\theta_{1}^ {n+m}}^{\left(\gamma\right)|x_{\gamma},\theta_{\gamma}}\right)\]
and used
\[\operatorname{tr}\left(\sum_{\gamma,x_{\bar{\gamma}},\theta_{ \gamma}}p(\gamma)p(x_{\bar{\gamma}},\theta_{\bar{\gamma}})\ket{\gamma,x_{\bar{ \gamma}},\theta_{\bar{\gamma}}}\bra{\gamma,x_{\bar{\gamma}},\theta_{\bar{ \gamma}}}\otimes\right.\] \[\qquad\qquad\left.\sum_{x_{\gamma},\theta_{\gamma}}p(x_{\gamma}, \theta_{\gamma})\operatorname{tr}_{A_{\gamma}}\left(\Lambda\left(\leq\epsilon| \gamma,x_{\gamma},\theta_{\gamma}\right)_{A_{\gamma}}\hat{P}_{A_{\gamma}}^{ \perp|x_{\gamma},\theta_{\gamma}}\hat{\rho}_{A_{\gamma}A_{\bar{\gamma}}|x_{1}^ {n+m}\theta_{1}^{n+m}}^{\left(\gamma\right)}\hat{P}_{A_{\gamma}}^{\perp|x_{ \gamma},\theta_{\gamma}}\right)\right)\] \[=\sum_{\gamma,x_{\bar{\gamma}},\theta_{\bar{\gamma}}}p(\gamma)p(x_ {\bar{\gamma}},\theta_{\bar{\gamma}})\sum_{x_{\gamma},\theta_{\gamma}}p(x_{ \gamma},\theta_{\gamma})\operatorname{tr}\left(\Lambda\left(\leq\epsilon| \gamma,x_{\gamma},\theta_{\gamma}\right)_{A_{\gamma}}\hat{P}_{A_{\gamma}}^{ \perp|x_{\gamma},\theta_{\gamma}}\left(\sum_{x_{\gamma},\theta_{\gamma}}p(x_{ \bar{\gamma}},\theta_{\bar{\gamma}})\hat{\rho}_{A_{\gamma}|x_{1}^{n+m}\theta_{1 }^{n+m}}^{\left(\gamma\right)}\hat{P}_{A_{\gamma}}^{\perp|x_{\gamma},\theta_{ \gamma}}\right)\right.\] \[\leq\xi,\]
which follows from our assumption about the measurements (Eq. 101). Therefore, we have
\[\tilde{\rho}^{\prime}_{\Gamma X_{\Gamma}}\Theta_{\Gamma A_{\Gamma} \Lambda\Omega_{\mathrm{im}}} \leq 2\sum_{\gamma,x_{\bar{\gamma}},\theta_{\bar{\gamma}}}p(\gamma)p (x_{\bar{\gamma}},\theta_{\bar{\gamma}})\left|\gamma,x_{\bar{\gamma}},\theta_ {\bar{\gamma}}\right\rangle\left\langle\gamma,x_{\bar{\gamma}},\theta_{\bar{ \gamma}}\right|\otimes\] \[\sum_{x_{\gamma},\theta_{\gamma}}p(x_{\gamma},\theta_{\gamma}) \operatorname{tr}_{A_{\gamma}}\left(\Lambda\left(\leq\epsilon|\gamma,x_{\gamma },\theta_{\gamma}\right)_{A_{\gamma}}\hat{P}^{x_{\gamma},\theta_{\gamma}}_{A_{ \gamma}}\tilde{\rho}^{(\gamma)}_{A_{\gamma}A_{\bar{\gamma}}|x_{1}^{n+m}\theta_ {1}^{n+m}}\hat{P}^{x_{\gamma},\theta_{\gamma}}_{A_{\gamma}}\right)\] \[\qquad\qquad+2\xi\mu_{\Gamma X_{\Gamma}\Theta_{\Gamma}A_{\Gamma}}\] \[\leq 2\sum_{\gamma,x_{\bar{\gamma}},\theta_{\bar{\gamma}}}p( \gamma)p(x_{\bar{\gamma}},\theta_{\bar{\gamma}})\left|\gamma,x_{\bar{\gamma}},\theta_{\bar{\gamma}}\right\rangle\left\langle\gamma,x_{\bar{\gamma}},\theta_ {\bar{\gamma}}\right|\otimes\] \[\sum_{x_{\gamma},\theta_{\gamma}}p(x_{\gamma},\theta_{\gamma}) \operatorname{tr}_{A_{\gamma}}\left(\hat{P}^{x_{\gamma},\theta_{\gamma}}_{A_{ \gamma}}\tilde{\rho}^{(\gamma)}_{A_{\gamma}A_{\bar{\gamma}}|x_{1}^{n+m}\theta_ {1}^{n+m}}\right)+2\xi\mu_{\Gamma X_{\Gamma}\Theta_{\Gamma}A_{\Gamma}}\] \[\leq 2\tilde{\rho}^{(\epsilon+\epsilon_{m})}_{\Gamma X_{\Gamma} \Theta_{\Gamma}A_{\Gamma}\Lambda\Omega_{\mathrm{im}}}+2\xi\mu_{\Gamma X_{ \Gamma}\Theta_{\Gamma}A_{\Gamma}}\]
where the state \(\tilde{\rho}^{(\epsilon+\epsilon_{m})}_{\Gamma X_{\Gamma}\Theta_{\Gamma}A_{\Gamma}\wedge\Omega_{\mathrm{im}}}\) is the state produced when the perfect measurement is used to measure \(A_{\gamma}\) and the state \(\tilde{\rho}_{\Gamma X_{1}^{n+m}\Theta_{1}^{n+m}A_{1}^{n+m}}\) is conditioned on the event that the relative weight of the measured results with respect to the string contained in \(X_{\gamma}\) is less than \(\epsilon+\epsilon_{m}\). This is the state that was used in the previous section to derive the smooth max-relative entropy bound; the only difference is that the threshold for the relative weight of the perfect measurement in the last section was \(\epsilon\). Thus, we can use the previously derived bound in Eq. 90 for this state by simply replacing \(\epsilon\) with \(\epsilon+\epsilon_{m}\). Relabelling the remaining registers between \(1\) and \(n\), tracing over the \(\Gamma\) register and using Eq. 90, we get
\[\tilde{\rho}^{\prime}_{X_{1}^{n}\Theta_{1}^{n}A_{1}^{n}\wedge\Omega_{\mathrm{im}}} \leq 2\tilde{\rho}^{(\epsilon+\epsilon_{m})}_{X_{1}^{n}\Theta_{1}^{n}A_{1}^{n}\wedge\Omega_{\mathrm{im}}}+2\xi\mu_{X_{1}^{n}\Theta_{1}^{n}A_{1}^{n}}\] \[\leq 2^{nh\left(\epsilon+\epsilon_{m}+\delta\right)+1}\left(\hat{\rho}^{(\epsilon+\epsilon_{m}+\delta)}_{X\Theta A}\right)^{\otimes n}+2\xi\mu_{X_{1}^{n}\Theta_{1}^{n}A_{1}^{n}} \tag{104}\]
where \(\hat{\rho}^{(\epsilon+\epsilon_{m}+\delta)}_{X\Theta A}:=\left(1-2(\epsilon+ \epsilon_{m}+\delta)\right)\hat{\rho}_{X\Theta A}+2(\epsilon+\epsilon_{m}+ \delta)\hat{\rho}_{X\Theta}\otimes\tau_{A}\). As before using the data processing inequality, we have
\[\frac{1}{2}\left\|\rho^{\prime}_{X_{1}^{n}\Theta_{1}^{n}A_{1}^{n}\wedge\Omega_{\mathrm{im}}}-\tilde{\rho}^{\prime}_{X_{1}^{n}\Theta_{1}^{n}A_{1}^{n}\wedge\Omega_{\mathrm{im}}}\right\|_{1}\leq\epsilon^{\delta}_{\mathrm{qu}}. \tag{105}\]
Using Lemma 6.2, the conditional states satisfy
\[\frac{1}{2}\left\|\rho^{\prime}_{X_{1}^{n}\Theta_{1}^{n}A_{1}^{n}|\Omega_{ \mathrm{im}}}-\tilde{\rho}^{\prime}_{X_{1}^{n}\Theta_{1}^{n}A_{1}^{n}|\Omega_ {\mathrm{im}}}\right\|_{1}\leq\frac{2\epsilon^{\delta}_{\mathrm{qu}}}{P_{ \rho}(\Omega_{\mathrm{im}})} \tag{106}\]
for \(P_{\rho}(\Omega_{\mathrm{im}}):=\operatorname{tr}\left(\rho^{\prime}_{X_{1}^{n}\Theta_{1}^{n}A_{1}^{n}\wedge\Omega_{\mathrm{im}}}\right)\), defined as the probability that Protocol 3 does not abort with the imperfect measurements, and
\[\tilde{\rho}^{\prime}_{X_{1}^{n}\Theta_{1}^{n}A_{1}^{n}|\Omega_{m}}\leq\frac{2^ {nh\left(\epsilon+\epsilon_{m}+\delta\right)+1}}{P_{\tilde{\rho}}(\Omega_{m})} \left(\hat{\rho}^{(\epsilon+\epsilon_{m}+\delta)}_{X\Theta A}\right)^{\otimes n }+\frac{4\xi}{P_{\tilde{\rho}}(\Omega_{m})}\frac{\mu_{X_{1}^{n}\Theta_{1}^{n}A_ {1}^{n}}}{2}. \tag{107}\]
where \(P_{\tilde{\rho}}(\Omega_{\rm im}):={\rm tr}\left(\tilde{\rho}^{\prime}_{X_{1}^{n}\Theta_{1}^{n}A_{1}^{n}\wedge\Omega_{\rm im}}\right)\). For \(0<\mu<1\), the hypothesis testing relative entropy [21] is defined as
\[D_{h}^{\mu}(\rho\|\sigma):=-\inf\left\{\log{\rm tr}(\sigma Q):0\leq\mu Q\leq 1,\ \text{and}\ \operatorname{tr}(\rho Q)\geq 1\right\}. \tag{108}\]
Equivalently, using semidefinite programming duality (see [21]) it can be shown that
\[D_{h}^{\mu}(\rho\|\sigma) =-\sup\left\{\log(\lambda-{\rm tr}(Y)):Y\geq 0,\lambda\geq 0,\ \text{and}\ \lambda\rho\leq\sigma+\mu Y\right\} \tag{109}\] \[=\inf\left\{\log\lambda^{\prime}-\log(1-{\rm tr}(Z)):Z\geq 0, \lambda^{\prime}\geq 0,\ \text{and}\ \rho\leq\lambda^{\prime}\sigma+\mu Z\right\}. \tag{110}\]
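When \(\rho\) and \(\sigma\) commute (e.g. classical distributions), the infimum in Eq. 108 is a finite linear program: minimize \(\sigma\cdot q\) subject to \(0\leq q\leq 1/\mu\) and \(\rho\cdot q\geq 1\). A minimal sketch using `scipy` (the two distributions below are illustrative):

```python
import numpy as np
from scipy.optimize import linprog

def d_hyp_classical(rho, sigma, mu):
    """D_h^mu(rho||sigma) of Eq. 108 for probability vectors rho, sigma:
    -log2 min sigma.q  s.t.  0 <= q <= 1/mu  and  rho.q >= 1."""
    rho, sigma = np.asarray(rho, float), np.asarray(sigma, float)
    res = linprog(c=sigma, A_ub=-rho.reshape(1, -1), b_ub=[-1.0],
                  bounds=[(0.0, 1.0 / mu)] * len(rho), method="highs")
    assert res.success
    return -np.log2(res.fun)

rho = [0.9, 0.1]
sigma = [0.5, 0.5]
print(d_hyp_classical(rho, sigma, mu=0.25))  # optimum puts q on the rho-heavy outcome
```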
Thus, Eq. 107 implies
\[D_{h}^{\mu}(\tilde{\rho}^{\prime}_{X_{1}^{n}\Theta_{1}^{n}A_{1}^{n}|\Omega_{ m}}\|(\hat{\rho}^{(\epsilon+\epsilon_{m}+\delta)}_{X\Theta A})^{\otimes n}) \leq nh(\epsilon+\epsilon_{m}+\delta)+2+\log\frac{1}{P_{\tilde{\rho}}(\Omega_ {m})} \tag{111}\]
for \(\mu:=\frac{4\xi}{P_{\tilde{\rho}}(\Omega_{m})}\). Using [21, Theorem 5.11] (originally proven in [1]), this implies that9
Footnote 9: The smoothing for \(D_{\rm max}^{\epsilon}(\rho\|\sigma)\) in [21] is defined using the trace distance instead of purified distance, which we use here. It can, however, be verified that the proof there also works with purified distance.
\[D_{\rm max}^{\sqrt{\mu}}(\tilde{\rho}^{\prime}_{X_{1}^{n}\Theta_{1}^{n}A_{1} ^{n}|\Omega_{m}}\|(\hat{\rho}^{(\epsilon+\epsilon_{m}+\delta)}_{X\Theta A})^{ \otimes n})\leq nh(\epsilon+\epsilon_{m}+\delta)+2+\log\frac{1}{P_{\tilde{ \rho}}(\Omega_{m})}+\log\frac{1}{\mu(1-\mu)} \tag{112}\]
Using the triangle inequality, we can state this in terms of the real state \(\rho^{\prime}_{X_{1}^{n}\Theta_{1}^{n}A_{1}^{n}|\Omega_{m}}\):
\[D_{\rm max}^{\epsilon f}(\rho^{\prime}_{X_{1}^{n}\Theta_{1}^{n}A _{1}^{n}|\Omega_{m}}\|\left(\hat{\rho}^{(\epsilon+\epsilon_{m}+\delta)}_{X \Theta A}\right)^{\otimes n}) \leq nh(\epsilon+\epsilon_{m}+\delta)+2+\log\frac{1}{P_{\rho}( \Omega_{m})-\epsilon_{\rm qu}^{\delta}}\] \[+\log\frac{1}{4\xi(P_{\rho}(\Omega_{m})-\epsilon_{\rm qu}^{ \delta}-4\xi)} \tag{113}\]
for \(\epsilon_{f}:=\frac{2\xi^{1/2}}{\sqrt{P_{\rho}(\Omega_{m})-\epsilon_{\rm qu}^{\delta}}}+2\sqrt{\frac{\epsilon_{\rm qu}^{\delta}}{P_{\rho}(\Omega_{m})}}\). Note that if \(\xi=\exp(-\Omega(m))\), then the last term in the bound above adds \(O(m)\) to the smooth max-relative entropy, so \(\xi\) cannot be chosen to be too small (this seems to be an artifact of the bound in [21, Theorem 5.11], and it should be possible to improve this dependence). One can use the above bound in place of Eq. 95 to prove a smooth min-entropy lower bound for the QKD protocol.
### Discussion and future work
Theorem 6.3 gives a simple bound on the smooth min-entropy relevant for the BB84 protocol in Protocol 2. With a source error of \(\epsilon\), the rate of the QKD protocol decreases by \(O((\epsilon\log\frac{1}{\epsilon})^{1/2})\) and the privacy amplification error can be made arbitrarily small assuming perfect measurements are used for the source test. For imperfect measurements,
under a very broad assumption, we showed that the rate decrease is similar to the perfect case and the privacy amplification error depends on the error of the measurements. The measurement error too can be made arbitrarily small under further reasonable physical assumptions, like independence of the measurement errors. We leave the details of such a physical model and its relation to our assumption on the measurements for future work. It should also be noted that if the source is known to pass the source test with a high probability (which can be made arbitrarily close to 1), say \(1-\epsilon_{s}\), then the source test need not even be performed before the QKD protocol. The error \(\epsilon_{s}\) can simply be added to the QKD security parameter.
## Acknowledgments
We would like to thank Omar Fawzi for interesting discussions and for pointing out Lemma B.2. We also thank Ernest Tan and Amir Arqand, whose observations helped improve Theorem 5.1. We are also grateful to Ernest Tan and Shlok Ashok Nahar, who explained the source correlation problem for QKD to us during and after the QKD Security Proof Workshop at the Institute of Quantum Computing, Waterloo and also for their comments on a manuscript of this paper. AM was supported by the J.A. DeSève Foundation and by a Bourse d'excellence Google. This work was also supported by the Natural Sciences and Engineering Research Council of Canada.
## Appendix A Entropic triangle inequalities cannot be improved much
In this section, we will construct a classical counterexample to show that it is not possible to improve Lemma 3.5 to get a result like
\[H_{\min}^{\epsilon^{\prime}}(A|B)_{\rho}\geq H_{\min}^{\epsilon}(A|B)_{\eta}-O (D_{\max}^{\epsilon^{\prime\prime}}(\rho\|\eta)) \tag{114}\]
where \(\epsilon,\epsilon^{\prime}>0\) and the constant in front of \(D_{\max}^{\epsilon^{\prime\prime}}(\rho\|\eta)\) is independent of the dimensions \(|A|\) and \(|B|\).
Consider the probability distribution \(p_{AB}\), where \(B\) is set to 1 with probability \(1-\epsilon\) and to 0 with probability \(\epsilon\), and \(A\) is chosen to be a uniformly random \(n\)-bit string if \(B=1\) and the all-zero string otherwise. Let \(E\) be the event that \(B=0\). Then, we have
\[p_{AB|E}\leq\frac{1}{p(E)}p_{AB}=\frac{1}{\epsilon}p_{AB}\]
or equivalently \(D_{\max}(p_{AB|E}\|p_{AB})\leq\log\frac{1}{\epsilon}\). In this case, we have \(H^{\epsilon}_{\min}(A|B)_{p}=n\) (where we smooth in the trace distance) and \(H^{\epsilon^{\prime}}_{\min}(A|B)_{p_{|E}}=\log\frac{1}{1-\epsilon^{\prime}}=O(1)\) (independent of \(n\)). If Eq. 114 were true, then we would have
\[n-O\left(\log\frac{1}{\epsilon}\right) \leq H^{\epsilon}_{\min}(A|B)_{p}-O(D^{\epsilon^{\prime\prime}}_{\max}(p_{AB|E}\|p_{AB}))\] \[\leq H^{\epsilon^{\prime}}_{\min}(A|B)_{p_{|E}}=O(1)\]
which would lead to a contradiction because \(n\) is a free parameter and we can let \(n\to\infty\).
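The distributions in this counterexample are small enough to compute with directly. The following sketch (our numerical illustration, not part of the paper; all logarithms base 2, and variable names are ours) builds \(p_{AB}\) for a small \(n\), verifies \(D_{\max}(p_{AB|E}\|p_{AB})\leq\log\frac{1}{\epsilon}\), and exhibits the min-entropy gap between the two sides:

```python
import itertools, math

n, eps = 8, 0.01

# Joint distribution p_{AB}: B = 1 w.p. 1 - eps and A uniform over n-bit
# strings; B = 0 w.p. eps and A the all-zero string.
p = {}
for a in itertools.product([0, 1], repeat=n):
    p[(a, 1)] = (1 - eps) / 2**n
p[((0,) * n, 0)] = eps

# Conditioning on the event E = {B = 0}.
p_E = {k: v / eps for k, v in p.items() if k[1] == 0}

# D_max(p_{|E} || p) is the log of the maximum likelihood ratio.
d_max = math.log2(max(p_E[k] / p[k] for k in p_E))
assert d_max <= math.log2(1 / eps) + 1e-9

# Classical H_min(A|B) = -log sum_b max_a p(a, b) (guessing probability).
def h_min(dist):
    best = {}
    for (a, b), v in dist.items():
        best[b] = max(best.get(b, 0.0), v)
    return -math.log2(sum(best.values()))

# Conditioned on B = 1 (removing the event E costs only eps of smoothing),
# A is uniform, so the min-entropy is n bits.
p_1 = {k: v / (1 - eps) for k, v in p.items() if k[1] == 1}
print(h_min(p_1))  # n bits
print(h_min(p_E))  # 0 bits: A is deterministic given the event E
```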
The same example can be used to show that it is not possible to improve Corollary 3.6 to an equation of the form
\[H(A|B)_{\rho}\geq H(A|B)_{\eta}-O(D(\rho\|\eta)).\]
For \(\rho=p_{|E}\) and \(\eta=p\), such a bound would imply that
\[0\geq(1-\epsilon)n-\log\frac{1}{\epsilon}\]
which is not true for large \(n\).
## Appendix B Bounds for \(D^{\#}_{\alpha}\) of the form in Lemma 5.3 necessarily diverge in the limit \(\alpha=1\)
Classically, we have the following bound for Renyi divergences.
**Lemma B.1**.: _Suppose \(\epsilon\in(0,1]\), \(d\geq\epsilon^{1/2}\), and \(p\) and \(q\) are two distributions over an alphabet \(\mathcal{X}\) such that \(\frac{1}{2}\left\|p-q\right\|_{1}\leq\epsilon\) and \(D_{\max}(p\|q)\leq d<\infty\), for \(\alpha>1\) we have_
\[D_{\alpha}(p\|q)\leq\frac{1}{\alpha-1}\log\left((1+\sqrt{\epsilon})^{\alpha-1} (1-2\sqrt{\epsilon})+2^{d(\alpha-1)+1}\sqrt{\epsilon}\right). \tag{115}\]
_In the limit, \(\alpha\to 1\), we get the bound_
\[D(p\|q)\leq(1-2\sqrt{\epsilon})\log(1+\sqrt{\epsilon})+2\sqrt{\epsilon}d. \tag{116}\]
Proof.: Classically, we have that the set \(S:=\{x\in\mathcal{X}:p(x)\leq(1+\sqrt{\epsilon})q(x)\}\) is such that \(p(S)\geq 1-2\sqrt{\epsilon}\) using Lemma 4.1. Thus, for \(\alpha>1\) we have
\[\sum_{x\in\mathcal{X}}p(x)\left(\frac{p(x)}{q(x)}\right)^{\alpha-1} =\sum_{x\in S}p(x)\left(\frac{p(x)}{q(x)}\right)^{\alpha-1}+\sum_{ x\notin S}p(x)\left(\frac{p(x)}{q(x)}\right)^{\alpha-1}\] \[\leq\sum_{x\in S}(1+\sqrt{\epsilon})^{\alpha-1}p(x)+\sum_{x\notin S }2^{d(\alpha-1)}p(x)\] \[=(1+\sqrt{\epsilon})^{\alpha-1}p(S)+2^{d(\alpha-1)}p(S^{c})\] \[\leq(1+\sqrt{\epsilon})^{\alpha-1}(1-2\sqrt{\epsilon})+2^{d( \alpha-1)+1}\sqrt{\epsilon}\]
where in the second line we used the definition of the set \(S\) and the fact that \(D_{\max}(p\|q)\leq d\), and in the last line we used that, since \(d\geq\sqrt{\epsilon}\geq\log(1+\sqrt{\epsilon})\), the convex combination is maximised at the largest possible value of \(p(S^{c})\), which is \(2\sqrt{\epsilon}\). The bound now follows.
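As an illustration (ours, not part of the proof), Eq. 115 can be checked numerically for a concrete pair of distributions, with \(\epsilon\) and \(d\) computed from the pair itself:

```python
import math

# Two fixed distributions on a 4-letter alphabet (an arbitrary choice of ours).
p = [0.50, 0.30, 0.15, 0.05]
q = [0.30, 0.30, 0.30, 0.10]

eps = 0.5 * sum(abs(a - b) for a, b in zip(p, q))    # trace distance = 0.2
d = max(math.log2(a / b) for a, b in zip(p, q))      # D_max(p||q)
assert d >= math.sqrt(eps)                           # hypothesis d >= eps^{1/2}

def renyi_div(p, q, alpha):
    # D_alpha(p||q) = 1/(alpha-1) * log sum_x p(x) (p(x)/q(x))^{alpha-1}
    s = sum(a * (a / b) ** (alpha - 1) for a, b in zip(p, q))
    return math.log2(s) / (alpha - 1)

def lemma_bound(eps, d, alpha):
    # Right-hand side of Eq. 115.
    r = math.sqrt(eps)
    val = (1 + r) ** (alpha - 1) * (1 - 2 * r) + 2 ** (d * (alpha - 1) + 1) * r
    return math.log2(val) / (alpha - 1)

for alpha in [1.1, 1.5, 2.0, 3.0]:
    assert renyi_div(p, q, alpha) <= lemma_bound(eps, d, alpha) + 1e-12
```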
We observed in Section 5.1 that the bound in Lemma 5.3 for \(D_{\alpha}^{\#}\) tends to \(\infty\) as \(\alpha\to 1\) for a fixed \(\epsilon>0\). One may wonder if a bound like Eq. 116 exists for \(\lim_{\alpha\to 1}D_{\alpha}^{\#}(\rho\|\sigma)=\hat{D}(\rho\|\sigma)\)[1]. We show in the following that such a bound is not possible.
Suppose that for all \(\epsilon\in[0,a)\) (a small neighborhood of \(0\)), all \(1\leq d<\infty\), and all states \(\rho\) and \(\sigma\) which satisfy \(\frac{1}{2}\left\|\rho-\sigma\right\|_{1}\leq\epsilon\) and \(\rho\leq 2^{d}\sigma\), the following bound holds
\[\hat{D}(\rho\|\sigma)\leq f(\epsilon,d) \tag{117}\]
where \(f(\epsilon,d)\) is such that \(\lim_{\epsilon\to 0}f(\epsilon,d)=f(0,d)=0\) for every \(1\leq d<\infty\). Note that the upper bound in Eq. 116 is of this form. It is known that for pure states \(\rho\), \(\hat{D}(\rho\|\sigma)=D_{\max}(\rho\|\sigma)\). We will use this to construct a contradiction.
**Lemma B.2**.: 10 _For a pure state \(\rho=\left|\rho\right\rangle\left\langle\rho\right|\) and a state \(\sigma\), we have_
Footnote 10: This Lemma was pointed out to us by Omar Fawzi.
\[\hat{D}(\rho\|\sigma)=D_{\max}(\rho\|\sigma)=\log\left\langle\rho\right|\sigma^{-1}\left|\rho\right\rangle.\]
Proof.: First, we can evaluate \(\hat{D}\) as
\[\hat{D}(\rho\|\sigma) =\operatorname{tr}\left(\rho\log\left(\rho^{\frac{1}{2}}\sigma^{ -1}\rho^{\frac{1}{2}}\right)\right)\] \[=\operatorname{tr}\left(\left|\rho\right\rangle\left\langle\rho \right|\log\left(\left|\rho\right\rangle\left\langle\rho\right|\sigma^{-1} \left|\rho\right\rangle\left\langle\rho\right|\right)\right)\] \[=\operatorname{tr}\left(\left|\rho\right\rangle\left\langle\rho \right|\log\left(\left\langle\rho\right|\sigma^{-1}\left|\rho\right\rangle \left|\rho\right\rangle\left\langle\rho\right|\right)\right.\] \[=\log\left\langle\rho\right|\sigma^{-1}\left|\rho\right\rangle.\]
Next, we have that
\[D_{\max}(\rho\|\sigma) =\log\left\|\sigma^{-\frac{1}{2}}\rho\sigma^{-\frac{1}{2}} \right\|_{\infty}\] \[=\log\left\|\sigma^{-\frac{1}{2}}\left|\rho\right\rangle\left\langle \rho\right|\sigma^{-\frac{1}{2}}\right\|_{\infty}\] \[=\log\left\langle\rho\right|\sigma^{-1}\left|\rho\right\rangle.\]
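Lemma B.2 is easy to spot-check numerically for a random pure state and a random full-rank \(\sigma\) (a sketch of ours, not from the paper; logarithms base 2):

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 4

# Random pure state |psi>.
v = rng.normal(size=dim) + 1j * rng.normal(size=dim)
psi = v / np.linalg.norm(v)
rho = np.outer(psi, psi.conj())

# Random full-rank state sigma.
G = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
sigma = G @ G.conj().T + np.eye(dim)      # strictly positive definite
sigma /= np.trace(sigma).real

# D_max(rho||sigma) = log || sigma^{-1/2} rho sigma^{-1/2} ||_inf.
w, U = np.linalg.eigh(sigma)
sigma_inv_half = U @ np.diag(w ** -0.5) @ U.conj().T
d_max = np.log2(np.linalg.norm(sigma_inv_half @ rho @ sigma_inv_half, 2))

# log <psi| sigma^{-1} |psi>, the closed form from Lemma B.2.
rhs = np.log2((psi.conj() @ np.linalg.solve(sigma, psi)).real)

assert abs(d_max - rhs) < 1e-9
```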
To obtain a contradiction, let \(\epsilon\in[0,a^{2})\). Define the states
\[\rho :=\ket{0}\bra{0}=\begin{pmatrix}1&0\\ 0&0\end{pmatrix}\] \[\sigma_{\epsilon}^{\prime} :=(\sqrt{1-\epsilon}\ket{0}+\sqrt{\epsilon}\ket{1})(\sqrt{1- \epsilon}\ket{0}+\sqrt{\epsilon}\ket{1})^{\dagger}\] \[=\begin{pmatrix}1-\epsilon&\sqrt{\epsilon(1-\epsilon)}\\ \sqrt{\epsilon(1-\epsilon)}&\epsilon\end{pmatrix}\] \[\sigma_{\epsilon} :=(1-\delta)\sigma_{\epsilon}^{\prime}+\delta\rho\] \[=\begin{pmatrix}(1-\epsilon)(1-\delta)+\delta&(1-\delta)\sqrt{ \epsilon(1-\epsilon)}\\ (1-\delta)\sqrt{\epsilon(1-\epsilon)}&(1-\delta)\epsilon\end{pmatrix}\]
where \(\{\ket{0},\ket{1}\}\) is the standard basis and \(\delta\in(0,1)\) is a parameter, which will be chosen later. Observe that \(F(\rho,\sigma_{\epsilon})=\left\langle 0\right|\sigma_{\epsilon}\left|0\right\rangle=1-\epsilon(1-\delta)\), which implies that \(\frac{1}{2}\left\lVert\rho-\sigma_{\epsilon}\right\rVert_{1}\leq\sqrt{\epsilon}\in[0,a)\). For these definitions, we have
\[\sigma_{\epsilon}^{-1}=\frac{1}{(1-\delta)\delta\epsilon}\begin{pmatrix}(1- \delta)\epsilon&-(1-\delta)\sqrt{\epsilon(1-\epsilon)}\\ -(1-\delta)\sqrt{\epsilon(1-\epsilon)}&(1-\epsilon)(1-\delta)+\delta\end{pmatrix}\]
which implies that \(\hat{D}(\rho\|\sigma_{\epsilon})=\log\frac{1}{\delta}\) using Lemma B.2. We fix \(\delta=\frac{1}{10}\). Note that \(\hat{D}(\rho\|\sigma_{\epsilon})>0\) is independent of \(\epsilon\). Now observe that if the bound in Eq. 117 were true, then as \(\epsilon\to 0\) we would need \(\hat{D}(\rho\|\sigma_{\epsilon})\to 0\), whereas \(\hat{D}(\rho\|\sigma_{\epsilon})=\log(10)\) stays constant, which leads to a contradiction. Thus, we cannot have bounds of the form in Eq. 117 (also see [1]). Consequently, any kind of bound on \(\hat{D}_{\alpha}\) or \(D_{\alpha}^{\#}\) which results in a bound of the form in Eq. 117 as \(\alpha\to 1\), for example, a bound of the form in Eq. 115, is also not possible, at least close to \(\alpha=1\).
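The qubit family above can also be checked numerically; the sketch below (ours) verifies that the trace distance between \(\rho\) and \(\sigma_{\epsilon}\) vanishes with \(\epsilon\) while \(\hat{D}(\rho\|\sigma_{\epsilon})=\log\langle 0|\sigma_{\epsilon}^{-1}|0\rangle\) stays pinned at \(\log\frac{1}{\delta}\):

```python
import numpy as np

delta = 0.1
rho = np.array([[1.0, 0.0], [0.0, 0.0]])   # |0><0|

def sigma(eps, delta):
    psi = np.array([np.sqrt(1 - eps), np.sqrt(eps)])
    return (1 - delta) * np.outer(psi, psi) + delta * rho

for eps in [1e-2, 1e-4, 1e-6]:
    s = sigma(eps, delta)
    # For pure rho, D_hat(rho||sigma) = log <0| sigma^{-1} |0> (Lemma B.2).
    d_hat = np.log2(np.linalg.inv(s)[0, 0])
    # Trace distance = half the sum of |eigenvalues| of rho - sigma.
    td = 0.5 * np.abs(np.linalg.eigvalsh(rho - s)).sum()
    assert td <= np.sqrt(eps) + 1e-12          # distance shrinks with eps
    assert abs(d_hat - np.log2(1 / delta)) < 1e-6   # divergence stays log(10)
```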
It should be noted that the reason we can have bounds of the form in Lemma 5.3, despite the fact that no good bound on \(\hat{D}=\lim_{\alpha\to 1}D_{\alpha}^{\#}\) can be produced, is that \(D_{\alpha}^{\#}\), unlike the conventional generalizations of the Renyi divergence, is **not monotone** in \(\alpha\) [12, Remark 3.3] (otherwise the above counterexample would also give a no-go argument for \(D_{\alpha}^{\#}\)).
## Appendix C Transforming lemmas for EAT from \(\tilde{H}_{\alpha}^{\downarrow}\) to \(\tilde{H}_{\alpha}^{\uparrow}\)
We have to redo the lemmas used in [1] in terms of \(\tilde{H}_{\alpha}^{\uparrow}\), because we were only able to prove the dimension bound we need, \(\tilde{H}_{\alpha}^{\uparrow}(A|BC)\geq\tilde{H}_{\alpha}^{\uparrow}(A|B)-2\log|C|\), for \(\tilde{H}_{\alpha}^{\uparrow}\).
**Lemma C.1** ([1, Lemma 3.1]).: _Let \(\rho_{A_{1}A_{2}B}\) and \(\sigma_{B}\) be states and \(\alpha\in(0,\infty)\). Then we have the chain rule_
\[\tilde{D}_{\alpha}(\rho_{A_{1}B}\|\mathds{1}_{A_{1}}\otimes\sigma_{B})-\tilde {D}_{\alpha}(\rho_{A_{1}A_{2}B}\|\mathds{1}_{A_{1}A_{2}}\otimes\sigma_{B})= \tilde{H}_{\alpha}^{\downarrow}(A_{2}|A_{1}B)_{\nu} \tag{118}\]
_where the state \(\nu_{A_{1}A_{2}B}\) is defined as_
\[\nu_{A_{1}B} :=\frac{\left(\rho_{A_{1}B}^{\frac{1}{2}}\sigma_{B}^{-\alpha^{ \prime}}\rho_{A_{1}B}^{\frac{1}{2}}\right)^{\alpha}}{\operatorname{tr}\left( \rho_{A_{1}B}^{\frac{1}{2}}\sigma_{B}^{-\alpha^{\prime}}\rho_{A_{1}B}^{\frac{1 }{2}}\right)^{\alpha}}\] \[\nu_{A_{1}A_{2}B} :=\nu_{A_{1}B}^{\frac{1}{2}}\rho_{A_{2}|A_{1}B}\nu_{A_{1}B}^{\frac {1}{2}}\]
_and \(\alpha^{\prime}:=\frac{\alpha-1}{\alpha}\)._
**Corollary C.2** (Chain rule for \(\tilde{H}_{\alpha}^{\downarrow}\)[16, Theorem 3.2]).: _For \(\alpha\in(0,\infty)\), a state \(\rho_{A_{1}A_{2}B}\), we have the chain rule_
\[\tilde{H}_{\alpha}^{\downarrow}(A_{1}A_{2}|B)_{\rho}=\tilde{H}_{\alpha}^{ \downarrow}(A_{1}|B)_{\rho}+\tilde{H}_{\alpha}^{\downarrow}(A_{2}|A_{1}B)_{\nu} \tag{119}\]
_where the state \(\nu_{A_{1}A_{2}B}\) is defined as_
\[\nu_{A_{1}B} :=\frac{\left(\rho_{A_{1}B}^{\frac{1}{2}}\rho_{B}^{-\alpha^{ \prime}}\rho_{A_{1}B}^{\frac{1}{2}}\right)^{\alpha}}{\operatorname{tr}\left( \rho_{A_{1}B}^{\frac{1}{2}}\rho_{B}^{-\alpha^{\prime}}\rho_{A_{1}B}^{\frac{1 }{2}}\right)^{\alpha}}\] \[\nu_{A_{1}A_{2}B} :=\nu_{A_{1}B}^{\frac{1}{2}}\rho_{A_{2}|A_{1}B}\nu_{A_{1}B}^{ \frac{1}{2}}\]
_and \(\alpha^{\prime}:=\frac{\alpha-1}{\alpha}\)._
We can modify [16, Theorem 3.2], which is in terms of \(\tilde{H}_{\alpha}^{\downarrow}\), to the following, which is a chain rule in terms of \(\tilde{H}_{\alpha}^{\uparrow}\). The chain rule in this Corollary was also observed in [16].
**Corollary C.3** (Chain rule for \(\tilde{H}_{\alpha}^{\uparrow}\)).: _For \(\alpha\in(0,\infty)\), a state \(\rho_{A_{1}A_{2}B}\) and any state \(\sigma_{B}\) such that \(\tilde{H}_{\alpha}^{\uparrow}(A_{1}|B)_{\rho}=-\tilde{D}_{\alpha}(\rho_{A_{1}B}\|\mathds{1}_{A_{1}}\otimes\sigma_{B})\), we have_
\[\tilde{H}_{\alpha}^{\uparrow}(A_{1}A_{2}|B)_{\rho}\geq\tilde{H}_{\alpha}^{ \uparrow}(A_{1}|B)_{\rho}+\tilde{H}_{\alpha}^{\downarrow}(A_{2}|A_{1}B)_{\nu} \tag{120}\]
_where the state \(\nu_{A_{1}A_{2}B}\) is defined as_
\[\nu_{A_{1}B} :=\frac{\left(\rho_{A_{1}B}^{\frac{1}{2}}\sigma_{B}^{-\alpha^{ \prime}}\rho_{A_{1}B}^{\frac{1}{2}}\right)^{\alpha}}{\operatorname{tr}\left( \rho_{A_{1}B}^{\frac{1}{2}}\sigma_{B}^{-\alpha^{\prime}}\rho_{A_{1}B}^{\frac{1 }{2}}\right)^{\alpha}}\] \[\nu_{A_{1}A_{2}B} :=\nu_{A_{1}B}^{\frac{1}{2}}\rho_{A_{2}|A_{1}B}\nu_{A_{1}B}^{ \frac{1}{2}}\]
_and \(\alpha^{\prime}:=\frac{\alpha-1}{\alpha}\). For \(\alpha\in(0,\infty)\), a state \(\rho_{A_{1}A_{2}B}\) and any state \(\sigma_{B}\) such that \(\tilde{H}_{\alpha}^{\uparrow}(A_{1}A_{2}|B)_{\rho}=-\tilde{D}_{\alpha}(\rho_{A_{1}A_{2}B}\|\mathds{1}_{A_{1}A_{2}}\otimes\sigma_{B})\), we have_
\[\tilde{H}_{\alpha}^{\uparrow}(A_{1}A_{2}|B)_{\rho}\leq\tilde{H}_{\alpha}^{ \uparrow}(A_{1}|B)_{\rho}+\tilde{H}_{\alpha}^{\downarrow}(A_{2}|A_{1}B)_{\nu} \tag{121}\]
_where the state \(\nu_{A_{1}A_{2}B}\) is defined the same as above._
Proof.: Let \(\sigma_{B}\) be a state such that \(\tilde{H}_{\alpha}^{\uparrow}(A_{1}|B)_{\rho}=-\tilde{D}_{\alpha}(\rho_{A_{1}B}\|\mathds{1}_{A_{1}}\otimes\sigma_{B})\). Then, using Lemma C.1, we have
\[\tilde{H}_{\alpha}^{\uparrow}(A_{1}A_{2}|B)_{\rho} \geq-\tilde{D}_{\alpha}(\rho_{A_{1}A_{2}B}\|\mathds{1}_{A_{1}A_{2}}\otimes\sigma_{B})\] \[=-\tilde{D}_{\alpha}(\rho_{A_{1}B}\|\mathds{1}_{A_{1}}\otimes\sigma_{B})+\tilde{H}_{\alpha}^{\downarrow}(A_{2}|A_{1}B)_{\nu}\] \[=\tilde{H}_{\alpha}^{\uparrow}(A_{1}|B)_{\rho}+\tilde{H}_{\alpha}^{\downarrow}(A_{2}|A_{1}B)_{\nu}\]
for \(\nu_{A_{1}A_{2}B}\) defined as in the Corollary. Similarly, if \(\tilde{H}_{\alpha}^{\uparrow}(A_{1}A_{2}|B)_{\rho}=-\tilde{D}_{\alpha}(\rho_{A_{1}A_{2}B}\|\mathds{1}_{A_{1}A_{2}}\otimes\sigma_{B})\), then
\[\tilde{H}_{\alpha}^{\uparrow}(A_{1}A_{2}|B)_{\rho} =-\tilde{D}_{\alpha}(\rho_{A_{1}A_{2}B}\|\mathds{1}_{A_{1}A_{2}}\otimes\sigma_{B})\] \[=-\tilde{D}_{\alpha}(\rho_{A_{1}B}\|\mathds{1}_{A_{1}}\otimes\sigma_{B})+\tilde{H}_{\alpha}^{\downarrow}(A_{2}|A_{1}B)_{\nu}\] \[\leq\tilde{H}_{\alpha}^{\uparrow}(A_{1}|B)_{\rho}+\tilde{H}_{\alpha}^{\downarrow}(A_{2}|A_{1}B)_{\nu}\]
for \(\nu_{A_{1}A_{2}B}\) defined as in the Corollary.
We transform [1, Theorem 3.3] into a statement about \(\tilde{H}_{\alpha}^{\uparrow}\) in the following.
**Lemma C.4**.: _Let \(\alpha\in\left[\frac{1}{2},\infty\right)\) and \(\rho_{A_{1}A_{2}B_{1}B_{2}}\) be a state which satisfies the Markov chain \(A_{1}\leftrightarrow B_{1}\leftrightarrow B_{2}\). Then, we have_
\[\tilde{H}_{\alpha}^{\uparrow}(A_{1}A_{2}|B_{1}B_{2})_{\rho}\geq\tilde{H}_{\alpha}^{\uparrow}(A_{1}|B_{1})_{\rho}+\inf_{\nu}\tilde{H}_{\alpha}^{\downarrow}(A_{2}|A_{1}B_{1}B_{2})_{\nu} \tag{122}\]
_where the infimum is taken over all states \(\nu_{A_{1}A_{2}B_{1}B_{2}}\) such that \(\nu_{A_{2}B_{2}|A_{1}B_{1}}=\rho_{A_{2}B_{2}|A_{1}B_{1}}\)._
Proof.: Since, \(\rho\) satisfies the Markov chain \(A_{1}\leftrightarrow B_{1}\leftrightarrow B_{2}\), there exists a decomposition of the system \(B_{1}\) as [11, Theorem 5.4]
\[B_{1}=\bigoplus_{j\in J}a_{j}\otimes c_{j}\]
such that
\[\rho_{A_{1}B_{1}B_{2}}=\bigoplus_{j\in J}p(j)\rho_{A_{1}a_{j}} \otimes\rho_{c_{j}B_{2}}. \tag{123}\]
Let \(J^{\prime}\subseteq J\) be the set \(\{j\in J:p(j)>0\}\). Note that we can replace \(J\) by \(J^{\prime}\) in the above equation.
We can define the CPTP recovery map \(\mathcal{R}_{B_{1}\to B_{1}B_{2}}\) for \(\rho_{A_{1}B_{1}B_{2}}\) as
\[\mathcal{R}_{B_{1}\to B_{1}B_{2}}(X)\coloneqq\bigoplus_{j\in J} \operatorname{tr}_{c_{j}}\left(\Pi_{a_{j}}\otimes\Pi_{c_{j}}X\Pi_{a_{j}} \otimes\Pi_{c_{j}}\right)\otimes\rho_{c_{j}B_{2}} \tag{124}\]
where \(\Pi_{a_{j}}\otimes\Pi_{c_{j}}\) is the projector on the subspace \(a_{j}\otimes c_{j}\). This recovery channel satisfies
\[\mathcal{R}_{B_{1}\to B_{1}B_{2}}(\rho_{A_{1}B_{1}})=\rho_{A_{1}B_{1}B_{2}}. \tag{125}\]
We can now show that the optimisation in the conditional entropy \(\tilde{H}_{\alpha}^{\uparrow}(A_{1}|B_{1}B_{2})_{\rho}\) can be restricted to states of the form \(\mathcal{R}_{B_{1}\to B_{1}B_{2}}\left(\sigma_{B_{1}}\right)\). This follows as
\[\tilde{H}_{\alpha}^{\uparrow}(A_{1}|B_{1}B_{2})_{\rho} =\sup_{\sigma_{B_{1}B_{2}}}-\tilde{D}_{\alpha}(\rho_{A_{1}B_{1}B_{2}}\|\mathds{1}_{A_{1}}\otimes\sigma_{B_{1}B_{2}})\] \[\leq\sup_{\sigma_{B_{1}B_{2}}}-\tilde{D}_{\alpha}(\mathcal{R}_{B_{1}\to B_{1}B_{2}}\circ\operatorname{tr}_{B_{2}}\left(\rho_{A_{1}B_{1}B_{2}}\right)\|\mathcal{R}_{B_{1}\to B_{1}B_{2}}\circ\operatorname{tr}_{B_{2}}\left(\mathds{1}_{A_{1}}\otimes\sigma_{B_{1}B_{2}}\right))\] \[=\sup_{\sigma_{B_{1}}}-\tilde{D}_{\alpha}(\rho_{A_{1}B_{1}B_{2}}\|\mathds{1}_{A_{1}}\otimes\mathcal{R}_{B_{1}\to B_{1}B_{2}}\left(\sigma_{B_{1}}\right))\] \[\leq\sup_{\sigma_{B_{1}B_{2}}}-\tilde{D}_{\alpha}(\rho_{A_{1}B_{1}B_{2}}\|\mathds{1}_{A_{1}}\otimes\sigma_{B_{1}B_{2}})\] \[=\tilde{H}_{\alpha}^{\uparrow}(A_{1}|B_{1}B_{2})_{\rho}\]
where the second line follows from the data processing inequality for \(\tilde{D}_{\alpha}\) for \(\alpha\geq\frac{1}{2}\), the supremum in the fourth line is over all states on the registers \(B_{1}B_{2}\), and the last line simply follows from the definition of \(\tilde{H}_{\alpha}^{\uparrow}(A_{1}|B_{1}B_{2})_{\rho}\). As a result, it follows that
\[\tilde{H}_{\alpha}^{\uparrow}(A_{1}|B_{1}B_{2})_{\rho}=\sup_{\sigma_{B_{1}}}-\tilde{D}_{\alpha}(\rho_{A_{1}B_{1}B_{2}}\|\mathds{1}_{A_{1}}\otimes\mathcal{R}_{B_{1}\to B_{1}B_{2}}\left(\sigma_{B_{1}}\right)). \tag{126}\]
Let \(\sigma_{B_{1}B_{2}}=\mathcal{R}_{B_{1}\to B_{1}B_{2}}\left(\eta_{B_{1}}\right)\) be such that \(\tilde{H}_{\alpha}^{\uparrow}(A_{1}|B_{1}B_{2})_{\rho}=-\tilde{D}_{\alpha}(\rho_{A_{1}B_{1}B_{2}}\|\mathds{1}_{A_{1}}\otimes\sigma_{B_{1}B_{2}})\). Using Corollary C.3, for this choice of \(\sigma_{B_{1}B_{2}}\), we have that
\[\tilde{H}_{\alpha}^{\uparrow}(A_{1}A_{2}|B_{1}B_{2})_{\rho}\geq\tilde{H}_{\alpha}^{\uparrow}(A_{1}|B_{1}B_{2})_{\rho}+\tilde{H}_{\alpha}^{\downarrow}(A_{2}|A_{1}B_{1}B_{2})_{\nu} \tag{127}\]
where the state \(\nu_{A_{1}A_{2}B_{1}B_{2}}\) is defined as
\[\nu_{A_{1}B_{1}B_{2}} :=\frac{\left(\rho_{A_{1}B_{1}B_{2}}^{\frac{1}{2}}\sigma_{B_{1}B_ {2}}^{-\alpha^{\prime}}\rho_{A_{1}B_{1}B_{2}}^{\frac{1}{2}}\right)^{\alpha}}{ \operatorname{tr}\left(\rho_{A_{1}B_{1}B_{2}}^{\frac{1}{2}}\sigma_{B_{1}B_{2}} ^{-\alpha^{\prime}}\rho_{A_{1}B_{1}B_{2}}^{\frac{1}{2}}\right)^{\alpha}}\] \[\nu_{A_{1}A_{2}B_{1}B_{2}} :=\nu_{A_{1}B_{1}B_{2}}^{\frac{1}{2}}\rho_{A_{2}|A_{1}B_{1}B_{2}} \nu_{A_{1}B_{1}B_{2}}^{\frac{1}{2}}.\]
We will now show that \(\nu_{A_{2}B_{2}|A_{1}B_{1}}=\rho_{A_{2}B_{2}|A_{1}B_{1}}\). For this it is sufficient to show that
\[\nu_{A_{1}B_{1}}^{-\frac{1}{2}}\nu_{A_{1}B_{1}B_{2}}^{\frac{1}{2}}=\rho_{A_{1}B_{1}}^{-\frac{1}{2}}\rho_{A_{1}B_{1}B_{2}}^{\frac{1}{2}}.\]
We have that
\[\sigma_{B_{1}B_{2}} =\mathcal{R}_{B_{1}\to B_{1}B_{2}}\left(\eta_{B_{1}}\right)\] \[=\bigoplus_{j\in J}\operatorname{tr}_{c_{j}}\left(\Pi_{a_{j}} \otimes\Pi_{c_{j}}\eta_{B_{1}}\Pi_{a_{j}}\otimes\Pi_{c_{j}}\right)\otimes \rho_{c_{j}B_{2}}\] \[=\bigoplus_{j\in J}q(j)\omega_{a_{j}}\otimes\rho_{c_{j}B_{2}}\]
where we have defined the probability distribution \(q(j)\coloneqq\operatorname{tr}(\Pi_{a_{j}}\otimes\Pi_{c_{j}}\eta_{B_{1}})\) and states \(\omega_{a_{j}}=\frac{1}{q(j)}\Pi_{a_{j}}\operatorname{tr}_{c_{j}}\left(\Pi_{c_{ j}}\eta_{B_{1}}\Pi_{c_{j}}\right)\Pi_{a_{j}}\) for every \(j\in J\).
Since \(\tilde{D}_{\alpha}(\rho_{A_{1}B_{1}B_{2}}\|\mathds{1}_{A_{1}}\otimes\sigma_{B_{1}B_{2}})=-\tilde{H}_{\alpha}^{\uparrow}(A_{1}|B_{1}B_{2})_{\rho}\leq\log|A_{1}|<\infty\), we have that
\[\rho_{A_{1}B_{1}B_{2}}\ll\mathds{1}_{A_{1}}\otimes\sigma_{B_{1}B _{2}}\] \[\Rightarrow \bigoplus_{j\in J^{\prime}}p(j)\rho_{A_{1}a_{j}}\otimes\rho_{c_ {j}B_{2}}\ll\mathds{1}_{A_{1}}\otimes\bigoplus_{j\in J}q(j)\omega_{a_{j}} \otimes\rho_{c_{j}B_{2}}\] \[\Rightarrow \text{for every }j\in J^{\prime}:\rho_{A_{1}a_{j}}\ll\mathds{1}_{A_{1}} \otimes\omega_{a_{j}}\text{ and }q(j)>0. \tag{128}\]
This decomposition can be used to evaluate \(\nu_{A_{1}B_{1}B_{2}}\) as follows
\[\nu_{A_{1}B_{1}B_{2}} =\frac{1}{N}\left(\rho_{A_{1}B_{1}B_{2}}^{\frac{1}{2}}\sigma_{B_{ 1}B_{2}}^{-\alpha^{\prime}}\rho_{A_{1}B_{1}B_{2}}^{\frac{1}{2}}\right)^{\alpha}\] \[=\frac{1}{N}\left(\bigoplus_{j\in J^{\prime}}p(j)^{\frac{1}{2}} \rho_{A_{1}a_{j}}^{\frac{1}{2}}\otimes\rho_{c_{j}B_{2}}^{\frac{1}{2}}\bigoplus _{j\in J}q(j)^{-\alpha^{\prime}}\omega_{a_{j}}^{-\alpha^{\prime}}\otimes\rho_ {c_{j}B_{2}}^{-\alpha^{\prime}}\bigoplus_{j\in J^{\prime}}p(j)^{\frac{1}{2}} \rho_{A_{1}a_{j}}^{\frac{1}{2}}\otimes\rho_{c_{j}B_{2}}^{\frac{1}{2}}\right)^ {\alpha}\] \[=\frac{1}{N}\left(\bigoplus_{j\in J^{\prime}}p(j)q(j)^{-\alpha^{ \prime}}\rho_{A_{1}a_{j}}^{\frac{1}{2}}\omega_{a_{j}}^{-\alpha^{\prime}}\rho_ {A_{1}a_{j}}^{\frac{1}{2}}\otimes\rho_{c_{j}B_{2}}^{1-\alpha^{\prime}}\right) ^{\alpha}\] \[=\frac{1}{N}\bigoplus_{j\in J^{\prime}}p(j)^{\alpha}q(j)^{1- \alpha}\left(\rho_{A_{1}a_{j}}^{\frac{1}{2}}\omega_{a_{j}}^{-\alpha^{\prime}} \rho_{A_{1}a_{j}}^{\frac{1}{2}}\right)^{\alpha}\otimes\rho_{c_{j}B_{2}}\]
for \(N:=\operatorname{tr}\left(\rho_{A_{1}B_{1}B_{2}}^{\frac{1}{2}}\sigma_{B_{1}B _{2}}^{-\alpha^{\prime}}\rho_{A_{1}B_{1}B_{2}}^{\frac{1}{2}}\right)^{\alpha}\). Further, we have
\[\nu_{A_{1}B_{1}}^{-\frac{1}{2}}\nu_{A_{1}B_{1}B_{2}}^{\frac{1}{2}}\] \[=N^{\frac{1}{2}}\bigoplus_{j\in J^{\prime}}p(j)^{-\frac{\alpha}{2}}q(j)^{-\frac{1-\alpha}{2}}\left(\rho_{A_{1}a_{j}}^{\frac{1}{2}}\omega_{a_{j}}^{-\alpha^{\prime}}\rho_{A_{1}a_{j}}^{\frac{1}{2}}\right)^{-\frac{\alpha}{2}}\otimes\rho_{c_{j}}^{-\frac{1}{2}}\] \[\qquad\qquad\qquad\qquad\cdot N^{-\frac{1}{2}}\bigoplus_{j\in J^{\prime}}p(j)^{\frac{\alpha}{2}}q(j)^{\frac{1-\alpha}{2}}\left(\rho_{A_{1}a_{j}}^{\frac{1}{2}}\omega_{a_{j}}^{-\alpha^{\prime}}\rho_{A_{1}a_{j}}^{\frac{1}{2}}\right)^{\frac{\alpha}{2}}\otimes\rho_{c_{j}B_{2}}^{\frac{1}{2}}\] \[=\bigoplus_{j\in J^{\prime}}\left(\rho_{A_{1}a_{j}}^{\frac{1}{2}}\omega_{a_{j}}^{-\alpha^{\prime}}\rho_{A_{1}a_{j}}^{\frac{1}{2}}\right)^{0}\otimes\rho_{c_{j}}^{-\frac{1}{2}}\rho_{c_{j}B_{2}}^{\frac{1}{2}}\] \[=\bigoplus_{j\in J^{\prime}}\rho_{A_{1}a_{j}}^{0}\otimes\rho_{c_{j}}^{-\frac{1}{2}}\rho_{c_{j}B_{2}}^{\frac{1}{2}}\]
where in the last line we have used that the projector \(\left(\rho_{A_{1}a_{j}}^{\frac{1}{2}}\omega_{a_{j}}^{-\alpha^{\prime}}\rho_{A_{1} a_{j}}^{\frac{1}{2}}\right)^{0}\) is equal to the projector \(\rho_{A_{1}a_{j}}^{0}\) for every \(j\in J^{\prime}\) (here \(P^{0}\) is the projector onto the image of positive semidefinite operator \(P\)). This can be seen since for every \(j\in J^{\prime}\) we first have
\[\operatorname{im}\left(\rho_{A_{1}a_{j}}^{\frac{1}{2}}\omega_{a_{j}}^{-\alpha^{ \prime}}\rho_{A_{1}a_{j}}^{\frac{1}{2}}\right)\subseteq\operatorname{im} \left(\rho_{A_{1}a_{j}}\right). \tag{129}\]
Second, we have that Eq. 128 above implies that \(\omega_{a_{j}}^{0}\rho_{A_{1}a_{j}}^{0}=\rho_{A_{1}a_{j}}^{0}\) for every \(j\in J^{\prime}\). Now, for \(j\in J^{\prime}\) we have the following inequality
\[\left(\rho_{A_{1}a_{j}}^{\frac{1}{2}}\omega_{a_{j}}^{-\alpha^{ \prime}}\rho_{A_{1}a_{j}}^{\frac{1}{2}}\right) \geq m\left(\rho_{A_{1}a_{j}}^{\frac{1}{2}}\omega_{a_{j}}^{0}\rho _{A_{1}a_{j}}^{\frac{1}{2}}\right)\] \[=m\rho_{A_{1}a_{j}}\]
where \(m>0\) is the minimum non-zero eigenvalue of \(\omega_{a_{j}}^{-\alpha^{\prime}}\). Finally, raising the above to the power of \(0\) (this action is operator monotone)
\[\left(\rho_{A_{1}a_{j}}^{\frac{1}{2}}\omega_{a_{j}}^{-\alpha^{ \prime}}\rho_{A_{1}a_{j}}^{\frac{1}{2}}\right)^{0}\geq\rho_{A_{1}a_{j}}^{0}. \tag{130}\]
Eq. 129 and 130 together imply that for \(j\in J^{\prime}\)
\[\left(\rho_{A_{1}a_{j}}^{\frac{1}{2}}\omega_{a_{j}}^{-\alpha^{ \prime}}\rho_{A_{1}a_{j}}^{\frac{1}{2}}\right)^{0}=\rho_{A_{1}a_{j}}^{0}.\]
Finally, we have that
\[\rho_{A_{1}B_{1}}^{-\frac{1}{2}}\rho_{A_{1}B_{1}B_{2}}^{\frac{1}{2}} =\bigoplus_{j\in J^{\prime}}p(j)^{-\frac{1}{2}}\rho_{A_{1}a_{j}}^{-\frac{1}{2}}\otimes\rho_{c_{j}}^{-\frac{1}{2}}\bigoplus_{j\in J^{\prime}}p(j)^{\frac{1}{2}}\rho_{A_{1}a_{j}}^{\frac{1}{2}}\otimes\rho_{c_{j}B_{2}}^{\frac{1}{2}}\] \[=\bigoplus_{j\in J^{\prime}}\rho_{A_{1}a_{j}}^{0}\otimes\rho_{c_{j}}^{-\frac{1}{2}}\rho_{c_{j}B_{2}}^{\frac{1}{2}}.\]
This proves that
\[\nu_{A_{1}B_{1}}^{-\frac{1}{2}}\nu_{A_{1}B_{1}B_{2}}^{\frac{1}{2 }}=\rho_{A_{1}B_{1}}^{-\frac{1}{2}}\rho_{A_{1}B_{1}B_{2}}^{\frac{1}{2}} \tag{131}\]
and hence
\[\nu_{A_{2}B_{2}|A_{1}B_{1}} =\nu_{A_{1}B_{1}}^{-\frac{1}{2}}\nu_{A_{1}B_{1}B_{2}}^{\frac{1}{2}}\nu_{A_{2}|A_{1}B_{1}B_{2}}\nu_{A_{1}B_{1}B_{2}}^{\frac{1}{2}}\nu_{A_{1}B_{1}}^{-\frac{1}{2}}\] \[=\rho_{A_{1}B_{1}}^{-\frac{1}{2}}\rho_{A_{1}B_{1}B_{2}}^{\frac{1}{2}}\rho_{A_{2}|A_{1}B_{1}B_{2}}\rho_{A_{1}B_{1}B_{2}}^{\frac{1}{2}}\rho_{A_{1}B_{1}}^{-\frac{1}{2}}\] \[=\rho_{A_{2}B_{2}|A_{1}B_{1}}\]
where we have used the fact that \(\nu_{A_{2}|A_{1}B_{1}B_{2}}=\rho_{A_{2}|A_{1}B_{1}B_{2}}\) and Eq. 131. We can now modify Eq. 127 to get
\[\tilde{H}_{\alpha}^{\uparrow}(A_{1}A_{2}|B_{1}B_{2})_{\rho}\geq\tilde{H}_{\alpha}^{\uparrow}(A_{1}|B_{1}B_{2})_{\rho}+\inf_{\nu}\tilde{H}_{\alpha}^{\downarrow}(A_{2}|A_{1}B_{1}B_{2})_{\nu}\]
where the infimum is over states \(\nu\) such that \(\nu_{A_{2}B_{2}|A_{1}B_{1}}=\rho_{A_{2}B_{2}|A_{1}B_{1}}\). We can use the data processing inequality to get
\[\tilde{H}_{\alpha}^{\uparrow}(A_{1}|B_{1}B_{2})_{\rho} =\tilde{H}_{\alpha}^{\uparrow}(A_{1}|B_{1}B_{2})_{\mathcal{R}_{B_{1}\to B_{1}B_{2}}(\rho_{A_{1}B_{1}})}\] \[\geq\tilde{H}_{\alpha}^{\uparrow}(A_{1}|B_{1})_{\rho}.\]
Together with the above inequality this proves the Lemma.
We will use the following modification of [13, Corollary 3.5].
**Corollary C.5**.: _Let \(\mathcal{M}_{R\to A_{2}B_{2}}\) be a channel and \(\rho_{A_{1}A_{2}B_{1}B_{2}}=\mathcal{M}(\rho^{\prime}_{A_{1}B_{1}R})\) such that the Markov chain \(A_{1}\leftrightarrow B_{1}\leftrightarrow B_{2}\) holds. Then, we have_
\[\tilde{H}^{\uparrow}_{\alpha}(A_{1}A_{2}|B_{1}B_{2})_{\rho}\geq\tilde{H}^{\uparrow}_{\alpha}(A_{1}|B_{1})_{\rho}+\inf_{\omega}\tilde{H}^{\downarrow}_{\alpha}(A_{2}|A_{1}B_{1}B_{2})_{\mathcal{M}(\omega)} \tag{132}\]
_where the infimum is taken over all states \(\omega_{A_{1}B_{1}R}\). Moreover, if \(\rho^{\prime}_{A_{1}B_{1}R}\) is pure then we can restrict the optimisation to pure states._
Proof.: The proof is the same as [13, Corollary 3.5]. We include it here for the sake of completeness.
It is sufficient to show that for every state \(\nu\) such that \(\nu_{A_{2}B_{2}|A_{1}B_{1}}=\rho_{A_{2}B_{2}|A_{1}B_{1}}\), there exists an \(\omega_{A_{1}B_{1}R}\) such that \(\nu_{A_{1}A_{2}B_{1}B_{2}}=\mathcal{M}(\omega)\). For such a \(\nu\), we can define
\[\omega_{RA_{1}B_{1}}=\nu^{\frac{1}{2}}_{A_{1}B_{1}}\rho^{-\frac{1}{2}}_{A_{1}B _{1}}\rho^{\prime}_{A_{1}B_{1}R}\rho^{-\frac{1}{2}}_{A_{1}B_{1}}\nu^{\frac{1}{ 2}}_{A_{1}B_{1}}\]
which can be seen to be a valid state and also satisfy \(\nu_{A_{1}A_{2}B_{1}B_{2}}=\mathcal{M}(\omega)\).
## Appendix D Dimension bounds for conditional Renyi entropies
**Lemma D.1** (Dimension bound).: _For \(\alpha\in[\frac{1}{2},\infty]\), a state \(\rho_{A_{1}A_{2}B}\), the following bounds hold for the sandwiched conditional entropies_
\[\tilde{H}^{\downarrow}_{\alpha}(A_{1}|B)_{\rho}-\log|A_{2}| \leq\tilde{H}^{\downarrow}_{\alpha}(A_{1}A_{2}|B)_{\rho}\leq \tilde{H}^{\downarrow}_{\alpha}(A_{1}|B)_{\rho}+\log|A_{2}|\] \[\tilde{H}^{\uparrow}_{\alpha}(A_{1}|B)_{\rho}-\log|A_{2}| \leq\tilde{H}^{\uparrow}_{\alpha}(A_{1}A_{2}|B)_{\rho}\leq\tilde{H}^{ \uparrow}_{\alpha}(A_{1}|B)_{\rho}+\log|A_{2}|.\]
_For \(\alpha\in[0,2]\) and a state \(\rho_{A_{1}A_{2}B}\), the following bounds hold for the Petz conditional entropies_
\[\bar{H}^{\downarrow}_{\alpha}(A_{1}A_{2}|B)_{\rho} \leq\bar{H}^{\downarrow}_{\alpha}(A_{1}|B)_{\rho}+\log|A_{2}|\] \[\bar{H}^{\uparrow}_{\alpha}(A_{1}A_{2}|B)_{\rho} \leq\bar{H}^{\uparrow}_{\alpha}(A_{1}|B)_{\rho}+\log|A_{2}|.\]
Proof.: For the sandwiched conditional entropies, we simply use the corresponding chain rules (Corollary C.2 or Corollary C.3) along with the fact that for all states \(\nu\), \(\tilde{H}^{\downarrow}_{\alpha}(A_{2}|A_{1}B)_{\nu}\in[-\log|A_{2}|,\log|A_{2}|]\)[14, Lemma 5.2].
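For the sandwiched \(\downarrow\) case, the dimension bound is straightforward to test numerically on random states (our sketch, not from the paper; it uses the closed form \(\tilde{D}_{\alpha}(\rho\|\sigma)=\frac{1}{\alpha-1}\log\operatorname{tr}\left(\sigma^{\frac{1-\alpha}{2\alpha}}\rho\,\sigma^{\frac{1-\alpha}{2\alpha}}\right)^{\alpha}\) with \(\sigma=\mathds{1}\otimes\rho_{B}\), and three qubits for \(A_{1},A_{2},B\)):

```python
import numpy as np

rng = np.random.default_rng(1)

def rand_state(d):
    G = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = G @ G.conj().T
    return rho / np.trace(rho).real

def mpow(M, p):
    # Hermitian matrix power via eigendecomposition.
    w, U = np.linalg.eigh(M)
    return U @ np.diag(np.clip(w, 1e-15, None) ** p) @ U.conj().T

def ptrace(rho, dims, keep):
    # Partial trace keeping the subsystems listed in `keep`.
    n = len(dims)
    rho = rho.reshape(dims + dims)
    for k, ax in enumerate(i for i in range(n) if i not in keep):
        rho = np.trace(rho, axis1=ax - k, axis2=ax - k + n - k)
    d = int(np.prod([dims[i] for i in keep]))
    return rho.reshape(d, d)

def sandwiched_D(rho, sigma, alpha):
    e = (1 - alpha) / (2 * alpha)
    s = mpow(sigma, e)
    return np.log2(np.trace(mpow(s @ rho @ s, alpha)).real) / (alpha - 1)

dims = (2, 2, 2)                    # subsystems A1, A2, B
rho = rand_state(8)
rho_B = ptrace(rho, dims, [2])
rho_A1B = ptrace(rho, dims, [0, 2])

for alpha in [0.7, 1.5, 2.0, 3.0]:
    # H_down(A|B) = -D_alpha(rho_AB || 1_A (x) rho_B), in bits.
    H12 = -sandwiched_D(rho, np.kron(np.eye(4), rho_B), alpha)
    H1 = -sandwiched_D(rho_A1B, np.kron(np.eye(2), rho_B), alpha)
    assert H1 - 1 - 1e-9 <= H12 <= H1 + 1 + 1e-9   # log|A2| = 1
```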
For the Petz conditional entropies, we will make use of Jensen's inequality for operators [1, Theorem V.2.3]. Suppose \(\{|e_{i}\rangle\}_{i=1}^{|X|}\) is an orthonormal basis for the space \(X\). Then, for a positive operator \(P_{XY}\) and \(\alpha\in[0,1]\), we have
\[\operatorname{tr}_{X}P_{XY}^{\alpha} =\sum_{i=1}^{|X|}\mathds{1}_{Y}\otimes\langle e_{i}|_{X}\,P_{XY}^{ \alpha}\,\mathds{1}_{Y}\otimes|e_{i}\rangle_{X}\] \[\leq|X|\left(\sum_{i=1}^{|X|}\frac{1}{|X|}\,\mathds{1}_{Y}\otimes \langle e_{i}|_{X}\,P_{XY}\,\mathds{1}_{Y}\otimes|e_{i}\rangle_{X}\right)^{\alpha}\] \[=|X|^{1-\alpha}P_{Y}^{\alpha} \tag{133}\]
where in the second step we have used the operator Jensen's inequality with the operators \(\left\{\frac{1}{\sqrt{|X|}}\,\mathds{1}_{Y}\otimes|e_{i}\rangle_{X}\right\}_{ i=1}^{|X|}\) along with the fact that the map \(X\mapsto X^{\alpha}\) is operator concave. For \(\alpha\in[1,2]\) and positive operator \(P_{XY}\), we can use the same argument as above and the fact that \(X\mapsto X^{\alpha}\) is operator convex in this regime and derive
\[\operatorname{tr}_{X}P_{XY}^{\alpha}\geq|X|^{1-\alpha}P_{Y}^{\alpha}. \tag{134}\]
To prove the dimension bound, observe that for a positive state \(\sigma_{B}\) and \(\alpha\in[0,2]\), we have
\[-\bar{D}_{\alpha}(\rho_{A_{1}A_{2}B}\|\,\mathds{1}_{A_{1}A_{2}} \otimes\sigma_{B}) =\frac{1}{1-\alpha}\log\operatorname{tr}\left(\rho_{A_{1}A_{2}B} ^{\alpha}\sigma_{B}^{1-\alpha}\right)\] \[=\frac{1}{1-\alpha}\log\operatorname{tr}\left(\operatorname{tr}_ {A_{2}}\left(\rho_{A_{1}A_{2}B}^{\alpha}\right)\sigma_{B}^{1-\alpha}\right)\] \[\leq\frac{1}{1-\alpha}\log\operatorname{tr}\left(|A_{2}|^{1- \alpha}\rho_{A_{1}B}^{\alpha}\sigma_{B}^{1-\alpha}\right)\] \[=-\bar{D}_{\alpha}(\rho_{A_{1}B}\|\,\mathds{1}_{A_{1}}\otimes \sigma_{B})+\log|A_{2}|.\]
We can now take a supremum over \(\sigma_{B}\) to prove the dimension bound for \(\bar{H}_{\alpha}^{\uparrow}\), or choose \(\sigma_{B}=\rho_{B}\) to prove the dimension bound for \(\bar{H}_{\alpha}^{\downarrow}\).
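Eqs. 133 and 134 are genuine operator inequalities and can be spot-checked numerically (a sketch of ours): for a random positive \(P_{XY}\), the operator \(|X|^{1-\alpha}P_{Y}^{\alpha}-\operatorname{tr}_{X}P_{XY}^{\alpha}\) should be positive semidefinite for \(\alpha\in[0,1]\) and negative semidefinite for \(\alpha\in[1,2]\):

```python
import numpy as np

rng = np.random.default_rng(2)
dX, dY = 3, 4

# Random positive operator on X (x) Y.
G = rng.normal(size=(dX * dY, dX * dY)) + 1j * rng.normal(size=(dX * dY, dX * dY))
P = G @ G.conj().T

def mpow(M, p):
    w, U = np.linalg.eigh(M)
    return U @ np.diag(np.maximum(w, 0.0) ** p) @ U.conj().T

def trace_out_X(M):
    # Partial trace over the first tensor factor.
    M = M.reshape(dX, dY, dX, dY)
    return np.einsum('iaib->ab', M)

P_Y = trace_out_X(P)

for alpha in [0.3, 0.7, 1.0, 1.3, 1.9]:
    diff = dX ** (1 - alpha) * mpow(P_Y, alpha) - trace_out_X(mpow(P, alpha))
    eigs = np.linalg.eigvalsh(diff)
    if alpha <= 1.0:
        assert eigs.min() >= -1e-6   # Eq. 133: tr_X P^a <= |X|^{1-a} P_Y^a
    if alpha >= 1.0:
        assert eigs.max() <= 1e-6    # Eq. 134: reversed for a in [1, 2]
```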
The following Lemma was originally proven in [13, Proposition 8]. We reproduce the proof argument here.
**Lemma D.2**.: _For \(\alpha\in[\frac{1}{2},\infty]\), a state \(\rho_{ABC}\), we have_
\[\tilde{H}_{\alpha}^{\uparrow}(A|BC)_{\rho}\geq\tilde{H}_{\alpha}^{\uparrow}(AC|B)_{\rho}-\log|C| \tag{135}\]
_and for \(\alpha\in[0,2]\)_
\[\bar{H}_{\alpha}^{\uparrow}(A|BC)_{\rho}\geq\bar{H}_{\alpha}^{\uparrow}(AC|B)_{\rho}-\log|C|. \tag{136}\]
Proof.: By the definition of the sandwiched conditional entropy, we have
\[\tilde{H}^{\uparrow}_{\alpha}(A|BC) =\sup_{\eta_{BC}\in D(BC)}-\tilde{D}_{\alpha}(\rho_{ABC}\|\,\mathds{1}_{A}\otimes\eta_{BC})\] \[\geq\sup_{\eta_{B}\in D(B)}-\tilde{D}_{\alpha}\left(\rho_{ABC}\|\,\mathds{1}_{A}\otimes\frac{\mathds{1}_{C}}{|C|}\otimes\eta_{B}\right)\] \[=\sup_{\eta_{B}\in D(B)}-\tilde{D}_{\alpha}\left(\rho_{ABC}\|\,\mathds{1}_{AC}\otimes\eta_{B}\right)-\log|C|\] \[=\tilde{H}^{\uparrow}_{\alpha}(AC|B)-\log|C|\]
where we simply restrict the supremum in the second line to states of the form \(\eta_{BC}=\eta_{B}\otimes\frac{\mathds{1}_{C}}{|C|}\) to derive the inequality. The same proof also works for the Petz entropy \(\bar{H}^{\uparrow}_{\alpha}\).
The following lemma was originally proven in [16, Proposition 3.3.5].
**Lemma D.3** (Dimension bound for conditioning register).: _For \(\alpha\in[\frac{1}{2},\infty]\) and a state \(\rho_{ABC}\) we have_
\[\tilde{H}^{\uparrow}_{\alpha}(A|BC)_{\rho}\geq\tilde{H}^{\uparrow}_{\alpha}(A|B)_{\rho}-2\log|C|. \tag{137}\]
_Further, if the register \(C\) is classical, then we have_
\[\tilde{H}^{\uparrow}_{\alpha}(A|BC)_{\rho}\geq\tilde{H}^{\uparrow}_{\alpha}(A|B)_{\rho}-\log|C|. \tag{138}\]
Proof.: This bound can be proven by combining Lemma D.1 and Lemma D.2. In the case that \(C\) is classical, we have the inequality \(\tilde{H}^{\uparrow}_{\alpha}(AC|B)_{\rho}\geq\tilde{H}^{\uparrow}_{\alpha}(A|B)_{\rho}\) [16, Lemma 5.3].
## Appendix E Bounds on the size of the side information are necessary for the approximate entropy accumulation theorem
It turns out that it is necessary to place some sort of bound on the size of the side information for an approximate entropy accumulation theorem of the form in Theorem 5.1. The following classical example demonstrates this.
Let there be \(n\) rounds. For \(k\in[n]\), define the map \(\mathcal{M}_{k}:A_{1}^{k-1}\to A_{1}^{k}B_{k}C_{k}\), which sets the variables as follows:
1. Measure \(A_{1}^{k-1}\) in the standard basis.
2. Let \(A_{k}\in_{R}\left\{0,1\right\}\) be a randomly chosen bit.
3. Let \(C_{k}=0\) with probability \(\frac{\epsilon}{2}\) and \(C_{k}=1\) otherwise.
4. In the case that \(C_{k}=1\), let \(B_{k}\in_{R}\left\{0,1\right\}^{n}\) be a randomly chosen \(n\)-bit string. Otherwise, let \(B_{k}=A_{1}^{k}R_{k}\), where \(R_{k}\) is a randomly chosen \((n-k)\)-bit string from \(\left\{0,1\right\}^{n-k}\).
Let \(\mathcal{M}_{k}^{\prime}\) be the map which always chooses \(B_{k}\) to be a random \(n\)-bit string. It is easy to see that in this case, we have \(H_{\min}(A_{1}^{n}|B_{1}^{n}C_{1}^{n})_{\mathcal{M}_{n}^{\prime}\circ\cdots \circ\mathcal{M}_{1}^{\prime}(1)}=n\) whereas \(H_{\min}(A_{1}^{n}|B_{1}^{n}C_{1}^{n})_{\mathcal{M}_{n}\circ\cdots\circ \mathcal{M}_{1}(1)}=O(1)\) even though for every \(k\in[n]\), the maps \(\mathcal{M}_{k}\) are \(\epsilon-\)close in diamond norm distance to the maps \(\mathcal{M}_{k}^{\prime}\). This proves that a bound on the size of the side registers is indeed necessary for approximate entropy accumulation. We show these facts formally in the following.
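This separation is easy to see numerically. The following sketch (illustrative parameters; not part of the proof) counts how many bits of \(A_{1}^{n}\) remain undetermined by \((B_{1}^{n},C_{1}^{n})\): under the maps \(\mathcal{M}_{k}\), the last round with \(C_{k}=0\) reveals the prefix \(A_{1}^{k}\), so only \(O(1)\) bits remain hidden, whereas under \(\mathcal{M}_{k}^{\prime}\) all \(n\) bits do.

```python
import random

# Illustrative simulation (not part of the proof). A round k with C_k = 0
# sets B_k = A_1^k R_k and hence reveals the prefix A_1^k; we count the
# bits of A_1^n that no B_k reveals.  Under M'_k, every B_k is fresh
# randomness, so nothing is revealed.
random.seed(0)
n, eps, trials = 40, 0.2, 2000

def hidden_bits(real):
    """Bits of A_1^n not determined by (B_1^n, C_1^n) in one run."""
    last_leak = 0
    for k in range(1, n + 1):
        if real and random.random() < eps / 2:   # round k has C_k = 0
            last_leak = k                        # B_k reveals A_1^k
    return n - last_leak

avg_real = sum(hidden_bits(True) for _ in range(trials)) / trials
avg_ideal = sum(hidden_bits(False) for _ in range(trials)) / trials
print(avg_real, avg_ideal)
```

The average under \(\mathcal{M}_{k}\) concentrates around \(2/\epsilon\), independently of \(n\), mirroring the \(O(1)\) min-entropy claimed above, while under \(\mathcal{M}_{k}^{\prime}\) it equals \(n\).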
**Lemma E.1**.: _Suppose \(\Phi:R\to A\) and \(\Phi^{\prime}:R\to A\) are two channels which take a register \(R\) and measure it in the standard basis and map the resulting classical register \(C\) to the classical register \(A\). Then, for every \(\rho_{RR^{\prime}}\), we have_
\[\left\|\Phi(\rho_{RR^{\prime}})-\Phi^{\prime}(\rho_{RR^{\prime}})\right\|_{1} \leq\left\|P_{AC}^{\Phi}-P_{AC}^{\Phi^{\prime}}\right\|_{1} \tag{139}\]
_where \(P_{AC}^{\Phi}\) and \(P_{AC}^{\Phi^{\prime}}\) are the classical distributions produced when the maps \(\Phi\) and \(\Phi^{\prime}\) are applied to the state \(\rho_{RR^{\prime}}\) respectively._
Proof.: Let \(\left\{\left|c\right\rangle\left\langle c\right|\right\}_{c}\) represent the measurement in the standard basis. Since, both the channels first measure register \(R\) in the standard basis, they produce the state
\[\rho_{CR^{\prime}} =\sum_{c}\left|c\right\rangle\left\langle c\right|_{C}\otimes \operatorname{tr}_{R}\left(\left|c\right\rangle\left\langle c\right|_{R}\rho_ {RR^{\prime}}\right)\] \[=\sum_{c}p(c)\left|c\right\rangle\left\langle c\right|_{C} \otimes\rho_{R^{\prime}|c}\]
where we have defined \(p(c):=\operatorname{tr}\left(\left|c\right\rangle\left\langle c\right|_{R} \rho_{R}\right)\) and \(\rho_{R^{\prime}|c}:=\frac{1}{p(c)}\operatorname{tr}_{R}\left(\left|c\right\rangle \left\langle c\right|_{R}\rho_{RR^{\prime}}\right)\). Now, the action of channel \(\Phi\) on register \(C\) can be represented using the conditional probability distribution \(p_{A|C}^{\Phi}\) and the action of channel \(\Phi^{\prime}\) on register \(C\) can be similarly represented using \(p_{A|C}^{\Phi^{\prime}}\). We can define the states
\[\rho_{ACR^{\prime}}^{\Phi} :=\sum_{ac}p_{A|C}^{\Phi}(a|c)p(c)\left|a,c\right\rangle\left\langle a,c\right|\otimes\rho_{R^{\prime}|c}\] \[\rho_{ACR^{\prime}}^{\Phi^{\prime}} :=\sum_{ac}p_{A|C}^{\Phi^{\prime}}(a|c)p(c)\left|a,c\right\rangle \left\langle a,c\right|\otimes\rho_{R^{\prime}|c}.\]
Note that \(\operatorname{tr}_{C}\left(\rho_{ACR^{\prime}}^{\Phi}\right)=\Phi(\rho_{RR^{ \prime}})\) and \(\operatorname{tr}_{C}\left(\rho_{ACR^{\prime}}^{\Phi^{\prime}}\right)=\Phi^{ \prime}(\rho_{RR^{\prime}})\). Further, we can view the \(R^{\prime}\) register of \(\rho_{ACR^{\prime}}^{\Phi}\) and \(\rho_{ACR^{\prime}}^{\Phi^{\prime}}\) as being created by a channel which measures the register
\(C\) and outputs the state \(\rho_{R^{\prime}|c}\) in the register \(R^{\prime}\). Therefore, we have
\[\left\|\Phi(\rho_{RR^{\prime}})-\Phi^{\prime}(\rho_{RR^{\prime}}) \right\|_{1} \leq\left\|\rho_{ACR^{\prime}}^{\Phi}-\rho_{ACR^{\prime}}^{\Phi^{ \prime}}\right\|_{1}\] \[\leq\left\|\rho_{AC}^{\Phi}-\rho_{AC}^{\Phi^{\prime}}\right\|_{1}\] \[=\left\|P_{AC}^{\Phi}-P_{AC}^{\Phi^{\prime}}\right\|_{1}.\]
We can use the above lemma to evaluate the distance between the channels \(\mathcal{M}_{k}\) and \(\mathcal{M}_{k}^{\prime}\). Using the above lemma, it is sufficient to suppose that the input of the channels are classical. We can suppose that the registers \(A_{1}^{k-1}\) are classical and distributed as \(P_{A_{1}^{k-1}}\). Let \(P_{A_{1}^{k}B_{k}C_{k}}\) be the output of \(\mathcal{M}_{k}\) on this distribution and \(Q_{A_{1}^{k}B_{k}C_{k}}\) be the output of applying \(\mathcal{M}_{k}^{\prime}\). Then, we have
\[\left\|P_{A_{1}^{k}B_{k}C_{k}}-Q_{A_{1}^{k}B_{k}C_{k}}\right\|_{1} =\sum_{a_{1}^{k},c_{k}}P(a_{1}^{k-1})P(a_{k})P(c_{k})\left\|P_{B_{ k}|a_{1}^{k},c_{k}}-Q_{B_{k}}\right\|_{1}\] \[=\sum_{a_{1}^{k}}P(a_{1}^{k-1})P(a_{k})\left(\left(1-\frac{ \epsilon}{2}\right)\left\|P_{B_{k}|a_{1}^{k},c_{k}=1}-Q_{B_{k}}\right\|_{1}+ \frac{\epsilon}{2}\left\|P_{B_{k}|a_{1}^{k},c_{k}=0}-Q_{B_{k}}\right\|_{1}\right)\] \[\leq\sum_{a_{1}^{k}}P(a_{1}^{k-1})P(a_{k})\epsilon\] \[=\epsilon\]
where in the first line we have used the fact that \(A_{k}\) and \(C_{k}\) are chosen independently with the same distribution by both maps and that \(B_{k}\) is chosen independently in \(\mathcal{M}_{k}^{\prime}\), and in the third line we have used the fact that when \(c_{k}=1\), the conditional distribution of \(B_{k}\) equals \(Q_{B_{k}}\) (so that term vanishes) and that the trace distance between any two distributions is at most \(2\). Since this is true for all input distributions, we have \(\left\|\mathcal{M}_{k}-\mathcal{M}_{k}^{\prime}\right\|_{\diamond}\leq\epsilon\).
Now, let \(R_{A_{1}^{n}B_{1}^{n}C_{1}^{n}}\) be the probability distribution created when the maps \(\mathcal{M}_{k}\) are applied sequentially \(n\) times and \(S_{A_{1}^{n}B_{1}^{n}C_{1}^{n}}\) be the probability distribution created when the maps \(\mathcal{M}_{k}^{\prime}\) are applied sequentially \(n\) times. Since \(B_{k}\) and \(C_{k}\) are independent of \(A_{k}\) in the distribution \(S\), we have
\[H_{\min}(A_{1}^{n}|B_{1}^{n}C_{1}^{n})_{S}=n.\]
We will show that \(H_{\min}^{\epsilon^{\prime}}(A_{1}^{n}|B_{1}^{n}C_{1}^{n})_{R}=O(1)\) as long as \(\epsilon^{\prime}\leq\frac{1}{4}\). Let \(l\coloneqq\frac{2}{\epsilon}\log\frac{1}{\epsilon^{\prime}}\). Let \(E\) be the event that there exists a \(k>n-l\) such that \(C_{k}=0\). For our choice of \(l\), we have \(p(E)\geq 1-\epsilon^{\prime}\).
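This choice of \(l\) can be checked directly using \(1-x\leq 2^{-x}\) for \(x\in[0,1]\); a small numerical confirmation (illustrative parameters, not part of the proof):

```python
import math

# With l = (2/eps) * log(1/eps'), the probability that no round k > n - l
# has C_k = 0 is (1 - eps/2)^l <= 2^(-eps*l/2) = eps', hence p(E) >= 1 - eps'.
eps, eps_p = 0.1, 0.01
l = (2 / eps) * math.log2(1 / eps_p)
p_no_leak = (1 - eps / 2) ** l
assert p_no_leak <= eps_p
print(1 - p_no_leak)   # a lower bound on p(E), at least 1 - eps' = 0.99
```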
**Lemma E.2**.: _Let \(P_{AB}\) be a subnormalized probability distribution such that \(A=f(B)\) for some function \(f\) (that is, \(P(a,b)>0\) only if \(a=f(b)\)). Then, \(H_{\min}^{\epsilon}(A|B)_{P}\leq\log\frac{1}{\operatorname{tr}(P)-\sqrt{2 \epsilon}}\)._
Proof.: Let \(P^{\prime}_{AB}\) be a distribution \(\epsilon\)-close to \(P\) in purified distance. Then, it is \(\sqrt{2\epsilon}\) close to \(P\) in trace distance. We have that
\[2^{-H_{\min}(A|B)_{P^{\prime}}} =P^{\prime}_{\text{guess}}(A|B)\] \[\geq\sum_{b}P^{\prime}_{AB}(f(b),b)\] \[\geq\sum_{b}P_{AB}(f(b),b)-\sqrt{2\epsilon}\] \[=\operatorname{tr}(P)-\sqrt{2\epsilon}\]
which implies that \(H_{\min}(A|B)_{P^{\prime}}\leq\log\frac{1}{\operatorname{tr}(P)-\sqrt{2 \epsilon}}\). Since this is true for every distribution \(\epsilon\)-close to \(P\), it also holds for \(H^{\epsilon}_{\min}(A|B)_{P}\).
We then have that
\[H^{\epsilon^{\prime}}_{\min}(A_{1}^{n}|B_{1}^{n}C_{1}^{n})_{R} \leq H^{\epsilon^{\prime}}_{\min}(A_{1}^{n}|B_{1}^{n}C_{1}^{n} \wedge E)_{R}\] \[\leq H^{\epsilon^{\prime}}_{\min}(A_{1}^{n-l}|B_{1}^{n}C_{1}^{n} \wedge E)_{R}+l\] \[\leq\log\frac{1}{p(E)-\sqrt{2\epsilon^{\prime}}}+l\] \[\leq\log\frac{1}{1-\epsilon^{\prime}-\sqrt{2\epsilon^{\prime}}}+l=O(1)\]
where we have used [17, Lemma 10] in the first line, the dimension bound (which can be proven using Lemma D.1) in the second line, Lemma E.2 in the third line, and the fact that \(p(E)\geq 1-\epsilon^{\prime}\) in the last line.
Also, note that the example given here satisfies
\[\left\|P_{A_{1}^{k}B_{1}^{k}C_{1}^{k}}-P_{A_{1}^{k-1}B_{1}^{k-1}C_{1}^{k-1}}P _{A_{k}B_{k}C_{k}}\right\|_{1}\leq\epsilon\]
for every \(k\). This also proves that a bound on the size of the side information registers (\(B_{k}C_{k}\) here), as we have in Theorem 4.5, is necessary for an approximate version of AEP.
Further, this example also rules out the possibility of a natural approximate extension of the generalised entropy accumulation theorem [10], where the maps satisfy \(\mathcal{M}_{k}\approx_{\epsilon}\mathcal{M}_{k}^{\prime}\) and the maps \(\mathcal{M}_{k}^{\prime}\) satisfy the non-signalling conditions. This is because one can write the entropy accumulation scenario in the form of a generalised entropy accumulation scenario where Eve's information contains the side information \(B_{1}^{k}E\) in each step. Thus, it would not be possible to prove a meaningful bound on the smooth min-entropy without some sort of bound on the information transferred between the adversary's register \(E_{i}\) and the register \(R_{i}\).
Appendix F Classical approximate entropy accumulation
We present a simple proof for the approximate entropy accumulation theorem for classical distributions. This result also requires a much weaker assumption than Theorem 5.1.
**Theorem F.1**.: _Let \(p_{A_{1}^{n}B_{1}^{n}E}\) be a classical distribution such that for every \(k\in[n]\), and \(a_{1}^{k-1},b_{1}^{k-1}\) and \(e\)_
\[\left\|p_{A_{k}B_{k}|a_{1}^{k-1},b_{1}^{k-1},e}-q_{A_{k}B_{k}|a_{1 }^{k-1},b_{1}^{k-1},e}\right\|_{\infty}\leq\epsilon \tag{140}\]
_where \(\left\|v\right\|_{\infty}:=\max_{i}|v(i)|\), and \(q^{(k)}\) satisfies \(q_{B_{k}|a_{1}^{k-1},b_{1}^{k-1},e}^{(k)}=q_{B_{k}|b_{1}^{k-1},e}^{(k)}\), or equivalently, the Markov chain \(A_{1}^{k-1}\leftrightarrow B_{1}^{k-1}E\leftrightarrow B_{k}\). Also, let \(|A_{k}|=|A|\), \(|B_{k}|=|B|\) for every \(k\in[n]\)._
_Then, for \(\epsilon^{\prime}\in(0,1)\) and \(\alpha\in\left(1,1+\frac{1}{\log(1+2|A|)}\right)\), we have that_
\[H_{\min}^{\epsilon^{\prime}}(A_{1}^{n}|B_{1}^{n}E)_{p}\geq\sum_ {k=1}^{n}\inf_{q}H(A_{k}|B_{k}A_{1}^{k-1}B_{1}^{k-1}E)_{q_{A_{k}B_{k}|A_{1}^{k -1}B_{1}^{k-1}E}^{(k)}}\\ -n(\alpha-1)\log^{2}(2|A|+1)-\frac{\alpha}{\alpha-1}n\log(1+ \epsilon|A||B|)-\frac{g_{0}(\epsilon^{\prime})}{\alpha-1}. \tag{141}\]
_where \(g_{0}(x):=-\log(1-\sqrt{1-x^{2}})\). The infima are taken over all possible input probability distributions._
For \(\alpha=1+\sqrt{\epsilon}\) (assuming \(\sqrt{\epsilon}\leq\frac{1}{\log(1+2|A|)}\)), and using \(\alpha\leq 2\) and \(\log(1+x)\leq x\) as long as \(x\geq 0\), the above bound gives us
\[H_{\min}^{\epsilon^{\prime}}(A_{1}^{n}|B_{1}^{n}E)_{p}\geq\sum_ {k=1}^{n}\inf_{q}H(A_{k}|B_{k}A_{1}^{k-1}B_{1}^{k-1}E)_{q_{A_{k}B_{k}|A_{1}^{k -1}B_{1}^{k-1}E}^{(k)}}\\ -n\sqrt{\epsilon}\left(\log^{2}(2|A|+1)+2|A||B|\right)-\frac{g_{0 }(\epsilon^{\prime})}{\sqrt{\epsilon}} \tag{142}\]
Proof.: For every \(k\in[n]\), we modify \(q_{A_{k}B_{k}|A_{1}^{k-1}B_{1}^{k-1}E}^{(k)}\) to create the distributions \(r_{A_{k}B_{k}|A_{1}^{k-1}B_{1}^{k-1}E}^{(k)}\) which are defined as follows
1. Choose a random variable \(C_{k}\) from \(\{0,1\}\) with probabilities \(\left(\frac{|A||B|\epsilon}{1+|A||B|\epsilon},\frac{1}{1+|A||B|\epsilon}\right)\).
2. If \(C_{k}=1\), then choose random variables \(A_{k},B_{k}\) using \(q_{A_{k}B_{k}|A_{1}^{k-1}B_{1}^{k-1}E}^{(k)}\), else choose \(A_{k},B_{k}\) uniformly at random with probability \(\frac{1}{|A||B|}\).
That is, we have
\[r_{A_{k}B_{k}|A_{1}^{k-1}B_{1}^{k-1}E}^{(k)}\coloneqq\frac{1}{1+|A||B|\epsilon} q_{A_{k}B_{k}|A_{1}^{k-1}B_{1}^{k-1}E}^{(k)}+\frac{|A||B|\epsilon}{1+|A||B| \epsilon}u_{A_{k}B_{k}}\]
where \(u_{A_{k}B_{k}}\) is the uniform distribution on the registers \(A_{k}\) and \(B_{k}\).
For every \(k,a_{1}^{k-1},b_{1}^{k-1}\), and \(e\), we have
\[\left\|p_{A_{k}B_{k}|a_{1}^{k-1},b_{1}^{k-1},e}-q_{A_{k}B_{k}|a_{1} ^{k-1},b_{1}^{k-1},e}^{(k)}\right\|_{\infty}\leq\epsilon\] \[\Rightarrow p_{A_{k}B_{k}|a_{1}^{k-1},b_{1}^{k-1},e}\leq q_{A_{k}B_{k}|a_{1 }^{k-1},b_{1}^{k-1},e}^{(k)}+\epsilon\mathds{1}_{A_{k}B_{k}}\] \[\Rightarrow p_{A_{k}B_{k}|a_{1}^{k-1},b_{1}^{k-1},e}\leq q_{A_{k}B_{k}|a_{1 }^{k-1},b_{1}^{k-1},e}^{(k)}+\epsilon|A||B|u_{A_{k}B_{k}}\] \[\Rightarrow p_{A_{k}B_{k}|a_{1}^{k-1},b_{1}^{k-1},e}\leq(1+|A||B| \epsilon)r_{A_{k}B_{k}|A_{1}^{k-1}B_{1}^{k-1}E}^{(k)}\]
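The pointwise domination in the last step can also be checked numerically. The sketch below (illustrative, not part of the proof) generates random \(p\) and \(q\), takes \(\epsilon\) to be their achieved sup-distance, and verifies \(p\leq(1+|A||B|\epsilon)r\) entrywise, using that \(|A||B|\epsilon\,u(a,b)=\epsilon\) for the uniform \(u\).

```python
import random

# Illustrative check (not part of the proof): with eps the sup-distance
# between p and q on the d = |A||B| outcomes, the mixture
# r = (q + d*eps*u)/(1 + d*eps), u uniform, dominates p pointwise up to
# the factor (1 + d*eps), since each uniform entry contributes exactly eps.
random.seed(1)
dA, dB = 2, 4
d = dA * dB

q = [random.random() for _ in range(d)]
Z = sum(q)
q = [x / Z for x in q]
p = [max(x + random.uniform(-0.02, 0.02), 0.0) for x in q]
Z = sum(p)
p = [x / Z for x in p]

eps = max(abs(pi - qi) for pi, qi in zip(p, q))      # achieved sup-distance
r = [(qi + eps) / (1 + d * eps) for qi in q]          # q mixed with d*eps*u
assert all(pi <= (1 + d * eps) * ri + 1e-12 for pi, ri in zip(p, r))
print(eps, sum(r))
```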
Define the distribution
\[r_{A_{1}^{n}B_{1}^{n}E}=\prod_{k=1}^{n}r_{A_{k}B_{k}|A_{1}^{k-1}B_{1}^{k-1}E}^ {(k)}p_{E}. \tag{143}\]
Note that for every \(k,a_{1}^{k-1},b_{1}^{k}\), and \(e\), we have
\[r_{B_{k}|A_{1}^{k-1}B_{1}^{k-1}E}(b_{k}|a_{1}^{k-1}b_{1}^{k-1}e) =\frac{1}{1+|A||B|\epsilon}q_{B_{k}|A_{1}^{k-1}B_{1}^{k-1}E}^{(k) }(b_{k}|a_{1}^{k-1}b_{1}^{k-1}e)+\frac{|A|\epsilon}{1+|A||B|\epsilon}\] \[=\frac{1}{1+|A||B|\epsilon}q_{B_{k}|B_{1}^{k-1}E}^{(k)}(b_{k}|b_{ 1}^{k-1}e)+\frac{|A|\epsilon}{1+|A||B|\epsilon},\]
which implies
\[r_{B_{k}|B_{1}^{k-1}E}(b_{k}|b_{1}^{k-1}e) =\sum_{\bar{a}_{1}^{k-1}}r_{A_{1}^{k-1}|B_{1}^{k-1}E}(\bar{a}_{1} ^{k-1}|b_{1}^{k-1}e)r_{B_{k}|A_{1}^{k-1}B_{1}^{k-1}E}(b_{k}|\bar{a}_{1}^{k-1}b _{1}^{k-1}e)\] \[=\sum_{\bar{a}_{1}^{k-1}}r_{A_{1}^{k-1}|B_{1}^{k-1}E}(\bar{a}_{1} ^{k-1}|b_{1}^{k-1}e)\left(\frac{1}{1+|A||B|\epsilon}q_{B_{k}|B_{1}^{k-1}E}^{(k )}(b_{k}|b_{1}^{k-1}e)+\frac{|A|\epsilon}{1+|A||B|\epsilon}\right)\] \[=r_{B_{k}|A_{1}^{k-1}B_{1}^{k-1}E}(b_{k}|a_{1}^{k-1}b_{1}^{k-1}e).\]
Thus, for every \(k\in[n]\), \(r\) satisfies the Markov chain \(A_{1}^{k-1}\leftrightarrow B_{1}^{k-1}E\leftrightarrow B_{k}\). Further, we have
\[p_{A_{1}^{n}B_{1}^{n}E}(a_{1}^{n},b_{1}^{n},e) =\prod_{k=1}^{n}p_{A_{k}B_{k}|A_{1}^{k-1},B_{1}^{k-1},E}(a_{k},b_ {k}|a_{1}^{k-1},b_{1}^{k-1},e)p_{E}(e)\] \[\leq(1+\epsilon|A||B|)^{n}\prod_{k=1}^{n}r_{A_{k}B_{k}|A_{1}^{k-1 },B_{1}^{k-1},E}^{(k)}(a_{k},b_{k}|a_{1}^{k-1},b_{1}^{k-1},e)p_{E}(e)\] \[=(1+\epsilon|A||B|)^{n}r_{A_{1}^{n}B_{1}^{n}E}^{(k)}(a_{1}^{n},b_ {1}^{n},e)\]
which shows that \(D_{\max}(p_{A_{1}^{n}B_{1}^{n}E}\|r_{A_{1}^{n}B_{1}^{n}E})\leq n\log(1+\epsilon| A||B|)\).
The distribution \(r_{A_{1}^{n}B_{1}^{n}E}\) can be viewed as the result of a series of maps as in Fig. 5. We can now use the EAT chain rule [13, Corollary 3.5] along with [13, Lemma B.9] \(n\) times to bound the entropy of this auxiliary distribution. We get
\[\tilde{H}_{\alpha}^{\dagger}(A_{1}^{n}|B_{1}^{n}E)_{r} \geq\sum_{k=1}^{n}\inf_{A_{1}^{k-1}B_{1}^{k-1}E}\tilde{H}_{\alpha }^{\downarrow}(A_{k}|B_{k}A_{1}^{k-1}B_{1}^{k-1}E)_{r_{A_{k}B_{k}|A_{1}^{k-1}B_ {1}^{k-1}E}^{(k)}}\] \[\geq\sum_{k=1}^{n}\inf_{A_{1}^{k-1}B_{1}^{k-1}E}H(A_{k}|B_{k}A_{1} ^{k-1}B_{1}^{k-1}E)_{r_{A_{k}B_{k}|A_{1}^{k-1}B_{1}^{k-1}E}^{(k)}}-n(\alpha-1) \log^{2}(2|A|+1)\] \[\geq\sum_{k=1}^{n}\left(\inf_{q_{A_{1}^{k-1}B_{1}^{k-1}E}}\frac{1 }{1+|A||B|\epsilon}H(A_{k}|B_{k}A_{1}^{k-1}B_{1}^{k-1}E)_{q_{A_{k}B_{k}|A_{1}^ {k-1}B_{1}^{k-1}E}^{(k)}}+\frac{|A||B|\epsilon}{1+|A||B|\epsilon}\log|A| \right)-n(\alpha-1)\log^{2}(2|A|+1)\] \[\geq\sum_{k=1}^{n}\inf_{q_{A_{1}^{k-1}B_{1}^{k-1}E}}H(A_{k}|B_{k} A_{1}^{k-1}B_{1}^{k-1}E)_{q_{A_{k}B_{k}|A_{1}^{k-1}B_{1}^{k-1}E}^{(k)}}-n( \alpha-1)\log^{2}(2|A|+1)\]
for \(\alpha\in\left(1,1+\frac{1}{\log(1+2|A|)}\right)\). In the third line, we have used the concavity of the von Neumann entropy along with the definition of \(r_{A_{k}B_{k}|A_{1}^{k-1}B_{1}^{k-1}E}^{(k)}\). Using Lemma 3.5, we have
\[H_{\min}^{\epsilon^{\prime}}(A_{1}^{n}|B_{1}^{n}E)_{p} \geq\tilde{H}_{\alpha}^{\dagger}(A_{1}^{n}|B_{1}^{n}E)_{r}-\frac {\alpha}{\alpha-1}D_{\max}(p_{A_{1}^{n}B_{1}^{n}E}\|r_{A_{1}^{n}B_{1}^{n}E})- \frac{g_{1}(\epsilon^{\prime},0)}{\alpha-1}\] \[\geq\sum_{k=1}^{n}\inf_{q_{A_{1}^{k-1}B_{1}^{k-1}E}}H(A_{k}|B_{k} A_{1}^{k-1}B_{1}^{k-1}E)_{q_{A_{k}B_{k}|A_{1}^{k-1}B_{1}^{k-1}E}^{(k)}}\] \[\qquad-n(\alpha-1)\log^{2}(2|A|+1)-\frac{\alpha}{\alpha-1}n\log( 1+\epsilon|A||B|)-\frac{g_{0}(\epsilon^{\prime})}{\alpha-1}.\]
Figure 5: Setting for classical EAT
Appendix G Proof of Theorem 6.3
In this section, we formally prove the lower bound on the smooth min-entropy required for the security of QKD in Theorem 6.3 using the entropy accumulation theorem (EAT). In Section 6.1 (Eq. 93 and 94), we showed that \(\rho^{\prime}_{X_{1}^{n}\Theta_{1}^{n}A_{1}^{n}}:=\bar{\bar{\rho}}_{X_{1}^{n} \Theta_{1}^{n}A_{1}^{n}|\Omega}\) and \(\sigma_{X_{1}^{n}\Theta_{1}^{n}A_{1}^{n}}=\left(\hat{\rho}^{(\epsilon+\delta)}_ {X\Theta A}\right)^{\otimes n}\) are such that
\[\frac{1}{2}\left\|\rho^{\prime}_{X_{1}^{n}\Theta_{1}^{n}A_{1}^{n} }-\bar{\rho}_{X_{1}^{n}\Theta_{1}^{n}A_{1}^{n}|\Omega}\right\|_{1}\leq\frac{ \epsilon_{f}^{2}}{2} \tag{144}\]
and
\[D_{\max}(\rho^{\prime}_{X_{1}^{n}\Theta_{1}^{n}A_{1}^{n}}\| \sigma_{X_{1}^{n}\Theta_{1}^{n}A_{1}^{n}})\leq nh(\epsilon+\delta)+\log\frac{ 1}{\Pr_{\rho}(\Omega)-\epsilon_{\mathrm{qu}}^{\delta}}. \tag{145}\]
Fix an arbitrary strategy for Eve. Let \(\Phi_{\mathrm{QKD}}:X_{1}^{n}\Theta_{1}^{n}A_{1}^{n}\to X_{1}^{n}Y_{1}^{n} \hat{X}_{S}\hat{C}_{1}^{n}\Theta_{1}^{n}\hat{\Theta}_{1}^{n}STE\) be the map applied by Alice, Bob and Eve on the states produced by Alice during the QKD protocol. In order to prove security for the BB84 protocol, we need a lower bound on the following smooth min-entropy of \(\Phi_{\mathrm{QKD}}(\bar{\rho})\)
\[H^{\nu}_{\min}(X_{S}|ET\Theta_{1}^{n}\hat{\Theta}_{1}^{n})_{ \Phi_{\mathrm{QKD}}(\bar{\rho})|_{\Upsilon}}\]
for some \(\nu\geq 0\). In [10, Appendix A], it is shown that it is sufficient to show a lower bound for the smooth min-entropy of the final state of the protocol conditioned on the event \(\Upsilon^{\prime\prime}\) when the protocol uses perfect source states. The arguments mentioned there are also valid for our case, which is why we bound the smooth min-entropy
\[H^{\nu}_{\min}(X_{S}|ET\Theta_{1}^{n}\hat{\Theta}_{1}^{n})_{ \Phi_{\mathrm{QKD}}(\bar{\rho})|_{\Upsilon^{\prime\prime}}}\]
in Theorem 6.3.
Footnote 11: The arguments in [10, Appendix A] can also be modified to show that it is sufficient to show that \(P(\Upsilon^{\prime\prime})\left\|\rho^{f}_{K_{A}E^{\prime}}-\tau_{K_{A}}\otimes \rho^{f}_{E^{\prime}}\right\|_{1}\) is small, where \(K_{A}\) is Alice’s key and \(\rho^{f}\) is the state produced at the end of the protocol conditioned on not aborting, to prove the security of QKD.
Using the data processing inequality and Eq. 145, we see that
\[D_{\max}(\Phi_{\mathrm{QKD}}(\rho^{\prime}_{X_{1}^{n}\Theta_{1}^ {n}A_{1}^{n}})\|\Phi_{\mathrm{QKD}}(\sigma_{X_{1}^{n}\Theta_{1}^{n}A_{1}^{n}} ))\leq nh(\epsilon+\delta)+\log\frac{1}{\Pr_{\rho}(\Omega)-\epsilon_{\mathrm{ qu}}^{\delta}}. \tag{146}\]
Note that \(\Phi_{\mathrm{QKD}}(\rho^{\prime}_{X_{1}^{n}\Theta_{1}^{n}A_{1}^{n}})\) and \(\Phi_{\mathrm{QKD}}(\sigma_{X_{1}^{n}\Theta_{1}^{n}A_{1}^{n}})\) are the states that are produced at the end of the protocol if Alice's source were to produce the states \(\rho^{\prime}_{X_{1}^{n}\Theta_{1}^{n}A_{1}^{n}}\) and \(\sigma_{X_{1}^{n}\Theta_{1}^{n}A_{1}^{n}}\)
respectively. The states \(\Phi_{\mathrm{QKD}}(\rho^{\prime}_{X_{1}^{n}\Theta_{1}^{n}A_{1}^{n}})\) and \(\Phi_{\mathrm{QKD}}(\sigma_{X_{1}^{n}\Theta_{1}^{n}A_{1}^{n}})\) also contain the same classical variables as the real protocol state \(\Phi_{\mathrm{QKD}}(\bar{\rho}_{X_{1}^{n}\Theta_{1}^{n}A_{1}^{n}})\). In particular, the event \(\Upsilon^{\prime\prime}\) is well-defined (it is defined using classical variables) for both of these states.
Using Lemma 6.2 and Eq. 144, we have that the final states conditioned on the event \(\Upsilon^{\prime\prime}\) satisfy
\[\left\|\Phi_{\mathrm{QKD}}(\bar{\rho}_{X_{1}^{n}\Theta_{1}^{n}A_{1}^{n}})_{| \Omega\wedge\Upsilon^{\prime\prime}}-\Phi_{\mathrm{QKD}}(\rho^{\prime}_{X_{1}^ {n}\Theta_{1}^{n}A_{1}^{n}})_{|\Upsilon^{\prime\prime}}\right\|_{1}\leq\frac{ \epsilon_{f}^{2}}{\Pr_{\bar{\rho}}(\Upsilon^{\prime\prime}|\Omega)} \tag{147}\]
where \(\Pr_{\bar{\rho}}(\Upsilon^{\prime\prime}|\Omega)\) is the probability of the event \(\Upsilon^{\prime\prime}\) for the state \(\Phi_{\mathrm{QKD}}(\bar{\rho}_{X_{1}^{n}\Theta_{1}^{n}A_{1}^{n}})_{|\Omega}\). Using the Fuchs-van de Graaf inequality [16, Lemma 3.5], we can transform this to a purified distance bound
\[P(\Phi_{\mathrm{QKD}}(\bar{\rho}_{X_{1}^{n}\Theta_{1}^{n}A_{1}^{n}})_{|\Omega \wedge\Upsilon^{\prime\prime}},\Phi_{\mathrm{QKD}}(\rho^{\prime}_{X_{1}^{n} \Theta_{1}^{n}A_{1}^{n}})_{|\Upsilon^{\prime\prime}})\leq\sqrt{\frac{2}{P_{ \bar{\rho}}(\Upsilon^{\prime\prime}|\Omega)}}\epsilon_{f}. \tag{148}\]
Let \(d:=D_{\max}(\Phi_{\mathrm{QKD}}(\rho^{\prime}_{X_{1}^{n}\Theta_{1}^{n}A_{1}^{n} })\|\Phi_{\mathrm{QKD}}(\sigma_{X_{1}^{n}\Theta_{1}^{n}A_{1}^{n}}))\). We have proven an upper bound on \(d\) in Eq. 146. By definition of \(D_{\max}\), we have
\[\Phi_{\mathrm{QKD}}(\rho^{\prime}_{X_{1}^{n}\Theta_{1}^{n}A_{1}^{n}})\leq 2^{ d}\Phi_{\mathrm{QKD}}(\sigma_{X_{1}^{n}\Theta_{1}^{n}A_{1}^{n}}).\]
Conditioning both sides on the event \(\Upsilon^{\prime\prime}\) implies that
\[P_{\rho^{\prime}}(\Upsilon^{\prime\prime})\Phi_{\mathrm{QKD}}(\rho^{\prime}_{X _{1}^{n}\Theta_{1}^{n}A_{1}^{n}})_{|\Upsilon^{\prime\prime}}\leq 2^{d}P_{ \sigma}(\Upsilon^{\prime\prime})\Phi_{\mathrm{QKD}}(\sigma_{X_{1}^{n}\Theta_ {1}^{n}A_{1}^{n}})_{|\Upsilon^{\prime\prime}}\]
where \(P_{\rho^{\prime}}(\Upsilon^{\prime\prime})\) and \(P_{\sigma}(\Upsilon^{\prime\prime})\) are the probability for \(\Upsilon^{\prime\prime}\) for the states \(\Phi_{\mathrm{QKD}}(\rho^{\prime}_{X_{1}^{n}\Theta_{1}^{n}A_{1}^{n}})\) and \(\Phi_{\mathrm{QKD}}(\sigma_{X_{1}^{n}\Theta_{1}^{n}A_{1}^{n}})\) respectively. Therefore, we have
\[D_{\max}(\Phi_{\mathrm{QKD}}(\rho^{\prime}_{X_{1}^{n}\Theta_{1}^{n}A_{1}^{n}}) _{|\Upsilon^{\prime\prime}}\|\Phi_{\mathrm{QKD}}(\sigma_{X_{1}^{n}\Theta_{1}^{ n}A_{1}^{n}})_{|\Upsilon^{\prime\prime}})\leq d+\log\frac{P_{\sigma}(\Upsilon^{ \prime\prime})}{P_{\rho^{\prime}}(\Upsilon^{\prime\prime})}.\]
Together, with Eq. 148 for \(\epsilon_{\mathrm{pa}}:=\left(\frac{2}{P_{\bar{\rho}}(\Upsilon^{\prime\prime}| \Omega)}\right)^{\frac{1}{2}}\epsilon_{f}\), we have that
\[D_{\max}^{\epsilon_{\mathrm{pa}}}(\Phi_{\mathrm{QKD}}(\bar{\rho}_{X_{1}^{n} \Theta_{1}^{n}A_{1}^{n}})_{|\Omega\wedge\Upsilon^{\prime\prime}}\|\Phi_{ \mathrm{QKD}}(\sigma_{X_{1}^{n}\Theta_{1}^{n}A_{1}^{n}})_{|\Upsilon^{\prime \prime}})\leq d+\log\frac{P_{\sigma}(\Upsilon^{\prime\prime})}{P_{\rho^{\prime}}( \Upsilon^{\prime\prime})}. \tag{149}\]
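The conditioning argument for \(D_{\max}\) used above has a direct classical analogue, which the following sketch checks numerically (illustrative distributions; not part of the proof):

```python
import math
import random

# Classical analogue (illustrative): if p <= 2^d * s pointwise, then
# conditioning both distributions on an event Y gives
# D_max(p_|Y || s_|Y) <= d + log(P_s(Y) / P_p(Y)).
random.seed(2)
m = 8
s = [random.random() for _ in range(m)]
Z = sum(s)
s = [x / Z for x in s]
p = [si * random.uniform(0.2, 2.0) for si in s]
Z = sum(p)
p = [x / Z for x in p]

d = max(math.log2(pi / si) for pi, si in zip(p, s))   # D_max(p || s)
Y = range(m // 2)                                      # conditioning event
Pp = sum(p[i] for i in Y)
Ps = sum(s[i] for i in Y)
d_cond = max(math.log2((p[i] / Pp) / (s[i] / Ps)) for i in Y)
assert d_cond <= d + math.log2(Ps / Pp) + 1e-9
print(d, d_cond)
```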
Let \(\epsilon_{1},\epsilon_{2},\epsilon_{3}>0\) be arbitrary parameters. We have
\[H_{\min}^{\epsilon_{\text{pa}}+\epsilon_{1}+2(\epsilon_{2}+ \epsilon_{3})}(X_{S}|E\Theta_{1}^{n}\hat{\Theta}_{1}^{n}T)_{\Phi_{\text{QKD}}( \bar{\rho})_{|\Omega\wedge\Upsilon^{\prime\prime}}}\] \[=H_{\min}^{\epsilon_{\text{pa}}+\epsilon_{1}+2(\epsilon_{2}+ \epsilon_{3})}(\bar{X}_{1}^{n}|E\Theta_{1}^{n}\hat{\Theta}_{1}^{n}T)_{\Phi_{ \text{QKD}}(\bar{\rho})_{|\Omega\wedge\Upsilon^{\prime\prime}}}\] \[\geq H_{\min}^{\epsilon_{\text{pa}}+\epsilon_{1}}(\bar{X}_{1}^{n} \bar{Y}_{1}^{n}|E\Theta_{1}^{n}\hat{\Theta}_{1}^{n}T)_{\Phi_{\text{QKD}}(\bar{ \rho})_{|\Omega\wedge\Upsilon^{\prime\prime}}}-H_{\max}^{\epsilon_{2}}(\bar{Y} _{1}^{n}|\bar{X}_{1}^{n}E\Theta_{1}^{n}\hat{\Theta}_{1}^{n}T)_{\Phi_{\text{QKD }}(\bar{\rho})_{|\Omega\wedge\Upsilon^{\prime\prime}}}-3g_{0}(\epsilon_{3})\] \[\geq H_{\min}^{\epsilon_{\text{pa}}+\epsilon_{1}}(\bar{X}_{1}^{n} \bar{Y}_{1}^{n}|E\Theta_{1}^{n}\hat{\Theta}_{1}^{n})_{\Phi_{\text{QKD}}(\bar{ \rho})_{|\Omega\wedge\Upsilon^{\prime\prime}}}-\log|T|-H_{\max}^{\epsilon_{2}} (\bar{Y}_{1}^{n}|\bar{X}_{1}^{n}E\Theta_{1}^{n}\hat{\Theta}_{1}^{n}T)_{\Phi_{ \text{QKD}}(\bar{\rho})_{|\Omega\wedge\Upsilon^{\prime\prime}}}-3g_{0}( \epsilon_{3}) \tag{150}\]
where in the first line we have used the fact that given \(\Theta_{1}^{n}\) and \(\hat{\Theta}_{1}^{n}\), one can figure out the set \(S\) and then \(\bar{X}_{1}^{n}=X_{S}(\bot)_{S^{c}}\) (see Table 1 for the definition of the registers), in the second line we have used the chain rule for the smooth min-entropy [23, Theorem 15], and in the last line we have used the dimension bound. We have used the chain rule here to reduce our proof to bounding an entropy which, in the perfect source case, can be bounded using entropy accumulation [14, Section 5.1].
Now, we can use Lemma 3.5 to derive
\[H_{\min}^{\epsilon_{\text{pa}}+\epsilon_{1}}(\bar{X}_{1}^{n}\bar {Y}_{1}^{n}|E\Theta_{1}^{n}\hat{\Theta}_{1}^{n})_{\Phi_{\text{QKD}}(\bar{\rho })_{|\Omega\wedge\Upsilon^{\prime\prime}}}\] \[\geq\tilde{H}_{\alpha}^{\dagger}(\bar{X}_{1}^{n}\bar{Y}_{1}^{n}|E \Theta_{1}^{n}\hat{\Theta}_{1}^{n})_{\Phi_{\text{QKD}}(\sigma)_{|\Upsilon^{ \prime\prime}}}\] \[\qquad\qquad-\frac{\alpha}{\alpha-1}D_{\max}^{\epsilon_{\text{pa} }}(\Phi_{\text{QKD}}(\bar{\rho}_{X_{1}^{n}\Theta_{1}^{n}A_{1}^{n}})_{|\Omega \wedge\Upsilon^{\prime\prime}}\|\Phi_{\text{QKD}}(\sigma_{X_{1}^{n}\Theta_{ 1}^{n}A_{1}^{n}})_{|\Upsilon^{\prime\prime}})-\frac{g_{1}(\epsilon_{1}, \epsilon_{\text{pa}})}{\alpha-1}\] \[\geq\tilde{H}_{\alpha}^{\dagger}(\bar{X}_{1}^{n}\bar{Y}_{1}^{n}|E \Theta_{1}^{n}\hat{\Theta}_{1}^{n})_{\Phi_{\text{QKD}}(\sigma)_{|\Upsilon^{ \prime\prime}}}\] \[\qquad\qquad-\frac{\alpha}{\alpha-1}d-\frac{\alpha}{\alpha-1}\log \frac{P_{\sigma}(\Upsilon^{\prime\prime})}{P_{\rho^{\prime}}(\Upsilon^{\prime \prime})}-\frac{g_{1}(\epsilon_{1},\epsilon_{\text{pa}})}{\alpha-1} \tag{151}\]
Thus, we have reduced the problem to lower bounding \(\alpha\)-Renyi conditional entropy for the QKD protocol in Protocol 2, where Alice's source produces noisy BB84 states. We can bound this conditional entropy using the entropy accumulation theorem. The only difference in the following arguments from [14, Section 5.1] is that we need to employ entropy accumulation for \(\alpha\)-Renyi entropies (also see [11]).
Firstly, note that we can use source purification for the state \(\Phi_{\text{QKD}}(\sigma)\), that is, we can imagine that the state \(\Phi_{\text{QKD}}(\sigma)\) was produced by the following procedure:
1. Alice prepares \(n\) Bell states \((\Phi^{+})_{\bar{A}_{1}^{n}A_{1}^{n}}^{\otimes n}\).
2. For each \(i\in[n]\), Alice measures the qubit \(\bar{A}_{i}\) in the basis \(\Theta_{i}\), which is chosen to be \(Z\) with probability \((1-\mu)\) and otherwise is chosen to be \(X\). The measurement result is labelled \(X_{i}\).
3. She then applies the \(2(\epsilon+\delta)-\)depolarising channel to each of the qubits \(A_{i}\) for \(i\in[n]\) and sends them over the channel to Bob.
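As a sanity check on step 3 (illustrative, not part of the proof; we assume the convention \(D_{q}(\rho)=(1-q)\rho+q\,\mathds{1}/2\) with \(q=2(\epsilon+\delta)\) for the \(2(\epsilon+\delta)\)-depolarising channel), one can verify that Bob's same-basis measurement then errs with probability exactly \(\epsilon+\delta\) in either BB84 basis:

```python
import math

# Illustrative check: after Alice's measurement yields x, Bob's qubit is the
# basis state |x>; the depolarising channel with q = 2(eps + delta) then
# gives a same-basis error probability of q/2 = eps + delta in both bases.
eps, delta = 0.03, 0.02
q = 2 * (eps + delta)
s = 1 / math.sqrt(2)
bases = {"Z": ([1, 0], [0, 1]), "X": ([s, s], [s, -s])}  # (|x>, orthogonal)

def outer(v, w):                     # |v><w| for real vectors
    return [[vi * wj for wj in w] for vi in v]

def depolarise(rho):                 # (1-q) rho + q * I/2 for a qubit state
    return [[(1 - q) * rho[i][j] + q * (0.5 if i == j else 0.0)
             for j in range(2)] for i in range(2)]

def expect(rho, v):                  # <v| rho |v> for a real vector v
    return sum(v[i] * rho[i][j] * v[j] for i in range(2) for j in range(2))

for name, (ket_x, ket_bad) in bases.items():
    rho = depolarise(outer(ket_x, ket_x))   # Bob's state after the channel
    err = expect(rho, ket_bad)              # Pr[Bob's outcome disagrees]
    assert math.isclose(err, eps + delta)   # symmetric in x by linearity
    print(name, err)
```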
We can imagine that the source state is prepared in this fashion. The initial state for EAT will be represented by the registers \(\bar{A}_{1}^{n}A_{1}^{n}E\), which contain the state produced after Eve forwards the state produced above by Alice to Bob. We can now define the EAT maps \(\mathcal{M}_{i}:\bar{A}_{i}^{n}A_{i}^{n}\rightarrow\bar{A}_{i+1}^{n}A_{i+1}^{n }\bar{X}_{i}\bar{Y}_{i}\Theta_{i}\hat{\Theta}_{i}C_{i}\), where the registers \(\Theta_{i}\) and \(\hat{\Theta}_{i}\) are produced by randomly sampling according to the probabilities in the protocol, \(\bar{X}_{i}\) and \(\bar{Y}_{i}\) are produced according to the measurements chosen in the protocol and the source preparation procedure above, and \(C_{i}\) is defined as in Table 1.
Note that by conditioning on the event \(\Upsilon^{\prime\prime}\), we are requiring that \(q=\text{freq}(C_{1}^{n})\) satisfies \(q(1)\leq e\mu^{2}\). It is shown in [14, Proof of Claim 5.2] that there exists an affine min-tradeoff function \(f\), such that \(C_{1}^{n}\) given \(\Upsilon^{\prime\prime}\) satisfies \(f(\text{freq}(C_{1}^{n}))\geq 1-2\mu+\mu^{2}-h(e)\). Using the entropy accumulation theorem [14, Proposition 4.5], we get
\[\tilde{H}_{\alpha}^{\dagger}(\bar{X}_{1}^{n}\bar{Y}_{1}^{n}|E \Theta_{1}^{n}\hat{\Theta}_{1}^{n})_{\Phi_{\text{QKD}}(\sigma^{\delta})_{| \Upsilon^{\prime\prime}}}\geq n(1-2\mu+\mu^{2}-h(e))-n\frac{\alpha-1}{4}V^{2}- \frac{\alpha}{\alpha-1}\log\frac{1}{P_{\sigma}(\Upsilon^{\prime\prime})} \tag{152}\]
where \(V:=2\|\nabla f\|_{\infty}+2\log(1+2|\mathcal{X}|^{2})=\frac{2}{ \mu^{2}}\log\frac{1-e}{e}+2\log(1+2|\mathcal{X}|^{2})\) and \(1<\alpha<1+\frac{2}{V}\). Combining Eq. 151 and 152, we get
Footnote 13: It should be noted that this term can be improved using [14].
\[H_{\min}^{\epsilon_{\text{pa}}+\epsilon_{1}} (\bar{X}_{1}^{n}\bar{Y}_{1}^{n}|E\Theta_{1}^{n}\hat{\Theta}_{1}^{n })_{\Phi_{\text{QKD}}(\bar{\rho})_{|\Omega\Lambda\Upsilon^{\prime\prime}}}\] \[\geq n(1-2\mu+\mu^{2}-h(e))-n\frac{\alpha-1}{4}V^{2}-\frac{\alpha }{\alpha-1}d-\frac{\alpha}{\alpha-1}\log\frac{1}{P_{\rho^{\prime}}(\Upsilon^{ \prime\prime})}-\frac{g_{1}(\epsilon_{1},\epsilon_{\text{pa}})}{\alpha-1}\] \[\geq n(1-2\mu+\mu^{2}-h(e))-n\frac{\alpha-1}{4}V^{2}-\frac{\alpha }{\alpha-1}nh(\epsilon+\delta)-\frac{\alpha}{\alpha-1}\log\frac{1}{\Pr_{\rho}( \Omega)-\epsilon_{\text{qu}}^{\delta}}\] \[\qquad-\frac{\alpha}{\alpha-1}\log\frac{1}{P_{\bar{\rho}}( \Upsilon^{\prime\prime}|\Omega)-\frac{2\epsilon_{\text{qu}}^{\delta}}{P_{\rho }(\Omega)}}-\frac{g_{1}(\epsilon_{1},\epsilon_{\text{pa}})}{\alpha-1}\] \[\geq n(1-2\mu+\mu^{2}-h(e))-n\frac{\alpha-1}{4}V^{2}-\frac{\alpha }{\alpha-1}nh(\epsilon+\delta)\] \[\qquad-\frac{\alpha}{\alpha-1}\left(\log\frac{1}{P_{\rho}(\Omega \wedge\Upsilon^{\prime\prime})-2\epsilon_{\text{qu}}^{\delta}}+1\right)- \frac{g_{1}(\epsilon_{1},\epsilon_{\text{pa}})}{\alpha-1} \tag{153}\]
where we have used Eq. 144, \(\epsilon_{f}=2\sqrt{\frac{\epsilon_{\text{qu}}^{\delta}}{P_{\rho}(\Omega)}}\), \(P_{\rho}(\Omega\wedge\Upsilon^{\prime\prime})=P_{\rho}(\Omega)P_{\bar{\rho}}( \Upsilon^{\prime\prime}|\Omega)\) and \(P_{\rho}(\Omega)\geq P_{\rho}(\Omega\wedge\Upsilon^{\prime\prime})>2\epsilon_ {\text{qu}}^{\delta}\) to simplify the result. It should be noted that the probability \(P_{\sigma}(\Upsilon^{\prime\prime})\) of
the auxiliary state cancels out. Since we restrict \(\epsilon\) and \(\delta\) to the region where \(h(\epsilon+\delta)<\frac{1}{\sqrt{2}}\), we can choose
\[\alpha:=1+\frac{2\sqrt{2h(\epsilon+\delta)}}{V} \tag{154}\]
which gives us the bound
\[H_{\min}^{\epsilon_{\mathrm{pa}}+\epsilon_{1}} (\bar{X}_{1}^{n}\bar{Y}_{1}^{n}|E\Theta_{1}^{n}\hat{\Theta}_{1}^{n })_{\Phi_{\mathrm{QKD}}(\bar{\rho})_{|\Omega\wedge\Upsilon^{\prime\prime}}} \geq n\big{(}1-2\mu+\mu^{2}-h(e)-V\sqrt{2h(\epsilon+\delta)}\big{)}\] \[-\frac{V}{\sqrt{2h(\epsilon+\delta)}}\left(\log\frac{1}{P_{\rho} (\Omega\wedge\Upsilon^{\prime\prime})-2\epsilon_{\mathrm{qu}}^{\delta}}+1 \right)-\frac{g_{1}(\epsilon_{1},\epsilon_{\mathrm{pa}})}{2\sqrt{2h(\epsilon+ \delta)}}V. \tag{155}\]
We also need to bound \(H_{\max}^{\epsilon_{2}}(\bar{Y}_{1}^{n}|\bar{X}_{1}^{n}E\Theta_{1}^{n}\hat{ \Theta}_{1}^{n}T)_{\Phi_{\mathrm{QKD}}(\bar{\rho})_{|\Omega\wedge\Upsilon^{ \prime\prime}}}\) in Eq. 150. The bound and the proof for this bound are the same as in [14, Claim 5.2]. We have for \(\beta\in(1,2)\) that
\[H_{\max}^{\epsilon_{2}} (\bar{Y}_{1}^{n}|\bar{X}_{1}^{n}E\Theta_{1}^{n}\hat{\Theta}_{1}^{ n}T)_{\Phi_{\mathrm{QKD}}(\bar{\rho})_{|\Omega\wedge\Upsilon^{\prime\prime}}}\] \[\leq H_{\max}^{\epsilon_{2}}(\bar{Y}_{1}^{n}|\Theta_{1}^{n}\hat{ \Theta}_{1}^{n})_{\Phi_{\mathrm{QKD}}(\bar{\rho})_{|\Omega\wedge\Upsilon^{ \prime\prime}}}\] \[\leq\tilde{H}_{\frac{1}{\beta}}^{\downarrow}(\bar{Y}_{1}^{n}| \Theta_{1}^{n}\hat{\Theta}_{1}^{n})_{\Phi_{\mathrm{QKD}}(\bar{\rho})_{|\Omega \wedge\Upsilon^{\prime\prime}}}+\frac{g_{0}\big{(}\epsilon_{2}\big{)}}{\beta-1}\] \[\leq\tilde{H}_{\frac{1}{\beta}}^{\downarrow}(\bar{Y}_{1}^{n}| \Theta_{1}^{n}\hat{\Theta}_{1}^{n})_{\Phi_{\mathrm{QKD}}(\bar{\rho})}+\frac{ \beta}{\beta-1}\log\frac{1}{P_{\bar{\rho}}(\Omega\wedge\Upsilon^{\prime\prime })}+\frac{g_{0}(\epsilon_{2})}{\beta-1}\] \[=\frac{\beta}{\beta-1}\log\sum_{\theta_{1}^{n},\hat{\theta}_{1}^{ n}}P(\theta_{1}^{n},\hat{\theta}_{1}^{n})2^{\left(1-\frac{1}{\beta}\right) \tilde{H}_{\frac{1}{\beta}}^{\downarrow}(\bar{Y}_{1}^{n}|\theta_{1}^{n},\hat{ \theta}_{1}^{n})}+\frac{\beta}{\beta-1}\log\frac{1}{P_{\bar{\rho}}(\Omega \wedge\Upsilon^{\prime\prime})}+\frac{g_{0}(\epsilon_{2})}{\beta-1}\]
where the first line follows from the data processing inequality for the smooth max-entropy, the second line from [14, Lemma B.10], the third line from [14, Lemma B.6], and the last line by expanding the conditional Rényi entropy over the classical values of \(\Theta_{1}^{n}\hat{\Theta}_{1}^{n}\). Let the random variable \(Z\) denote the number of \(i\in[n]\) such that \(\Theta_{i}=\hat{\Theta}_{i}=1\). Then, we have the following inequalities for the first term in the bound above
\[\frac{\beta}{\beta-1}\log\sum_{\theta_{1}^{n},\hat{\theta}_{1}^{n}}P(\theta_{1}^{n},\hat{\theta}_{1}^{n})2^{\left(1-\frac{1}{\beta}\right)\tilde{H}_{\frac{1}{\beta}}^{\downarrow}(\bar{Y}_{1}^{n}|\theta_{1}^{n},\hat{\theta}_{1}^{n})} \leq\frac{\beta}{\beta-1}\log\sum_{\theta_{1}^{n},\hat{\theta}_{1}^{n}}P(\theta_{1}^{n},\hat{\theta}_{1}^{n})2^{\left(1-\frac{1}{\beta}\right)Z_{\theta_{1}^{n},\hat{\theta}_{1}^{n}}}\] \[=\frac{\beta}{\beta-1}\log\sum_{z=0}^{n}\binom{n}{z}\mu^{2z}(1-\mu^{2})^{n-z}2^{\left(1-\frac{1}{\beta}\right)z}\] \[=n\frac{\beta}{\beta-1}\log\left(1-\mu^{2}+2^{\left(1-\frac{1}{\beta}\right)}\mu^{2}\right)\] \[\leq n\mu^{2}\frac{\beta}{(\beta-1)\ln(2)}\left(2^{\left(1-\frac{1}{\beta}\right)}-1\right)\] \[\leq n\mu^{2}\left(1+\frac{(\beta-1)\ln(2)}{\beta}\right)\]
where we use \(Z_{\theta_{1}^{n},\hat{\theta}_{1}^{n}}\) to denote the fact that the value of random variable \(Z\) is fixed by \(\theta_{1}^{n}\) and \(\hat{\theta}_{1}^{n}\), in the second line we transform the expectation over \(\theta_{1}^{n}\) and \(\hat{\theta}_{1}^{n}\) into an expectation over \(Z\), in the third line we use the binomial theorem, in the fourth line we use the fact that \(\ln(1+x)\leq x\) for all \(x>-1\), and in the last line we use the fact that \(e^{x}\leq 1+x+x^{2}\) for \(x\in(0,1)\) and that for \(\beta>1\) the term \(\ln(2)\left(1-\frac{1}{\beta}\right)\) lies in this range. Thus, we get that for \(\beta\in(1,2)\),
\[H_{\max}^{\epsilon_{2}}(\bar{Y}_{1}^{n}|\bar{X}_{1}^{n}E\Theta_{1}^{n}\hat{ \Theta}_{1}^{n}T)_{\Phi_{\mathrm{QKD}}(\bar{\rho})_{|\Omega\wedge\Upsilon^{ \prime\prime}}}\leq n\mu^{2}+\frac{(\beta-1)\ln(2)}{\beta}n\mu^{2}+\frac{\beta }{\beta-1}\log\frac{1}{P_{\bar{\rho}}(\Omega\wedge\Upsilon^{\prime\prime})}+ \frac{g_{0}(\epsilon_{2})}{\beta-1}.\]
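The chain of steps above can be checked numerically; the short sketch below (with arbitrary illustrative values of \(n\), \(\mu\), \(\beta\), chosen by us) evaluates each displayed line and confirms the binomial-theorem equality and the two inequalities:

```python
from math import comb, log, log2

def mgf_sum(n, mu, beta):
    """E[2^{(1 - 1/beta) Z}] for Z ~ Binomial(n, mu^2): the sum in the second line."""
    q = mu ** 2
    return sum(comb(n, z) * q ** z * (1 - q) ** (n - z) * 2 ** ((1 - 1 / beta) * z)
               for z in range(n + 1))

n, mu, beta = 60, 0.3, 1.5          # illustrative values only
c = beta / (beta - 1)

line2 = c * log2(mgf_sum(n, mu, beta))                              # expectation over Z
line3 = n * c * log2(1 - mu ** 2 + 2 ** (1 - 1 / beta) * mu ** 2)   # binomial theorem
line4 = n * mu ** 2 * c / log(2) * (2 ** (1 - 1 / beta) - 1)        # via ln(1+x) <= x
line5 = n * mu ** 2 * (1 + (beta - 1) * log(2) / beta)              # via e^x <= 1 + x + x^2
```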
Choosing \(\beta=1+\frac{1}{\sqrt{n}}\) and using the coarse bound \(1<\beta<2\) gives us
\[H_{\max}^{\epsilon_{2}}(\bar{Y}_{1}^{n}|\bar{X}_{1}^{n}E\Theta_{1}^{n}\hat{ \Theta}_{1}^{n}T)_{\Phi_{\mathrm{QKD}}(\bar{\rho})_{|\Omega\wedge\Upsilon^{ \prime\prime}}}\leq n\mu^{2}+\sqrt{n}\left(\mu^{2}\ln(2)+2\log\frac{1}{P_{ \bar{\rho}}(\Omega\wedge\Upsilon^{\prime\prime})}+g_{0}(\epsilon_{2})\right). \tag{156}\]
Combining Eq. 150, 155, and 156, we get
\[H_{\min}^{\epsilon_{\mathrm{pa}}+\epsilon_{1}+2(\epsilon_{2}+\epsilon_{3})}(X_{S}|E\Theta_{1}^{n}\hat{\Theta}_{1}^{n}T)_{\Phi_{\mathrm{QKD}}(\bar{\rho})_{|\Omega\wedge\Upsilon^{\prime\prime}}}\] \[\quad\geq n(1-2\mu-h(\epsilon)-V\sqrt{2h(\epsilon+\delta)})-\sqrt{n}\left(\mu^{2}\ln(2)+2\log\frac{1}{P_{\bar{\rho}}(\Omega\wedge\Upsilon^{\prime\prime})}+g_{0}(\epsilon_{2})\right)\] \[\quad-\frac{V}{\sqrt{2h(\epsilon+\delta)}}\left(\log\frac{1}{P_{\bar{\rho}}(\Omega\wedge\Upsilon^{\prime\prime})-2\epsilon_{\mathrm{qu}}^{\delta}}+1\right)-\frac{g_{1}(\epsilon_{1},\epsilon_{\mathrm{pa}})}{2\sqrt{2h(\epsilon+\delta)}}V-\log|T|-3g_{0}(\epsilon_{3}) \tag{157}\]
where the parameters \(\epsilon_{1},\epsilon_{2},\epsilon_{3}>0\) are arbitrary, and
\[\epsilon_{\mathrm{pa}}=2\left(\frac{2\epsilon_{\mathrm{qu}}^{\delta}}{P_{ \bar{\rho}}(\Omega\wedge\Upsilon^{\prime\prime})}\right)^{1/2}.\]
For an arbitrary \(\epsilon^{\prime}>0\), we can set \(\epsilon_{1}=\frac{\epsilon^{\prime}}{2}\) and \(\epsilon_{2}=\epsilon_{3}=\frac{\epsilon^{\prime}}{8}\) to derive the result in the theorem.
|
2306.01089 | Semi-supervised Community Detection via Structural Similarity Metrics | Motivated by social network analysis and network-based recommendation
systems, we study a semi-supervised community detection problem in which the
objective is to estimate the community label of a new node using the network
topology and partially observed community labels of existing nodes. The network
is modeled using a degree-corrected stochastic block model, which allows for
severe degree heterogeneity and potentially non-assortative communities. We
propose an algorithm that computes a `structural similarity metric' between the
new node and each of the $K$ communities by aggregating labeled and unlabeled
data. The estimated label of the new node corresponds to the value of $k$ that
maximizes this similarity metric. Our method is fast and numerically
outperforms existing semi-supervised algorithms. Theoretically, we derive
explicit bounds for the misclassification error and show the efficiency of our
method by comparing it with an ideal classifier. Our findings highlight, to the
best of our knowledge, the first semi-supervised community detection algorithm
that offers theoretical guarantees. | Yicong Jiang, Tracy Ke | 2023-06-01T19:02:50Z | http://arxiv.org/abs/2306.01089v1 | # Semi-supervised Community Detection via Structural Similarity Metrics
###### Abstract
Motivated by social network analysis and network-based recommendation systems, we study a semi-supervised community detection problem in which the objective is to estimate the community label of a new node using the network topology and partially observed community labels of existing nodes. The network is modeled using a degree-corrected stochastic block model, which allows for severe degree heterogeneity and potentially non-assortative communities. We propose an algorithm that computes a 'structural similarity metric' between the new node and each of the \(K\) communities by aggregating labeled and unlabeled data. The estimated label of the new node corresponds to the value of \(k\) that maximizes this similarity metric. Our method is fast and numerically outperforms existing semi-supervised algorithms. Theoretically, we derive explicit bounds for the misclassification error and show the efficiency of our method by comparing it with an ideal classifier. Our findings highlight, to the best of our knowledge, the first semi-supervised community detection algorithm that offers theoretical guarantees.
## 1 Introduction
Nowadays, large network data are frequently observed on social media (such as Facebook, Twitter, and LinkedIn), science, and social science. Learning the latent community structure in a network is of particular interest. For example, community analysis is useful in designing recommendation systems (Debnath et al., 2008), measuring scholarly impacts (Ji et al., 2022), and re-constructing pseudo-dynamics in single-cell data (Liu et al., 2018). In this paper, we consider a semi-supervised community detection setting: we are given a symmetric network with \(n\) nodes, and denote by \(A\in\mathbb{R}^{n\times n}\) the adjacency matrix, where \(A_{ij}\in\{0,1\}\) indicates whether there is an edge between nodes \(i\) and \(j\). Suppose the nodes partition into \(K\) non-overlapping communities \(\mathcal{C}_{1},\mathcal{C}_{2},\ldots,\mathcal{C}_{K}\). For a subset \(\mathcal{L}\subset\{1,2,\ldots,n\}\), we observe the true community label \(y_{i}\in\{1,2,\ldots,K\}\) for each \(i\in\mathcal{L}\). Write \(m=|\mathcal{L}|\) and \(Y_{\mathcal{L}}=(y_{i})_{i\in\mathcal{L}}\). In this context, there are two related semi-supervised community detection problems: (i) _in-sample classification_, where the goal is to classify all the existing unlabeled nodes; (ii) _prediction_, where the goal is to classify a new node joining the network. Notably, the in-sample classification problem can be easily reduced to the prediction problem: we can successively single out each existing unlabeled node, regard it as the "new node", and then predict its label by applying an algorithm for the prediction problem. Hence, for most of the paper, we focus on the prediction problem and defer the study of in-sample classification to Section 3. In the _prediction_ problem, let \(X\in\{0,1\}^{n}\) denote the vector consisting of edges between the new node and each of the existing nodes. Given \((A,Y_{\mathcal{L}},X)\), our goal is to estimate the community label of the new node.
This problem has multiple applications. Consider the news suggestion or online advertising push for a new Facebook user (Shapira et al., 2013). Given a big Facebook network of existing users, for a small fraction of nodes (e.g., active users), we may have good information about the communities to which they belong, whereas for the majority of users, we just observe who they link to. We are interested in estimating the community label of the new user in order to personalize news or ad recommendations. For another example, in a co-citation network of researchers (Ji et al., 2022), each community might be interpreted as a group of researchers working on the same research area. We frequently have a clear understanding of the research areas of some authors (e.g., senior authors), and we intend to use this knowledge to determine the community to which a new node (e.g., a junior author) belongs.
The statistical literature on community detection has mainly focused on the _unsupervised_ setting (Bickel & Chen, 2009; Rohe et al., 2011; Jin, 2015; Gao et al., 2018; Li et al., 2021). The _semi-supervised_ setting is less studied. Leng & Ma (2019) offers a comprehensive literature review of semi-supervised community detection algorithms. Liu et al. (2014) and Ji et al. (2016) derive systems of linear equations for the community labels through physics theory, and predict the labels by solving those equations. Zhou et al. (2018) leverages the belief function to propagate labels across the network, so that one can estimate the label of a node through its belief. Betzel et al. (2018) extracts several patterns in size and structural composition across the known communities and searches for similar patterns in the graph. Yang et al. (2015) unifies a number of different community detection algorithms based on non-negative matrix factorization or spectral clustering under the unsupervised setting, and fits them into the semi-supervised scenario by adding various regularization terms to encourage the estimated labels for nodes in \(\mathcal{L}\) to match the clustering behavior of their observed labels. However, the existing methods still face challenges. First, many of them employ the heuristic that a node tends to have more edges with nodes in the same community than those in other communities. This is true only when communities are _assortative_. But non-assortative communities are also seen in real networks (Goldenberg et al., 2010; Betzel et al., 2018); for instance, Facebook users sharing similar restaurant preferences are not necessarily friends of each other. Second, real networks often have severe degree heterogeneity (i.e., the degrees of some nodes can be many times larger than the degrees of other nodes), but most semi-supervised community detection algorithms do not handle degree heterogeneity.
Third, the optimization-based algorithms (Yang et al., 2015) solve non-convex problems and face the issue of local minima. Lastly, to the best of our knowledge, none of the existing methods have theoretical guarantees.
Attributed network clustering is a problem related to community detection, for which many algorithms have been developed (please see Chunaev et al. (2019) for a nice survey). Graph neural networks (GNN) have reported great successes in attributed network clustering. Kipf & Welling (2016) proposes a graph convolutional network (GCN) approach to semi-supervised community detection, and Jin et al. (2019) combines GNN with the Markov random field to predict node labels. However, GNN is designed for the setting where each node has a large number of attributes and these attributes contain rich information of community labels. The key question in the GNN research is how to utilize the graph to better propagate messages. In contrast, we are interested in the scenario where it is infeasible or costly to collect node attributes. For instance, it is easy to construct a co-authorship network from BibTeX files, but collecting features of authors is much harder. Additionally, a number of benchmark network datasets do not have attributes (e.g. Caltech (Red et al., 2011; Traud et al., 2012), Simmons (Red et al., 2011; Traud et al., 2012), and Polblogs (Adamic & Glance, 2005)). It is unclear how to implement GNN on these data sets. In Section 4, we briefly study the performance of GNN with self-created nodal features from 1-hop representation, graph topology and node embedding. Our experiments indicate that GNN is often not suitable for the case of no node attributes.
We propose a new algorithm for semi-supervised community detection to address the limitations of existing methods. We adopt the DCBM model (Karrer & Newman, 2011) for networks, which models degree heterogeneity and allows for both assortative and non-assortative communities. Inspired by the viewpoint of Goldenberg et al. (2010) that a 'community' is a group of 'structurally equivalent' nodes, we design a _structural similarity metric_ between the new node and each of the \(K\) communities. This metric aggregates information in both labeled and unlabeled nodes. We then estimate the community label of the new node by the \(k\) that maximizes this similarity metric. Our method is easy to implement, computationally fast, and compares favorably with other methods in numerical experiments. In theory, we derive explicit bounds for the misclassification probability of our method under the DCBM model. We also study the efficiency of our method by comparing its misclassification probability with that of an ideal classifier having access to the community labels of all nodes.
## 2 Semi-supervised community detection
Recall that \(A\) is the \(n\times n\) adjacency matrix on the existing nodes and \(Y_{\mathcal{L}}\) contains the community labels of nodes in \(\mathcal{L}\). Write \([n]=\{1,2,\ldots,n\}\) and let \(\mathcal{U}=[n]\setminus\mathcal{L}\) denote the set of unlabeled nodes. We index the new node by \(n+1\) and let \(X\in\mathbb{R}^{n}\) be the binary vector consisting of the edges between the new node and existing nodes. Denote by \(\bar{A}\) the adjacency matrix for the network of \((n+1)\) nodes.
### 2.1 The DCBM model and structural equivalence of communities

We model \(\bar{A}\) with the degree-corrected block model (DCBM) (Karrer & Newman, 2011). For each node \(i\), define a \(K\)-dimensional membership vector \(\pi_{i}\in\{e_{1},e_{2},\ldots,e_{K}\}\), where the \(e_{k}\)'s are the standard basis vectors of \(\mathbb{R}^{K}\). We encode the community labels by \(\pi_{i}\), where \(\pi_{i}=e_{k}\) if and only if \(y_{i}=k\). For a symmetric nonnegative matrix \(P\in\mathbb{R}^{K\times K}\) and a degree parameter \(\theta_{i}\in(0,1]\) for each node \(i\), we assume that the upper triangle of \(\bar{A}\) contains independent Bernoulli variables, where
\[\mathbb{P}(\bar{A}_{ij}=1)=\theta_{i}\theta_{j}\cdot\pi_{i}^{\prime}P\pi_{j}, \qquad\text{for all }\ 1\leq i\neq j\leq n+1. \tag{1}\]
When \(\theta_{i}\) are equal, the DCBM model reduces to the stochastic block model (SBM). Compared with SBM, DCBM is more flexible as it accommodates degree heterogeneity. For a matrix \(M\) or a vector \(v\), let \(\mathrm{diag}(M)\) and \(\mathrm{diag}(v)\) denote the diagonal matrices whose diagonals are from the diagonal of \(M\) or the vector \(v\), respectively. Write \(\theta=(\theta_{1},\theta_{2},\ldots,\theta_{n+1})^{\prime}\), \(\Theta=\mathrm{diag}(\theta)\), and \(\Pi=[\pi_{1},\pi_{2},\ldots,\pi_{n+1}]^{\prime}\in\mathbb{R}^{(n+1)\times K}\). Model (1) yields that
\[\bar{A}=\Omega-\mathrm{diag}(\Omega)+W,\qquad\text{where }\ \Omega=\Theta\Pi P\Pi^{\prime}\Theta\ \text{ and }\ W=\bar{A}-\mathbb{E}\bar{A}. \tag{2}\]
Here, \(\Omega\) is a low-rank matrix that captures the'signal', \(W\) is a generalized Wigner matrix that captures 'noise', and \(\mathrm{diag}(\Omega)\) yields a bias to the'signal' but its effect is usually negligible.
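As an illustration, a network from model (1)-(2) can be sampled directly; all parameter values in the sketch below are arbitrary choices of ours:

```python
import numpy as np

rng = np.random.default_rng(0)
n, K = 300, 3
P = np.array([[1.0, 0.3, 0.2],
              [0.3, 1.0, 0.4],
              [0.2, 0.4, 1.0]])               # P_kk = 1 for identifiability
theta = rng.uniform(0.2, 0.9, size=n)         # degree parameters theta_i
y = rng.integers(0, K, size=n)                # community labels (0-indexed)
Pi = np.eye(K)[y]                             # membership matrix, row i equals e_{y_i}

Omega = np.outer(theta, theta) * (Pi @ P @ Pi.T)    # 'signal' matrix Theta Pi P Pi' Theta
upper = np.triu(rng.random((n, n)) < Omega, k=1)    # independent Bernoulli upper triangle
A = (upper + upper.T).astype(int)                   # symmetric adjacency, zero diagonal
```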
The DCBM belongs to the family of block models for networks. In block models, it is not necessarily true that the edge densities within a community are higher than those between different communities; communities with this property are called assortative communities. Non-assortative communities also appear in many real networks (Goldenberg et al., 2010; Betzel et al., 2018). For instance, in news and ad recommendation, we are interested in identifying a group of users who have similar behaviors, but they may not be densely connected to each other. Goldenberg et al. (2010) introduced an intuitive notion of _structural equivalence_: two nodes are structurally equivalent if their connectivity with similar nodes is similar. They argued that a 'community' in block models is a group of structurally equivalent nodes. This way of defining communities is more general than that of assortative communities.
We introduce a rigorous description of structural equivalence in the DCBM model. For two vectors \(u\) and \(v\), define \(\psi(u,v)=\arccos\big{(}\langle\frac{u}{\|u\|},\frac{v}{\|v\|}\rangle\big{)}\), which is the angle between these two vectors. Let \(\bar{A}_{i}\) be the \(i\)th column of \(\bar{A}\). This vector describes the 'behavior' of node \(i\) in the network. Recall that \(\Omega\) is as in (2). When the signal-to-noise ratio is sufficiently large, \(\bar{A}_{i}\approx\Omega_{i}\), where \(\Omega_{i}\) is the \(i\)th column of \(\Omega\). We approximate the angle between \(\bar{A}_{i}\) and \(\bar{A}_{j}\) by the angle between \(\Omega_{i}\) and \(\Omega_{j}\). By the DCBM model, for a node \(i\) in community \(k\), \(\Omega_{i}=\theta_{i}\Theta\Pi Pe_{k}\), where \(e_{k}\) is the \(k\)th standard basis vector of \(\mathbb{R}^{K}\). It follows that for \(i\in\mathcal{C}_{k}\) and \(j\in\mathcal{C}_{\ell}\), the degree parameters \(\theta_{i}\) and \(\theta_{j}\) cancel out in our structural similarity:
\[\cos\psi(\Omega_{i},\Omega_{j})=\frac{\langle\Theta\Pi Pe_{k},\ \Theta\Pi Pe_{\ell}\rangle}{\|\Theta\Pi Pe_{k}\|\cdot\|\Theta\Pi Pe_{\ell}\|}. \tag{3}\]

This quantity depends only on the community pair \((k,\ell)\), not on \(\theta_{i}\) or \(\theta_{j}\), and it equals \(1\) when \(k=\ell\): nodes in the same community are structurally equivalent. This motivates the following estimate. For each \(1\leq k\leq K\), define \(A^{(k)}\in\mathbb{R}^{n}\) by \(A^{(k)}_{j}=\sum_{i\in\mathcal{L}\cap\mathcal{C}_{k}}A_{ij}\), which aggregates the edges between node \(j\) and the labeled nodes in community \(k\), and estimate the label of the new node by

\[\hat{y}=\arg\min_{1\leq k\leq K}\psi\big{(}A^{(k)},\ X\big{)}. \tag{4}\]
We call (4) the _AngleMin_ estimate. Note that each \(A^{(k)}\) is an \(n\)-dimensional vector, the construction of which uses both \(A_{\mathcal{LL}}\) and \(A_{\mathcal{LU}}\). Therefore, \(A^{(k)}\) aggregates information from both labeled and unlabeled nodes, and so AngleMin is indeed a semi-supervised approach.
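The cancellation of the degree parameters is easy to verify numerically: under the 'signal' matrix \(\Omega=\Theta\Pi P\Pi^{\prime}\Theta\), the cosine of the angle between two columns of \(\Omega\) depends only on the two communities involved (a sketch with arbitrary parameter values of ours):

```python
import numpy as np

rng = np.random.default_rng(3)
n, K = 200, 3
P = np.array([[1.0, 0.3, 0.2],
              [0.3, 1.0, 0.4],
              [0.2, 0.4, 1.0]])
theta = rng.uniform(0.1, 1.0, n)         # heterogeneous degree parameters
y = np.arange(n) % K                     # communities 0, 1, 2, 0, 1, 2, ...
Pi = np.eye(K)[y]
Omega = np.outer(theta, theta) * (Pi @ P @ Pi.T)

def cos_psi(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

i1, i2 = 0, 3                            # two nodes in community 0 (different thetas)
j1, j2 = 1, 4                            # two nodes in community 1
cross_a = cos_psi(Omega[:, i1], Omega[:, j1])
cross_b = cos_psi(Omega[:, i2], Omega[:, j2])
same = cos_psi(Omega[:, i1], Omega[:, i2])
# cross_a == cross_b and same == 1, up to floating point
```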
The estimate in (4) still has room for improvement. First, \(A^{(k)}\) and \(X\) are high-dimensional random vectors, each entry of which is a sum of independent Bernoulli variables. When the network is very sparse or communities are heavily imbalanced in size or degree, the large-deviation bound for \(\psi(A^{(k)},X)\) can be unsatisfactory. Second, recall that our observed data include \(A\) and \(X\). Denote by \(A_{\mathcal{LL}}\) the submatrix of \(A\) restricted on \(\mathcal{L}\times\mathcal{L}\) and \(X_{\mathcal{L}}\) the subvector of \(X\) restricted on \(\mathcal{L}\); other notations are similar. In (4), only \((A_{\mathcal{LL}},A_{\mathcal{LU}},X)\) are used, but the information in \(A_{\mathcal{UU}}\) is wasted. We now propose a variant of (4). For any vector \(x\in\mathbb{R}^{n}\), let \(x_{\mathcal{L}}\) and \(x_{\mathcal{U}}\) be the sub-vectors restricted to indices in \(\mathcal{L}\) and \(\mathcal{U}\), respectively. Let \(\mathbf{1}_{(k)}\) denote the \(|\mathcal{L}|\)-dimensional vector indicating whether each labeled node is in community \(k\). Given any \(|\mathcal{U}|\times K\) matrix \(H=[h_{1},h_{2},\ldots,h_{K}]\), define
\[f(x;H)=[x_{\mathcal{L}}^{\prime}\mathbf{1}_{(1)},\ \ldots,\ x_{\mathcal{L}}^{\prime}\mathbf{1}_{(K)},\ x_{\mathcal{U}}^{\prime}h_{1},\ldots,x_{\mathcal{U}}^{\prime}h_{K}]^{\prime}\ \in\ \mathbb{R}^{2K}. \tag{5}\]
The mapping \(f(\cdot;H)\) creates a low-dimensional projection of \(x\). Suppose we now apply this mapping to \(A^{(k)}\). In the projected vector, each entry is a weighted sum of a large number of entries of \(A^{(k)}\). Since \(A^{(k)}\) contains independent entries, it follows from large-deviation inequalities that each entry of \(f(A^{(k)};H)\) has a nice asymptotic tail behavior. This resolves the first issue above. We then modify the AngleMin estimate in (4) to the following estimate, which we call _AngleMin+_:1
Footnote 1: In AngleMin+, \(H\) serves to reduce noise. For example, let \(X,Y\in\mathbb{R}^{2m}\) be two random Bernoulli vectors, where \(\mathbb{E}X=\mathbb{E}Y=(.1,\ldots,.1,.4,\ldots,.4)^{\prime}\). As \(m\to\infty\), it can be shown that \(\cos\psi(X,Y)\to 0.34\neq 1\) almost surely. If we project \(X\) and \(Y\) into \(\mathbb{R}^{2}\) by summing the first \(m\) coordinates and last \(m\) coordinates separately, then as \(m\to\infty\), the cosine of the angle between the projected vectors tends to \(1\) almost surely.
\[\hat{y}(H)=\arg\min_{1\leq k\leq K}\psi\big{(}f(A^{(k)};H),\ f(X;H)\big{)}. \tag{6}\]
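The noise-reduction role of the projection (see footnote 1) can be reproduced numerically with the footnote's parameters:

```python
import numpy as np

rng = np.random.default_rng(1)
m = 200_000
p = np.concatenate([np.full(m, 0.1), np.full(m, 0.4)])   # E[X] = E[Y] = (.1,...,.1,.4,...,.4)'
X = (rng.random(2 * m) < p).astype(float)
Y = (rng.random(2 * m) < p).astype(float)

cos_raw = X @ Y / (np.linalg.norm(X) * np.linalg.norm(Y))
# cos_raw concentrates near (.01 + .16) / (.1 + .4) = 0.34, far from 1

fX = np.array([X[:m].sum(), X[m:].sum()])                # project to R^2 by block sums
fY = np.array([Y[:m].sum(), Y[m:].sum()])
cos_proj = fX @ fY / (np.linalg.norm(fX) * np.linalg.norm(fY))
# cos_proj is close to 1, i.e., the angle is close to 0
```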
AngleMin+ requires an input of \(H\). Our theory suggests that \(H\) has to satisfy two conditions: (a) The spectral norm of \(H^{\prime}H\) is \(O(|\mathcal{U}|)\). In fact, given any \(H\), we can always multiply it by a scalar so that \(\|H^{\prime}H\|\) is of the order of \(|\mathcal{U}|\). Hence, this condition says that the scaling of \(H\) should be properly set to balance the contributions from labeled and unlabeled nodes. (b) The minimum singular value of \(H^{\prime}\Theta_{\mathcal{UU}}\Pi_{\mathcal{U}}\) has to be at least a constant times \(\|H\|\|\Theta_{\mathcal{UU}}\Pi_{\mathcal{U}}\|\), where \(\Theta_{\mathcal{UU}}\) is the submatrix of \(\Theta\) restricted to the \((\mathcal{U},\mathcal{U})\) block and \(\Pi_{\mathcal{U}}\) is the sub-matrix of \(\Pi\) restricted to the rows in \(\mathcal{U}\). This condition prevents the columns of \(H\) from being orthogonal to the columns of \(\Theta_{\mathcal{UU}}\Pi_{\mathcal{U}}\), and it guarantees that the last \(K\) entries of \(f(x;H)\) retain enough information of the unlabeled nodes.
We construct a data-driven \(H\) from \(A_{\mathcal{UU}}\), by taking advantage of the existing unsupervised community detection algorithms such as Gao et al. (2018); Jin et al. (2021). Let \(\hat{\Pi}_{\mathcal{U}}=[\hat{\pi}_{i}]_{i\in\mathcal{U}}\) be the community labels obtained by applying a community detection algorithm on the sub-network restricted to unlabeled nodes, where \(\hat{\pi}_{i}=e_{k}\) if and only if node \(i\) is clustered to community \(k\). We propose using
\[H=\hat{\Pi}_{\mathcal{U}}. \tag{7}\]
This choice of \(H\) always satisfies the aforementioned condition (a). Furthermore, under mild regularity conditions, as long as the clustering error fraction is bounded by a constant, this \(H\) also satisfies the aforementioned condition (b). We note that the information in \(A_{\mathcal{UU}}\) has been absorbed into \(H\), so it resolves the second issue above. Combining (7) with (6) gives a two-stage algorithm for estimating \(y\).
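A minimal end-to-end sketch of the two-stage procedure on simulated DCBM data follows. For illustration only, the unsupervised step is replaced by the true labels of the unlabeled nodes (in practice \(H=\hat{\Pi}_{\mathcal{U}}\) comes from an algorithm such as SCORE+), and all parameter values are arbitrary choices of ours:

```python
import numpy as np

def angle(u, v):
    c = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(c, -1.0, 1.0))

def f_proj(x, L, U, Y_L, H, K):
    # f(x; H) in (5): per-community sums over labeled nodes, then x_U' h_1, ..., x_U' h_K
    first = np.array([x[L][Y_L == k].sum() for k in range(K)])
    return np.concatenate([first, x[U] @ H])

def anglemin_plus(A, X, L, U, Y_L, H, K):
    fX = f_proj(X, L, U, Y_L, H, K)
    angles = []
    for k in range(K):
        Ak = A[L[Y_L == k]].sum(axis=0)        # A^{(k)}: aggregate rows of labeled community-k nodes
        angles.append(angle(f_proj(Ak, L, U, Y_L, H, K), fX))
    return int(np.argmin(angles))

# --- simulate a DCBM network, a labeled subset, and a new node ---
rng = np.random.default_rng(7)
n, K = 600, 3
P = 0.1 + 0.9 * np.eye(K)                      # diagonal 1, off-diagonal 0.1 (strong signal)
theta = np.full(n, 0.8)
y = rng.integers(0, K, n)
Pi = np.eye(K)[y]
Omega = np.outer(theta, theta) * (Pi @ P @ Pi.T)
A = np.triu(rng.random((n, n)) < Omega, 1)
A = (A + A.T).astype(int)

L, U = np.arange(120), np.arange(120, n)       # labeled / unlabeled nodes
Y_L = y[L]
H = np.eye(K)[y[U]]                            # stand-in for Pi_hat_U from the unsupervised step

k_star = 2                                     # true community of the new node
X = (rng.random(n) < 0.8 * theta * (Pi @ P[:, k_star])).astype(int)
y_hat = anglemin_plus(A, X, L, U, Y_L, H, K)
```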
**Remark 1**: A nice property of AngleMin+ is that it tolerates an arbitrary permutation of communities in \(\hat{\Pi}_{\mathcal{U}}\). In other words, the communities output by the unsupervised community detection algorithm do not need to have a one-to-one correspondence with the communities on the labeled nodes. To see the reason, we consider an arbitrary permutation of columns of \(\hat{\Pi}_{\mathcal{U}}\). By (5), this yields a permutation of the last \(K\) entries of \(f(x;H)\), simultaneously for all \(x\). However, the angle between \(f(A^{(k)};H)\) and \(f(X;H)\) is still the same, and so \(\hat{y}(H)\) is unchanged. This property brings a lot of practical conveniences. When \(K\) is large or the signals are weak, it is challenging (both computationally and statistically) to match the communities in \(\hat{\Pi}_{\mathcal{U}}\) with those in \(\Pi_{\mathcal{L}}\). Our method avoids this issue.
**Remark 2**: AngleMin+ is flexible to accommodate other choices of \(H\). Some unsupervised community detection algorithms provide both \(\hat{\Pi}_{\mathcal{U}}\) and \(\hat{\Theta}_{\mathcal{UU}}\) (Jin et al., 2022). We may use \(H\propto\hat{\Theta}_{\mathcal{UU}}\hat{\Pi}_{\mathcal{U}}\) (subject to a re-scaling to satisfy the aforementioned condition (a)). This \(H\) down-weights the contribution of low-degree unlabeled nodes in the last \(K\) entries of (5). This is beneficial if the signals are weak and the degree heterogeneity is severe. Another choice is \(H\propto\hat{\Xi}\hat{\Lambda}^{-1}\), where \(\hat{\Lambda}\) is a diagonal matrix containing the \(K\) largest eigenvalues (in magnitude) of \(A_{\mathcal{UU}}\) and \(\hat{\Xi}\) is the associated matrix of eigenvectors. For this \(H\), we do not even need to perform any community detection algorithm on \(A_{\mathcal{UU}}\). We may also use spectral embedding (Rubin-Delanchy et al., 2017).
**Remark 3**: The local refinement algorithm (Gao et al., 2018) may be adapted to the semi-supervised setting, but it requires prior knowledge on assortativity or dis-assortativity and a strong balance condition on the average degrees of communities. When these conditions are not satisfied, we can construct examples where the error rate of AngleMin+ is \(o(1)\) but the error rate of local refinement is 0.5. See Section C.
### 2.3 The choice of the unsupervised community detection algorithm

We discuss how to obtain \(\hat{\Pi}_{\mathcal{U}}\). In the statistical literature, there are several approaches to unsupervised community detection. The first is modularity maximization (Girvan & Newman, 2002). It exhaustively searches over all cluster assignments and selects the one that maximizes an empirical modularity function. The second is spectral clustering (Jin, 2015). It applies k-means clustering to the rows of the matrix consisting of empirical eigenvectors. Other methods include post-processing the output of spectral clustering by majority vote (Gao et al., 2018). Not every method deals with degree heterogeneity and non-assortative communities as in the DCBM model. We use a recent spectral algorithm, SCORE+ (Jin et al., 2021), which allows for both severe degree heterogeneity and non-assortative communities.
SCORE+: We tentatively write \(A_{\mathcal{UU}}=A\) and \(|\mathcal{U}|=n\), and assume the network (on unlabeled nodes) is connected (otherwise, consider its giant component). SCORE+ first computes \(L=D_{\tau}^{-1/2}AD_{\tau}^{-1/2}\), where \(D_{\tau}=\mathrm{diag}(d_{1},\ldots,d_{n})+0.1d_{\max}I_{n}\) and \(d_{i}\) is the degree of node \(i\). Let \(\hat{\lambda}_{k}\) be the \(k\)th eigenvalue (in magnitude) of \(L\) and let \(\hat{\xi}_{k}\) be the associated eigenvector. Let \(r=K\) or \(r=K+1\) (see Jin et al. (2021) for details). Define \(\hat{R}\in\mathbb{R}^{n\times(r-1)}\) by \(\hat{R}_{ik}=(\hat{\lambda}_{k+1}/\hat{\lambda}_{1})\cdot[\hat{\xi}_{k+1}(i)/\hat{\xi}_{1}(i)]\). Run k-means on the rows of \(\hat{R}\).
## 3 Theoretical properties
We assume that the observed adjacency matrix \(\bar{A}\) follows the DCBM model in (1)-(2). From now on, let \(\theta_{*}\) denote the degree parameter of the new node \(n+1\). Suppose \(k^{*}\in\{1,2,\ldots,K\}\) is its true community label, and the corresponding \(K\)-dimensional membership vector is \(\pi^{*}=e_{k^{*}}\). In (2), \(\theta\) and \(P\) are not identifiable. To have identifiability, we assume that all diagonal entries of \(P\) are equal to \(1\) (if this is not true, we replace \(P\) by \([\mathrm{diag}(P)]^{-\frac{1}{2}}P[\mathrm{diag}(P)]^{-\frac{1}{2}}\) and each \(\theta_{i}\) in community \(k\) by \(\theta_{i}\sqrt{P_{kk}}\), while keeping \(\Omega=\Theta\Pi P\Pi^{\prime}\Theta\) unchanged). In the asymptotic framework, we fix \(K\) and assume \(n\rightarrow\infty\). We need some regularity conditions. For any symmetric matrix \(B\), let \(\|B\|_{\max}\) denote its entry-wise maximum norm and \(\lambda_{\min}(B)\) denote its minimum eigenvalue (in magnitude). We assume for a constant \(C_{1}>0\) and a positive sequence \(\beta_{n}\) (which may tend to 0),
\[\|P\|_{\max}\leq C_{1},\qquad|\lambda_{\min}(P)|\geq\beta_{n}. \tag{8}\]
For \(1\leq k\leq K\), let \(\theta^{(k)}\in\mathbb{R}^{n}\) be the vector with \(\theta^{(k)}_{i}=\theta_{i}\cdot 1\{i\in\mathcal{C}_{k}\}\), and let \(\theta^{(k)}_{\mathcal{L}}\) and \(\theta^{(k)}_{\mathcal{U}}\) be the sub-vectors restricted to indices in \(\mathcal{L}\) and \(\mathcal{U}\), respectively. We assume for a constant \(C_{2}>0\) and a properly small constant \(c_{3}>0\),
\[\max_{k}\|\theta^{(k)}\|_{1}\leq C_{2}\min_{k}\|\theta^{(k)}\|_{1},\qquad\| \theta^{(k)}_{\mathcal{L}}\|^{2}\leq c_{3}\beta_{n}\|\theta^{(k)}_{\mathcal{L }}\|_{1}\|\theta\|_{1},\ \ \mbox{for all }1\leq k\leq K. \tag{9}\]
These conditions are mild. Consider (8). For identifiability, \(P\) is already scaled to make \(P_{kk}=1\) for all \(k\). It is thus a mild condition to assume \(\|P\|_{\max}\leq C_{1}\). The condition of \(|\lambda_{\min}(P)|\geq\beta_{n}\) is also mild, because we allow \(\beta_{n}\to 0\). Here, \(\beta_{n}\) captures the 'dissimilarity' of communities. To see this, consider a special \(P\) where the diagonals are \(1\) and the off-diagonals are all equal to \(b\); in this example, \(|1-b|\) captures the difference of within-community connectivity and between-community connectivity, and it can be shown that \(|\lambda_{\min}(P)|=|1-b|\). Consider (9). The first condition requires that the total degrees in different communities are balanced, which is mild. The second condition is about degree heterogeneity. Let \(\theta_{\max}\) and \(\bar{\theta}\) be the maximum and average of \(\theta_{i}\), respectively. In the second inequality of (9), the left hand side is \(O(n^{-1}\theta_{\max}/\bar{\theta})\), so this condition is satisfied as long as \(\theta_{\max}/\bar{\theta}=O(n\beta_{n})\). This is a very mild requirement.
### 3.1 The misclassification error of AngleMin+
For any \(|\mathcal{U}|\times K\) matrix \(H\), let \(\hat{\psi}_{k}(H)=\psi(f(A^{(k)};H),f(X;H))\) be as in (3). AngleMin+ estimates the community label of the new node by finding the minimum of \(\hat{\psi}_{1}(H),\ldots,\hat{\psi}_{K}(H)\), with \(H=\hat{\Pi}_{\mathcal{U}}\). We first introduce a population counterpart of \(\hat{\psi}_{k}(H)\). Recall that \(\Omega\) is as in (2), which is the 'signal' matrix. Let \(\Omega^{(k)}\in\mathbb{R}^{n}\) be the vector with \(\Omega^{(k)}_{j}=\sum_{i\in\mathcal{L}\cap\mathcal{C}_{k}}\Omega_{ij}\), for \(1\leq j\leq n\), and define
\[\psi_{k}(H)=\psi\big{(}f(\Omega^{(k)};H),\ f(\mathbb{E}X;H)\big{)},\qquad\text{for }1\leq k\leq K. \tag{10}\]
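To make the estimator concrete, here is a minimal sketch of the angle-minimization rule. The embedding \(f\) in (3) is not reproduced in this section, so the version below assumes \(f(x;H)\) stacks the \(K\) labeled-community sums \(\Pi_{\mathcal{L}}^{\prime}x_{\mathcal{L}}\) on top of the \(K\) projected unlabeled sums \(H^{\prime}x_{\mathcal{U}}\) (a \(2K\)-dimensional vector, consistent with the "first and last \(K\) coordinates" described later); treat it as illustrative rather than the paper's exact definition:

```python
import numpy as np

def psi(u, v):
    """Angle between two vectors: arccos of their cosine similarity."""
    c = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(c, -1.0, 1.0))

def f(x, Pi_L, H, labeled_idx, unlabeled_idx):
    """Hypothetical 2K-dim embedding: labeled-community sums stacked with
    H-projected unlabeled sums (our illustrative reading of Eq. (3))."""
    return np.concatenate([Pi_L.T @ x[labeled_idx], H.T @ x[unlabeled_idx]])

def angle_min_plus(A_k_list, X, Pi_L, H, labeled_idx, unlabeled_idx):
    """Estimate the new node's label as argmin_k psi(f(A^(k); H), f(X; H))."""
    fX = f(X, Pi_L, H, labeled_idx, unlabeled_idx)
    angles = [psi(f(A_k, Pi_L, H, labeled_idx, unlabeled_idx), fX)
              for A_k in A_k_list]
    return int(np.argmin(angles))
```

With \(H=\hat{\Pi}_{\mathcal{U}}\), `angle_min_plus` returns \(\hat{y}=\arg\min_{k}\hat{\psi}_{k}(\hat{\Pi}_{\mathcal{U}})\).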
The next lemma gives the explicit expression of \(\psi_{k}(H)\) for an arbitrary \(H\).
**Lemma 1**.: _Consider the DCBM model where (8)-(9) are satisfied. Define the \(K\times K\) matrices \(G_{\mathcal{L}\mathcal{L}}=\Pi_{\mathcal{L}}^{\prime}\Theta_{\mathcal{L}\mathcal{L}}\Pi_{\mathcal{L}}\), \(G_{\mathcal{U}\mathcal{U}}=\Pi_{\mathcal{U}}^{\prime}\Theta_{\mathcal{U}\mathcal{U}}\Pi_{\mathcal{U}}\), and \(Q=G_{\mathcal{U}\mathcal{U}}^{-1}\Pi_{\mathcal{U}}^{\prime}\Theta_{\mathcal{U}\mathcal{U}}H\). For \(1\leq k\leq K\), \(\psi_{k}(H)=\arccos\big{(}\frac{M_{kk^{\star}}}{\sqrt{M_{kk}}\sqrt{M_{k^{\star}k^{\star}}}}\big{)}\), where \(M=P(G_{\mathcal{L}\mathcal{L}}^{2}+G_{\mathcal{U}\mathcal{U}}QQ^{\prime}G_{\mathcal{U}\mathcal{U}})P\)._
The choice of \(H\) is flexible. For convenience, we focus on the class of \(H\) that is an eligible community membership matrix, i.e., \(H=\hat{\Pi}_{\mathcal{U}}\). Our theory can be easily extended to more general forms of \(H\).
For any \(b_{0}\in(0,1)\), we say that \(\hat{\Pi}_{\mathcal{U}}\) is \(b_{0}\)-correct if \(\min_{T}\bigl{(}\sum_{i\in\mathcal{U}}\theta_{i}\cdot 1\{T\hat{\pi}_{i}\neq\pi_{i} \}\bigr{)}\leq b_{0}\|\theta\|_{1}\), where the minimum is taken over all permutations of \(K\) columns of \(\hat{\Pi}_{\mathcal{U}}\).
The next two theorems study \(\psi_{k}(H)\) and \(\hat{\psi}_{k}(H)\), respectively, for \(H=\hat{\Pi}_{\mathcal{U}}\).
**Theorem 1**.: _Consider the DCBM model where (8)-(9) hold. Let \(k^{\star}\) denote the true community label of the new node. Suppose \(\hat{\Pi}_{\mathcal{U}}\) is \(b_{0}\)-correct, for a constant \(b_{0}\in(0,1)\). When \(b_{0}\) is properly small, there exists a constant \(c_{0}>0\), which does not depend on \(b_{0}\), such that \(\psi_{k^{\star}}(\hat{\Pi}_{\mathcal{U}})=0\) and \(\min_{k\neq k^{\star}}\{\psi_{k}(\hat{\Pi}_{\mathcal{U}})\}\geq c_{0}\beta_{n}\)._
**Theorem 2**.: _Consider the DCBM model where (8)-(9) hold. There exists a constant \(C>0\) such that, for any \(\delta\in(0,1/2)\), with probability \(1-\delta\), simultaneously for \(1\leq k\leq K\), \(|\hat{\psi}_{k}(\hat{\Pi}_{\mathcal{U}})-\psi_{k}(\hat{\Pi}_{\mathcal{U}})|\leq C\left(\sqrt{\frac{\log(1/\delta)}{\|\theta\|_{1}\cdot\min\{\theta^{\star},\|\theta^{(k)}_{\mathcal{L}}\|_{1}\}}}+\frac{\|\theta^{(k)}_{\mathcal{L}}\|^{2}}{\|\theta^{(k)}_{\mathcal{L}}\|_{1}\|\theta\|_{1}}\right)\)._
Write \(\hat{\psi}_{k}=\hat{\psi}_{k}(\hat{\Pi}_{\mathcal{U}})\) and \(\psi_{k}=\psi_{k}(\hat{\Pi}_{\mathcal{U}})\) for short. When \(\max_{k}\{|\hat{\psi}_{k}-\psi_{k}|\}<(1/2)\min_{k\neq k^{\star}}\{\psi_{k}\}\), the community label of the new node is correctly estimated. We can immediately translate the results in Theorems 1-2 to an upper bound for the misclassification probability.
**Corollary 1**.: _Consider the DCBM model where (8)-(9) hold. Suppose for some constants \(b_{0}\in(0,1)\) and \(\epsilon\in(0,1/2)\), \(\hat{\Pi}_{\mathcal{U}}\) is \(b_{0}\)-correct with probability \(1-\epsilon\). When \(b_{0}\) is properly small, there exist constants \(C_{0}>0\) and \(\bar{C}>0\), which do not depend on \((b_{0},\epsilon)\), such that \(\mathbb{P}(\hat{y}\neq k^{\star})\leq\epsilon+\bar{C}\sum_{k=1}^{K}\exp\bigl{(}-C_{0}\beta_{n}^{2}\|\theta\|_{1}\cdot\min\{\theta^{\star},\|\theta^{(k)}_{\mathcal{L}}\|_{1}\}\bigr{)}\)._
When \(\min_{k}\|\theta^{(k)}_{\mathcal{L}}\|_{1}\geq O(\theta^{\star})\), the stochastic noise in \(X\) dominates the error, and the misclassification probability in Corollary 1 will not improve with more label information. Typically, the error rate will be the same as in the ideal case where \(\Pi_{\mathcal{U}}\) is known (except that there is no \(\epsilon\) in the ideal case). Hence, even a small amount of label information can make AngleMin+ perform almost as well as a fully supervised algorithm that possesses all the label information. We formalize this in Section 3.2.
Notice that \(\min_{T}\bigl{(}\sum_{i\in\mathcal{U}}\theta_{i}\cdot 1\{T\hat{\pi}_{i}\neq\pi_{i}\}\bigr{)}\leq\frac{1}{K!}\sum_{T}\bigl{(}\sum_{i\in\mathcal{U}}\theta_{i}\cdot 1\{T\hat{\pi}_{i}\neq\pi_{i}\}\bigr{)}\leq\frac{K-1}{K}\|\theta_{\mathcal{U}}\|_{1}\), where the sum is over all \(K!\) permutations \(T\). Therefore, if \(\|\theta_{\mathcal{L}}\|_{1}\geq(1-\frac{Kb_{0}}{K-1})\|\theta\|_{1}\), then \(\min_{T}\bigl{(}\sum_{i\in\mathcal{U}}\theta_{i}\cdot 1\{T\hat{\pi}_{i}\neq\pi_{i}\}\bigr{)}\leq b_{0}\|\theta\|_{1}\) always holds. In other words, as long as the label information is strong enough, AngleMin+ does not require any assumption on the unsupervised community detection algorithm.
For AngleMin+ to be consistent, we need the bound in Corollary 1 to be \(o(1)\). It then requires that for a small constant \(b_{0}\), \(\hat{\Pi}_{\mathcal{U}}\) is \(b_{0}\)-correct with probability \(1-o(1)\). This is a mild requirement and can be achieved by several unsupervised community detection algorithms. The next corollary studies the specific version of AngleMin+, when \(\hat{\Pi}_{\mathcal{U}}\) is from SCORE+:
**Corollary 2**.: _Consider the DCBM model where (8)-(9) hold. We apply SCORE+ to obtain \(\hat{\Pi}_{\mathcal{U}}\) and plug it into AngleMin+. As \(n\to\infty\), suppose for some constant \(q_{0}>0\), \(\min_{i\in\mathcal{U}}\theta_{i}\geq q_{0}\max_{i\in\mathcal{U}}\theta_{i}\), \(\beta_{n}\|\theta_{\mathcal{U}}\|\geq q_{0}\sqrt{\log(n)}\), \(\beta_{n}^{2}\|\theta\|_{1}\theta^{\star}\to\infty\), and \(\beta_{n}^{2}\|\theta\|_{1}\min_{k}\{\|\theta^{(k)}_{\mathcal{L}}\|_{1}\}\to\infty\). Then, \(\mathbb{P}(\hat{y}\neq k^{\star})\to 0\), so the AngleMin+ estimate is consistent._
### 3.2 Comparison with an information theoretical lower bound
We compare the performance of AngleMin+ with an ideal estimate that has access to all model parameters, except for the community label \(k^{*}\) of the new node. For simplicity, we first consider the case of \(K=2\). For any label predictor \(\tilde{y}\) for the new node, define \(\mathrm{Risk}(\tilde{y})=\sum_{k^{*}\in[K]}\mathbb{P}(\tilde{y}\neq k^{*}|\pi^{ *}=e_{k^{*}})\).
**Lemma 2**.: _Consider a DCBM with \(K=2\) and \(P=(1-b)I_{2}+b\mathbf{1}_{2}\mathbf{1}_{2}^{\prime}\). Suppose \(\theta^{*}=o(1)\), \(\frac{\theta^{*}}{\min_{k}\|\theta_{\mathcal{L}}^{(k)}\|_{1}}=o(1)\), \(1-b=o(1)\), and \(\frac{\|\theta_{\mathcal{L}}^{(1)}\|_{1}}{\|\theta_{\mathcal{L}}^{(2)}\|_{1}}=\frac{\|\theta_{\mathcal{U}}^{(1)}\|_{1}}{\|\theta_{\mathcal{U}}^{(2)}\|_{1}}=1\). There exists a constant \(c_{4}>0\) such that \(\inf_{\tilde{y}}\{\mathrm{Risk}(\tilde{y})\}\geq c_{4}\exp\Bigl{\{}-2[1+o(1)]\frac{(1-b)^{2}}{8}\cdot\theta^{*}(\|\theta_{\mathcal{L}}\|_{1}+\|\theta_{\mathcal{U}}\|_{1})\Bigr{\}}\), where the infimum is taken over all measurable functions of \(A\), \(X\), and the parameters \(\Pi_{\mathcal{L}}\), \(\Pi_{\mathcal{U}}\), \(\Theta\), \(P\), \(\theta^{*}\). For AngleMin+, suppose the second part of condition (9) holds with \(c_{3}=o(1)\) and \(\hat{\Pi}_{\mathcal{U}}\) is \(\tilde{b}_{0}\)-correct with \(\tilde{b}_{0}\stackrel{a.s.}{\rightarrow}0\). Then there is a constant \(C_{4}>0\) such that \(\mathrm{Risk}(\hat{y})\leq C_{4}\exp\Bigl{\{}-[1-o(1)]\frac{(1-b)^{2}}{8}\cdot\theta^{*}\frac{(\|\theta_{\mathcal{L}}\|_{1}^{2}+\|\theta_{\mathcal{U}}\|_{1}^{2})^{2}}{\|\theta_{\mathcal{L}}\|_{1}^{3}+\|\theta_{\mathcal{U}}\|_{1}^{3}}\Bigr{\}}\)._
Lemma 2 indicates that the classification error of AngleMin+ is almost the same as the information theoretical lower bound of an algorithm that knows all the parameters except \(\pi^{*}\), apart from a mild difference in the exponents. This difference comes from two sources. The first is the extra "2" in the exponent of the lower bound on \(\mathrm{Risk}(\tilde{y})\), which is largely an artifact of proof techniques: we bound the total variation distance by the Hellinger distance (the total variation distance is hard to analyze directly). The second is the difference between \(\|\theta_{\mathcal{L}}\|_{1}+\|\theta_{\mathcal{U}}\|_{1}\) in \(\inf_{\tilde{y}}\{\mathrm{Risk}(\tilde{y})\}\) and \(\frac{(\|\theta_{\mathcal{L}}\|_{1}^{2}+\|\theta_{\mathcal{U}}\|_{1}^{2})^{2}}{\|\theta_{\mathcal{L}}\|_{1}^{3}+\|\theta_{\mathcal{U}}\|_{1}^{3}}\) in \(\mathrm{Risk}(\hat{y})\). Note that \(\frac{(\|\theta_{\mathcal{L}}\|_{1}^{2}+\|\theta_{\mathcal{U}}\|_{1}^{2})^{2}}{\|\theta_{\mathcal{L}}\|_{1}^{3}+\|\theta_{\mathcal{U}}\|_{1}^{3}}\leq\|\theta_{\mathcal{L}}\|_{1}+\|\theta_{\mathcal{U}}\|_{1}\leq 1.125\frac{(\|\theta_{\mathcal{L}}\|_{1}^{2}+\|\theta_{\mathcal{U}}\|_{1}^{2})^{2}}{\|\theta_{\mathcal{L}}\|_{1}^{3}+\|\theta_{\mathcal{U}}\|_{1}^{3}}\), so this difference is quite mild. It arises from the fact that AngleMin+ does not aggregate the information in labeled and unlabeled data by adding the first and last \(K\) coordinates of \(f(x;H)\) together. We do not do this because unsupervised community detection methods only provide class labels up to a permutation, and in practice it is hard to estimate this permutation, which makes the resulting algorithm unstable. To conclude, the difference between the error rate of our method and the information theoretical lower bound is mild, demonstrating that our algorithm is nearly optimal. For a general \(K\), we have a similar conclusion:
**Theorem 3**.: _Suppose the conditions of Corollary 1 hold, where \(b_{0}\) is properly small, and suppose that \(\hat{\Pi}_{\mathcal{U}}\) is \(b_{0}\)-correct. Furthermore, assume that for a sufficiently large constant \(C_{3}\), \(\theta^{*}\leq\frac{1}{C_{3}}\) and \(\theta^{*}\leq C_{3}\min_{k\in[K]}\|\theta_{\mathcal{L}}^{(k)}\|_{1}\), and that for a constant \(r_{0}>0\), \(\min_{k\neq\ell}\{P_{k\ell}\}\geq r_{0}\). Then, there is a constant \(\tilde{c}_{2}=\tilde{c}_{2}(K,C_{1},C_{2},C_{3},c_{3},r_{0})>0\) such that \([-\log(\tilde{c}_{2}\mathrm{Risk}(\hat{y}))]/[-\log(\inf_{\tilde{y}}\{\mathrm{Risk}(\tilde{y})\})]\geq\tilde{c}_{2}\)._
### 3.3 In-sample Classification
In this part, we briefly discuss the in-sample classification problem. Formally, our goal is to estimate \(\pi_{i}\) for all \(i\in\mathcal{U}\). As mentioned in Section 1, an in-sample classification algorithm can be directly derived from AngleMin+: for each \(i\in\mathcal{U}\), predict the label of \(i\) as \(\hat{y}_{i}(H)=\arg\min_{1\leq k\leq K}\psi\bigl{(}f(A_{-i}^{(k)};H_{i}),\ f(A_{-i,i};H_{i})\bigr{)}\), where \(A_{-i}^{(k)}\) is the subvector of \(A^{(k)}\) obtained by removing the \(i\)th entry, \(A_{-i,i}\) is the subvector of \(A_{i}\) obtained by removing the \(i\)th entry, and \(H_{i}\) is a \((|\mathcal{U}|-1)\times K\) projection matrix which may differ across distinct \(i\). As discussed in Section 2, the choices of \(H_{i}\) are quite flexible. For theoretical convenience, we focus on the case that \(H_{i}=\hat{\Pi}_{\mathcal{U}\setminus\{i\}}\). For any in-sample classifier \(\tilde{y}=(\tilde{y}_{i})_{i\in\mathcal{U}}\in[K]^{|\mathcal{U}|}\), define the in-sample risk \(\mathrm{Risk}_{ins}(\tilde{y})=\frac{1}{|\mathcal{U}|}\sum_{i\in\mathcal{U}}\sum_{k^{*}\in[K]}\mathbb{P}(\tilde{y}_{i}\neq k^{*}|\pi_{i}=e_{k^{*}})\). For the above in-sample classification algorithm, we have theoretical results on consistency and efficiency, analogous to those in Sections 3.1 and 3.2, under some very mild conditions:
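The leave-one-out scheme above can be sketched as follows. The embedding \(f\) and the per-node projection matrices \(H_{i}\) are supplied by the caller (in the paper, \(f\) is Eq. (3) and \(H_{i}=\hat{\Pi}_{\mathcal{U}\setminus\{i\}}\)); the toy test below plugs in an identity embedding purely for illustration. All function and argument names are our own:

```python
import numpy as np

def psi(u, v):
    """Angle between two vectors: arccos of their cosine similarity."""
    c = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(c, -1.0, 1.0))

def in_sample_labels(A_k, A, unlabeled_idx, f, H_for):
    """Leave-one-out in-sample classification (a sketch).

    A_k   : (K, n) array; row k is A^(k), the connections to labeled community k.
    A     : (n, n) adjacency matrix.
    f     : embedding, f(x, H) -> feature vector (Eq. (3) in the paper).
    H_for : callable i -> H_i, e.g. estimated memberships on U minus {i}.
    """
    K, n = A_k.shape
    y_hat = {}
    for i in unlabeled_idx:
        keep = [j for j in range(n) if j != i]          # drop the i-th entry
        H_i = H_for(i)
        target = f(A[keep, i], H_i)                     # f(A_{-i,i}; H_i)
        angles = [psi(f(A_k[k, keep], H_i), target) for k in range(K)]
        y_hat[i] = int(np.argmin(angles))               # argmin_k of the angle
    return y_hat
```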
**Theorem 4**.: _Consider the DCBM model where (8)-(9) hold. We apply SCORE+ to obtain \(\hat{\Pi}_{\mathcal{U}\setminus\{i\}}\) and plug it into the above algorithm. As \(n\rightarrow\infty\), suppose for some constant \(q_{0}>0\), \(\min_{i\in\mathcal{U}}\theta_{i}\geq q_{0}\max_{i\in\mathcal{U}}\theta_{i}\), \(\beta_{n}\|\theta_{\mathcal{U}}\|\geq q_{0}\sqrt{\log(n)}\), \(\beta_{n}^{2}\|\theta\|_{1}\min_{i\in\mathcal{U}}\theta_{i}\rightarrow\infty\), and \(\beta_{n}^{2}\|\theta\|_{1}\min_{k}\{\|\theta_{\mathcal{L}}^{(k)}\|_{1}\}\rightarrow\infty\). Then, \(\frac{1}{|\mathcal{U}|}\sum_{i\in\mathcal{U}}\mathbb{P}(\hat{y}_{i}\neq k_{i})\to 0\), so the above in-sample classification algorithm is consistent._
**Theorem 5**.: _Suppose the conditions of Corollary 1 hold, where \(b_{0}\) is properly small, and suppose that \(\hat{\Pi}_{\mathcal{U}\setminus\{i\}}\) is \(b_{0}\)-correct for all \(i\in\mathcal{U}\). Furthermore, assume that for a sufficiently large constant \(C_{3}\), \(\max_{i\in\mathcal{U}}\theta_{i}\leq\frac{1}{C_{3}}\), \(\max_{i\in\mathcal{U}}\theta_{i}\leq C_{3}\min_{k\in[K]}\|\theta_{\mathcal{L}}^{(k)}\|_{1}\), \(\log(|\mathcal{U}|)\leq C_{3}\beta_{n}^{2}\|\theta\|_{1}\min_{i\in\mathcal{U}}\theta_{i}\), and that for a constant \(r_{0}>0\), \(\min_{k\neq\ell}\{P_{k\ell}\}\geq r_{0}\). Then, there is a constant \(\tilde{c}_{21}=\tilde{c}_{21}(K,C_{1},C_{2},C_{3},c_{3},r_{0})>0\) such that \([-\log(\tilde{c}_{21}\mathrm{Risk}_{ins}(\hat{y}))]/[-\log(\inf_{\tilde{y}}\{\mathrm{Risk}_{ins}(\tilde{y})\})]\geq\tilde{c}_{21}\), so the above in-sample classification algorithm is efficient._
## 4 Empirical Study
We study the performance of AngleMin+, where \(\hat{\Pi}_{\mathcal{U}}\) is from SCORE+ (Jin et al., 2021). We compare our methods with SNMF (Yang et al., 2015) (a representative of semi-supervised approaches) and SCORE+ (a fully unsupervised approach). We also compare our algorithm to typical GNN methods (Kipf and Welling, 2016) in the real data part.
**Simulations**: To illustrate how the information in \(A_{\mathcal{U}\mathcal{U}}\) improves the classification accuracy, we also consider AngleMin in (4) in simulations. In addition, to show how information from unlabeled data improves the classification accuracy, we consider a special version of AngleMin+ by feeding into the algorithm only \(A_{\mathcal{L}\mathcal{L}}\) and \(X_{\mathcal{L}}\). It ignores information on unlabeled data and only uses the subnetwork consisting of labeled nodes. We call it AngleMin+(subnetwork). This method is practically uninteresting, but it serves as a representative of the fully supervised approach that ignores unlabeled nodes. We simulate data from the DCBM with \((n,K)=(500,3)\). To generate \(P\), we draw its off-diagonal entries from \(\mathrm{Uniform}(0,1)\) and then symmetrize it. We generate the degree heterogeneity parameters \(\theta_{i}\) i.i.d. from one of the 4 following distributions: \(n^{-0.5}\sqrt{\log(n)}\mathrm{Gamma}(3.5)\), \(n^{-0.25}\mathrm{Gamma}(3.5)\), \(n^{-0.5}\sqrt{\log(n)}\mathrm{Pareto}(3.5)\), \(n^{-0.25}\mathrm{Pareto}(3.5)\). They cover most scenarios: Gamma distributions have considerable mass near 0, so the network has severely low-degree nodes; Pareto distributions have heavy tails, so the network has severely high-degree nodes. The scaling \(n^{-0.5}\sqrt{\log(n)}\) corresponds to the sparse regime, where the average node degree is \(\asymp\log(n)^{2}\), and \(n^{-0.25}\) corresponds to the dense regime, with average node degree \(\asymp\sqrt{n}\). We consider two cases of \(\Pi\): the balanced case (bal.) and the imbalanced case (imbal.). In the former, \(\pi(i)\) are i.i.d. from \(\mathrm{Multinomial}(1/3,1/3,1/3)\), and in the latter, \(\pi(i)\) are i.i.d. from \(\mathrm{Multinomial}(0.2,0.2,0.6)\). We repeat the simulation 100 times.
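The data-generating process described above can be sketched in a few lines (a hedged sketch: function and argument names are ours, and only the Gamma variants are wired in by default):

```python
import numpy as np

def simulate_dcbm(n=500, K=3, sparse=True, balanced=True, seed=0):
    """Draw (A, Pi, theta, P) from a DCBM, following the simulation setup."""
    rng = np.random.default_rng(seed)

    # P: unit diagonal, off-diagonal entries i.i.d. Uniform(0,1), symmetrized
    P = rng.uniform(0, 1, size=(K, K))
    P = (P + P.T) / 2
    np.fill_diagonal(P, 1.0)

    # Degree heterogeneity: Gamma(3.5) with sparse or dense scaling
    scale = n ** -0.5 * np.sqrt(np.log(n)) if sparse else n ** -0.25
    theta = scale * rng.gamma(3.5, size=n)

    # Memberships: balanced or imbalanced multinomial (K = 3 in the paper)
    probs = [1 / K] * K if balanced else [0.2, 0.2, 0.6]
    labels = rng.choice(K, size=n, p=probs)
    Pi = np.eye(K)[labels]

    # Omega = Theta Pi P Pi' Theta; draw symmetric Bernoulli edges
    Omega = theta[:, None] * (Pi @ P @ Pi.T) * theta[None, :]
    upper = np.triu(rng.random((n, n)) < np.clip(Omega, 0, 1), k=1)
    A = (upper | upper.T).astype(int)        # symmetric, no self-loops
    return A, Pi, theta, P
```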
Our results are presented in Figure 1, which shows the average classification error of each algorithm as the number of labeled nodes, \(N_{L}\), increases. The plots indicate that AngleMin+ outperforms the other methods in all cases. Furthermore, though AngleMin is not as good as AngleMin+ when \(N_{L}\) is small, it still surpasses all the other approaches except AngleMin+ in most scenarios. Compared to supervised and unsupervised methods, which only use part of the data, AngleMin+ gains a great amount of accuracy by leveraging both the labeled and unlabeled data.
Figure 1: Simulations (\(n=500\), \(K=3\); data are generated from DCBM). In each plot, the x-axis is the number of labeled nodes, and the y-axis is the average misclassification rate over 100 repetitions.

**Real data**: We consider three benchmark datasets for community detection, Caltech (Traud et al., 2012), Simmons (Traud et al., 2012), and Polblogs (Adamic and Glance, 2005). For each data set, we separate the nodes into 10 folds and treat each fold as the test data in turn, with the other 9 folds as training data. In the training network, we randomly choose \(n_{L}\) nodes as labeled nodes. We then estimate the label of each node in the test data and report the misclassification error rate (averaged over 10 folds). We consider \(n_{L}/n\in\{0.3,0.5,0.7\}\), where \(n\) is the number of nodes in the training data. The results are shown in Table 1. In most cases, AngleMin+ significantly outperforms the other methods (unsupervised or semi-supervised). Additionally, we notice that in the Polblogs data, the standard deviation of the error of SCORE+ is quite large, indicating that its performance is unstable. Remarkably, even though AngleMin+ uses SCORE+ to initialize, the performance of AngleMin+ is nearly unaffected: It still achieves low means and standard deviations in misclassification error. This is consistent with our theory in Section 3. We also compare the running time of different methods (please see Section B of the appendix) and find that AngleMin+ is much faster than SNMF.
GNN is a popular approach for attributed node clustering. Although it is not designed for the case of no node attributes, we are still interested in whether GNN can be easily adapted to our setting by self-created features. We take the GCN method in Kipf and Welling (2016) and consider 6 schemes of creating a feature vector for each node: i) a 50-dimensional constant vector of 1's, ii) a 50-dimensional randomly generated feature vector, iii) the \(n\)-dimensional adjacency vector, iv) the vector of landing probabilities (LP) (Li et al., 2019) (which contains network topology information), v) the embedding vector from node2vec (Grover and Leskovec, 2016), and vi) a practically infeasible vector \(e_{i}^{\prime}A\Pi\in\mathbb{R}^{K}\) (which uses the true \(\Pi\)). The results are in Table 1. GCN performs unsatisfactorily, regardless of how the features are created. For example, propagating messages with all-1 vectors seems to result in over-smoothing; and using adjacency vectors as node features means that the feature transformation linear layers' size changes with the number of nodes in a network, which could heavily overfit due to too many parameters. We conclude that it is not easy to adapt GNN to the case of no node attributes.
For a fairer comparison, we also consider a real network, Citeseer (Sen et al., 2008), that contains node features. We consider two state-of-the-art semi-supervised GNN algorithms, GCN (Kipf and Welling, 2016) and MasG (Jin et al., 2019). Our methods can also be generalized to accommodate node features. Using the "fusion" idea surveyed in Chunaev et al. (2019), we "fuse" the adjacency matrix \(\bar{A}\) (on \(n+1\) nodes) and node features into a weighted adjacency matrix \(\bar{A}_{\text{fuse}}\) (see the appendix for details). We denote its top left block by \(A_{\text{fuse}}\in\mathbb{R}^{n\times n}\) and its last column by \(X_{\text{fuse}}\in\mathbb{R}^{n}\) and apply AngleMin+ by replacing \((A,X)\) by \((A_{\text{fuse}},X_{\text{fuse}})\). The misclassification error averaged over 10 data splits is reported in Table 2. The error rates of GCN and MasG are quoted from those papers, which are based on one particular data split. We also re-run GCN on our 10 data splits.
**Conclusion and discussions**: In this paper, we propose AngleMin+, a fast semi-supervised community detection algorithm based on the structural similarity metric of the DCBM. Our method accommodates degree heterogeneity and non-assortative networks, is computationally fast, and possesses favorable theoretical properties on consistency and efficiency. Our algorithm also performs well in both simulations and real data, indicating its practical utility.
There are possible extensions of our method. Our method does not directly deal with soft labels (a.k.a. mixed membership), where the available label information is the probability of a node belonging to each community. We are currently working to address this by fitting our algorithm into the degree-corrected mixed membership model (DCMM) and developing sharp theory for it.
\begin{table}
\begin{tabular}{c c c c|c|c|c|c} \hline \hline Dataset & \(n\) & \(K\) & \(n_{L}/n\) & GCN & GCN\({}^{*}\) & MasG\({}^{*}\) & AngleMin+ \\ \hline Citeseer & 3312 & 6 & 0.036 & 0.321 & 0.297 & 0.268 & 0.334 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Error rates on Citeseer, where node attributes are available. If the error rate has \({}^{*}\), it is quoted from literature and based on one particular data split; otherwise, it is averaged over 10 data splits.
## Acknowledgments
This work is partially supported by the NSF CAREER grant DMS-1943902.
## Ethics Statement
This paper proposes a novel semi-supervised community detection algorithm, AngleMin+, based on the structural similarity metric of the DCBM. Our method could be maliciously used to identify certain groups of people, such as dissenters. This is a common drawback of all community detection algorithms, and we believe it can be mitigated by replacing the network data with a differentially private counterpart. All the real data we use come from public datasets, which we have clearly cited, and we do not believe they raise any privacy issues or other potential problems.
## Reproducibility Statement
We provide detailed theory for our algorithm AngleMin+. We derive explicit bounds for the misclassification probability of our method under the DCBM and show that it is consistent. We also study the efficiency of our method by comparing its misclassification probability with that of an ideal classifier having access to the community labels of all nodes. Additionally, we provide clear explanations of and insights into our theory. All the proofs, together with some generalizations of our theory, are available in the appendix. We also perform empirical studies of our proposed algorithms in both simulation and real data settings, and we consider a large number of scenarios in both cases. All the code is available in the supplementary materials.
|
2305.18088 | Drug Repurposing Targeting COVID-19 3CL Protease using Molecular Docking
and Machine Learning Regression Approach | The COVID-19 pandemic has initiated a global health emergency, with an
exigent need for an effective cure. Progressively, drug repurposing is emerging as a
promising solution, as it saves time, cost, and labor. However, the number of
drug candidates that have been identified as being repurposed for the treatment
of COVID-19 are still insufficient, so more effective and thorough drug
exploring strategies are required. In this study, we combined molecular
docking with machine learning regression approaches to find prospective
therapeutic candidates for COVID-19 treatment. We screened 5903 approved
drugs for their inhibitory potential by targeting the main protease 3CL of
SARS-CoV-2, which is responsible for replicating the virus. Molecular docking is used to
calculate the binding affinities of these drugs to the main protease 3CL. We
employed several machine learning regression approaches for QSAR modeling to
find out some potential drugs with high binding affinities. Our outcomes
demonstrated that the Decision Tree Regression (DTR) model, with the best scores of
R2 and RMSE, is the most suitable model for exploring the potential drugs. We
shortlisted six favorable drugs. These drugs have novel repurposing potential,
except for one antiviral ZINC203757351 compound that has already been
identified in other studies. We further examined the physiochemical and
pharmacokinetic properties of these most potent drugs and their best binding
interactions with the specific target protease 3CLpro. Our findings contribute to the
larger goal of finding effective cures for COVID-19, which is an acute global
health challenge. The outcomes of our study provide valuable insights into
potential therapeutic candidates for COVID-19 treatment. | Imra Aqeel, Abdul Majid | 2023-05-25T05:34:39Z | http://arxiv.org/abs/2305.18088v7 | Drug Repurposing Targeting COVID-19 3CL Protease using Molecular Docking and Machine Learning Regression Approach
###### Abstract
The COVID-19 pandemic has created a global health crisis, with an urgent need for effective treatments. Drug repurposing has emerged as a promising solution, as it can save time, cost, and labor. However, the number of identified repurposed drugs for COVID-19 treatment remains limited, and there is a need for more efficient and comprehensive drug repurposing approaches. In this study, we aimed to identify potential therapeutic candidates for COVID-19 treatment through drug repurposing using a combination of molecular docking and machine learning regression approaches. We utilized the Zinc database to screen 5903 World-approved drugs for their potential to target the main protease 3CL of SARS-CoV-2, which is a key enzyme in the replication of the virus. We performed molecular docking to evaluate the binding affinity of the drugs to the main protease 3CL, and used several machine learning regression approaches for QSAR modeling to identify drugs with high binding affinity. Our results showed that the Decision Tree Regression (DTR) model had the best statistical measures of R2 and RMSE, and we shortlisted six promising drugs with their respective Zinc IDs (ZINC3873365, ZINC85432544, ZINC203757351, ZINC85536956, ZINC8214470, and ZINC261494640) within the binding-affinity range of -15 kcal/mol to -13 kcal/mol. These drugs have novel repurposing potential, except for one antiviral compound, ZINC203757351, that has already been identified in other studies. We further analyzed the physiochemical and pharmacokinetic properties of these top-ranked selected drugs and their best binding interactions with the specific target protease 3CLpro. Our study provides an efficient framework for drug repurposing against COVID-19 and demonstrates the potential of combining molecular docking with machine learning regression approaches to accelerate the identification of potential therapeutic candidates.
Our findings contribute to the larger goal of finding effective treatments for COVID-19, which is a critical global health challenge. In conclusion, the results of our study provide valuable insights into potential therapeutic candidates for COVID-19 treatment and demonstrate the effectiveness of combining molecular docking with machine learning regression approaches for drug repurposing.
COVID-19; main protease 3CL; drug repurposing; QSAR model; binding affinity; molecular docking
## 1 Introduction
The COVID-19 pandemic has presented an unprecedented global health crisis, with over 687 million confirmed cases and over 6.8 million deaths worldwide as of May 2023 according to [https://www.worldometers.info/coronavirus/](https://www.worldometers.info/coronavirus/). Currently, there is no specific drug available to treat COVID-19, and the development of effective therapies has become a priority for researchers globally (Su et al., 2023). COVID-19 is caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), a positive-sense single-stranded RNA virus that primarily infects the respiratory tract of humans (Shah et al., 2020). The entry of the virus into host cells occurs when the spike protein binds to the ACE2 |
2307.07909 | Is Imitation All You Need? Generalized Decision-Making with Dual-Phase
Training | We introduce DualMind, a generalist agent designed to tackle various
decision-making tasks that addresses challenges posed by current methods, such
as overfitting behaviors and dependence on task-specific fine-tuning. DualMind
uses a novel "Dual-phase" training strategy that emulates how humans learn to
act in the world. The model first learns fundamental common knowledge through a
self-supervised objective tailored for control tasks and then learns how to
make decisions based on different contexts through imitating behaviors
conditioned on given prompts. DualMind can handle tasks across domains, scenes,
and embodiments using just a single set of model weights and can execute
zero-shot prompting without requiring task-specific fine-tuning. We evaluate
DualMind on MetaWorld and Habitat through extensive experiments and demonstrate
its superior generalizability compared to previous techniques, outperforming
other generalist agents by over 50$\%$ and 70$\%$ on Habitat and MetaWorld,
respectively. Of the 45 tasks in MetaWorld, DualMind completes more than 30
at a 90$\%$ success rate. | Yao Wei, Yanchao Sun, Ruijie Zheng, Sai Vemprala, Rogerio Bonatti, Shuhang Chen, Ratnesh Madaan, Zhongjie Ba, Ashish Kapoor, Shuang Ma | 2023-07-16T00:34:12Z | http://arxiv.org/abs/2307.07909v3 | # Is Imitation All You Need? Generalized Decision-Making with Dual-Phase Training
###### Abstract
We introduce DualMind, a generalist agent designed to tackle various decision-making tasks that addresses challenges posed by current methods, such as overfitting behaviors and dependence on task-specific fine-tuning. DualMind uses a novel "Dual-phase" training strategy that emulates how humans learn to act in the world. The model first learns fundamental common knowledge through a self-supervised objective tailored for control tasks and then learns how to make decisions based on different contexts through imitating behaviors conditioned on given prompts. DualMind can handle tasks across domains, scenes, and embodiments using just a single set of model weights and can execute zero-shot prompting without requiring task-specific fine-tuning. We evaluate DualMind on MetaWorld [55] and Habitat [39] through extensive experiments and demonstrate its superior generalizability compared to previous techniques, outperforming other generalist agents by over 50\(\%\) and 70\(\%\) on Habitat and MetaWorld, respectively. Of the 45 tasks in MetaWorld, DualMind completes more than 30 at a 90\(\%\) success rate. Our source code is available at [https://github.com/yunyikristy/DualMind](https://github.com/yunyikristy/DualMind).
## 1 Introduction
Transformer-based models, combined with large-scale data, have shown success in generalizing across various tasks in both language and vision. Notable examples include BERT [13], GPT [36], MAE [19], CLIP [35], and Flamingo [1]. Recently, there has been a significant focus on developing such general-purpose models for sequential decision-making and control tasks, such as GATO [41]. The prevailing approach is to train a decoder-only Transformer with Imitation Learning (IL) on massive datasets from all targeted tasks. By training with prompts, the model can perform zero-shot inference given only a task prompt.
However, such IL-based approaches to general-purpose models face limitations when it comes to sequential control tasks, as highlighted below: (1) _Memorizing behaviors hinders generalization to diverse tasks_: Imitating expert behaviors can lead to memorization and over-fitting of specific behaviors that may not be applicable to new situations or variations of tasks, thus limiting the model's ability to generalize. This limitation is particularly challenging when dealing with a wide range of decision-making tasks that have vastly different configurations, transition functions, and state and action spaces. (2) _Dependence on high-quality data impedes practical application_: IL methods rely heavily on the availability of high-quality expert demonstrations, which can be difficult and expensive to obtain. When the available data is of low quality or not representative of the target task, the performance of the model may suffer.
In light of the aforementioned limitations, self-supervised pretraining has emerged as a viable solution. By focusing on learning common underlying information, a pretrained model can be better equipped to handle diverse tasks. Recently, a study known as SMART [49] has demonstrated the potential of self-supervised pretraining for multi-task decision-making.
Although SMART has shown promising results in promoting generalization, it still requires additional fine-tuning to adapt to each task. Furthermore, it has only been demonstrated on a small set of tasks on DMC [50]. For decision-making problems that involve numerous tasks with different configurations, finetuning the model for each task can become time-consuming and resource-intensive.

Figure 1: A high-level overview of DualMind’s Dual-phase training scheme.
Given the limitations of both IL and self-supervised pre-training discussed earlier, a natural question arises: _How can we develop a decision-making approach that achieves a high degree of generalization without requiring task-specific fine-tuning?_ In this paper, we propose DualMind, a generalist agent, to address this question; its name derives from our proposed Dual-phase training scheme for generalized decision-making. Our approach introduces an Encoder-Decoder Control Transformer (Enc-Dec Control Transformer) that models state-action interactions from complex high-dimensional observations. To further improve computational efficiency, DualMind uses TokenLearner [45] as an attention-based Information Bottleneck (IB) [51] to compress the number of tokens and thus speed up training and inference. Building upon Enc-Dec Control Transformer, we propose a Dual-phase training scheme that initially prioritizes policy-independent transition probabilities and encourages the model to capture both short- and long-term temporal granularities. To facilitate zero-shot prompting, we train a second phase on a small fraction of model parameters to learn a generic policy by conditioning on various prompts (such as images, annotations, and language instructions) using a cross-attention mechanism (XAtten.). The Dual-phase training scheme parallels how humans learn to act in the world by first learning underlying common knowledge and subsequently making decisions based on different contexts. Our contributions are summarized below:
1. We propose DualMind, a solution for general-purpose decision-making that can handle various tasks using a single set of weights without task-specific fine-tuning.
2. We introduce a Dual-phase training scheme that overcomes limitations of IL and self-supervised learning.
3. We propose an Encoder-Decoder Transformer (Enc-Dec Control Transformer) that efficiently learns state-action transitions from high-dimensional observation spaces.
4. We conduct extensive experiments on Metaworld [55] and Habitat [39] and show that DualMind outperforms other generalist agents by over 50\(\%\) and 70\(\%\) on Habitat and MetaWorld, respectively. We also analyze and ablate different design choices to demonstrate the superior generalizability of DualMind.
## 2 Related work
_Pretraining Visual Representations for Policy Learning:_ Recent studies such as R3M [31], APV [47], VPT [4], NrNS [18], PVR [33] and MVP [37] have shown that pre-trained visual representations can significantly enhance the efficiency of downstream policy learning. However, these works mainly focus on learning object-centric semantics, potentially losing essential control-relevant information. To address this issue, VIP [29] formulates the problem as an offline goal-conditioned RL problem and proposes a visual representation algorithm capable of generating dense reward functions for downstream robotics tasks. On the other hand, COMPASS [28] introduces a general-purpose pretraining pipeline that effectively integrates multimodal signals for autonomous systems.
_Transformer-Based Foundational Model:_ The use of high-capacity transformer architectures trained on large-scale datasets has led to significant breakthroughs in various domains. Examples include language models such as BERT [13], GPT-3 [7], T5 [38], and PaLM [11], as well as vision and vision-language models such as MAE [19], Multi-MAE [3], BiT [24], MuST [16], Flamingo [15], and CLIP [35]. For decision-making tasks, recent work such as SMART [49] has proposed a self-supervised pretraining framework tailored for control tasks. For robotics control problems, PACT [5] has shown that a pretrained representation could speed up various downstream tasks of mobile agents, such as navigation and localization.
_A General-Purpose Model for Control:_ Since the groundbreaking success of GPT [36], recent research has focused on using Transformer decoder-based models to tackle control tasks in an auto-regressive manner. Decision Transformer (DT) [10, 56] builds on the architecture of GPT to create a generalist agent for sequential decision-making tasks. This has been followed by Multi-game DT [26] and Online-DT [57], which demonstrate the potential of DTs for multi-task and online learning. GATO [41] imitates expert demonstrations from a vast dataset and showcases its ability to handle a large number of tasks. VIMA [21] is an agent that can accept multi-modal prompts for solving various robotics manipulation tasks. In real-life applications, RT-1 [6] has demonstrated the efficacy of this approach in robotic control.
## 3 Preliminary and Overview of DualMind
### Problem formulation
We focus on a set of tasks, denoted as \(\mathcal{T}\), from two representative benchmarks, namely Metaworld [55] and Habitat [39], which cover the _Manipulation_ and _Navigation_ domains, respectively. As shown in Table 2, our selection of these two benchmarks allows us to conduct a comprehensive study on tasks with a wide variety of characteristics. Here, we define a task as a partially observable Markov decision process (POMDP). The tasks we consider span across several factors, as defined below:
* _Domain_: This refers to tasks with different state/action spaces and application scenarios. In our study, _Manipulation_ and _Navigation_ are the two domains we focus on.
* _Embodiment_: This factor is used to differentiate tasks that have different physics and action spaces. For instance, a robot arm and an embodied agent in MetaWorld and Habitat are considered as different embodiments. Differences can also exist in the same domain, such as arms with distinct joint torques and/or hardware configurations.
* _Scene_: This refers to tasks that are performed in different observation spaces, state spaces, and world structures. For example, in Habitat, agents that navigate in different rooms should adapt to various visual appearances and geometry structures.
* _Prompt_: This factor captures different forms of prompt conditions. In MetaWorld, prompts are natural language instructions, while in Habitat, we use a single RGB image or an object annotation as the navigation goal to prompt our model.
### Overview of Dual-phase training scheme
In this section, we provide a brief overview of DualMind and compare it with two other prominent approaches: self-supervised pretraining (Self-superv.) and Imitation Learning with prompt conditions (IL-prompt). We also provide insights into the central idea behind our proposed approach. A summarized comparison of these approaches is shown in Table 1.
As shown in Fig. 2, in Phase I, we train the entire Enc-Dec Control Transformer (Sec. 4.1) with a self-supervised training objective to capture generic information of state-action transitions. In Phase II, we train only a small part of the Enc-Dec Control Transformer attached with XAtten. on a diverse set of prompts for a conditional generic policy. After the Dual-phase training, we obtain one model with a single set of weights that can be directly applied to a large number of tasks with corresponding prompts.
Compared to other generalist agents like GATO [41], which trains an imitating policy directly, DualMind demonstrates superior generalization capability. Moreover, our Phase II requires training only a small fraction of model weights while freezing the remaining parts, resulting in faster learning and reduced training cost when optimizing the model with the same number of iterations. Additionally, compared to self-supervised learning approaches such as SMART [49], DualMind is simple and effective, making it suitable for a wide range of application scenarios.
### Insights
The central idea behind our Dual-phase is to mimic how humans learn to act in the world, first by learning underlying common knowledge and then by learning to make decisions based on different contexts. Our approach relates to InstructGPT [32], which aims to align language models with user intent by fine-tuning them with human feedback. In analogy to InstructGPT, our Phase I can be considered as learning a general model that captures the common essential information. However, as stated in InstructGPT, this is different from the objective of "following task instructions (i.e. prompt conditions)," and thus such a model is _misaligned_. Therefore, in the second phase, we leverage conditional IL to align the model so that it can perform well for any given prompts.
## 4 Approach
In this section, we introduce our proposed DualMind. We present the model architecture in Section 4.1, and illustrate the training objective for DualMind in Section 4.2.
\begin{table}
\begin{tabular}{l|l l l} \hline & Self-superv. & IL-prompt & Dual-phase (ours) \\ \hline \multirow{2}{*}{Learning} & Pre.: generic info. & Cond. generic policy & I: generic info. \\ & FT: task-specific policy & & II: cond. generic policy \\ \hline \multirow{2}{*}{Data} & Pre: Multi-task large set & Multi-task large set+prompts & I: Multi-task large set \\ & FT: Single-task small set & & II: +prompts \\ \hline \multirow{2}{*}{Optim. weights} & Pre: whole model & & I: Entire model \\ & FT: Entire/freeze+Task heads & Entire model & II: Partial/freeze+XAtten. \\ \hline Inference task & Single & Multiple & Multiple \\ \hline No need FT & ✗ & ✓ & ✓ \\ \hline Zero-shot prompt. & ✗ & ✓ & ✓ \\ \hline Final utilization & Many models for each task & Single model & Single model \\ \hline \end{tabular}
\end{table}
Table 1: Comparisons of different training approaches.
\begin{table}
\begin{tabular}{l|l l l l l l} \hline Bench. & Dom. & Sce. & Emb. & Prom. & Tasks & Epis. \\ \hline Meta. & Man. & 1 & 1 & inst. & 50 & 50K \\ Habit. & Nav. & 933 & 1 & Obj. / Img & 27 & 50K \\ \hline Total & 2 & 934 & 2 & 3 & 77 & 100K \\ \hline \end{tabular}
\end{table}
Table 2: Dataset summarization. Dom.: domain, Sce.: number of scenes, Emb.: number of embodiments, Prom.: types of prompts, Epis.: number of episodes.
### Model Architecture
We propose an Encoder-Decoder Control Transformer to process state-action interaction sequences, as illustrated in Figure 2. The implementation details of each component in the Enc-Dec Control Transformer are outlined below.
**State tokenizer.** We utilize a ViT model [14] to tokenize raw pixel states. To reduce the computational burden of dealing with sequential decision-making tasks, we leverage an attention-based Information Bottleneck (IB) to further compress the number of tokens so as to speed up training and inference (Fig. 2-left). Specifically, we use TokenLearner [45] which is an element-wise attention module that learns to soft-select image tokens, passing only the important ones to subsequent layers. The inclusion of TokenLearner sub-samples the 196 state tokens that come out of ViT to just 8 tokens that are then passed to the Transformer decoder layers.
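As a rough illustration of this soft-selection step, here is a minimal NumPy sketch of a TokenLearner-style module. The real module [45] produces spatial attention maps with a small learned network; this sketch stands in a single linear map `w` for those learned weights:

```python
import numpy as np

def token_learner(tokens, w):
    """tokens: (n_in, d) image tokens from ViT; w: (d, n_out) stand-in
    for the learned attention weights (n_out = 8 in DualMind)."""
    scores = tokens @ w                           # (n_in, n_out) logits
    a = np.exp(scores - scores.max(axis=0, keepdims=True))
    a = a / a.sum(axis=0, keepdims=True)          # attention over inputs
    # Each output token is a softmax-weighted average of all input tokens.
    return a.T @ tokens                           # (n_out, d)
```

With 196 input tokens and 8 output slots, this reduces the sequence the decoder must attend over by more than an order of magnitude.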
**Action tokenizer.** To handle both continuous and discrete action spaces in our two domains, we adopt a strategy similar to GATO [41] by discretizing continuous actions into bins. We first flatten the actions into sequences of floating point values in row-major order, and then mu-law encode them to the range [-1, 1] before discretizing them into 256 uniform bins. Discrete actions are tokenized into 256 bins in the same way.
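The mu-law companding and binning described above can be sketched as follows (a minimal NumPy version; the exact mu value and the clipping of out-of-range actions are assumptions, since the text only specifies the [-1, 1] range and 256 uniform bins):

```python
import numpy as np

def mu_law_encode(x, mu=255.0):
    # Non-uniform companding: fine resolution near 0, coarse near +/-1.
    return np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)

def tokenize_actions(actions, n_bins=256):
    # Flatten in row-major order, compand to [-1, 1], then bin uniformly.
    flat = np.asarray(actions, dtype=np.float64).ravel(order="C")
    flat = np.clip(flat, -1.0, 1.0)
    compressed = mu_law_encode(flat)                     # still in [-1, 1]
    bins = np.floor((compressed + 1.0) / 2.0 * n_bins)   # [0, n_bins]
    return np.clip(bins, 0, n_bins - 1).astype(int)
```

Companding before binning spends more of the 256 tokens on small action magnitudes, where fine control matters most.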
**Transformer decoder.** Our transformer decoder architecture is similar to Control Transformer [49], but with a modification. In our approach, we encode each state into 8 tokens, which is different from SMART's single token representation. This modification enables richer representation learning, making it suitable for more complex visual control environments.
**Prompt tokenizer.** We tokenize prompts using a pre-trained CLIP encoder [35]. For "image goal" prompts in Habitat, we use the CLIP image encoder, while for "object goal" prompts in Habitat and "language instruction" prompts in MetaWorld, we use the CLIP text encoder. A learnable linear layer is added on top of the CLIP encoders to map all prompts to prompt tokens with the same dimensions. During training in both phases, we freeze the CLIP encoders.
**XAtten. layer.** We condition the Transformer decoder by training it to learn from the prompt sequence through a series of cross-attention layers. The output sequence from each cross-attention layer is computed by \(\text{softmax}(\frac{q_{H}k_{P}^{T}}{\sqrt{d}})v_{P}\), where \(H\) is the sequence of episodes, \(P\) is the prompt, and \(d\) is the embedding dimension. This design builds a stronger connection between the prompts and the demonstrations, which is an improvement over prefix-style prompting approaches [41]. We will show the benefits of this design in Sec. 5.4.
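Ignoring the learned query/key/value projections for brevity, the cross-attention step can be sketched as a single-head NumPy function where the episode and prompt embeddings are used directly as queries and keys/values (an assumption made only to keep the sketch short):

```python
import numpy as np

def cross_attention(H, P, d):
    """H: episode tokens (n_h, d); P: prompt tokens (n_p, d).
    Returns one attended output per episode position."""
    scores = H @ P.T / np.sqrt(d)                        # (n_h, n_p)
    scores = scores - scores.max(axis=-1, keepdims=True) # numerical stability
    weights = np.exp(scores)
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ P                                   # (n_h, d)
```

Because queries come from the episode and keys/values from the prompt, every decoder position can directly read the task condition, rather than relying on it propagating from a prefix.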
### Training objectives
**Phase I: self-supervised SMART training.** The goal of this phase is to learn a good representation that captures control-relevant information shared across tasks. In this phase, we jointly train the encoder and the decoder following the self-supervised training objectives of SMART [49]. We use \(F_{\theta}\) to denote the learned model with parametrization \(\theta\), such that \(F_{\theta}(o_{i:j},a_{i:j})\) refers to the output tokens of the decoder corresponding to raw inputs \(o_{i:j}\) and \(a_{i:j}\), the observation and action sequence from step \(i\) to step \(j\). For a sequence of observations and actions denoted as
Figure 3: Batch input when training on multiple domains.
Figure 2: The architecture diagram of DualMind. **Left: Phase I.** The agent is trained with self-supervised learning objectives. During this phase, the Transformer encoder and decoder are jointly trained. **Right: Phase II.** The agent is trained with prompt-conditional imitation learning. We tokenize task prompts with a pretrained CLIP encoder, and condition the Transformer decoder on the prompt through XAtten. layers. The gray color indicates frozen modules. (Detailed training objectives are in Sec. 4.2.)
\(\{o_{t},a_{t},\cdots,o_{t+L},a_{t+L}\}\) with context length \(L\), we minimize the following objective.
\[\mathcal{L}_{P1}:=\mathcal{L}_{1}+\mathcal{L}_{2}+\mathcal{L}_{3},\text{ where} \tag{1}\]
\[\mathcal{L}_{1}:=\sum\nolimits_{i=0}^{L-1}l\left(f_{1}(F_{\theta}(o_{t:t+i},a_{t:t+i})),\bar{\phi}(o_{t+i+1})\right), \tag{2}\]
\[\mathcal{L}_{2}:=\sum\nolimits_{i=1}^{L}l\left(f_{2}(F_{\theta}(o_{t:t+i},a_{t:t+i-1})),a_{t+i}\right), \tag{3}\]
\[\mathcal{L}_{3}:=\sum\nolimits_{i=1}^{L-1}l\left(f_{3}(F_{\theta}(\mathsf{Mask}(o_{t:t+L},a_{t:t+L}))),a_{t+i}\right). \tag{4}\]
Here \(l\) is a loss function that is selected by the variable type. For latent states, we use a mean squared error, while for discrete actions, we use the cross-entropy loss. \(\mathcal{L}_{1}\) is to learn a forward prediction head \(f_{1}\) that can predict the next state representation based on the historical interactions. Since the groundtruth state representation is unknown, we use the learned state embedding from the ViT model to encode the next observation, denoted as \(\bar{\phi}\) where the overline stands for gradient stopping. \(\mathcal{L}_{2}\) aims to recover the action token in each step conditioning on the history and the next state. \(\mathcal{L}_{3}\) masks a proportion of input tokens and learns to recover the masked actions, which can extract long-term temporal dependence for control.
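As a rough illustration, the type-dependent per-step loss \(l\) used in Eqs. (1)-(4) can be sketched as follows (a NumPy version for a single step; the dispatch-on-dtype convention is ours):

```python
import numpy as np

def step_loss(pred, target):
    """The per-step loss l, selected by variable type: cross-entropy
    for discrete action tokens, mean squared error for latent states."""
    target = np.asarray(target)
    if np.issubdtype(target.dtype, np.integer):
        # Cross-entropy: pred is a logit vector, target an action token id.
        logits = np.asarray(pred, dtype=np.float64)
        m = logits.max()
        log_probs = logits - m - np.log(np.exp(logits - m).sum())
        return float(-log_probs[int(target)])   # negative log-likelihood
    # MSE between predicted and gradient-stopped latent-state targets.
    return float(np.mean((np.asarray(pred) - target) ** 2))
```

In training, the MSE branch serves \(\mathcal{L}_{1}\) (latent-state prediction against \(\bar{\phi}\)) and the cross-entropy branch serves \(\mathcal{L}_{2}\) and \(\mathcal{L}_{3}\) (action-token recovery).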
**Phase II: Imitation learning with prompt conditions.** In this phase, we train the model to follow prompt conditions. We formulate various tasks as conditional generation problems, where the conditions can be goals, commands, prompts, etc. During Phase II, we let the agent learn a conditional policy, using expert trajectories with associated prompts. Let \(\psi\) be the prompt tokenizer, and \(\pi\) be the learned policy whose inputs are the representation tokens given by the decoder. For an expert sequence \(\{o_{t},a_{t},\cdots,o_{t+L},a_{t+L}\}\) with prompt \(P\), we minimize the loss
\[\mathcal{L}_{P2}:=\sum\nolimits_{i=0}^{L-1}l(\pi(F_{\theta}(o_{t:t+i},a_{t: t+i};\psi(P))),a_{t+i+1}). \tag{5}\]
Note that in this phase, we do not train the entire model \(F_{\theta}\), and instead only re-train a small fraction of it. More discussion is in Sec. 5.4.
## 5 Experiments
### Experimental setup
**Data.** We evaluate and train DualMind on two benchmarks, Habitat [39] and MetaWorld [55]. Habitat is a photorealistic simulation platform for research in Embodied AI, emphasizing active perception and long-term planning, while MetaWorld is a simulated benchmark for multi-task learning and meta-reinforcement learning, comprising 50 distinct robotic manipulation environments. Training on datasets collected from both these benchmarks allows us to demonstrate the model's generalizability across domains, embodiments, scenes, and prompts. We provide a detailed introduction to these factors in Sec. 3.1 and summarize them in Table 2. Additionally, we use 10 tasks as an out-of-distribution testbed to showcase the model's generalization capability. More details about our data collection process can be found in Appendix A.
**Comparing baselines.** We compare DualMind with existing transformer-based approaches and present results from two versions of our model: a generalist agent trained on the full dataset (DualMind) and a single-domain specialist trained only on data from either MetaWorld or Habitat (DualMind/single). To ensure fair comparisons, we implemented related works ourselves and trained and evaluated them on the same data and model architecture. We provide information on each baseline below:
* IL-only is a model trained only with prompt-conditioned imitation learning, which is related to GATO but uses a different prompting conditioning method.
* SMART-only is a model trained only using SMART training objectives (purely self-supervised).
* Jointly is a model jointly trained with both SMART objectives and prompt-conditioned IL loss.
* GATO* is the model described in the original paper. We include its reported performance on the Metaworld benchmark for reference. Notably, this model has 1.18 billion parameters and was trained on massive datasets, including 94.6k episodes from Metaworld. In comparison, DualMind has 175 million parameters and was trained on a smaller dataset consisting of 100k episodes, of which 50k are from MetaWorld.
* GATO is a model we implemented ourselves, reproducing the main technical approaches presented in the original paper. For a fair comparison, we used the same base model architecture (Enc-Dec Control Transformer), but replaced our XAtten.-based prompting approach with their proposed prefix prompting approach.
Specifically, we train IL-only, SMART-only, and Jointly on Enc-Dec Control Transformer + XAtten., which has the same model architecture as DualMind. For GATO, we train it using Enc-Dec Control Transformer but insert prompts in a prefix manner since it uses a different prompting method. Moreover, we provide the performance of GATO reported in their paper for reference, denoted as GATO*. More details about our baselines can be found in Appendix A.
**Implementation details.** Our implementation of DualMind uses a Transformer-based architecture consisting of a ViT-B [17] model, a TokenLearner [45], and a GPT model [36] as the encoder and decoder, respectively. The decoder consists of 8 layers and 8 attention heads, with a context length of L=6 and an embedding size of d=512. We trained our model with the AdamW optimizer and a learning rate of 5e-5 for both training phases. In Phase I, we trained the model for around 40 hours with BS=16 on 5x8xV100 GPUs. In Phase II, the model was trained for about 12 hours with BS=128 on 2x8xV100 GPUs. Further implementation details are provided in Appendix A.
### Capabilities of DualMind
In this section, we aim to demonstrate the capabilities of DualMind on all tasks. Note that, as a generalist agent, the performance on both MetaWorld and Habitat is achieved by a single model. The performance is shown in Fig. 4 and Fig. 5. To provide a reference for readers, we follow GATO's evaluation protocol and report the Percentage Expert Score (PES), which measures the number of distinct tasks for which each model performs above a given score threshold relative to the expert performance. For each task, we roll out the model 10 times and average the defined scores. As shown in Fig. 5, DualMind exceeds the 90\(\%\) expert score threshold on more than 27 tasks, outperforming GATO* by a large margin, which only has three tasks above that threshold. At lower expert score thresholds, for example, 80\(\%\) and 50\(\%\), DualMind also achieves comparable performance. However, it should be noted that GATO's performance was achieved by their 1.18B model trained on massive datasets. Therefore, this is just a reference for readers, and a fully fair comparison with GATO cannot be performed without access to both the model and data. To provide a fairer comparison, we compare DualMind with a self-implemented GATO, described in Sec. 5.1. We also report the number of tasks for which our model performs above a given Success Rate (SR). DualMind achieves 39 tasks at over 0.5 SR and maintains good performance at higher SRs, with 34 tasks at over 0.8 SR and 28 tasks at an SR of 1. We present the performance of DualMind on Habitat by averaging across all 12 testing scenes and reporting the success rate (SR) and success weighted by path length (SPL) evaluation metrics. As shown in Fig. 4, DualMind outperforms the other baseline models by a large margin under both evaluation metrics. (See performance on each task in Appendix B.)
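The PES counting used above can be sketched as a simple helper (function and argument names are illustrative; per-task scores are the averages over the 10 rollouts):

```python
def percentage_expert_score(task_scores, expert_scores, threshold):
    """Count the tasks whose mean rollout score reaches `threshold`
    (a fraction in [0, 1]) of the expert score for that task."""
    count = 0
    for task, scores in task_scores.items():
        mean_score = sum(scores) / len(scores)
        if mean_score >= threshold * expert_scores[task]:
            count += 1
    return count
```

Sweeping `threshold` from 0.5 to 1.0 produces the task-count curves reported in Fig. 5.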
### Analysis
#### 5.3.1 Different training regimes
**Is imitation learning all you need for a generalist agent?**
To answer this question, in this experiment, we compare DualMind with its counterpart trained only with the Imitation Learning objective, i.e., IL-only. In Fig. 4 and Fig. 5, we present the comparison results between the generalist multi-domain agents. As shown in the figures, DualMind outperforms its IL-only counterpart by over 50\(\%\) and 70\(\%\) on Habitat and MetaWorld, respectively. Specifically, DualMind performs well on 39 out of 45 tasks over the 50\(\%\) expert score threshold, while IL-only only performs well on 13 tasks. As the difficulty of the tasks increases, DualMind still maintains good performance, achieving 18 tasks and 28 tasks at the 100\(\%\) expert score and SR, respectively, while IL-only only achieves 5 tasks. Similar observations can also be made when comparing the single-domain specialist agents (Fig. 7 and Fig. 6). (See performance on each task in Appendix B.)
Figure 4: Comparisons of **generalist agents** on _Habitat 4 scenes_ with 3 difficulty levels per scene. We roll out the agents 3 times on each scene and average the defined scores, and compare agents by Success Rate (SR) (left) and Success weighted by Path Length (SPL) (right).
Figure 5: Comparisons of **generalist agents** on _MetaWorld 45 tasks_ on Percentage of Expert Score (PES) (left) and Success Rate (SR) (right).
Figure 6: Comparisons of **single-domain specialist** on _Habitat 12 scenes_ by SR (left) and SPL (right).
Figure 7: Comparisons of **single-domain specialist** on _MetaWorld 45 tasks_ by PES (left) and SR (right).
We can infer from this that Imitation Learning alone may not suffice to build a truly general-purpose model, particularly when aiming to tackle tasks that span a broad range of domains. Even within a single domain, variations in embodiments, scenes, and instructions can pose significant challenges. We conducted additional investigations into the generalization capabilities by comparing different approaches on out-of-distribution tasks, as demonstrated in Section 5.3.2.
**Can self-supervised learning align well with instructions without fine-tuning?**
To address this inquiry, we compare DualMind with its self-supervised equivalent, SMART-only, while also evaluating both single- and multi-domain agents. As depicted in Fig. 4 and Fig. 5, DualMind exhibits superior performance compared to SMART-only, with over \(75\%\) and \(78\%\) better results on Habitat and MetaWorld, respectively. Notably, SMART-only is unable to succeed on any tasks when applied to single-domain agents, whereas DualMind maintains a significant advantage, particularly on MetaWorld.
Our hypothesis is that SMART, being a pretrain-finetune pipeline, is unlikely to attain the desired performance without post-finetuning. Even when training SMART-only by providing prompts in the same manner as DualMind, zero-shot prompting may not be achievable due to limitations in the self-supervised training objective not being well-aligned with task instructions, as detailed in Section 3. Additionally, we noted that SMART-only surpasses its single-domain equivalent, suggesting its effectiveness in capturing shared knowledge across diverse data.
**Do we need to train them in two phases?**
As DualMind is trained using different objectives in two phases, one may question the necessity of such an approach. Firstly, from an optimization standpoint, training all four losses jointly may present more challenges in terms of steady optimization. Different optimization directions could potentially conflict with each other, and varying convergence rates could hinder all objectives from being trained to optimality. Furthermore, in terms of computational costs, DualMind only needs to optimize a small portion of the model weights in Phase II (as demonstrated in the ablations presented in Section 5.4). This makes the training process more efficient and cost-effective compared to its jointly trained counterpart. In this experiment, we provide further empirical evidence to support this claim.
As illustrated in Fig. 4, Fig. 5, Fig. 6, and Fig. 7, Jointly outperforms IL-only and SMART-only, thereby confirming the necessity of utilizing all training objectives. However, it lags behind DualMind by a considerable margin in the multi-domain comparisons. Interestingly, Jointly slightly outperforms DualMind in some single-domain comparisons. We hypothesize that the optimization conflicts between objectives are less severe when training only on data from the same domain.
#### 5.3.2 Out-of-distribution tasks
The objective of this experiment is to assess the ability of our model to solve novel tasks. To achieve this, we evaluate our models on 10 held-out tasks from two domains, namely MetaWorld and Habitat. The MetaWorld tasks consist of "hand-insert-v2", "door-unlock-v2", "door-lock-v2", "box-close-v2", and "bin-picking-v2", whereas the Habitat tasks include "Goffs", "Hominy", "Hillsdale", "Micanopy", and "Rosser". To evaluate the performance of our models, we follow the evaluation protocol of GATO, which involves finetuning each agent on a limited number of demonstrations. Specifically, we conduct 10-, 100-, and 1000-shot learning. Further details on the evaluation protocol can be found in Appendix A.
We compare the performance of three models, namely DualMind, IL-only, and Scratch. Scratch refers to the model that is trained on few-shot demonstrations from randomly initialized model weights. As demonstrated in Fig. 8, Scratch performs the worst among the three models in most cases.

Figure 8: Few-shot comparisons of generalist agents on out-of-distribution tasks. The performance of the success rate (left axis, bar-chart) or Return/SPL (right axis, line-chart) on different tasks after we performed 10-, 100-, and 1000-shot learning on DualMind (red), IL-only (pink), and Scratch (blue).
Upon comparing DualMind with IL-only, we observe that DualMind exhibits superior performance across various shot settings. Specifically, in terms of the SR metric, DualMind outperforms IL-only on 8 out of 10 tasks at 10-shot and on 7 tasks at 100- and 1000-shot demonstrations. Furthermore, with respect to the SPL and PES metrics, DualMind achieves better results than IL-only on 9 tasks in the 10-shot experiment. These results provide further evidence that the proposed Dual-phase training approach can enhance the generalization ability of models even when dealing with novel tasks and limited demonstrations.
#### 5.3.3 Attention visualization
To gain insight into how DualMind is able to perform diverse tasks, we conduct attention visualization. We present attention maps for tasks from both Habitat and MetaWorld, where we display a sequence of frames from the episode for each task.
The attention maps reveal that when performing manipulation tasks in MetaWorld, such as "button-press-v2", the model initially focuses on the execution context and then shifts its attention to the targeting instance, such as the "button", until the task is completed. Notably, for navigation tasks in Habitat, DualMind learns to explore the scene to locate the goal. For example, as shown in Fig. 9, given an image goal, the agent first attends to the entrance to navigate into the restroom. Upon realizing that the goal is not there, it steps out and searches for another room to enter. After spotting the refrigerator, which appears in the image goal, the agent quickly locks onto the goal and completes the task. These attention maps provide insight into how DualMind leverages its generalization ability to solve new tasks.
### Ablation study
#### Training parts in Phase II
In this section, we ablate DualMind by varying which model weights are trained in Phase II, as listed below:

* 1: freeze the entire Enc-Dec Control Transformer and train only the cross-attention layers.
* 2: freeze the Transformer Encoder (State tokenizer) and the first 4 layers of the Transformer Decoder.
* 3: freeze the Transformer Encoder.
* 4: no frozen parts; optimize the entire model in Phase II.
As shown in Fig. 10, settings 2 and 3 perform the best in most cases. For our experiments, we use setting 3. However, for future scaled-up models and data, we would recommend setting 2 since it saves more computational cost. When training each setting with the same number of iterations, setting 4 performs poorly, which may be due to slow convergence with more model weights. This result also suggests that after training in Phase I, our model has learned useful information, but insufficient re-training in Phase II may lead to performance deterioration due to potential forgetting issues.
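The four freezing settings can be expressed as a small helper that selects which parameter groups Phase II optimizes (the module names here are hypothetical labels for illustration, not the actual implementation):

```python
def trainable_params(params, setting):
    """params: mapping from (hypothetical) module name to its parameter
    list. Returns the parameters optimized in Phase II under each
    ablation setting above; the XAtten. layers are always trained."""
    frozen = {
        1: {"encoder", "decoder_layers_0_3", "decoder_layers_4_7"},
        2: {"encoder", "decoder_layers_0_3"},
        3: {"encoder"},
        4: set(),
    }[setting]
    return [p for name, group in params.items()
            if name not in frozen for p in group]
```

In a deep-learning framework this selection would typically be realized by toggling gradient tracking on the frozen groups and passing only the remaining parameters to the optimizer.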
#### Prompt conditioning
We conducted an ablation study on DualMind by comparing two prompt conditioning approaches: prefix and XAtten. prompting. We used the average success rate of ML10 training tasks as the comparison metric for MetaWorld. Results show that XAtten. prompting achieves a 0.76 SR on MetaWorld and a 0.11 SR on Habitat, while prefix prompting only achieves 0.29 SR and 0 SR, respectively. The cross-attention mechanism in XAtten. prompting allows the agent to establish a strong connection between prompts and demonstrations, which is particularly useful for goal-conditioned tasks. (See more details and discussion in Appendix B.)
## 6 Conclusion
This paper presents a new training approach for generalist agents called DualMind, which consists of two phases: self-supervised learning of basic and generic knowledge across various tasks, followed by imitation of expert behaviors with different types of prompt conditioning. By utilizing a carefully designed Transformer Encoder-Decoder architecture and a dual-phase training scheme, DualMind is scalable, versatile, and generalizable. Empirical evaluation on two challenging domains, Habitat and MetaWorld, shows that DualMind outperforms previous generalist learning methods and pretraining approaches. Further analysis and ablations demonstrate the effectiveness of the dual-phase design.
Future work includes expanding DualMind to more domains and tasks, finding efficient solutions for handling longer context lengths in demonstrations, and enabling practical training in online interactive scenarios.
Figure 10: Comparisons of frozen parts in Phase II.
Figure 9: Attention map visualization. |
2310.04617 | SlotGNN: Unsupervised Discovery of Multi-Object Representations and
Visual Dynamics | Learning multi-object dynamics from visual data using unsupervised techniques
is challenging due to the need for robust object representations that can be
learned through robot interactions. This paper presents a novel framework with
two new architectures: SlotTransport for discovering object representations
from RGB images and SlotGNN for predicting their collective dynamics from RGB
images and robot interactions. Our SlotTransport architecture is based on slot
attention for unsupervised object discovery and uses a feature transport
mechanism to maintain temporal alignment in object-centric representations.
This enables the discovery of slots that consistently reflect the composition
of multi-object scenes. These slots robustly bind to distinct objects, even
under heavy occlusion or absence. Our SlotGNN, a novel unsupervised graph-based
dynamics model, predicts the future state of multi-object scenes. SlotGNN
learns a graph representation of the scene using the discovered slots from
SlotTransport and performs relational and spatial reasoning to predict the
future appearance of each slot conditioned on robot actions. We demonstrate the
effectiveness of SlotTransport in learning object-centric features that
accurately encode both visual and positional information. Further, we highlight
the accuracy of SlotGNN in downstream robotic tasks, including challenging
multi-object rearrangement and long-horizon prediction. Finally, our
unsupervised approach proves effective in the real world. With only minimal
additional data, our framework robustly predicts slots and their corresponding
dynamics in real-world control tasks. | Alireza Rezazadeh, Athreyi Badithela, Karthik Desingh, Changhyun Choi | 2023-10-06T22:37:34Z | http://arxiv.org/abs/2310.04617v1 | # SlotGNN: Unsupervised Discovery of Multi-Object Representations and Visual Dynamics
###### Abstract
Learning multi-object dynamics from visual data using unsupervised techniques is challenging due to the need for robust object representations that can be learned through robot interactions. This paper presents a novel framework with two new architectures: SlotTransport for discovering object representations from RGB images and SlotGNN for predicting their collective dynamics from RGB images and robot interactions. Our SlotTransport architecture is based on slot attention for unsupervised object discovery and uses a feature transport mechanism to maintain temporal alignment in object-centric representations. This enables the discovery of slots that consistently reflect the composition of multi-object scenes. These slots robustly bind to distinct objects, even under heavy occlusion or absence. Our SlotGNN, a novel unsupervised graph-based dynamics model, predicts the future state of multi-object scenes. SlotGNN learns a graph representation of the scene using the discovered slots from SlotTransport and performs relational and spatial reasoning to predict the future appearance of each slot conditioned on robot actions. We demonstrate the effectiveness of SlotTransport in learning object-centric features that accurately encode both visual and positional information. Further, we highlight the accuracy of SlotGNN in downstream robotic tasks, including challenging multi-object rearrangement and long-horizon prediction. Finally, our unsupervised approach proves effective in the real world. With only minimal additional data, our framework robustly predicts slots and their corresponding dynamics in real-world control tasks. Our project page: bit.ly/slotgnn.
## I Introduction
Studies suggest that the human visual system identifies conceptually distinct visual features, indexes their locations [1], and utilizes this information as the foundation for higher-level cognitive processes, such as comprehending and interacting effectively with the world [2]. A similar principle guides many robotic systems for goal-directed motor planning. In multi-object manipulation, early approaches aimed to directly project the image observation into a unified lower-dimensional space to infer the dynamics [3, 4]. However, such strategies do not reflect the inherent structure of a multi-object system and lack object-level predictions. This limitation not only impedes the model's ability to learn object interactions but also results in inaccurate dynamics predictions. Addressing this limitation, recent methods build dynamics models by decomposing the observation into object-specific lower-dimensional latents and subsequently learning dynamics within these "object-centric" representations [5, 6, 7, 8]. For multi-object systems, recent studies emphasize the effectiveness of learning object-centric representations to enhance the accuracy and sample efficiency of dynamics models [5, 6]. This category of models follows a natural formulation by first learning to represent a scene as a set of object-centric features and then learning the dynamics among them.
In robotics, unsupervised learning of object dynamics is a key challenge particularly given its significance in model-based action planning for real-world applications. Nevertheless, the majority of existing methods of learning multi-object dynamics heavily rely on ground-truth information, including object pose [9, 10, 6] and segmentation masks [6, 5]. This substantially restricts the applicability of such solutions in real-world settings where comprehensive ground-truth information is often unavailable. To address this challenge, our work focuses on discovering unsupervised object representations in multi-object scenarios and harnessing these representations to understand their dynamics. Our primary contributions include:
(1) We introduce **SlotTransport** for unsupervised object discovery, a novel architecture that refines object-centric representation learning through slot attention [11]. Utilizing a feature transport mechanism, SlotTransport ensures temporal alignment of the object-centric representations. The discovered slots capture scene composition, each depicting a visual entity in a multi-object scene, such as objects, the background, and the robot. Notably, each slot maintains a consistent association with a distinct object, even when it's occluded or absent.
(2) We propose **SlotGNN**, a novel unsupervised graph-based model for predicting multi-object scene dynamics from object-centric representations. SlotGNN uses slots identified by SlotTransport to synthesize the scene's future appearance based on the robot's actions. With the temporal alignment from
Fig. 1: Overview of our unsupervised framework. (a) We introduce **SlotTransport** to identify temporally-aligned, object-centric slots, that each consistently represents a unique visual element. (b) We introduce **SlotGNN**, a graph-based model that learns scene dynamics from slots and predicts future states based on the robot’s action. (c) Our unsupervised approach facilitates planning to transition from an initial state to a goal image without requiring extensive ground-truth supervision.
SlotTransport, the scene transforms into a graph where each node consistently represents a slot, and edges capture the slot interactions. SlotGNN performs relational reasoning on the graph and learns to project the future appearance of each slot. (3) We examine the dynamics learned with SlotGNN in challenging downstream robotic tasks. We employ SlotGNN for challenging goal-directed multi-object rearrangement using pushing actions and long-horizon dynamics prediction.
(4) Demonstrating the real-world applications of our unsupervised approach, we successfully transfer SlotTransport and SlotGNN, initially trained in simulation, to the real robot by collecting a minimal dataset of just 20 real robot demonstrations (5% of the amount of simulated training data).
Our results demonstrate the robustness of our unsupervised framework, particularly in downstream robotic applications and real-world scenarios. Our approach consistently predicts accurate multi-object representations and their corresponding dynamics. Throughout this paper, we will use the terms 'slots' and 'object-centric representations' interchangeably.
## II Related Work
_Learning Multi-object Dynamics Models:_ Early models for graph-based dynamics, such as Interaction Networks (IN) [9, 10] and follow-up adaptations [12, 13, 14], represent a multi-object system with a graph where each node is an object's ground-truth state (e.g., position, velocity, mass, friction). These models rely on explicit state information. However, for real-world robotic scenarios, obtaining ground-truth state data is infeasible. Recent methods explored learning object representations, where each object is mapped to a lower-dimensional, object-centric representation. The representations typically combine explicit ground-truth states, like position, bounding box, and mask, with implicit visual features [15, 7, 16, 6, 5]. However, the reliance on ground-truth state supervision limits their application in the real world. In contrast, our work introduces an unsupervised framework for learning multi-object scene dynamics based on discovering unsupervised slots. This eliminates the need for explicit ground-truth state supervision.
_Unsupervised Object-centric Representation:_ Our work builds on learning object-centric representations using slot attention [11]. Slot attention interfaces with visual outputs to generate a set of slots. For robotics applications, ensuring temporal consistency in these slots is vital for accurate scene dynamics understanding. This consistency is essential for formulating a planning objective or training loss. Thus, inspired by [17], we explicitly incorporate a feature transport mechanism in our SlotTransport to maintain consistency across image pairs from different observation timesteps. While slot attention has been recently adopted for object localization and behavior cloning [18], those experiments were limited to basic untextured objects and did not consider learning the dynamics required for online planning. On another front, while keypoint-based methods such as [19] learn unsupervised multi-object dynamics, they face difficulties handling occlusions--a common challenge in robotics. In contrast, our SlotTransport reliably handles occlusions and consistently associates slots with specific objects.
## III Methods
Our framework has two main components:
(1) **SlotTransport**: An unsupervised multi-object discovery model that efficiently extracts robust and temporally consistent object-centric representation slots from multi-object scenes.
(2) **SlotGNN**: Building on top of the slot discovery, this unsupervised graph-based model learns the dynamics of the object-centric representations. Importantly, SlotGNN is conditioned on the robot's action which enables applications such as model-based action planning.
### _SlotTransport: Unsupervised Multi-Object Discovery_
The SlotTransport's role is to map the image to underlying object-centric representations. A detailed architecture of SlotTransport is shown in Fig. 2. We build on slot attention [11] to extract slots from image frames while ensuring temporal alignment of the slots. Given an RGB image \(I\), a convolutional encoder \(f_{enc}\), augmented with positional embeddings, maps the image to an intermediate representation \(\mathcal{W}\in\mathbb{R}^{h\times w\times c}\).
Fig. 2: **SlotTransport**: Unsupervised Multi-Object Discovery. From an RGB image, \(f_{slot}\) identifies object-centric slots \(z_{1:K}\). Through slot attention [11], slots bind to visual features, and \(f_{dec}\) produces feature maps \(\Phi^{i}\) and masks \(\mathcal{M}^{i}\). Temporal alignment is ensured by transporting slot features between source and target images. \(f_{rec}\) reconstructs each object slot \(\mathcal{R}^{i}_{T}\), which together compose the target image. The model is trained using only reconstruction error. During inference, objects are reconstructed from a single image using learned slots.
Using slot attention [11], slots \(z_{1:K}\in\mathbb{R}^{d}\) are derived that uniquely represent distinct portions of \(\mathcal{W}\).
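The competition between slots can be sketched as a single, simplified slot-attention iteration: attention is softmax-normalized over the slot axis so slots compete for features, and each slot moves toward the weighted mean of the features it claims. The learned projections and GRU update of the original method [11] are omitted here, so this is only an illustration of the normalization scheme.

```python
import numpy as np

def slot_attention_step(slots, features, eps=1e-8):
    """One simplified slot-attention iteration. slots: (K, d), features: (N, d)."""
    d = slots.shape[-1]
    logits = features @ slots.T / np.sqrt(d)                  # (N, K) feature-slot affinity
    attn = np.exp(logits - logits.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)                   # softmax over SLOTS: competition
    weights = attn / (attn.sum(axis=0, keepdims=True) + eps)  # normalize per slot
    return weights.T @ features                               # weighted mean of claimed features

rng = np.random.default_rng(1)
slots = slot_attention_step(rng.normal(size=(4, 32)), rng.normal(size=(100, 32)))
print(slots.shape)  # (4, 32)
```

Because the softmax runs over slots rather than over features, every image location must be explained by some slot, which is what drives the decomposition of the scene.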
We recognize that our ultimate goal of learning dynamics in an object-centric latent space requires temporal consistency of the slots. To explicitly enforce this, we introduce the transport mechanism in SlotTransport to establish temporal alignment in slots. Inspired by [17], this mechanism transports slot features between a pair of source and target images \((I_{S},I_{T})\) sampled from a given scene.
For each image, slots are extracted as \((z_{1:K}^{S},z_{1:K}^{T})\). First, using a single convolutional decoder \(f_{dec}\), we decode each slot into a feature map \((\Phi_{S}^{1:K},\Phi_{T}^{1:K})\in\mathbb{R}^{h\times w\times m}\) and an alpha mask \((\mathcal{M}_{S}^{1:K},\mathcal{M}_{T}^{1:K})\in\mathbb{R}^{h\times w}\). These alpha masks serve as mixture weights to inpaint each slot's feature map from the target image onto the source image (see Fig. 2). We produce a transported feature map \(\Phi_{T\gets S}\) by nullifying the source feature map outside the slot's predicted mask for both the target and source, \((1-\mathcal{M}_{T}^{i})\cdot(1-\mathcal{M}_{S}^{i})\cdot\Phi_{S}^{i}\), followed by overlaying the masked target feature map \(\mathcal{M}_{T}^{i}\cdot\Phi_{T}^{i}\). Finally, a convolutional reconstruction module \(f_{rec}\) reconstructs each slot as an RGB image \(\mathcal{R}_{T}^{i}\in\mathbb{R}^{h\times w\times 3}\) based on the transported feature map. The reconstructed slots together compose the reconstructed target image \(\hat{I}_{T}\).
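The per-slot transport rule can be written out directly; the following is a minimal numpy sketch assuming binary alpha masks and toy feature maps, combining \((1-\mathcal{M}_{T}^{i})(1-\mathcal{M}_{S}^{i})\,\Phi_{S}^{i}\) with the overlaid \(\mathcal{M}_{T}^{i}\,\Phi_{T}^{i}\).

```python
import numpy as np

def transport_slot_features(phi_s, phi_t, mask_s, mask_t):
    """phi_*: (h, w, m) per-slot feature maps; mask_*: (h, w) alpha masks in [0, 1]."""
    # Keep source features only where neither mask claims the slot...
    keep_source = (1.0 - mask_t)[..., None] * (1.0 - mask_s)[..., None] * phi_s
    # ...then overlay the masked target features on top.
    overlay_target = mask_t[..., None] * phi_t
    return keep_source + overlay_target

h, w, m = 8, 8, 4
phi_s, phi_t = np.zeros((h, w, m)), np.ones((h, w, m))
mask_t = np.zeros((h, w)); mask_t[:4] = 1.0   # slot occupies the top half of the target
out = transport_slot_features(phi_s, phi_t, np.zeros((h, w)), mask_t)
print(out[:4].mean(), out[4:].mean())  # 1.0 0.0 (target overlaid on top, source kept below)
```

Note that regions claimed by the slot only in the source image are zeroed out entirely, which is what forces the reconstruction to rely on temporally aligned slot content.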
The transport mechanism in SlotTransport enforces temporal alignment between image pairs during training. Notably, the slots learned through SlotTransport consistently register to a unique object even under heavy occlusion or absence of an object. During inference, SlotTransport can discover and reconstruct slots from a single image by directly reconstructing the extracted per-slot features and masks. SlotTransport ensures that each slot's feature map aligns well with its mask for learning consistent object representations across time. Importantly, this temporal alignment is achieved without additional learnable parameters; the same \(f_{slot}\) and \(f_{dec}\) are used when processing both source and target images.
### _SlotGNN: Unsupervised Multi-Object Dynamics_
The main purpose of SlotGNN is to learn the dynamics and model interactions between the visual elements in a multi-object scene, such as the robot, objects, and the background. It does so using a graph-based representation, where each object-centric slot corresponds to a node in the graph. Crucially, SlotGNN enables learning unsupervised multi-object dynamics, eliminating the need for supervised trajectory labels that require access to the system's ground-truth state. This feature becomes essential in real-world scenarios where obtaining accurate ground-truth data is challenging or impractical. Refer to the detailed architecture illustrated in Fig. 3-a.
Given an image \(I_{t}\) with its associated slots \(z_{1:K}^{t}\) discovered through SlotTransport, we construct a fully connected graph \(\mathcal{G}_{t}=(\mathcal{V},\mathcal{E})\). Each node \(v_{i}\in\mathcal{V}\) in the graph represents a slot, and each edge \(e_{ij}\in\mathcal{E}\) represents the interaction between the pair of slots \(z_{i},z_{j}\). For each node, we associate an embedding \(n_{i}\) which is initialized with the slot representations \(z_{i}\). The edge embeddings, representing interactions, are initialized based on augmenting the connected slots representations. To process the information in the graph representation, SlotGNN employs a message-passing neural network architecture [20, 10] to update node and edge embeddings. Incoming information from neighboring nodes is aggregated to update each node's state, capturing the dynamics and interactions in the scene.
The message-passing operation in the graph consists of two primary steps (see Fig. 3). First, the edge embeddings, \(e_{ij}\), are updated based on their connecting node embeddings: \(e_{ij}^{\prime}\gets f_{edge}(e_{ij},n_{i},n_{j})\). Secondly, the node embeddings are updated using the updated edge embeddings associated with them and the robot action: \(n_{k}^{\prime}\gets f_{node}(n_{k},\sum_{i\in\mathcal{N}(k)}e_{ik}^{\prime},a_{t})\). Here, \(f_{edge}\) and \(f_{node}\) are multi-layer perceptron update functions for edges and nodes, respectively. \(\mathcal{N}(k)\) denotes the neighbors of node \(k\), which in the context of a fully connected graph is all other nodes. To condition on external action, the robot action \(a_{t}\in\mathbb{R}^{4}\), characterized as a point-to-point push vector in image coordinates, is integrated as an input to \(f_{node}\) in SlotGNN. This ensures the learned dynamics are conditioned on the robot's action and can be used for planning in the downstream robotics control task.
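A minimal numpy sketch of one message-passing round, mirroring the \(f_{edge}\) and \(f_{node}\) updates above on a fully connected slot graph; the MLPs are replaced with fixed random linear maps plus a tanh, and all dimensions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
K, d, e, a_dim = 4, 8, 8, 4                          # slots, node dim, edge dim, action dim
W_edge = rng.normal(size=(e + 2 * d, e)) * 0.1       # stand-in for the f_edge MLP
W_node = rng.normal(size=(d + e + a_dim, d)) * 0.1   # stand-in for the f_node MLP

def message_pass(nodes, edges, action):
    """nodes: (K, d), edges: (K, K, e), action: (a_dim,) -> updated (nodes, edges)."""
    new_edges = np.empty_like(edges)
    for i in range(K):
        for j in range(K):
            inp = np.concatenate([edges[i, j], nodes[i], nodes[j]])
            new_edges[i, j] = np.tanh(inp @ W_edge)           # e'_ij = f_edge(e_ij, n_i, n_j)
    new_nodes = np.empty_like(nodes)
    for k in range(K):
        incoming = new_edges[:, k].sum(axis=0) - new_edges[k, k]  # aggregate over i != k
        inp = np.concatenate([nodes[k], incoming, action])
        new_nodes[k] = np.tanh(inp @ W_node)                  # n'_k = f_node(n_k, sum e'_ik, a_t)
    return new_nodes, new_edges

nodes, edges = rng.normal(size=(K, d)), rng.normal(size=(K, K, e))
new_nodes, _ = message_pass(nodes, edges, rng.normal(size=a_dim))
print(new_nodes.shape)  # (4, 8)
```

Appending the action to every node update is the detail that makes the resulting dynamics model controllable, which the planner in Sec. III-E relies on.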
Fig. 3: **SlotGNN**: Unsupervised Multi-Object Dynamics. (a) SlotGNN predicts slot changes \(\Delta z_{t}^{t}\) after applying a pushing action \(a_{t}\). Using SlotTransport’s slots (\(f_{slot}\)), a graph \(\mathcal{G}_{t}\) is formed with slots as nodes and slot interactions as edges. Edges and nodes are updated via \(f_{edge}\) and \(f_{node}\), resulting in next timestep slots \(z_{i}^{t+1}\). The next image \(\hat{I}_{t+1}\) is then reconstructed. (b) With a sequence of robot actions, SlotGNN projects future multi-object dynamics and synthesizes future scenes. (c) SlotGNN also facilitates goal-directed planning to optimize actions towards a desired goal.
After message-passing, the updated node embeddings are used to predict the evolution of slots \(\Delta z_{i}^{t+1}\) in the next timestep conditioned on the action \(a_{t}\). This allows for the roll-out of the dynamics into the future and enables synthesizing the future appearance of the scene.
### _Training SlotTransport and SlotGNN_
SlotTransport is trained using only the image reconstruction error for supervision. The image reconstruction loss \(\mathcal{L}_{rec}(I_{T},\hat{I}_{T})\), is defined using a pixel-wise Mean Squared Error (MSE) between the target and reconstructed images. In Fig. 2, modules with learnable parameters are distinctly highlighted in blue. Once SlotTransport is trained, it supervises the training of SlotGNN to learn visual dynamics.
We use a slot prediction MSE loss to train SlotGNN, \(\mathcal{L}_{slot}(\hat{z}_{1:K}^{t+1},z_{1:K}^{t+1})\). This loss reduces the distance between the slots directly predicted from the next timestep image using SlotTransport, \(z_{1:K}^{t+1}\), and the slots from the single-step dynamics with SlotGNN, \(\hat{z}_{1:K}^{t+1}\), as visualized in Fig. 3-a (modules with learnable parameters are highlighted in blue). Importantly, employing a per-slot prediction loss requires temporal alignment that is ensured through SlotTransport. Furthermore, we use the single-step slot dynamics to reconstruct the image and also minimize the image reconstruction MSE loss \(\mathcal{L}_{rec}(I_{t+1},\hat{I}_{t+1})\).
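A direct sketch of the two objectives, with a tiny illustration of why the per-slot loss presupposes temporal alignment: if slot identities were permuted between frames, the same scene content would incur a large loss.

```python
import numpy as np

def slot_loss(z_pred, z_target):
    """MSE between predicted and target slot sets, each of shape (K, d)."""
    return float(np.mean((z_pred - z_target) ** 2))

def rec_loss(img_pred, img_target):
    """Pixel-wise MSE between reconstructed and observed images, (h, w, 3)."""
    return float(np.mean((img_pred - img_target) ** 2))

z = np.arange(4)[:, None] * np.ones((4, 8))    # 4 slots with distinct, aligned contents
print(slot_loss(z, z), slot_loss(z[::-1], z))  # 0.0 5.0 (identical scene, permuted slots)
```

This is why the per-slot loss is only meaningful on top of SlotTransport's temporally aligned slots: without alignment, a set-matching step would be needed before comparison.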
### _Long-Horizon Multi-Object Dynamics Rollout_
Given only an initial image frame \(I_{0}\) and a sequence of robot pushing actions \(a_{0:h}\), SlotGNN can predict \(\hat{I}_{1:h}\) by recurrently running on the previous step's prediction. As shown in Fig. 3-b, the single-step dynamics predictions are cascaded to predict slots over extended future horizons. This capability for accurate dynamics rollout is possible due to the temporal alignment achieved with SlotTransport. Furthermore, for any arbitrary future timestep, \(f_{rec}\) can synthesize an image of the scene from the predicted slots.
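The rollout loop can be sketched as follows; `dynamics_step` stands in for SlotGNN, and here it simply shifts every slot by the action, which is enough to show the recurrence \(z_{t+1}=z_{t}+\Delta z_{t}\) applied to the model's own predictions.

```python
import numpy as np

def rollout(z0, actions, dynamics_step):
    """z0: (K, d) initial slots; actions: (h, a_dim) -> list of h predicted slot sets."""
    trajectory, z = [], z0
    for a in actions:
        z = z + dynamics_step(z, a)   # feed the previous prediction back in
        trajectory.append(z)
    return trajectory

# Toy stand-in for SlotGNN: every slot is displaced by the action itself.
toy_step = lambda z, a: np.broadcast_to(a, z.shape)
traj = rollout(np.zeros((3, 4)), np.ones((5, 4)), toy_step)
print(len(traj), traj[-1][0])  # 5 [5. 5. 5. 5.]
```

Any slot set in the trajectory can then be passed to \(f_{rec}\) to synthesize the corresponding future frame.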
### _Goal-Directed Planning_
Learning the scene dynamics facilitates goal-directed sequential action planning for multi-object manipulation. As shown in Fig. 3-c, with SlotGNN, we optimize robot actions to align a scene's state with a target goal image. This is pivotal when the robot interacts with several objects to reach a desired state. Given a scene image, we sample possible action sequences over a planning horizon \(h\geq 1\) and forecast the slot representation \(\hat{z}_{1:K}^{t+h}\) by rolling out the dynamics. Using model-predictive control (MPC) [22], the optimal action sequence is chosen by minimizing the slot loss \(\mathcal{L}_{slot}(\hat{z}_{1:K}^{t+h},z_{1:K}^{G})\), which quantifies the distance between the predicted slots and the goal image slots.
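A minimal random-shooting MPC sketch over a toy slot dynamics; the sampling scheme, horizon, and the toy `step` function are illustrative assumptions, not the paper's exact planner.

```python
import numpy as np

def plan(z0, z_goal, step, horizon=3, n_samples=256, a_dim=2, rng=None):
    """Random-shooting MPC: sample action sequences, roll out, keep the best."""
    if rng is None:
        rng = np.random.default_rng(0)
    best_cost, best_seq = np.inf, None
    for _ in range(n_samples):
        seq = rng.uniform(-1, 1, size=(horizon, a_dim))
        z = z0
        for a in seq:
            z = step(z, a)                    # roll the learned dynamics forward
        cost = np.mean((z - z_goal) ** 2)     # slot loss against the goal-image slots
        if cost < best_cost:
            best_cost, best_seq = cost, seq
    return best_seq, best_cost

step = lambda z, a: z + a                      # toy dynamics: slots shift by the action
z0, z_goal = np.zeros((1, 2)), np.full((1, 2), 1.5)
seq, cost = plan(z0, z_goal, step)
print(seq.shape, cost)
```

In receding-horizon fashion, only the first action of the best sequence would be executed before replanning from the newly observed slots.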
## IV Experiments
We structure our experiments around: (1) How accurately and consistently do slots extracted by SlotTransport represent each visual element in the scene? (2) How effective is SlotGNN in predicting multi-object scene dynamics? (3) How well does our framework apply to downstream robotic tasks?
### _Data_
_Simulation:_ Using Mujoco [23, 24], we simulate a multi-object tabletop scene with YCB objects [25] and a UR5e robot with a cylindrical end-effector. The robot performs planar pushing action, captured by an RGB camera. The data is formatted as image-action tuples \((I_{t},a_{t},I_{t+1})\) containing pre- and post-action images, and action vectors in the image coordinates. We generate \(\sim 750\) episodes \(\times 20\) steps of random pushes for a given subset of objects. SlotTransport is trained by randomly sampling target and source images across all episodes. We then use the learned SlotTransport to discover slots and train SlotGNN on the image-action tuples. Evaluations on SlotTransport are done with five objects using images from a top-view camera. Experiments involving single-step dynamics, long-horizon predictions, and object rearrangements are on scenes with three objects with an angled camera.
_Real-world:_ We use a UR5e robot with a custom-printed cylindrical end-effector and an RGB camera. We collect data \(\sim 20\) episodes \(\times 40\) steps of random pushes for subsets of 3 real YCB objects. Models trained in the simulation for the same object subset are retrained on this real-world data.
### _Baselines_
We evaluate our approach against various methods:
_Object Discovery:_ Our SlotTransport is compared with the original slot attention approach [11]. We follow the implementation of this baseline by excluding the transport mechanism introduced in our SlotTransport during training. Furthermore, we compare our approach with the off-the-shelf SAM [21].
Fig. 4: Visualizations of per-slot masks and reconstructions. SlotTransport exhibits superior performance in accuracy and consistency of object-centric representation, even under occlusion, compared to the SlotAttention baseline [11]. We also showcase predicted segments from SAM [21].
Fig. 5: Examples of single-step dynamics prediction using SlotGNN. Given the current scene image and the robot’s pushing action, our model precisely predicts the future state of each slot and synthesizes the future scene image.
_Multi-Object Dynamics:_ For action-conditioned graph-based dynamics, we consider ForwGNN [6], which uses supervision of ground-truth object masks. The scene graph's nodes are embedded with the ground-truth object positions and masks to directly reconstruct the future image. We also compare with KINet [19], an unsupervised model that determines dynamics by identifying a set of keypoints from the scene image. Lastly, we compare with the SlotMLP variant. While it utilizes slots from SlotTransport, it models dynamics with MLPs rather than the graph-based approach of SlotGNN.
_Evaluation Metrics:_ We compute pixel-wise mean squared error (MSE) and Learned Perceptual Image Patch Similarity (LPIPS) [26] to measure the accuracy of the slots in reconstructing the scene composition. Additionally, to quantify the quality and consistency of the slot masks, we compute the mean Intersection over Union (IoU) for slot masks produced by SlotTransport, SlotAttention, and SAM, comparing them against the ground-truth masks from simulation.
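The mask-quality metric can be sketched as a per-slot intersection-over-union averaged over slots, assuming binary masks; the shapes below are illustrative.

```python
import numpy as np

def mean_iou(pred_masks, gt_masks, eps=1e-8):
    """pred_masks, gt_masks: (K, h, w) binary arrays -> mean IoU over the K slots."""
    inter = np.logical_and(pred_masks, gt_masks).sum(axis=(1, 2))
    union = np.logical_or(pred_masks, gt_masks).sum(axis=(1, 2))
    return float(np.mean(inter / (union + eps)))

gt = np.zeros((1, 4, 4), dtype=bool); gt[0, :2] = True        # object in the top half
pred = np.zeros((1, 4, 4), dtype=bool); pred[0, 1:3] = True   # prediction shifted one row
print(mean_iou(pred, gt))  # ≈ 0.333 (1 overlapping row out of 3 rows in the union)
```

MSE and LPIPS are computed on the composed reconstructions, whereas this mIoU scores each slot's mask against its matching simulator ground-truth mask.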
## V Results
### _Object Discovery Performance_
Figure 4 showcases the slot masks and slot reconstructions. In a scene with five objects, SlotTransport qualitatively outperforms the SlotAttention baseline [11]. SlotTransport accurately identifies all distinct visual elements, and predicts an accurate mask for each--even under heavy occlusion. However, as seen in Fig. 4, the SlotAttention baseline overlooks the spam object occluded by the power drill. Moreover, SlotTransport delineates clear boundaries for each slot and accurately reconstructs their appearance. In contrast, the SlotAttention baseline presents indistinct, blurred object masks and reconstructions. We further show that relying on off-the-shelf segmentation methods, such as SAM [21], is not optimal for learning object representations. This is primarily due to SAM's tendency to over-segment textured objects (e.g., backgrounds) and under-segment cluttered objects.
Table I summarizes the quantitative evaluation of both the visual quality of reconstructed slots and the precision of slot masks. SlotTransport distinctly outperforms the SlotAttention baseline by achieving significantly better visual fidelity, measured in MSE and LPIPS. Furthermore, object masks produced by SlotTransport demonstrate superior alignment with ground-truth masks derived from simulated data. In contrast, SlotAttention often struggles to align slots accurately to cluttered objects, as shown in Fig 4. This limitation is evident in the lower mIoU for SlotAttention compared to SlotTransport.
### _Dynamics Prediction Performance_
Figure 5 illustrates the single-step dynamics prediction of SlotGNN. By taking as input the current image and the intended robot's pushing action vector, our model accurately predicts the future scene. It does so by predicting the future state of each slot, based on the learned multi-object dynamics of the scene. The quantitative results presented in Table II highlight the accuracy of SlotGNN in single-step dynamics prediction. In single-step dynamics prediction, SlotGNN outperforms all other baselines, including the SlotMLP variant and the unsupervised keypoint dynamics KINet [19]. It's worth noting that while ForwGNN does rely on ground-truth state information for supervision, it still falls short in MSE compared to SlotGNN, which utilizes image-based supervision. This further highlights the robustness of the detected slots
| **Method** | **Supervision** | **MSE** \(\downarrow\) | **mIoU** (%) \(\uparrow\) |
| --- | --- | --- | --- |
| **SlotGNN (Ours)** | **Img** | **0.14 \(\pm\) 0.05** | **86.9 \(\pm\) 2.9** |
| SlotMLP | Img | 0.32 \(\pm\) 0.09 | 72.6 \(\pm\) 1.1 |
| KINet [19] | Img | 1.86 \(\pm\) 0.09 | N.A. |
| ForwGNN [6] | GT State | 0.50 \(\pm\) 0.14 | N.A. |

TABLE II: Single-step dynamics prediction accuracy measured as visual quality (MSE) and mask consistency (mIoU).
Fig. 8: Qualitative results on control. Each row shows the action sequence (highlighted in green) optimized to maximize scene similarity to goal image.
Fig. 6: Long-horizon slot dynamics prediction: SlotGNN has more stability compared to SlotMLP. We also show keypoints detected with KINet [19].
Fig. 7: (a) Long-horizon dynamics rollout error. SlotGNN exhibits robust dynamics predictions as the timestep increases. (b) Planning results: Comparing the distance to the goal image between SlotGNN and SlotMLP.
by SlotTransport in representing objects, enabling SlotGNN to learn accurate multi-object dynamics.
As illustrated in Fig. 6, SlotGNN excels in predicting stable long-horizon dynamics compared with SlotMLP. Although the scenes reconstructed with SlotGNN may diverge from the ground truth due to cumulative prediction errors, it yields physically plausible future scenes. In contrast, SlotMLP struggles to retain the coherence of slots over time. Given that both SlotGNN and SlotMLP use slots from SlotTransport, the difference in their long-horizon predictions can be attributed to the graph-based model's enhanced ability to capture multi-object dynamics. In Fig. 6, unsupervised keypoints detected by KINet [19] are also shown. KINet requires stable keypoint-object correspondences to learn multi-object dynamics. This stability is compromised when a robot enters or exits the frame or introduces object occlusions (see the pink keypoint in the last column of Fig. 6). A quantitative summary of the long-horizon rollout outcomes can be found in Fig. 7-a.
### _Planning with SlotGNN_
Fig. 8 shows our method's application in control tasks. In a challenging object rearrangement scenario, the robot plans an action sequence using SlotTransport and SlotGNN. Through accurate multi-object dynamics projections, the robot effectively aligns objects to a desired configuration using just the RGB image. The planning performance of slot-based models is compared in Fig. 7-b, which emphasizes the effectiveness of a graph-based model in learning object-centric dynamics.
### _Real-World Experiments_
Demonstrating the real-world applicability of our unsupervised approach, we successfully transfer SlotTransport and SlotGNN, initially trained in simulation, to the real robot by collecting a minimal dataset of just 20 real robot demonstrations (5% of the amount of simulated training data). SlotTransport retains its accuracy in the real environment, as shown in Fig. 10. The slots discovered from the real multi-object scene clearly distinguish all the scene elements, even under occlusion. For real-world control, we experiment with two tasks, as shown in Fig. 9. The first scenario, presented in the top row, involves rearranging objects to achieve a predetermined goal image. The bottom row showcases a more dynamic scenario where objects are continuously displaced from their target positions by a human with a grabber stick. In response, our robot, using SlotTransport and SlotGNN, finds a sequence of actions to restore the objects to their intended locations.
## VI Conclusion
This work addresses the challenges of unsupervised learning for multi-object dynamics through visual observations. We present SlotTransport, a novel approach based on slot attention for unsupervised object discovery, ensuring temporal consistency in object-centric representations. Alongside this, we introduce SlotGNN, an unsupervised graph-based dynamics model for predicting the future states of multi-object scenes using the slots. Both methods have proven effective in complex robotic control tasks and long-horizon dynamics prediction. Importantly, we demonstrate that our unsupervised approach, using SlotTransport and SlotGNN, successfully transfers to real-world settings and enables object discovery and dynamic modeling solely from RGB images. For limitations, one key aspect we recognize is that our slot discovery process currently necessitates pre-determining the number of slots. In our experiments, we predefined the slot count equal to the anticipated number of elements in the scene. Developing a more adaptive mechanism that automatically determines the required slot count could be a promising future research direction.
Fig. 10: Real-world object slot discovery with SlotTransport. Our unsupervised framework transfers to real settings and discovers accurate object-centric representations that reflect the positional and visual features of the objects.
Fig. 9: Real-world control using SlotTransport and SlotGNN: The top row shows objects being rearranged to align with a goal image. In the bottom row, objects are persistently displaced from their goal positions, the robot comes up with a sequence of actions to push the objects back to their desired locations. Please visit our project page for videos and more examples: bit.ly/slotgnn.
## VII Acknowledgement
We thank Carl Winge for the help with the robot setup, Chahyon Ku for providing helpful feedback on our initial draft, and all other members of the Robotics Perception and Manipulation Lab for their insightful discussions. This project is partially funded by the UROP Program at the University of Minnesota and the MnDRIVE UMII (University of Minnesota Informatics Institute) Seed Award. This project was also supported in part by the Sony Research Award Program and NSF Award 2143730.
|
2308.08267 | Perpetual Reconfigurable Intelligent Surfaces Through In-Band Energy
Harvesting: Architectures, Protocols, and Challenges | Reconfigurable intelligent surfaces (RISs) are considered to be a key enabler
of highly energy-efficient 6G and beyond networks. This property arises from
the absence of power amplifiers in the structure, in contrast to active nodes,
such as small cells and relays. However, still an amount of power is required
for their operation. To improve their energy efficiency further, we propose the
notion of perpetual RISs, which secure the power needed to supply their
functionalities through wireless energy harvesting of the impinging transmitted
electromagnetic signals. Towards this, we initially explain the rationale
behind such RIS capability and proceed with the presentation of the main RIS
controller architecture that can realize this vision under an in-band energy
harvesting consideration. Furthermore, we present a typical energy-harvesting
architecture followed by two harvesting protocols. Subsequently, we study the
performance of the two protocols under a typical communications scenario.
Finally, we elaborate on the main research challenges governing the realization
of large-scale networks with perpetual RISs. | Konstantinos Ntontin, Alexandros-Apostolos A. Boulogeorgos, Sergi Abadal, Agapi Mesodiakaki, Symeon Chatzinotas, Björn Ottersten | 2023-08-16T10:07:45Z | http://arxiv.org/abs/2308.08267v1 | Perpetual Reconfigurable Intelligent Surfaces Through In-Band Energy Harvesting: Architectures, Protocols, and Challenges
###### Abstract
Reconfigurable intelligent surfaces (RISs) are considered to be a key enabler of highly energy-efficient 6G and beyond networks. This property arises from the absence of power amplifiers in the structure, in contrast to active nodes, such as small cells and relays. However, still an amount of power is required for their operation. To improve their energy efficiency further, we propose the notion of _perpetual_ RISs, which secure the power needed to supply their functionalities through wireless energy harvesting of the impinging transmitted electromagnetic signals. Towards this, we initially explain the rationale behind such RIS capability and proceed with the presentation of the main RIS controller architecture that can realize this vision under an in-band energy harvesting consideration. Furthermore, we present a typical energy-harvesting architecture followed by two harvesting protocols. Subsequently, we study the performance of the two protocols under a typical communications scenario. Finally, we elaborate on the main research challenges governing the realization of large-scale networks with perpetual RISs.
Perpetual operation, reconfigurable intelligent surfaces (RISs), wireless energy harvesting (EH).
## I Introduction
Although the use of millimeter-wave (mmWave) bands to prevent the capacity crunch of sub-6 GHz bands has been envisioned and standardized for 5G networks, wide-scale network deployment on these bands is expected to be realized in their 6G counterparts. The large bandwidth offered in mmWave bands is essential not only for boosting the communication rates, but also for achieving the sub-meter localization that is required in several challenging use cases with a high societal impact, such as autonomous driving in urban areas [1], highly accurate localization of Internet of Things (IoT) devices in a smart factory [2], and indoor navigation of people with impaired vision [3].
However, mmWave bands are more susceptible to fixed and moving blockages in comparison with their sub-6-GHz counterparts. A straightforward solution to counteract this bottleneck is the large network densification with small cells and relays so that line-of-sight (LoS) connections between them and the end users are achieved with very high probability. However, such a solution may be prohibitive from a cost and energy consumption point of view [4].
To counteract the aforementioned bottleneck, reconfigurable intelligent surfaces (RISs) are widely believed to be a viable alternative due to their capability for conformal designs and notably lower power consumption compared with active nodes that are equipped with power amplifiers (PAs). This is due to the lack of PAs in the RIS case, which is the most power consuming electronic component [5]. By adjusting the impedance of their unit cells (UCs), RISs are able to perform a variety of functions, such as reflection, absorption, diffraction, and polarization change of the incident electromagnetic wave. Owing to their ease of deployment, RISs are expected to be ubiquitously deployed in both indoor and outdoor scenarios in the forthcoming 6G and beyond networks, especially for mmWave bands, so as to provide numerous alternative transmitter-RIS and RIS-receiver LoS routes in case of blockages. They can assist not only communications, but also localization simultaneously [6].
### Why Do We Need Perpetual RISs?
Current RIS prototypes base their reconfigurability on field-programmable gate array (FPGA) controllers that normally exhibit power consumption levels that require the RIS to be constantly plugged onto the power grid. This need could impair the requirement for a pervasive RIS deployment due to the difficulty of wiring a massive number of devices to the grid. In particular, deploying cables involves planning and notable maintenance costs that can grow immensely for massive deployments [7]. Moreover, requests to local authorities for permission to install the required wired infrastructure would be needed on several occasions, which are usually time-consuming processes. In addition, there are places that the power grid would not be allowed to reach in order to prevent urban visual pollution.
Additionally, supplying the energy needs of RISs with single-use batteries is not a viable solution either, because this would give rise to a large effort for regular replacements of a massive number of single-use batteries, let alone the constant monitoring of their charge level that would be required. Based on the aforementioned powering issues that a massive RIS deployment would induce, the following question arises: _Could RISs perpetually operate by means of wireless energy harvesting from the impinging electromagnetic (EM) signals that are used for communications and localization?_
In the remainder of this article, we first present the two main RIS controller architectures and explain why only one of these can potentially result in perpetual operation. Subsequently, we introduce an energy-harvesting (EH) architecture
together with two in-band EH protocols. Furthermore, their performance is compared. Finally, we identify a number of research challenges for the realization of perpetual RISs and conclude this article with the main takeaways.
## Controller Architectures
Let us now present the two basic RIS controller architectures, namely the conventional _FPGA-based architecture_ and the _integrated architecture_. In addition, we will elaborate on why the integrated architecture is the only viable approach for perpetual RIS operation.
### FPGA-based architecture
As depicted in Fig. 1, in this architecture the FPGA acts as an external controller and adjusts the bias voltages of the tuning elements that are attached to the UCs. This, in turn, alters the impedance of the UCs so that the desired metasurface function is realized. The tuning elements normally comprise varactors or variable resistors, positive intrinsic negative diodes or switches, microelectromechanical systems, mechanical parts, or advanced materials, such as graphene, or liquid crystals [8]. The FPGA-based architecture is the conventional architecture with which several proof-of-concept RISs have been designed and manufactured. It offers the advantage of separate design of the metasurfaces and FPGAs. On the other hand, FPGA-based architectures are usually bulky and exhibit a significant power consumption that make perpetual operation challenging [8].
### Integrated architecture
In contrast to the FPGA-based architecture, the integrated architecture relies on the integration of a network of communicating chips within the metasurface containing tuning elements, control circuits, and even sensors. As it is pointed out in [8], integrated architectures are custom-made and are therefore much more optimized than FPGA-based architectures. This means that the control sub-system is less intrusive in terms of EM interference, less bulky, and potentially exhibits lower power consumption. Hence, perpetual operation is envisioned as a possibility for the integrated RIS architecture by means of wireless energy harvesting [8]. The metasurface controlling chips, that would wirelessly receive reconfiguration commands under perpetual operation, may consist of circuitry that reads the UC state and digital-to-analog (DAC) converters that adjust the bias voltage to the tuning elements.
A possible architecture for such a controlling chip, based on application-specific integrated circuits (ASICs) for simultaneously controlling the response of \(4\) UCs, is depicted in Fig. 2[9]. According to the particular example1, the ASIC comprises: i) the control circuit, ii) the DACs, and iii) the radio frequency (RF) tunable loading elements (LEs). The control circuit is responsible for the communication operations of the ASIC by wirelessly receiving reconfiguration commands2 and sending/receiving communication data to/from its neighboring controllers. In the particular implementation of [9], the control circuit consists of an internal memory with 64 cells that store the reconfiguration data that are required by the LEs for adjusting the impedance of the UCs. In addition, the control circuit integrates another internal memory with 18 cells for storing the data for networking among the ASICs. In turn, the cells that store the RIS reconfiguration data drive the inputs of the \(8\) DACs. Furthermore, the output of the DACs drives the input of the LEs. The LEs consist of a metal-oxide-semiconductor field-effect transistor (MOSFET) varistor that adjusts the real part of the UC impedance and a MOSFET varactor that adjusts its imaginary part. Finally, we note that an important feature of the ASIC proposed in [9] is its asynchronous operation, which can result in a notably lower circuit consumption compared with a synchronous implementation.
Footnote 1: The actual ASIC design may change based on the application and the type of UCs used.
Footnote 2: We assume that a wireless receiver is embedded into the control circuit.
Fig. 1: Controller architectures for RISs [8].
Fig. 2: Top-level diagram of the ASIC used in [9] as the controlling chip.
## II Energy Harvesting Architecture, Power Consumption Model, and Proposed Harvesting protocols
In this section, we first present a typical EH architecture for supplying the energy needs of a perpetual RIS. Next, we introduce the power consumption model based on the considered integrated architecture for reconfiguring the surface. Finally, we report two harvesting protocols based on either time- or UC-splitting.
### Energy Harvesting Architecture
The EH architecture is depicted in Fig. 3. The absorbed power of subsets of UCs is combined in the RF domain and the combined outputs drive an equal number of rectifying circuits that transform the RF energy to a direct current (DC) one. A DC combining network combines the DC powers and its output charges a battery that is used to power the ASICs.
The presented architecture is a compromise between the two extreme cases of: i) combining in the RF domain the absorbed powers of all the UCs and ii) enabling each UC to drive a single rectifying circuit. The first case may result in substantial insertion losses due to RF combining, if the absorbed powers are not perfectly phase aligned, whereas the absorbed power of each UC in the second case might not be sufficient to turn on the rectifying circuit [10]. Hence, the architecture presents a flexible design and the amount of chains is subject to optimization, based on the specific application and electronic packaging considerations. Finally, as far as the rectifying circuit is concerned, which is a passive device, the three main options for its realization are a diode, where a Schottky diode is the most common implementation, a bridge of diodes, and a voltage rectifier-multiplier [10].
### Power Consumption Model
Due to the fact that the RF/DC combiners and rectifying circuits in the presented energy harvesting architecture are passive devices, the only source of power consumption in the RIS is the ASIC. As with any electronic device, this power consumption consists of the summation of a static and a dynamic part. The latter part is due to the wireless reception of reconfiguration commands, the switching operations and the resulting charging/discharging of internal capacitances each time the impedance of the UCs needs to be reconfigured, and the internal communication among the ASICs. Hence, by denoting the number of reconfigurations of UC \(i\) in a time window \(T\) (this can be the frame duration) by \(N_{\mathrm{rec}_{i}}\) and the energy cost for such reconfigurations by \(E_{\mathrm{rec}_{i}}\), for the dynamic power consumption \(P_{\mathrm{dyn}}\) of the RIS it holds [11, Eq. (4.7)]
\[P_{\mathrm{dyn}}=\sum_{i\in\mathrm{UCs}}\frac{N_{\mathrm{rec}_{i}}E_{\mathrm{rec}_{i}}}{T}. \tag{1}\]
On the other hand, the static power consumption is mainly attributed to the power consumption of the DACs, as [9] reveals.
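Eq. (1) translates directly into a short sketch. The helper name and all numerical values below (number of UCs, per-reconfiguration energy, frame duration) are illustrative assumptions, not figures from the article:

```python
def dynamic_power(reconfig_counts, reconfig_energies, frame_duration):
    """Eq. (1): P_dyn = sum over UCs of N_rec_i * E_rec_i / T."""
    return sum(n * e for n, e in zip(reconfig_counts, reconfig_energies)) / frame_duration

# Hypothetical numbers: 256 UCs, each reconfigured 3 times per 10 ms frame
# (the time-splitting count derived in the next subsection), 1 pJ per
# reconfiguration.
M_s = 256
p_dyn = dynamic_power([3] * M_s, [1e-12] * M_s, 10e-3)
print(p_dyn)  # on the order of 1e-7 W
```

Under these assumed values the dynamic part is tiny, which is consistent with the article's later observation that the static ASIC consumption dominates.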
### Harvesting Protocols
We now report two protocols for energy harvesting that are based on either a time-splitting or a UC-splitting approach [12].
#### Time-splitting protocol
A typical frame structure is depicted in Fig. 4. Based on it, the preamble interval, which is used for both synchronization and channel estimation of the TX-RIS and RIS-RX links, is followed by an energy harvesting interval in which all the UCs act as perfect absorbers. Finally, the payload transmission interval follows where all
Fig. 4: Frame structure in the time- and UC-splitting harvesting protocols.
Fig. 5: Post-preamble time-splitting protocol functionality.
Fig. 3: Energy harvesting architecture.
the UCs act as perfect reflectors towards the RX. The post-preamble functionality of the RIS is illustrated in Fig. 5.
Let us now denote the number of UCs in the RIS by \(M_{s}\). Regarding the number of UC impedance adjustments that are needed during each frame, apart from \(M_{s}\) adjustments needed for power absorption and another \(M_{s}\) adjustments for the payload transmission, based on the channel estimates, a number of UC adjustments is needed for channel estimation during the preamble interval. The reason for this becomes clear by considering that the RIS does not have active components to perform channel estimation in order to keep its design as simple and low-energy consuming as possible. Hence, channel estimation involves only estimation at either the TX or RX. The simplest protocol for channel estimation relies on activating only one UC at a time to act as perfect reflector while keeping the remaining ones off [13]. Hence, such a channel estimation protocol requires in total \(M_{s}\) UC impedance adjustments. Based on the above, during the transmission of one frame in total \(3M_{s}\) UC reconfigurations are needed for channel estimation, wireless power absorption, and payload transmission.
#### UC-splitting protocol
The frame structure is depicted in Fig. 4. After the preamble transmission, simultaneous wireless power transfer and information transmission is realized by dedicating a subset of UCs for harvesting through perfect absorption and the rest for information transmission by acting as perfect reflectors towards the RX. Illustratively, the functionality of the RIS for the post-preamble frame interval is depicted in Fig. 6.
Regarding the total number of UC reconfigurations needed in the UC-splitting protocol during the transmission of one frame, \(M_{s}\) reconfigurations are needed for channel estimation and another \(M_{s}\) reconfigurations for impedance adjustment related to the simultaneous wireless power transfer and payload transmission interval. Hence, \(2M_{s}\) reconfigurations are needed in total, which are smaller by \(M_{s}\) reconfigurations compared with the time-splitting case.
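The per-frame reconfiguration accounting of the two protocols can be captured in a small helper (the function and protocol labels are our own naming; the counts follow the derivation above):

```python
def reconfigs_per_frame(num_ucs, protocol):
    # Channel estimation costs num_ucs adjustments in both protocols
    # (one UC reflecting at a time, the rest kept off).
    if protocol == "time-splitting":
        # + num_ucs for the absorption interval + num_ucs for payload reflection
        return 3 * num_ucs
    if protocol == "uc-splitting":
        # + num_ucs for the single joint harvest/reflect configuration
        return 2 * num_ucs
    raise ValueError(f"unknown protocol: {protocol}")

print(reconfigs_per_frame(256, "time-splitting"))  # 768
print(reconfigs_per_frame(256, "uc-splitting"))    # 512
```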
Finally, we note that for the allocation of the time and UC resources in the time- and UC-splitting harvesting protocols, respectively, average metrics can be considered as the easiest implementation so that the allocation does not depend on instantaneous channel estimates, but only on the channel statistics.
## Performance Comparison of the Time and UC-Splitting protocols
Let us now indicatively compare the performance of the time- and UC-splitting protocols in a typical communications-only scenario in which a mobile user is targeted via an RIS and the average rate maximization is the metric of interest. The simulation parameters are presented in Table I. In addition, the energy-harvesting model and the harvesting circuit parameters of [14] are employed.
As far as the optimal resource allocation for the time- and UC-splitting protocols is concerned, we target the maximization of the average rate provided that the energy consumption requirements of the RIS are covered by the harvested energy. For the time-splitting protocol, such a problem takes the following form:
\[\begin{array}{ll}\underset{\text{wireless power transfer duration}}{\text{maximize}}&\text{Average rate}\\ \text{subject to}&\text{DC harvested power}\geq\text{RIS power consumption}\end{array} \tag{2}\]
On the other hand, in the case of the UC-splitting protocol
Fig. 6: Post-preamble power-splitting protocol functionality.
Fig. 7: Average rate vs. ASIC static power consumption.
the formulation of the optimal resource allocation problem is as follows:
\[\begin{array}{ll}\underset{\text{number of UCs dedicated to energy harvesting}}{\text{maximize}}&\text{Average rate}\\ \text{subject to}&\text{DC harvested power}\geq\text{RIS power consumption}\end{array} \tag{3}\]
Based on the solution of the presented problems, in Fig. 7 we illustrate the average rate vs. the static ASIC power consumption that is achieved by the two protocols. The depicted ASIC static power consumption range is in the order of the one achieved in [9]. As we observe, in terms of average rate the UC-splitting protocol notably outperforms its time-splitting counterpart throughout the ASIC static power consumption range for which the solution of the two problems is feasible. This trend is justified by the fact that in the time-splitting case the factor corresponding to the reduction of time resources is a linear multiplicative factor of Shannon's formula. On the other hand, for the UC-splitting protocol case such a term is included inside the logarithm function of Shannon's formula (in the signal-to-noise ratio (SNR) expression) [14].
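The structural difference can be illustrated with a toy rate model. This is a hedged sketch: the quadratic SNR scaling with the reflecting UC fraction assumes an idealized coherent-reflection model and is not taken from the article:

```python
import math

def rate_time_splitting(snr, harvest_frac):
    # The harvesting time is simply lost: a linear factor *outside* the log.
    return (1 - harvest_frac) * math.log2(1 + snr)

def rate_uc_splitting(snr, harvest_frac):
    # Only the SNR shrinks, *inside* the log; with idealized coherent
    # reflection the received power scales with the reflecting fraction squared.
    return math.log2(1 + (1 - harvest_frac) ** 2 * snr)

snr = 100.0  # 20 dB, illustrative
for f in (0.1, 0.3, 0.5):
    print(f, rate_time_splitting(snr, f), rate_uc_splitting(snr, f))
```

At moderate-to-high SNR, UC-splitting pays only an additive logarithmic penalty while time-splitting loses a fixed fraction of the rate, matching the trend of Fig. 7.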
Finally, it is interesting to examine the ratio of ASIC dynamic power consumption over the static one for the two examined protocols. This is depicted in Fig. 8. As we observe, as the ASIC static power consumption increases it largely dominates over the dynamic part. This is a clear indication that the realization of perpetual RISs dictates the design of ASICs that exhibit a very low static power consumption.
## Challenges
In this section, we present the main research challenges regarding the realization of perpetual RISs and their deployment in future networks.
### Low-Energy Consumption ASIC Design
A key feature in the feasibility of perpetually operating RISs is the design of ASICs that exhibit a very low static power consumption, as the simulation results revealed. This is arguably the greatest obstacle to overcome. According to the indicative simulation results, we saw that the ASICs of the RIS should not consume more than just a few \(\mu\)W of static power for the perpetual operation to be feasible. Instead, in the literature we observe that typical ASICs used in integrated architecture designs exhibit a static power consumption of a few hundred \(\mu\)W, which would render the perpetual RIS operation infeasible [9]. More specifically, the most power-consuming components of the ASICs considered in [9] are the DACs. In addition, apart from the static power consumption per DAC, the number of DACs and the number of UCs that each ASIC controls can be optimized so that the perpetual operation is realized, based on the estimated amount of impinging power.
### Optimized Protocol Design for Energy Harvesting
We have proposed two protocol architectures for RIS energy harvesting, namely the time- and UC-splitting architectures. As we saw in the previous section, the latter architecture achieves a higher communication rate at the cost of a reduced SNR, as revealed in [12], since a portion of the UCs is dedicated to energy harvesting while the rest simultaneously convey information. On the other hand, the time-splitting architecture achieves the maximum SNR since all the UCs are dedicated to the transmission of information. Besides this, having a relatively high SNR at the receiver would also be important for the localization accuracy. Hence, a new investigation into the most suitable energy harvesting architecture for facilitating the demands of both communication and localization is needed. Most likely, a stand-alone time- or UC-splitting architecture would not be the way forward; rather, dynamic switching between the time- and UC-splitting architectures, depending on real-time demands, would be needed in real-world scenarios, provided it can be supported by the hardware.
### Channel Modeling for Various High Frequency Bands
Identifying high-frequency bands suitable for all three purposes of RIS energy harvesting, communication, and localization is another concept to investigate. In particular, it is known that, due to electronics limitations, energy harvesting becomes less efficient when moving up the spectrum. However, very high frequency bands, such as THz bands, offer the advantage of a stronger LoS component due to the more directional transmissions and also a finer resolution for localization due to the larger bandwidths. In addition, the multipath components, apart from the direct LoS component, can also be harvested and contribute significantly to the energy absorbed by the RIS [14]. Hence, accurate channel models for the different high frequency bands are required. These aspects create very interesting tradeoffs regarding the potential of different frequency bands for energy harvesting that need to be investigated.
### Network Planning
The particular network planning will be based on achieving the requirements on communications and localization with a certain reliability, while at the same time the probability of not covering the RIS energy demands is lower than a certain
Fig. 8: Dynamic over static ASIC power consumption.
threshold. For such network planning, reliable traffic models in a region are essential, since these would determine the statistical availability of the small cells for supplying the energy needs of the RISs. For instance, apart from the energy supply that an RIS can receive during the information transmission of its associated small cells, other, possibly underutilized, small cells at that time instant could act as power beacons, adding to the total energy harvested by the RISs.
### Multi-Band Energy Harvesting
The in-band energy-harvesting case examined in this work can be considered a lower bound on system performance, since, as the cost and size of electronics decrease, a perpetual RIS could eventually host multi-band circuitry for energy harvesting. For instance, even in 6G and beyond networks that will mostly rely on mmWave bands for communication and localization, sub-6 GHz bands will still exist in multi-band small cells as a backup solution and also as a prime solution for control signals towards the mobile users. Hence, an RIS could incorporate both mmWave and sub-6 GHz circuitry to capture, in the latter case, the ambient RF energy from the small-cell transmissions. Additionally, another energy-harvesting layer could relate to capturing solar energy in outdoor scenarios. Hence, the potential of multi-band energy harvesting should be investigated, taking also into account the cost and size of the resulting structure.
### Communication- and Information-Theoretic Fundamental Limits
The possibility of random energy arrivals in the case of multi-band energy harvesting, on top of the deterministic in-band harvesting that has been presented in this article, creates unique communication- and information-theoretic problems to be solved. Apart from the fact that in the presence of a ubiquitous RIS deployment the communication channel becomes programmable, with the existence of perpetual RISs the extent of its programmability depends on a random process that is related to the energy arrivals. From an information-theoretic point of view, a very interesting and challenging problem is the computation of the capacity of such a channel under finite-size batteries. In addition, channel coding theorems are of importance for such a novel system. Moreover, from a communications point of view, there is a need for practical adaptive modulation and coding schemes.
### Real-Time Network Optimization
Accurate analytical models for optimizing the resources in large-scale networks that incorporate perpetual RISs would be intractable to obtain. This is due to the complexity increase with respect to conventional networks that rely on power-grid supplied RISs. In particular, taking into account the real-time energy demands of the RISs substantially increases the optimal resource allocation complexity. Hence, data-driven approaches can be leveraged for the optimization of the available network resources. However, obtaining the massive amount of real-time data for training in centralized servers with the required latency and network energy consumption seems a daunting task. For alleviating this, distributed artificial intelligence methods can be leveraged, but this alone may not be adequate. Consequently, to effectively tackle this issue offline data for training through the use of less reliable analytical models that rely, for instance, on stochastic-geometry approaches, can be examined [15]. This way the amount of real-time training can be notably reduced.
## Conclusions
The idea of perpetual RISs through RF in-band energy harvesting has been introduced in this article. For its realization, it was first explained why the integrated architecture is potentially the only viable enabling architecture. Subsequently, we presented a typical EH architecture together with the time- and UC-splitting protocols for in-band EH. An indicative performance comparison between these two protocols followed, under an optimal allocation of resources for maximizing the average rate, which revealed that the UC-splitting protocol largely outperforms its time-splitting counterpart. Moreover, it was revealed that the static power consumption would most likely be the main part of the total ASIC power consumption. Finally, from a hardware, link-level, and network perspective, several challenges, together with enablers to overcome them, have been identified towards the realization of large-scale communication networks with perpetual RISs.
|
2310.05509 | Quartic rigid systems in the plane and in the Poincaré sphere | We consider the planar family of rigid systems of the form $x'=-y+xP(x,y),
y'=x+yP(x,y)$, where $P$ is any polynomial with monomials of degree one and
three. This is the simplest non-trivial family of rigid systems with no
rotatory parameters.
The family can be compactified to the Poincar\'e sphere such that the vector
field along the equator is not identically null. We study the centers, singular
points and limit cycles of that family on the plane and on the sphere. | M. J. Álvarez, J. L. Bravo, L. A. Calderón | 2023-10-09T08:21:29Z | http://arxiv.org/abs/2310.05509v1 | # Quartic rigid systems in the plane and in the Poincare sphere
###### Abstract.
We consider the planar family of rigid systems of the form \(x^{\prime}=-y+xP(x,y),y^{\prime}=x+yP(x,y),\) where \(P\) is any polynomial with monomials of degree one and three. This is the simplest non-trivial family of rigid systems with no rotatory parameters.
The family can be compactified to the Poincare sphere such that the vector field along the equator is not identically null. We study the centers, singular points and limit cycles of that family on the plane and on the sphere.
Key words and phrases: Rigid systems; Limit cycle; Planar systems; Poincaré sphere
## 1. Introduction
In this work we are going to study rigid planar polynomial differential systems. These systems are characterized by having the origin as their unique critical point, which is always monodromic. Furthermore, the solutions around the origin have constant angular velocity. These systems can be written, after a linear change of variables and a rescaling if necessary, see [5], as follows:
\[\begin{cases}x^{\prime}=-y+xF(x,y),\\ y^{\prime}=\quad x+yF(x,y),\end{cases} \tag{1.1}\]
for some analytic function \(F(x,y)\) such that \(F(0,0)=0.\) Observe that, if the origin is a center, then it is isochronous, that is, all the solutions take the same time to complete a full revolution around the origin. In this case, the center is referred to as uniformly isochronous, as the angular velocity remains constant throughout this rotational motion.
There are several factors that make the rigid family interesting. Firstly, the fact that the origin is its only critical point implies that any potential limit cycles, if they exist, have to be nested around it. Secondly, this family plays an important role in the broader problem of isochronicity, as we will explain later on.
In this work, we are going to study the center and cyclicity problems for a family of planar rigid systems. However, the main contribution of this work lies in the study of this system in the Poincaré sphere. Seen in this context, the system exhibits some very interesting properties that, to the best of our knowledge, have not been previously investigated. Understanding the system's dynamics across the whole sphere provides valuable information about the planar system in both its finite and its infinite parts. As an example, in Theorem 3.9 we prove that the system always has a periodic orbit in the sphere. This periodic orbit cannot be seen in, nor from, the finite plane, although this solution plays an important role in the organization of the dynamics of the planar system.
After its introduction by Conti in [5], rigid systems have attracted the attention of numerous researchers, see for instance [1, 11, 13] and the references therein. In [14] it was proved that any polynomial system with linear part \((-y,x)^{t}\) has an isochronous center if and only if it can be transformed, by means of a specific analytic change of type
\[(x\to x+P(y^{2}),y\to y+Q(x,y))\]
into a system of the form (1.1). Consequently, the problem of determining whether a center is isochronous passes through the understanding of rigid systems.
Observe that, when the function \(F(x,y)\) in (1.1) is a polynomial of degree \(n\), the system transformed into polar coordinates can be written as the scalar equation
\[\dot{r}=\sum_{k=1}^{n}F_{k}(\cos\theta,\sin\theta)r^{k+1}, \tag{1.2}\]
where \(F(x,y)=\sum_{k=1}^{n}F_{k}(x,y)\) and \(F_{k}\) denotes its homogeneous part of degree \(k\). The previous equation is a generalized Abel equation, and the solutions of (1.1) are in one-to-one correspondence with the positive solutions of (1.2). In the case \(n=1\), the rigid system has no limit cycles. This fact can be easily proved, as the scalar equation reduces to a Riccati equation with separable variables. However, when \(n=2\) there are examples with one limit cycle (observe that the constant term in \(F(x,y)\) has been omitted). There is a conjecture suggesting that this is the maximum number of limit cycles that the rigid system can have, see [7].
One of the properties of the rigid systems is that in some cases, such as when the function \(F(x,y)\) is an even-degree polynomial or features a constant term, there is a rotatory parameter, see for instance [8]. When this rotatory parameter exists, the birth, growth and disappearance of a potential limit cycle is, somehow, controlled.
In this work, we are going to study the simplest family of polynomial rigid systems for which none of its parameters is rotatory. Concretely, the family we are going to study is family (1.1) wherein the function \(F(x,y)\) is defined as follows:
\[F(x,y)=b_{1}x+b_{2}y+a_{1}x^{3}+a_{2}x^{2}y+a_{3}xy^{2}+a_{4}y^{3}. \tag{1.3}\]
The generalized Abel equation corresponding to the system we are interested in is
\[r^{\prime}=B(\theta)r^{2}+A(\theta)r^{4}, \tag{1.4}\]
where
\[B(\theta) =b_{1}\cos\theta+b_{2}\sin\theta,\] \[A(\theta) =a_{1}\cos^{3}\theta+a_{2}\cos^{2}\theta\sin\theta+a_{3}\cos \theta\sin^{2}\theta+a_{4}\sin^{3}\theta.\]
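As a sanity check, the passage to polar coordinates can be verified symbolically. The sketch below (assuming SymPy is available) confirms that \(\dot{\theta}=1\) and that \(\dot{r}\) agrees with the generalized Abel equation \(r^{\prime}=B(\theta)r^{2}+A(\theta)r^{4}\) above.

```python
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
b1, b2, a1, a2, a3, a4 = sp.symbols('b1 b2 a1 a2 a3 a4')

# planar rigid system x' = -y + x F, y' = x + y F in polar coordinates
x, y = r*sp.cos(th), r*sp.sin(th)
F = b1*x + b2*y + a1*x**3 + a2*x**2*y + a3*x*y**2 + a4*y**3
xdot, ydot = -y + x*F, x + y*F

rdot = sp.simplify((x*xdot + y*ydot)/r)      # from r r' = x x' + y y'
thdot = sp.simplify((x*ydot - y*xdot)/r**2)  # theta' = (x y' - y x')/r^2

B = b1*sp.cos(th) + b2*sp.sin(th)
A = (a1*sp.cos(th)**3 + a2*sp.cos(th)**2*sp.sin(th)
     + a3*sp.cos(th)*sp.sin(th)**2 + a4*sp.sin(th)**3)

assert sp.simplify(thdot - 1) == 0                 # uniform angular velocity
assert sp.simplify(rdot - (B*r**2 + A*r**4)) == 0  # Abel equation
```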
Without loss of generality, it is possible to take the parameter \(a_{4}=0.\) This can be achieved by performing a rotation of angle \(\phi\), where \(\phi\) is a real root of a specific trigonometric polynomial.
The existence of such a root is guaranteed, as the polynomial in question is cubic. Hence, from now on we will consider the family with \(a_{4}=0\), and thus the concrete family we are studying in this work is
\[\begin{cases}x^{\prime}=-y+x(b_{1}x+b_{2}y+a_{1}x^{3}+a_{2}x^{2}y+a_{3}xy^{2}),\\ y^{\prime}=\quad x+y(b_{1}x+b_{2}y+a_{1}x^{3}+a_{2}x^{2}y+a_{3}xy^{2}).\end{cases} \tag{1.5}\]
The paper is organized as follows: Section 2 is dedicated to the study of the center and cyclicity problems in the finite plane. In Section 3 we study the system
in the Poincare sphere; more specifically, we study the classification of the infinite critical points and the periodic orbits within the sphere. Finally, in Section 4 we make some conclusions and conjectures about system (1.5).
## 2. Centers and cyclicity
In the existing literature, rigid systems having a center at the origin are commonly referred to as uniform isochronous centers, as their angular velocity is constant. The center conditions for a general rigid system were given in [1] in terms of the existence of an analytic commutator. Furthermore, in the same paper, the authors proved that the rigid family with \(F(x,y)=F_{1}(x,y)+F_{m}(x,y)\), where \(F_{m}\) is a homogeneous polynomial of degree \(m\), has a center at the origin if and only if it is reversible. Moreover, in [12] the 14 different phase portraits of quartic uniform isochronous centers are given. In the next result we give the explicit conditions in terms of the parameters of the system to have a center at the origin.
**Theorem 2.1**.: _The origin of system (1.5) is a center if and only if one of the following conditions is satisfied:_
1. \(b_{1}=b_{2}=0\)_,_
2. \(3a_{1}b_{2}(b_{2}^{2}-b_{1}^{2})+b_{1}(a_{2}b_{1}^{2}+2a_{3}b_{1}b_{2}-3a_{2}b_ {2}^{2})=0\) _and_ \(b_{2}(-3a_{3}b_{1}^{2}+2a_{2}b_{1}b_{2}+a_{3}b_{2}^{2})=0\)_._
Proof.: We compute the first Lyapunov constants of the origin of system (1.5), getting:
\[l_{2} =\frac{\pi}{2}(a_{2}b_{1}-3a_{1}b_{2}-a_{3}b_{2}),\] \[l_{3} =-\frac{\pi}{2}(-a_{2}b_{1}^{3}+3a_{1}b_{1}^{2}b_{2}+3a_{3}b_{1}^{ 2}b_{2}-9a_{2}b_{1}b_{2}^{2}+23a_{1}b_{2}^{3}+7a_{3}b_{2}^{3}).\]
If we set them to zero, we get the two conditions in the statement of the theorem. It remains to prove that in both cases these conditions imply that the origin is a center.
1. In this case the function \(F(x,y)\) is homogeneous of degree 3. Note that in this case, the system is integrable, and one first integral is \[H(x,y)=\frac{-1+a_{2}x^{3}-3a_{1}x^{2}y-(2a_{1}+a_{3})y^{3}}{3\sqrt{(x^{2}+y^ {2})^{3}}}.\]
2. In this last case the system is reversible with respect to the straight line \(b_{1}x+b_{2}y=0\).
Hence, the result is proved.
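The first integral in case (1) can be checked mechanically. The following sketch (assuming SymPy is available) verifies that \(H\) is indeed constant along the solutions when \(b_{1}=b_{2}=0\).

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)
a1, a2, a3 = sp.symbols('a1 a2 a3')

# case (1), b1 = b2 = 0: F is homogeneous of degree 3
F = a1*x**3 + a2*x**2*y + a3*x*y**2
xdot, ydot = -y + x*F, x + y*F

H = (-1 + a2*x**3 - 3*a1*x**2*y - (2*a1 + a3)*y**3) / (3*sp.sqrt((x**2 + y**2)**3))

# derivative of H along the flow must vanish identically
Hdot = sp.diff(H, x)*xdot + sp.diff(H, y)*ydot
assert sp.simplify(Hdot) == 0
```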
_Remark 2.2_.: Observe that the trivial cases \(b_{1}=a_{1}=a_{3}=0\) and \(b_{2}=a_{2}=0\) are included in the second family of the previous result. These two subfamilies are reversible with respect to the straight lines \(y=0\) and \(x=0\), respectively.
The center conditions are closely related to the order of weakness of the focus and, hence, to the cyclicity of the critical point. In [1], it is proved that the maximum order of a fine focus of the rigid system with \(F(x,y)=F_{1}(x,y)+F_{m}(x,y)\) is \(\lfloor m/2\rfloor+1\), which in our case is 2. In the following result we prove that this fact leads to the existence of at least one limit cycle inside the family (1.5).
**Proposition 2.3**.: _There are systems inside the family (1.5) having at least one limit cycle._
Proof.: Consider the following system inside the family (1.5):
\[\begin{cases}x^{\prime}=-y+x\left(5x+y+\frac{1+120a_{2}\pi-82\varepsilon}{74\pi}x ^{3}+a_{2}x^{2}y+\frac{-3+10a_{2}\pi+98\varepsilon}{74\pi}xy^{2}\right),\\ y^{\prime}=\quad x+y\left(5x+y+\frac{1+120a_{2}\pi-82\varepsilon}{74\pi}x^{3}+a _{2}x^{2}y+\frac{-3+10a_{2}\pi+98\varepsilon}{74\pi}xy^{2}\right).\end{cases} \tag{2.6}\]
Doing some simple computations, one can prove that its Lyapunov constants are
\[l_{2}=\varepsilon,\quad l_{3}=1.\]
Choosing \(\varepsilon<0\) small enough, one limit cycle is born from the origin by a degenerate Hopf bifurcation.
_Remark 2.4_.: Although the order of weakness of the focus is two, no more limit cycles can be created by a Hopf bifurcation inside the family (1.5). This is because, for the family we are studying, the divergence of the system vanishes at the origin.
Concerning the number of limit cycles that system (1.5) can have, in [9] the authors studied rigid systems with \(F(x,y)=F_{0}(x,y)+F_{m}(x,y)+F_{n}(x,y)\), being \(F_{k}\) a homogeneous polynomial of degree \(k\). For low degrees of \(m,n\) they found lower bounds for the number of limit cycles. The best result they obtained for \(F_{0}(x,y)\equiv 0\) and \(m=1\) (which is the case of system (1.5)) was \(1\), which matches what we have obtained in the previous result.
As it has been mentioned in the introduction, the solutions of the family (1.5) are in one-to-one correspondence with the solution of the Abel equation (1.4). In the following result we use a specific criteria for Abel equations in order to bound the number of limit cycles for a subfamily of system (1.5).
For this result, instead of setting the parameter \(a_{4}\) equal to zero, it is more convenient to fix \(b_{1}=0\). This can be done without loss of generality by means of a rotation in \(\theta\). In the rest of the paper we will be working with \(a_{4}=0\), that is, with (1.5).
**Theorem 2.5**.: _Consider the family (1.1) with the function \(F\) being the one defined in (1.3), for which it is not restrictive to assume \(b_{1}=0,\) that is, the family_
\[\begin{cases}x^{\prime}=-y+x(b_{2}y+a_{1}x^{3}+a_{2}x^{2}y+a_{3}xy^{2}+a_{4}y^ {3}),\\ y^{\prime}=\quad x+y(b_{2}y+a_{1}x^{3}+a_{2}x^{2}y+a_{3}xy^{2}+a_{4}y^{3}). \end{cases}\]
_If \(a_{1}a_{3}\geq 0\) then the system has no limit cycles._
Proof.: We transform the system into the Abel equation (1.4) and denote
\[f(\theta) =\cos(\theta)(a_{1}\cos^{2}(\theta)+a_{3}\sin^{2}(\theta)),\] \[g(\theta) =\sin(\theta)(a_{2}\cos^{2}(\theta)+a_{4}\sin^{2}(\theta)),\] \[h(\theta) =b_{2}\sin(\theta).\]
Now, applying Theorem 2.4 of [4] we conclude.
## 3. Vector field on the Poincare sphere
In order to understand the full behaviour of a planar system, it is necessary to look at the solutions that approach or escape from infinity. To do this, one must compactify the plane. There are different compactifications in the literature, and
the most common ones are those that allow infinity to be seen as a point or as a circle. In this paper we will use the Poincare compactification, which transforms the planar system into two copies of a vector field but now defined in the sphere. These two copies are separated by the equator of the sphere, which contains the information about the dynamics at infinity. For a more detailed description of this compactification see [6].
In our case, as in all systems whose highest-degree homogeneous part is radial (see [3]), the infinity is a circle of singularities. Therefore, we have to apply a slight modification of the classical Poincare compactification in order to deal with it, see again [3].
We exclude from our study the case where \(a_{1}=a_{2}=a_{3}=0\), since in this case, as we have mentioned before, the resulting system is a Riccati equation of separable variables and it can be easily integrated.
### Poincare compactification
We consider the real plane embedded in \(\mathbb{R}^{3}\) as the tangent plane to \(\mathbb{S}^{2}\) at the north pole. Consequently, the points of the plane are of the form \((x,y,1)\). We can project each point of the plane onto the upper half sphere along the straight line between the point and the center of the sphere; that is, if \((z_{1},z_{2},z_{3})\) are the coordinates of the assigned point in the sphere, then \(x=z_{1}/z_{3}\) and \(y=z_{2}/z_{3}\), or conversely
\[z_{1}=\frac{x}{\Delta},\quad z_{2}=\frac{y}{\Delta},\quad z_{3}=\frac{1}{ \Delta},\]
where \(\Delta=\sqrt{x^{2}+y^{2}+1}\). The projection onto the southern hemisphere is the same, changing the sign of every component of the field.
Using this construction, we can project the vector field in the plane given by (1.5) over each of the hemispheres.
**Proposition 3.1**.: _The vector field (1.5) is topologically equivalent to the restriction to the northern hemisphere of the system_
\[\begin{cases}z_{1}^{\prime}&=z_{3}\left(-z_{2}z_{3}+b_{1}z_{1}^{2}z_{3}^{2}+b _{2}z_{1}z_{2}z_{3}^{2}+a_{1}z_{1}^{4}+a_{2}z_{1}^{3}z_{2}+a_{3}z_{1}^{2}z_{2} ^{2}\right)\\ z_{2}^{\prime}&=z_{3}\left(z_{1}z_{3}+b_{1}z_{1}z_{2}z_{3}^{2}+b_{2}z_{2}^{2}z _{3}^{2}+a_{1}z_{1}^{3}z_{2}+a_{2}z_{1}^{2}z_{2}^{2}+a_{3}z_{1}z_{2}^{3}\right) \\ z_{3}^{\prime}&=\left(z_{3}^{2}-1\right)\left(b_{1}z_{1}z_{3}^{2}+b_{2}z_{2}z _{3}^{2}+a_{1}z_{1}^{3}+a_{2}z_{1}^{2}z_{2}+a_{3}z_{1}z_{2}^{2}\right).\end{cases} \tag{3.7}\]
Proof.: Consider the projection of the plane \((x,y,1)\) to the Poincare sphere defined by \(z_{1}=x/\Delta\), \(z_{2}=y/\Delta\), \(z_{3}=1/\Delta\), where \(\Delta=\sqrt{x^{2}+y^{2}+1}\) and \(z_{1}^{2}+z_{2}^{2}+z_{3}^{2}=1\). Now, deriving the projection along the solutions we obtain that the projection of the vector field (1.5) to the northern hemisphere is
\[\begin{cases}z_{1}^{\prime}&=(1-z_{1}^{2})P(z_{1}/z_{3},z_{2}/z_{3})-z_{1}z_{2 }Q(z_{1}/z_{3},z_{2}/z_{3})\\ z_{2}^{\prime}&=-z_{1}z_{2}P(z_{1}/z_{3},z_{2}/z_{3})+(1-z_{2}^{2})Q(z_{1}/z_ {3},z_{2}/z_{3})\\ z_{3}^{\prime}&=-z_{3}\left(z_{1}P(z_{1}/z_{3},z_{2}/z_{3})+z_{2}Q(z_{1}/z_ {3},z_{2}/z_{3})\right).\end{cases}\]
Reparametrizing time by a factor \(z_{3}^{3}\) and using that \(z_{1}^{2}+z_{2}^{2}+z_{3}^{2}=1\), we obtain (3.7).
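A quick consistency check, sketched below with SymPy, is that the field of Proposition 3.1 is tangent to the sphere: its radial component is a polynomial multiple of \(z_{1}^{2}+z_{2}^{2}+z_{3}^{2}-1\), and hence vanishes on \(\mathbb{S}^{2}\).

```python
import sympy as sp

z1, z2, z3 = sp.symbols('z1 z2 z3')
b1, b2, a1, a2, a3 = sp.symbols('b1 b2 a1 a2 a3')

# right-hand side of system (3.7)
E1 = z3*(-z2*z3 + b1*z1**2*z3**2 + b2*z1*z2*z3**2
         + a1*z1**4 + a2*z1**3*z2 + a3*z1**2*z2**2)
E2 = z3*(z1*z3 + b1*z1*z2*z3**2 + b2*z2**2*z3**2
         + a1*z1**3*z2 + a2*z1**2*z2**2 + a3*z1*z2**3)
E3 = (z3**2 - 1)*(b1*z1*z3**2 + b2*z2*z3**2
                  + a1*z1**3 + a2*z1**2*z2 + a3*z1*z2**2)

B = b1*z1 + b2*z2
S = a1*z1**3 + a2*z1**2*z2 + a3*z1*z2**2

# the radial component factors through the sphere constraint
radial = sp.expand(z1*E1 + z2*E2 + z3*E3
                   - z3*(z1**2 + z2**2 + z3**2 - 1)*(z3**2*B + S))
assert radial == 0
```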
### Critical points at infinity
In order to study the infinite critical points, we have to use local charts. Since we have taken the parameter \(a_{4}=0\), we have a common factor \(x\) in the highest-degree part of the original system, which becomes a common factor \(z_{1}\) in the highest-degree part of the system in the sphere; therefore, the chart best suited to the study of the infinite critical points, in order to see all of them, is \(\mathcal{U}_{2}\).
Remember that the infinite critical points come in symmetric pairs. That is, once we know how many critical points there are in the chart \(\mathcal{U}_{2}\), the total number of infinite critical points of the system is twice this amount, since each point in the chart \(\mathcal{U}_{2}\) has its symmetric counterpart.
In the chart \(\mathcal{U}_{2}\), the expression of the variables is \((x,y)=(\frac{u}{v},\frac{1}{v})\) and the system, after a re-scaling of \(v^{4}\), is
\[\begin{cases}u^{\prime}&=-v^{2}(u^{2}+1),\\ v^{\prime}&=-a_{1}u^{3}-b_{1}uv^{2}-uv^{3}-a_{2}u^{2}-b_{2}v^{2}-a_{3}u.\end{cases}\]
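The chart computation can be reproduced symbolically. The sketch below (assuming SymPy is available) works directly with the planar vector field; note that from the plane a positive factor \(v^{2}\) already suffices for the time rescaling, the remaining powers of \(v\) quoted above presumably being absorbed by the compactification itself.

```python
import sympy as sp

x, y, u, v = sp.symbols('x y u v')
b1, b2, a1, a2, a3 = sp.symbols('b1 b2 a1 a2 a3')

F = b1*x + b2*y + a1*x**3 + a2*x**2*y + a3*x*y**2   # family with a4 = 0
xdot, ydot = -y + x*F, x + y*F

# chart U2: (x, y) = (u/v, 1/v), i.e. u = x/y, v = 1/y
chart = {x: u/v, y: 1/v}
udot = sp.simplify(((xdot*y - x*ydot)/y**2).subs(chart))
vdot = sp.simplify((-ydot/y**2).subs(chart))

# multiply by the positive factor v^2 (a time rescaling off v = 0)
assert sp.simplify(v**2*udot + v**2*(u**2 + 1)) == 0
assert sp.simplify(v**2*vdot
                   - (-a1*u**3 - b1*u*v**2 - u*v**3
                      - a2*u**2 - b2*v**2 - a3*u)) == 0
```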
The infinite critical points at this chart will be \((\hat{u},0)\), with \(\hat{u}\) being a root of the cubic \(g(u)=-u(a_{1}u^{2}+a_{2}u+a_{3})\). Consequently, there will be as many infinite critical points in this chart as roots of \(g(u)\).
For the rest of this section, when we refer to "simple", "double" or "triple" critical points, we will be referring only to the multiplicity of \(\hat{u}\) as a root of \(g(u)\), and not to their local structure as critical points.
Note that, if \(a_{1}=0\), \(a_{2}\neq 0\) or if \(a_{1}=a_{2}=0\), we are no longer able to see one or two of the infinite critical points in the chart \(\mathcal{U}_{2}\), respectively. If we study the chart \(\mathcal{U}_{1}\), the critical points will be \((\tilde{u},0)\) with \(\tilde{u}\) being a root of \(h(u)=-(a_{3}u^{2}+a_{2}u+a_{1})\). Observe that in these cases the critical points that cannot be seen in the chart \(\mathcal{U}_{2}\) will correspond with the root zero of \(h(u)\) in the chart \(\mathcal{U}_{1}\).
We will divide the study of the infinite critical points in two main cases: if they are all simple or if not. If \(a_{1}\neq 0\), note that, if we denote by \(D=a_{2}^{2}-4a_{1}a_{3}\) the discriminant of the quadratic factor of \(g(u)\), then the infinite critical points will be simple if and only if \(a_{3}\,D\neq 0\).
Again, if \(a_{1}=0\) the critical points that we can no longer see in the chart \(\mathcal{U}_{2}\) correspond to the root zero of \(h(u)\) in the chart \(\mathcal{U}_{1}\). The only case in which all the roots are simple in both charts is \(a_{2}\,a_{3}\neq 0\).
**Proposition 3.2**.: _Consider family (3.7). If all the infinite critical points of the system are simple, then they are cusps._
_More concretely, denoting \(D=a_{2}^{2}-4a_{1}a_{3}\), (3.7) has only simple critical points if and only if \((a_{1}^{2}+a_{2}^{2})\,a_{3}\,D\neq 0\). Moreover,_
* _If_ \(D>0\)_, there are three infinite critical points in the chart_ \(\mathcal{U}_{2}\)_, which are cusps._
* _If_ \(D<0\)_, there is only one infinite critical point in the chart_ \(\mathcal{U}_{2}\)_, which is a cusp._
Proof.: Assume first that \(a_{1}\neq 0\). Either \(D>0\) or \(D<0\). We remind that in this case the infinite critical points in the chart \(\mathcal{U}_{2}\) are \((\hat{u},0)\), with \(\hat{u}\) being a root of the cubic \(g(u)=-u(a_{1}u^{2}+a_{2}u+a_{3})\), and that \(D\) is the discriminant of the quadratic factor of \(g(u)\).
If \(D>0\), the three simple critical points are nilpotent ones. We proceed to apply the Andreev Nilpotent Theorem (see [2] or [6]). We will do the computations for \((0,0)\), and the other two critical points follow in a similar way.
First of all, we rescale and interchange the names of the variables, in order to transform the system into its nilpotent normal form. Namely, \(\tilde{u}=v,\tilde{v}=u\) and \(s=-a_{3}t.\) Remember that in this case \(a_{3}\neq 0\). Now, dropping the tildes and
denoting by a dot the derivative with respect to \(s\), the system is transformed into
\[\begin{cases}\dot{u}&=v+\frac{a_{1}v^{3}+b_{1}u^{2}v+u^{3}v+a_{2}v^{2}+b_{2}u^{2}}{a_{3}},\\ \dot{v}&=\frac{u^{2}(v^{2}+1)}{a_{3}}.\end{cases}\]
We denote \(\dot{u}=v+A(u,v),\dot{v}=B(u,v).\) If we solve the equation \(v+A(u,v)=0\), we get the function \(v=f(u)=-\frac{b_{2}}{a_{3}}u^{2}+O(u^{3})\).
Now, we can compute the functions \(F(u)=B(u,f(u))\) and
\(G(u)=\left(\frac{\partial A}{\partial u}+\frac{\partial B}{\partial v}\right)( u,f(u))\), and we get
\[\begin{cases}F(u)&=\frac{u^{2}}{a_{3}}+O(u^{3}),\\ G(u)&=\frac{2b_{2}}{a_{3}}u+O(u^{2}).\end{cases}\]
Thus, in the Andreev Nilpotent Theorem, we get that \(m=2,n=1.\) Consequently, this critical point is a cusp point.
We can proceed in the same way with the other two critical points, getting the same result: in this case, all the infinite critical points are cusps.
If \(D<0\), there exists only one infinite critical point, \((0,0)\), being also a nilpotent one. We can proceed exactly in the same way as before, getting that the infinite critical point is also a cusp.
If \(a_{1}=0\), we would have to study the two simple roots of \(g(u)\) in the chart \(\mathcal{U}_{2}\), and the simple root zero of \(h(u)\) in the chart \(\mathcal{U}_{1}\).
Following a procedure equivalent to that of the previous cases, we get that the two simple points in \(\mathcal{U}_{2}\) are nilpotent and, applying again the Andreev Nilpotent Theorem, they are cusps.
Concerning the study of the root of \(h(u)\) in the other chart, note that the case \(a_{1}=0\) is equivalent to the case \(a_{4}=0\) by swapping \(x\) and \(y\) in the original system. Consequently, the computations would be equivalent to the ones already done, and this point will be a cusp.
We focus now on the case in which we have infinite critical points in the chart \(\mathcal{U}_{2}\) with multiplicity greater than one. With a suitable change of variables, we can assume that \(u=0\) is the root of greater multiplicity of \(g(u)\), with new parameters \(\bar{b}_{1},\bar{b}_{2},\bar{a}_{1},\bar{a}_{2},\bar{a}_{3}\). Assume first that \(\bar{a}_{1}\neq 0\).
Then note that \(g(u)\) will have \(u=0\) as a double root if and only if \(\bar{a}_{3}=0\), \(\bar{a}_{2}\neq 0\), and as a triple one if and only if \(\bar{a}_{2}=\bar{a}_{3}=0\).
Assume now that \(\bar{a}_{1}=0\). The only option in which there are critical points with multiplicity greater than one is \(\bar{a}_{3}=0\), \(\bar{a}_{2}\neq 0\), when we have a double critical point in the chart \(\mathcal{U}_{2}\) and a simple one in the chart \(\mathcal{U}_{1}\). Note that if \(\bar{a}_{2}=0\), \(\bar{a}_{3}\neq 0\), the double point would be in the chart \(\mathcal{U}_{1}\) and not at the zero of chart \(\mathcal{U}_{2}\), which is not possible after our change of variables.
**Proposition 3.3**.: _Consider family (3.7) with infinite critical points with multiplicity greater than one, and after the change of variable described above. The infinite critical points of the system are either cusps or consist of two parabolic and two hyperbolic sectors._
_More concretely, if \(\bar{a}_{1}\neq 0\):_
* _If_ \(\bar{a}_{3}=0\)_,_ \(\bar{a}_{2}\neq 0\)_, the system in the chart_ \(\mathcal{U}_{2}\) _has a simple critical point, which is a cusp, and a double one that has two hyperbolic and two parabolic sectors._
_._
* _If_ \(\bar{a}_{2}=\bar{a}_{3}=0\)_, the system in the chart_ \(\mathcal{U}_{2}\) _only has a triple critical point, which has two hyperbolic and two parabolic sectors when_ \(\bar{b}_{2}\neq 0\)_, and is a cusp when_ \(\bar{b}_{2}=0\)_._
_If \(\bar{a}_{1}=0\), in which case the only possibility is \(\bar{a}_{3}=0\), \(\bar{a}_{2}\neq 0\), the system in the chart \(\mathcal{U}_{2}\) has a double critical point, which has two hyperbolic and two parabolic sectors, and a simple one that is a cusp._
Proof.: For clarity, we will organize this proof in cases and subcases. Only some of them will be presented in full detail, as the others follow in a similar way.
**Case 1. \(\bar{\boldsymbol{a}}_{1}\neq\boldsymbol{0}\).** We will divide this case depending on the multiplicity of the root zero of \(g(u)\).
**Case 1.1. \(\bar{\boldsymbol{a}}_{3}=\boldsymbol{0},\bar{\boldsymbol{a}}_{2}\neq \boldsymbol{0}\)**. This is, we have a double critical point at \((0,0)\) and another simple critical point.
In this case, following a similar reasoning as in the previous result, the simple critical point is a cusp. In order to study the double point, we make a directional blow-up, \(u=u_{1},v=u_{1}v_{1}\). After the change of time \(s=t/u_{1}\), the system is
\[\begin{cases}u_{1}^{\prime}=\frac{du_{1}}{ds}=-u_{1}(1+u_{1}^{2})v_{1}^{2},\\ v_{1}^{\prime}=-\bar{a}_{2}-\bar{a}_{1}u_{1}-\bar{b}_{2}v_{1}^{2}-\bar{b}_{1}u _{1}v_{1}^{2}+v_{1}^{3}.\end{cases}\]
We need to study the critical points on \(u_{1}=0:\)
\[v_{1}^{\prime}\Big{|}_{u_{1}=0}=-\bar{a}_{2}-\bar{b}_{2}v_{1}^{2}+v_{1}^{3}=: \hat{g}(v_{1}).\]
This cubic polynomial will have one, two or three roots depending on the sign of the discriminant \(-\bar{a}_{2}(27\bar{a}_{2}+4\bar{b}_{2}^{3}).\) It is very useful to work from this point ahead in terms of the roots of \(\hat{g}(v_{1})\) to simplify the analysis of the system after the blow up.
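The discriminant above can be confirmed directly; a short SymPy sketch:

```python
import sympy as sp

v1, a2b, b2b = sp.symbols('v1 abar2 bbar2')

# \hat{g}(v1) = -abar2 - bbar2*v1^2 + v1^3
ghat = v1**3 - b2b*v1**2 - a2b
disc = sp.discriminant(ghat, v1)
assert sp.expand(disc + a2b*(27*a2b + 4*b2b**3)) == 0
```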
Let \(\alpha_{1},\alpha_{2},\alpha_{3}\) be the roots of \(\hat{g}(v_{1})\). First, note that if \(\alpha_{2}=-\alpha_{3}\), taking into account the well-known relationships between the coefficients and roots of a cubic, necessarily \(\alpha_{2}=\alpha_{3}=0,\) implying \(\bar{a}_{2}=0\), which is not our current case. Consequently, we can assume \(\alpha_{2}\neq-\alpha_{3}\).
In this case, by the usual relationships between the roots of a cubic, it follows that \(\alpha_{1}=-\alpha_{2}\alpha_{3}(\alpha_{2}+\alpha_{3})^{-1}.\) If the cubic had a triple root, then it would be located at zero, so \(\bar{a}_{2}=\bar{b}_{2}=0\), which again is not the case we are currently studying. Consequently, the critical points are either double or simple, so two new subcases appear.
**Case 1.1.1. \(\hat{g}(\boldsymbol{v_{1}})\) has three simple roots.** In this case we can pick \(\alpha_{2}\) and \(\alpha_{3}\) as new coefficients of the system with the change of parameters
\[\bar{a}_{2}=-\alpha_{2}^{2}\alpha_{3}^{2}(\alpha_{2}+\alpha_{3})^{-1},\quad \bar{b}_{2}=(\alpha_{2}^{2}+\alpha_{2}\alpha_{3}+\alpha_{3}^{2})(\alpha_{2}+ \alpha_{3})^{-1}.\]
When studying the eigenvalues of the jacobian matrix at the three roots, we get that:
1. The eigenvalues of the jacobian matrix at \(\alpha_{2}\) are \(-\alpha_{2}^{2}\) and \(\alpha_{2}(\alpha_{2}-\alpha_{3})(\alpha_{2}+2\alpha_{3})(\alpha_{2}+\alpha_ {3})^{-1}\).
2. The eigenvalues of the jacobian matrix at \(\alpha_{3}\) are \(-\alpha_{3}^{2}\) and \(-\alpha_{3}(\alpha_{2}-\alpha_{3})(2\alpha_{2}+\alpha_{3})(\alpha_{2}+\alpha_ {3})^{-1}.\)
3. The eigenvalues of the jacobian matrix at \(-\alpha_{2}\alpha_{3}(\alpha_{2}+\alpha_{3})^{-1}\) are \(-\alpha_{2}^{2}\alpha_{3}^{2}(\alpha_{2}+\alpha_{3})^{-2}\) and \(\alpha_{2}\alpha_{3}(2\alpha_{2}+\alpha_{3})(\alpha_{2}+2\alpha_{3})(\alpha_{2}+ \alpha_{3})^{-2}.\)
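These eigenvalues can be double-checked symbolically. The sketch below (assuming SymPy is available) verifies the claim at \((0,\alpha_{2})\); the other two points are analogous.

```python
import sympy as sp

u1, v1 = sp.symbols('u1 v1')
ab1, bb1, al2, al3 = sp.symbols('abar1 bbar1 alpha2 alpha3')

# parameters written in terms of the roots alpha2, alpha3 (Case 1.1.1)
ab2 = -al2**2*al3**2/(al2 + al3)
bb2 = (al2**2 + al2*al3 + al3**2)/(al2 + al3)

U = -u1*(1 + u1**2)*v1**2
V = -ab2 - ab1*u1 - bb2*v1**2 - bb1*u1*v1**2 + v1**3

J = sp.Matrix([U, V]).jacobian([u1, v1]).subs({u1: 0, v1: al2})

assert sp.simplify(J[0, 1]) == 0  # triangular: eigenvalues lie on the diagonal
assert sp.simplify(J[0, 0] + al2**2) == 0
assert sp.simplify(J[1, 1] - al2*(al2 - al3)*(al2 + 2*al3)/(al2 + al3)) == 0
```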
If any of these eigenvalues is zero, we can conclude that two of the three roots coincide, which is not our current case. Thus every critical point has a negative
eigenvalue and, studying the signs of the remaining eigenvalues, we get that two of them must be positive and the other one negative.
Consequently, two of the critical points are saddle points and the other one an attractor node. Regardless of their location, undoing the blow up we conclude that the original double critical point has two hyperbolic sectors and two parabolic ones.
Observe that we have to make the blow up in the other direction (\(u=u_{2}v_{2},v=v_{2}\)) to ensure that there are no orbits arriving at the critical point tangent to the vertical direction. Some simple computations show that this is not the case, and no orbit arrives tangent to the vertical axis.
**Case 1.1.2. \(\boldsymbol{\hat{g}(v_{1})}\) has a double root and a simple one.** Here, carrying out a study similar to that of the previous case, the double point must be a saddle-node and the simple one a saddle point. Undoing the blow up, the original double critical point has two hyperbolic sectors and two parabolic ones, as in the previous case.
Again, no orbit arrives at the critical point tangent to the vertical axis after doing the blow up in the other direction.
**Case 1.2. \(\boldsymbol{\bar{a}_{2}=\bar{a}_{3}=0}\)**. This is, the only critical point is triple. We have to make the same directional blow up, getting the system
\[\begin{cases}u_{1}^{\prime}=\frac{du_{1}}{ds}=-u_{1}(1+u_{1}^{2})v_{1}^{2},\\ v_{1}^{\prime}=\frac{dv_{1}}{ds}=-\bar{a}_{1}u_{1}-\bar{b}_{2}v_{1}^{2}-\bar{b }_{1}u_{1}v_{1}^{2}+v_{1}^{3}.\end{cases}\]
We study the critical points on \(u_{1}=0\), this is, the roots of
\[v_{1}^{\prime}\Big{|}_{u_{1}=0}=v_{1}^{2}(-\bar{b}_{2}+v_{1}).\]
Here, we have two different subcases.
**Case 1.2.1. \(\boldsymbol{\bar{b}_{2}\neq 0}\)**. In this case, there is a double critical point corresponding to \(v_{1}=0\), and a simple one for \(v_{1}=\bar{b}_{2}\). The simple critical point is a saddle point while the double one is nilpotent. We follow the usual procedure with the Andreev normal form and conclude that the double point is a saddle-node. Undoing the blow up, again we get that the triple critical point has two hyperbolic sectors and two parabolic ones.
As with all these blow ups, we also have to do the other directional blow up, and, as previously, the computations show that no orbit arrives at the origin with vertical slope.
**Case 1.2.2. \(\boldsymbol{\bar{b}_{2}=0}\)**. In this case, there is a triple critical point corresponding to \(v_{1}=0\), making the point very degenerate. Hence, it is necessary to perform some additional blow ups in order to desingularize the critical point. After the whole process, and proceeding as in the previous cases, we conclude that the critical point is a cusp.
**Case 2. \(\boldsymbol{\bar{a}_{1}=0}\).** We recall that the only possibility in order to have multiple critical points is \(\bar{a}_{3}=0,\bar{a}_{2}\neq 0.\) Studying the double critical point in the chart \(\mathcal{U}_{2}\) in the same way as in Case 1.1.2, we get that this point has two parabolic and two hyperbolic sectors.
Finally, as we reasoned in the previous result, the computations for the simple root zero in the chart \(\mathcal{U}_{1}\) are equivalent to the ones already done, and the point is a cusp.
**Corollary 3.4**.: _The infinite critical points of family (3.7) are either cusps, or have two hyperbolic and two parabolic sectors._
### Centers in the sphere
Now, let us consider the centers of the system in the sphere.
**Theorem 3.5**.: _If (1.5) has a center at the origin, then it is a global center of the vector field (3.7) on the sphere._
Proof.: By Theorem 2.1, the origin is a center if and only if one of the following conditions is satisfied:
1. \(b_{1}=b_{2}=0\),
2. \(3a_{1}b_{2}(b_{2}^{2}-b_{1}^{2})+b_{1}(a_{2}b_{1}^{2}+2a_{3}b_{1}b_{2}-3a_{2}b_ {2}^{2})=0\) and \(b_{2}(-3a_{3}b_{1}^{2}+2a_{2}b_{1}b_{2}+a_{3}b_{2}^{2})=0\).
Let us check that both conditions are also global centers in the sphere.
1. In this case, the first integral of the planar system extends to the following first integral of the system on the sphere \[H(z_{1},z_{2},z_{3})=\frac{-z_{3}^{3}+a_{2}z_{1}^{3}-3a_{1}z_{1}^{2}z_{2}-(2a_ {1}+a_{3})z_{2}^{3}}{3\sqrt{(z_{1}^{2}+z_{2}^{2})^{3}}}.\] It can be checked simply by differentiating \(H\) along the solutions of (3.7).
2. In this case the system is reversible with respect to the meridian determined by \(b_{1}z_{1}+b_{2}z_{2}=0\).
An open question is whether the converse is true, that is, whether it is possible to have an annulus formed by a continuum of periodic solutions crossing the equator without having a global center. Although we have not been able to prove or disprove it, we conjecture that it is true, and we will assume it as a hypothesis for the rest of the paper.
_Hypothesis 3.6_.: Every annulus of periodic orbits of (3.7) is a global center.
### Geometry of the vector field on the sphere
Consider the vector field extended to the whole sphere by (3.7). In this case, the system has the two poles as the only critical points outside the equator, corresponding to the critical point at \((0,0)\) of the original system. Furthermore, the vector field on the sphere has a symmetry with respect to the origin of coordinates.
**Proposition 3.7**.: _Assume that \(t\to(z_{1}(t),z_{2}(t),z_{3}(t))\) is a solution of (3.7) for certain functions \(z_{1},z_{2},z_{3}\). Then \(t\to-(z_{1}(t),z_{2}(t),z_{3}(t))\) is also a solution._
Proof.: The proof follows by direct computation.
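The direct computation amounts to checking that the field (3.7) is odd under the antipodal map, so that \(w(t)=-(z_{1}(t),z_{2}(t),z_{3}(t))\) satisfies \(w^{\prime}=E(w)\) whenever \(z^{\prime}=E(z)\). A SymPy sketch of this check:

```python
import sympy as sp

z1, z2, z3 = sp.symbols('z1 z2 z3')
b1, b2, a1, a2, a3 = sp.symbols('b1 b2 a1 a2 a3')

E1 = z3*(-z2*z3 + b1*z1**2*z3**2 + b2*z1*z2*z3**2
         + a1*z1**4 + a2*z1**3*z2 + a3*z1**2*z2**2)
E2 = z3*(z1*z3 + b1*z1*z2*z3**2 + b2*z2**2*z3**2
         + a1*z1**3*z2 + a2*z1**2*z2**2 + a3*z1*z2**3)
E3 = (z3**2 - 1)*(b1*z1*z3**2 + b2*z2*z3**2
                  + a1*z1**3 + a2*z1**2*z2 + a3*z1*z2**2)

antipodal = {z1: -z1, z2: -z2, z3: -z3}
# the field is odd: E(-z) = -E(z), hence w(t) = -z(t) solves the same system
for E in (E1, E2, E3):
    assert sp.expand(E.subs(antipodal, simultaneous=True) + E) == 0
```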
As a consequence, both poles have the same local phase portrait. Recall that the origin of the original rigid system is always monodromic, so both poles are also monodromic, and by the symmetry, with the same stability and opposite orientation.
A second consideration is that solutions intersect the equator, \(Q\), orthogonally.
**Proposition 3.8**.: _The vector field is orthogonal to the equator at any of its regular points. Moreover, if \((z_{1},z_{2},0)\) is a point on the equator, the direction of the vector field is determined by the sign of \(a_{1}z_{1}^{3}+a_{2}z_{1}^{2}z_{2}+a_{3}z_{1}z_{2}^{2}\)._
Proof.: It suffices to note that the system (3.7) at any point on the equator is
\[\begin{cases}z_{1}^{\prime}&=0,\\ z_{2}^{\prime}&=0,\\ z_{3}^{\prime}&=-\left(a_{1}z_{1}^{3}+a_{2}z_{1}^{2}z_{2}+a_{3}z_{1}z_{2}^{2}\right).\end{cases}\]
We recall that, as seen in Section 3.2, the system on the equator either has two cusps; six cusps; two critical points with two hyperbolic and two parabolic sectors; or two cusps and two critical points with two hyperbolic and two parabolic sectors.
In the previous section we have studied a subfamily of system (1.5) which has no finite limit cycles. Nevertheless, it turns out that when the system has either two or six cusps (and no other critical points) on the equator, the system in the Poincare sphere, system (3.7), has either a periodic solution or a homoclinic or heteroclinic connection. We divide the result into two cases, corresponding to the number of cusps, two or six.
**Theorem 3.9**.: _If system (3.7) has two cusps and no other critical points on the equator, then it always has a periodic solution in the sphere, symmetric with respect to the origin and its intersection with the equator consists of two (symmetric) regular points._
Proof.: Consider the vector field (3.7) defined on the sphere and denote the equator, \(z_{3}=0\), by \(Q.\) A meridian is a great circle joining the two poles, \((0,0,1)\) and \((0,0,-1)\). Note that the vector field is transversal to the meridians except at the poles and the equator. Moreover, the solutions rotate in both hemispheres clockwise.
If the vector field has two simple critical points on the equator, then it has two changes of direction along the equator. Without loss of generality, we can assume that these changes are at \((1,0,0)\) and \((-1,0,0).\) We will use the following notation:
\[Q^{+}=\{(z_{1},z_{2},0)\in Q:\,z_{2}>0\},\quad Q^{-}=\{(z_{1},z_{2},0)\in Q:\,z_{2}<0\}.\]
**Claim. All solutions starting in \(Q^{+}\) or all solutions starting in \(Q^{-}\) intersect again \(Q\).** Assume that not all solutions starting in \(Q^{+}\) intersect \(Q\). Then there exists \(p\in Q^{+}\) such that if \(u(t)\) is the solution of (3.7) with initial condition \(u(0)=p\), then \(u(t)\not\in Q\) for \(t>0\).
Consider any point \(q\in Q^{-}\), and the solution \(v(t)\) starting at \(q\). For negative time, if \(v(t)\) does not cross the equator, then it is contained in the region \(R\) limited by \(Q\), \(u(t)\), and the meridian passing through \(q\). Moreover, its \(\alpha\)-limit set must contain a critical point, as in the interior of \(R\) there are neither critical points nor limit cycles.
The unique critical point in \(R\) is \((1,0,0)\), but if it belongs to the \(\alpha\)-limit set, then \(v(t)\) and \(Q^{-}\) limit a negatively invariant region, so \((1,0,0)\) has a nodal sector, in contradiction with Corollary 3.4. Therefore, all solutions starting in \(Q^{-}\) intersect \(Q^{+}\) and the claim follows.
Therefore, we have a map from \(Q^{+}\) to \(Q^{-}\) (or the reverse). Composing this map with the symmetry with respect to the center of the sphere, we have a map from \(Q^{+}\) to \(Q^{+}\). By Brouwer's fixed point theorem, this map has a fixed point, which gives a solution crossing \(Q\) at symmetric points. By Proposition 3.7, we conclude this solution is periodic.
The previous result has been proved under the hypothesis that the system (3.7) has only two cusps on the equator. In the following result we will prove that when there are six cusps on the equator the system also has a periodic solution, provided that there are no homoclinic or heteroclinic connections.
**Theorem 3.10**.: _If system (3.7) has six cusps on the equator and there are no homoclinic nor heteroclinic connections, then it always has a periodic solution in the sphere, symmetric with respect to the origin and its intersection with the equator consists of two or six (symmetric) regular points._
Proof.: Consider the vector field (3.7). In this case, we assume that the infinite critical points, that is, the changes of direction of the vector field at the equator \(Q\), are at clockwise-ordered points \(p_{1}\), \(p_{2}\), \(p_{3}\), \(p_{4}\), \(p_{5}\), \(p_{6}\), with \(p_{1}=(-1,0,0)\) (see Figure 2). Moreover, by Corollary 3.4, they are cusps. Note that the stable and unstable varieties of consecutive cusps must be in opposite hemispheres. Denote by \(u_{i}\) the unstable variety of \(p_{i}\) and by \(s_{i}\) the stable variety.
We divide the equator in sectors \(Q_{1}\), \(Q_{2}\), \(Q_{3}\), \(Q_{4}\), \(Q_{5}\), \(Q_{6}\), in clockwise order, where \(Q_{1}\) is limited by \(p_{1}\) and \(p_{2}\), and so on.
Note that the points \(p_{i},p_{j}\) are symmetric if \(|i-j|=3\), and the same holds for \(Q_{i},Q_{j}\). Moreover, as we have a cyclic ordering, we will work with all the indices in \(\mathbb{Z}/6\mathbb{Z}\).
Figure 1. Region \(R\) in stereographic projection.
Figure 2. Critical points and sectors in stereographic projection.
As the solutions of the vector field always rotate clockwise, we will consider the intersections of \(u_{i},s_{i}\) with \(Q\) in the first turn around the center. Note that by the directions of the vector field, \(u_{i}\) can not cut \(Q_{i}\) in its first turn and \(s_{i}\) can not cut \(Q_{i-1}\) in its first turn.
**Claim 1. There exists \(i\in\{1,\dots,6\}\) such that either \(u_{i}\) cuts the equator in the first turn in the sector symmetric to \(Q_{i}\), or \(s_{i}\) cuts the equator in the first turn in the sector symmetric to \(Q_{i-1}\).**
To simplify the exposition, we assume the directions of the vector field are those shown in Figure 2. We call the bounded hemisphere in the figure the northern one, and the unbounded one the southern.
We will divide the proof of this claim in several cases depending on the intersections of the stable and unstable varieties of the critical points at the equator. Recall that \(u_{1}\) does not intersect \(Q_{1}\) and if \(u_{1}\) intersects \(Q_{2}\), it will be its first intersection with \(Q\). Moreover, if \(u_{1}\) intersects \(Q_{4}\) in the first turn, the claim follows, so we will not consider the possibility in which the first intersection of \(u_{1}\) with \(Q\) is at \(Q_{4}\). Finally, also recall that because of the direction of the vector field, the first intersection of \(u_{1}\) with \(Q\) can not be at \(Q_{3}\) or \(Q_{5}\). Consequently, the following possibilities remain:
1. \(u_{1}\) intersects \(Q_{2}\), but then it does not intersect \(Q_{3}\) in the first turn around the center.
2. \(u_{1}\) intersects \(Q_{2}\) and \(Q_{3}\) in the first turn around the center.
3. \(u_{1}\) does not intersect \(Q_{2}\) in the first turn around the center.
**Case 1. \(u_{1}\) intersects \(Q_{2}\), but then it does not intersect \(Q_{3}\) in the first turn around the center.** The variety \(u_{1}\) is in the southern hemisphere from \(p_{1}\) to its first intersection with \(Q_{2}\), and does not intersect \(Q_{3}\) by hypothesis and \(Q_{4}\) because of the direction of the vector field, remaining in the northern hemisphere. By Proposition 3.7, \(u_{4}\) is symmetric to \(u_{1}\), so it remains in the northern hemisphere until it cuts \(Q_{5}\), and then it can not intersect \(Q_{6}\) or \(Q_{1}\), staying in the southern hemisphere (see Figure 3).
On the other hand, the variety \(s_{3}\), in its first turn, can not cut the equator in the sectors \(Q_{2}\), \(Q_{1}\), \(Q_{6}\). If it cuts the equator in \(Q_{5}\) we prove the claim. If it does not cut \(Q_{5}\), then \(u_{5}\) is bounded by \(s_{3}\), \(u_{1}\) and \(u_{4}\), so it must cut \(Q_{2}\), and the claim follows, see again Figure 3.
**Case 2. \(u_{1}\) intersects \(Q_{2}\) and then it intersects \(Q_{3}\) in the first turn around the center.** As in the previous case, the variety \(u_{1}\) is in the southern hemisphere from \(p_{1}\) to its first intersection with \(Q_{2}\), and then comes back to the southern hemisphere when it intersects \(Q_{3}\). In this situation, its symmetric variety \(u_{4}\) remains in the northern hemisphere until it intersects \(Q_{5}\) and then returns to that hemisphere at \(Q_{6}\) (see Figure 4).
On the other hand, the variety \(s_{3}\) starts in the southern hemisphere and can not intersect \(Q_{1}\) because of the position of \(u_{1}\), or \(Q_{2},Q_{6}\) because of the direction of the vector field. If it intersects \(Q_{5}\), then the claim is proved, so we will assume it does not intersect \(Q_{5}\). In this situation, we have three possibilities depending on the behaviour of \(u_{5}\) (see Figure 5). The variety \(u_{5}\) starts in the southern hemisphere and can not intersect \(Q_{5}\). There are the following possibilities:
1. If \(u_{5}\) does not intersect \(Q_{6}\), then, because of the positions of \(u_{1}\) and \(s_{3}\) it must intersect \(Q_{2}\), and the claim follows.
2. If \(u_{5}\) intersects \(Q_{6}\), and then it intersects \(Q_{1}\), then because of the position of \(u_{1}\) it must intersect \(Q_{2}\), and the claim follows.
3. If \(u_{5}\) intersects \(Q_{6}\), and then it does not intersect \(Q_{1}\), \(u_{5}\) is in the already-proven Case 1 (relabeling it as \(u_{1}\)).
**Case 3. \(u_{1}\) does not intersect \(Q_{2}\) in the first turn around the center.**
In this case, the variety \(u_{1}\) does not intersect \(Q_{2}\) or \(Q_{4}\) by hypothesis, nor \(Q_{1}\), \(Q_{3}\) or \(Q_{5}\) because of the direction of the vector field.
But then \(s_{3}\), which can not intersect \(Q_{2}\) by the direction of the vector field, must intersect \(Q_{1}\) because it is bounded by \(u_{1}\). Consequently, reversing time, \(s_{3}\) falls in Case 1 or Case 2, and the claim follows (see Figure 6).
Figure 4. Relevant cusp varieties in Case 2 in stereographic projection.
Figure 3. Relevant cusp varieties in Case 1 in stereographic projection.
We have proved Claim 1, and now we can proceed with the second part of the proof.
**Claim 2. There exists a periodic solution crossing the equator.**
Without loss of generality, reversing time if necessary, we may assume that \(u_{1}\) intersects \(Q_{4}\) in the first turn.
Assume first that \(u_{1}\) does not intersect the equator before \(Q_{4}\) and denote by \(c_{1}\) the intersection point of \(u_{1}\) and \(Q_{4}\).
We have to distinguish two cases according to whether \(s_{4}\) intersects \(Q_{1}\) or not.
If \(s_{4}\) intersects \(Q_{1}\) in a point \(c_{4}\), then any solution starting in the arc limited by \(p_{1},c_{4}\) must intersect the arc limited by \(p_{4},c_{1}\) (eventually passing through the northern hemisphere), except for the point corresponding to the intersection of \(s_{3}\) with \(Q_{1}\), but it can be extended in a continuous way to that point. See Figure 7.
Therefore, we may define a continuous map \(\phi\) from the first arc into the second, such that
\[\lim_{p\to p_{1}}\phi(p)=c_{1},\quad\lim_{p\to c_{4}}\phi(p)=p_{4}.\]
As the angle of the sector limited by \(p,\phi(p)\) is greater than \(\pi\) for \(p\) close to \(p_{1}\) and lower than \(\pi\) for \(p\) close to \(c_{4}\), there exists a point \(p\) such that the angle is \(\pi\), and we conclude by Proposition 3.7, except if \(p\) corresponds to a point in \(s_{3}\).
Figure 5. Possibilities in Case 2 regarding the behaviour of \(u_{5}\)
Figure 6. Relevant cusp varieties in Case 3 in stereographic projection.
But in that case, again by Proposition 3.7, there are two heteroclinic connections between \(p_{3}\) and \(p_{6}\).
Now, assume that \(s_{4}\) does not intersect \(Q_{1}\). Then \(u_{2}\) must intersect \(Q_{4}\) in a point \(c_{2}\). Therefore, a solution starting at a point of \(Q_{1}\) intersects \(Q_{4}\) (in the arc defined by \(c_{1},c_{2}\)). Define a map \(\phi\colon Q_{1}\to Q_{4}\) by letting \(\phi(p)\) be the first intersection of the solution starting at \(p\) with \(Q_{4}\). Composing this map with the symmetry with respect to the origin and applying Brouwer's fixed point theorem, we obtain \(p\in Q_{1}\) such that \(\phi(p)=-p\). By Proposition 3.7, we conclude.
Finally, if \(u_{1}\) intersects \(Q\) at a point before reaching \(Q_{4}\), it must be at \(Q_{2}\), but in order to intersect \(Q_{4}\) afterwards, it must intersect \(Q_{3}\) as well. Note that in this situation, there is no obstruction to repeating the previous argument in exactly the same way, which completes the proof.
**Corollary 3.11**.: _Assuming Hypothesis (3.6), there are systems of the form (3.7) with at least three limit cycles in the sphere._
Proof.: If we compactify system (2.6) through the Poincaré compactification, we get that there is at least one limit cycle in each of the two hemispheres. Now we will prove that for this system \(D=a_{2}^{2}-4a_{1}a_{3}<0\) and, applying Theorem 3.9, we will conclude that another periodic orbit exists. Using Hypothesis 3.6, this periodic orbit will be a limit cycle and hence the system will have, at least, three limit cycles.
If we compute \(D\) for system (2.6) we get
\[D=a_{2}^{2}-\frac{(-3+10a_{2}\pi+98\varepsilon)(1+120a_{2}\pi-82\varepsilon) }{74^{2}\pi^{2}}.\]
We have to prove that, for \(\varepsilon<0\) small enough, there exist values of \(a_{2}\) such that \(D<0.\) To do that, we will compute the discriminant of \(D\) and show that, for \(\varepsilon<0\) small enough, the discriminant is positive. The discriminant of \(D\) is
\[\Delta=\frac{8}{74^{2}\pi^{2}}(11+10432\varepsilon^{2}-678\varepsilon).\]
Hence, for \(\varepsilon<0\) the previous discriminant is positive, and therefore, there exist values of \(a_{2}\) such that \(D<0\) and we can apply Theorem 3.9.
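The conclusion of this step can be checked numerically. The following sketch is our own (the sample value \(\varepsilon=-0.01\) is an arbitrary small negative choice, and all function names are ours): since the displayed expression for \(D\) is exactly quadratic in \(a_{2}\), three samples recover its coefficients, and one verifies that the leading coefficient and the discriminant are both positive, so \(D<0\) strictly between the two real roots.

```python
import math

def D(a2, eps):
    # the displayed expression for D = a2^2 - 4*a1*a3 as a function of a2 and eps
    pi = math.pi
    return a2**2 - (-3 + 10*a2*pi + 98*eps) * (1 + 120*a2*pi - 82*eps) / (74**2 * pi**2)

def quadratic_coeffs(eps):
    # D is exactly quadratic in a2, so three samples recover A, B, C
    C = D(0.0, eps)
    A = (D(1.0, eps) + D(-1.0, eps)) / 2 - C
    B = (D(1.0, eps) - D(-1.0, eps)) / 2
    return A, B, C

eps = -0.01  # arbitrary small negative value
A, B, C = quadratic_coeffs(eps)
disc = B*B - 4*A*C
# positive leading coefficient and positive discriminant imply two real roots,
# with D < 0 between them; test D at the midpoint of the roots
r1 = (-B - math.sqrt(disc)) / (2*A)
r2 = (-B + math.sqrt(disc)) / (2*A)
mid = (r1 + r2) / 2
print(A > 0, disc > 0, D(mid, eps) < 0)  # True True True
```

In particular, values of \(a_{2}\) with \(D<0\) do exist for this \(\varepsilon\), as the proof requires.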
### Invariant lines
Invariant lines of the planar vector field (1.5) correspond in the compactified vector field (3.7) to either heteroclinic connections, if they intersect the equator in critical points, or to maximal circles, if not. In this subsection we will prove that the last case can not happen, and therefore there are systems inside family (3.7) having heteroclinic connections.

Figure 7. Relative positions of \(u_{1}\) and \(s_{3}\).
If \(q(x,y)=c_{1}x+c_{2}y+c_{3}\) is an invariant line, then \(c_{3}\neq 0\), since the line can not pass through the critical point at the origin, and it must satisfy the relationship
\[q_{x}(x,y)P(x,y)+q_{y}(x,y)Q(x,y)=q(x,y)K(x,y),\]
where \(x^{\prime}=P(x,y),y^{\prime}=Q(x,y)\) and for some cofactor \(K(x,y)\).
Therefore, a simple way to obtain the conditions to have an invariant line is by computing the remainder of \(q_{x}P+q_{y}Q\) divided by \(q\), considering both as polynomials in \(x\). Now, equating the remainder to zero, we get that \(c_{1}=-b_{2}c_{3}\) (induced by the assumption \(a_{4}=0\)), and
\[a_{1}=\frac{b_{2}^{2}(c_{2}-b_{1}c_{3})}{c_{3}},\quad a_{2}=-\frac{2c_{2}b_{2 }(c_{2}-b_{1}c_{3})}{c_{3}^{2}},\quad a_{3}=\frac{c_{2}^{2}(c_{2}-b_{1}c_{3}) }{c_{3}^{3}}.\]
Now, eliminating \(c_{2},c_{3}\), we obtain the conditions for (1.5) to have an invariant line
\[a_{2}b_{2}^{3}+2a_{1}\left(a_{1}+b_{1}b_{2}^{2}\right)=0,\quad a_{3}b_{2}^{6} -a_{1}\left(a_{1}+b_{1}b_{2}^{2}\right)^{2}=0. \tag{3.8}\]
Moreover, there is exactly one invariant line when the previous equations hold and its expression is given by
\[-b_{2}^{3}x+(a_{1}+b_{1}b_{2}^{2})y+b_{2}^{2}=0. \tag{3.9}\]
We recall that in the chart \(\mathcal{U}_{2}\), the infinite critical points are \((\hat{u},0)\), with \(\hat{u}\) being a root of the cubic \(g(u)=-u(a_{1}u^{2}+a_{2}u+a_{3})\).
It is immediate to check that the invariant line (3.9) in coordinates \((u,v)\) always crosses some of the infinite critical points when the conditions (3.8) are satisfied. Furthermore, under (3.8) we have \(D=a_{2}^{2}-4a_{1}a_{3}=0\), so in this case the infinite critical points through which the invariant line crosses the equator are degenerate.
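These computations can be double-checked with exact rational arithmetic. The sketch below is our own (the sample points are arbitrary nonzero rationals): substituting the parametrization of \(a_{1},a_{2},a_{3}\) obtained above back into the two equations of (3.8) and into \(D=a_{2}^{2}-4a_{1}a_{3}\) yields zero identically.

```python
from fractions import Fraction as F

def conditions(b1, b2, c2, c3):
    # parametrization of a1, a2, a3 obtained from the remainder computation
    # (with c1 = -b2*c3)
    t = c2 - b1 * c3
    a1 = b2**2 * t / c3
    a2 = -2 * c2 * b2 * t / c3**2
    a3 = c2**2 * t / c3**3
    cond1 = a2 * b2**3 + 2 * a1 * (a1 + b1 * b2**2)   # first equation of (3.8)
    cond2 = a3 * b2**6 - a1 * (a1 + b1 * b2**2)**2    # second equation of (3.8)
    D = a2**2 - 4 * a1 * a3                           # discriminant at infinity
    return cond1, cond2, D

samples = [
    (F(1), F(2), F(3), F(5)),
    (F(-2), F(7), F(1, 3), F(4)),
    (F(5, 2), F(-1), F(2), F(-3)),
]
print(all(conditions(*s) == (0, 0, 0) for s in samples))  # True
```

Since the three expressions are polynomial identities in \(b_{1},b_{2},c_{2},c_{3}\), vanishing at generic rational points is strong evidence that they vanish identically, consistent with the claim that \(D=0\) whenever (3.8) holds.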
As a conclusion, the invariant lines of the planar vector field correspond always to heteroclinic connections.
If this were not the case, _i.e._, if in some cases the invariant straight lines did not pass through a critical point, then they would constitute a periodic orbit. Hence, the results in [10] could be applied in order to know whether the existing algebraic periodic orbit (proved in Theorems 3.9 and 3.10) is a limit cycle. But, as proved in the current section, this is never the case.
## 4. Conclusions and open questions
In this paper we have studied the rigid quartic family of planar vector fields (1.1) with \(F(x,y)\) denoting the function defined in expression (1.3). Within this family, we have stated and proved the conditions that determine the existence of a center, and additionally, we have identified a subfamily characterized by the absence of limit cycles in the plane.
In our view, the main contribution of the article lies in the study of the rigid system by means of its associated system (3.7), which is defined on the Poincaré sphere. By using local charts, we have characterized the infinite critical points of the rigid system, classifying them as either cusps or as the union of two hyperbolic and two parabolic sectors.
Despite all the previous proven results, there are still some open problems that should be solved in order to have a complete understanding of the studied family.
The main question that remains open deals with the number of limit cycles of system (3.7).
On the one hand, we would need to determine the number of limit cycles that do not cross the equator. Those limit cycles would then be confined to one of the hemispheres and would also be limit cycles of the finite system (1.5). In this paper we have found examples inside the family (1.5) with at least one limit cycle. Furthermore, we have conducted several numerical experiments indicating that the maximum number of limit cycles in the finite family (1.5) is one. Thus, our first open question is the following:
_Open Question 1_.: Is one the maximum number of limit cycles of system (1.5)?
On the other hand, we have to deal with the limit cycles that intersect the equator. In Theorems 3.9 and 3.10 we have proved that when all the critical points in the equator are cusps, system (3.7) always has either a periodic orbit intersecting the equator or a heteroclinic connection. Some questions arise from this result.
The first one deals with Hypothesis 3.6. In Theorem 3.5 we have established that all finite centers (around the origin), when they exist, are global centers. However, for system (3.7), the converse question remains open. We have adopted it as Hypothesis 3.6, and it constitutes the next open question we pose:
_Open Question 2_.: If system (3.7) has an annulus of periodic orbits, is it a global center?
Note that if the answer to this question is affirmative, then this periodic orbit that intersects the equator, when it exists, will be a limit cycle of (3.7), except for the cases of global centers stated in Theorem 2.1.
Also related to the limit cycles in the sphere, in Corollary 3.11 we have proved that, if the previous open question has an affirmative answer, there are systems inside family (3.7) having at least three limit cycles. Hence, we have a lower bound for the number of limit cycles for this family. The next question, much more difficult, has to deal with the upper bound:
_Open Question 3_.: What is the maximum number of limit cycles that system (3.7) can have?
Another natural question would be to check if the same result proved in Theorems 3.9 and 3.10 is also true even when the infinite critical points are not all cusps. This turns out to be our next open question:
_Open Question 4_.: Does system (3.7) always have either a periodic orbit that intersects the equator or a heteroclinic connection?
Recall that we have proved that there exist invariant straight lines for system (3.7) and that, in all the cases they exist, they form heteroclinic connections between the infinite critical points. The fact that in these cases the heteroclinic connections are algebraic suggests that the existence of this kind of connection is very unlikely. This leads us to the following open question:
_Open Question 5_.: Do the systems inside family (3.7) having heteroclinic connections constitute a zero measure set in the set of parameters?
In summary, if open questions 2, 4 and 5 have an affirmative answer, then we would have proved that system (3.7) always has at least one limit cycle that crosses the equator, except for a zero measure set in the set of parameters.
Finally, we can present a last open question related to the nature of the periodic orbit crossing the equator. The invariant straight lines of system (3.7) studied in Section 3.5 are always heteroclinic connections, never periodic orbits. This fact leads us to ask the following question.
_Open Question 6_.: Can the periodic orbit that crosses the equator be algebraic?
## Acknowledgments
The authors are partially supported by grant number PID2020-118726GB-I00 funded by MCIN/AEI/10.13039/501100011033.

---

2306.00397 | Rafael Stekolshchik | 2023-06-01T06:58:18Z | http://arxiv.org/abs/2306.00397v3

# Decomposition of the longest element of the Weyl group using factors corresponding to the highest roots
###### Abstract.
Let \(\Phi\) be a root system of a finite Weyl group \(W\) with simple roots \(\Delta\) and corresponding simple reflections \(S\). For \(J\subseteq S\), denote by \(W_{J}\) the standard parabolic subgroup of \(W\) generated by \(J\), and by \(\Delta_{J}\subseteq\Delta\) the subset corresponding to \(J\). We show that the longest element of \(W\) is decomposed into a product of several (\(\leq|\Delta|\)) reflections corresponding to mutually orthogonal roots, each of which is either the highest root of some subset \(\Delta_{J}\subseteq\Delta\) or is a simple root. For each type of the root system, the factors of the specified decomposition are listed. The relationship between the longest elements of different types is found out. The uniqueness of the considered decomposition is shown. It turns out that subsets of highest roots, which give the decomposition of longest elements in the Weyl group, coincide with the cascade of orthogonal roots constructed by B.Kostant and A.Joseph for calculations in the universal enveloping algebra.
## 1. **Introduction**
### The longest element
Let \(\Phi\) be a root system of a finite Weyl group \(W\) with simple roots \(\Delta=\{\alpha_{1},\ldots,\alpha_{n}\}\) and the corresponding simple reflections \(S=\{s_{\alpha_{1}},\ldots,s_{\alpha_{n}}\}\), and let \(\Phi^{+}\) be the subset of positive roots in \(\Phi\).
For \(w\in W\), define \(l(w)\) (resp., \(l_{a}(w)\)) to be the minimal number of factors occurring amongst all expressions of \(w\) as a product of simple reflections \(S\) (resp., reflections). The function \(l\) (resp., \(l_{a}\)) is called the standard (resp., absolute) length function of \((W,S)\). An expression \(w=s_{1}\ldots s_{n}\) with \(s_{i}\in S\) and \(n=l(w)\) is called a _reduced expression_ for \(w\).
There exists an element \(w_{0}\in W\) sending the subset of positive roots \(\Phi^{+}\) to the subset of negative roots \(\Phi^{-}\). Such an element \(w_{0}\) is unique in \(W\). The length \(l(w_{0})\) coincides with the number of roots in \(\Phi^{+}\). No other element of \(W\) has such a large length as \(w_{0}\). So, the element \(w_{0}\) is said to be the _longest element_. The element \(w_{0}\) transforms the fundamental chamber \(C\) to the chamber \(-C\), \(w_{0}\) is an involution, and
\[l(w_{0}w)=l(w_{0})-l(w)\text{ for all }w\in W.\]
The element \(w_{0}\) is the unique element \(w\in W\) satisfying the condition
\[l(ws_{\alpha})<l(w)\text{ for all }\alpha\in\Delta.\]
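These facts can be illustrated concretely in small rank (a sketch of our own, not taken from the paper; all identifiers are ours). For \(A_{3}\) the Weyl group is the symmetric group \(S_{4}\), with the adjacent transpositions as simple reflections; a breadth-first search over the Cayley graph confirms that the element of maximal length is unique, is the order-reversing permutation, and has length \(6=|\Phi^{+}|\).

```python
from collections import deque

# Weyl group of A_3 = S_4, generated by the adjacent transpositions s_1, s_2, s_3.
n = 4
gens = [tuple(range(i)) + (i + 1, i) + tuple(range(i + 2, n)) for i in range(n - 1)]

def compose(p, q):
    # right multiplication by a generator: (p q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(n))

# Breadth-first search over the Cayley graph computes l(w) for every w in S_4.
identity = tuple(range(n))
length = {identity: 0}
queue = deque([identity])
while queue:
    w = queue.popleft()
    for s in gens:
        v = compose(w, s)
        if v not in length:
            length[v] = length[w] + 1
            queue.append(v)

w0 = max(length, key=length.get)
print(w0, length[w0])  # (3, 2, 1, 0) 6  -- the reversal, with l(w0) = |Phi^+| = 6
```

The search visits all \(24\) elements of \(S_{4}\), and exactly one of them attains the maximal length \(6\), in agreement with the uniqueness of \(w_{0}\).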
The longest element \(w_{0}\) is \(-1\) except in the following 3 cases1:
Footnote 1: In all these cases \(\varepsilon\) is the involutive automorphism of the corresponding Dynkin diagram. For the numbering of vertices, see Fig. 5.
\[\begin{array}{ll}\mbox{(1) $A_{n},\;n\geq 2$},&w_{0}=-\varepsilon,\mbox{ where }\;\varepsilon(\alpha_{i})=\alpha_{n-i+1},\\ \mbox{(2) $D_{n},\;n$ is odd},&w_{0}=-\varepsilon,\mbox{ where }\varepsilon(\alpha_{n-1})=\alpha_{n},\;\varepsilon(\alpha_{n})=\alpha_{n-1},\\ &\mbox{$\varepsilon(\alpha_{i})=\alpha_{i}$ for other $\alpha_{i}$},\\ \mbox{(3) $E_{6}$},&w_{0}=-\varepsilon,\mbox{ where }\varepsilon(\alpha_{1})=\alpha_{6},\;\varepsilon(\alpha_{6})=\alpha_{1},\\ &\mbox{$\varepsilon(\alpha_{3})=\alpha_{5},\;\varepsilon(\alpha_{5})=\alpha_{3},\;\varepsilon(\alpha_{2})=\alpha_{2},\;\varepsilon(\alpha_{4})=\alpha_{4},$}\end{array}\]
see [4, Ch.VI, §1, n°6, Cor.6], [4, Plates I–X], [9, §1.8], [2, §2.3].
### The main results
#### 1.2.1. Decomposition of the longest element
For any \(J\subseteq S\), denote by \(W_{J}\) the subgroup of \(W\) generated by \(J\), and by \(\Delta_{J}\subseteq\Delta\) the subset corresponding to \(J\). The subgroups of the form \(W_{J}\) are referred as _standard parabolic subgroups._
**Theorem 1.1**.: _The longest element \(w_{0}\in W\) is decomposed into a product of several (\(\leq n\)) reflections corresponding to mutually orthogonal roots, each of which is either the highest root of some subset \(\Delta_{J}\subseteq\Delta\) or is a simple root, see Tables 1.1 and 1.2._
_Proof._ The theorem is proved by a case-by-case analysis: see Propositions 3.3\((A_{n})\), 4.3\((B_{n})\), 5.1\((C_{n})\), 6.2\((D_{n})\), 7.1\((E_{6})\), 8.3\((E_{7})\), 9.2\((E_{8})\), 10.1\((F_{4})\), 11.2\((G_{2})\). \(\square\)
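The statement can also be confirmed by direct computation in small rank. The following sketch is our own (with the roots of \(A_{n}\) realized as \(e_{i}-e_{j}\) in \(\mathbb{R}^{n+1}\); all function names are ours): for \(n=2,3,4\) it verifies that the product of the reflections from Table 1.1 sends every positive root to a negative root, which characterizes the longest element.

```python
import itertools

def reflect(alpha, v):
    # s_alpha(v) = v - 2 (v, alpha)/(alpha, alpha) * alpha
    d = sum(x * y for x, y in zip(v, alpha))
    n2 = sum(x * x for x in alpha)
    return tuple(x - 2 * d / n2 * a for x, a in zip(v, alpha))

def root(n, i, j):
    # e_i - e_j in R^{n+1} (0-based indices)
    return tuple((1.0 if k == i else 0.0) - (1.0 if k == j else 0.0) for k in range(n + 1))

def sends_positives_to_negatives(n, factors):
    # positive roots of A_n are e_i - e_j with i < j; a root is negative iff
    # its first nonzero coordinate is negative
    for i, j in itertools.combinations(range(n + 1), 2):
        w = root(n, i, j)
        for alpha in factors:
            w = reflect(alpha, w)
        lead = next(x for x in w if abs(x) > 1e-9)
        if lead > 0:
            return False
    return True

# Factors from Table 1.1, written as roots:
#   A_2 (n=2k, k=1):   alpha_1+alpha_2 = e_1-e_3
#   A_3 (n=2k+1, k=1): alpha_2 = e_2-e_3 and alpha_1+alpha_2+alpha_3 = e_1-e_4
#   A_4 (n=2k, k=2):   alpha_2+alpha_3 = e_2-e_4 and alpha_1+...+alpha_4 = e_1-e_5
cases = {
    2: [root(2, 0, 2)],
    3: [root(3, 1, 2), root(3, 0, 3)],
    4: [root(4, 1, 3), root(4, 0, 4)],
}
print(all(sends_positives_to_negatives(n, fs) for n, fs in cases.items()))  # True
```

One also checks directly that the two factors in each of the \(A_{3}\) and \(A_{4}\) cases are orthogonal, as the theorem asserts.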
#### 1.2.2. Relationship between the longest elements
Let \(W\) be a finite Weyl group and \(\varPhi\) be the corresponding root system, in this case we will write \(W=W(\varPhi)\). Denote by \(w_{0}(W)\) the longest element in \(W\). We also use the notation \(w_{0}(\varPhi)\) instead of \(w_{0}(W)\).
**Theorem 1.2**.: _For any root system \(\varPhi\) with the Weyl group \(W=W(\varPhi)\), there exists a root subsystem \(\varPhi^{\prime}\subset\varPhi\) with the standard parabolic subgroup \(W^{\prime}=W(\varPhi^{\prime})\) in \(W\), such that the longest elements \(w_{0}(\varPhi)\) and \(w_{0}(\varPhi^{\prime})\) are related as follows:_
\[\begin{array}{ll}w_{0}(A_{n})=w_{0}(A_{n-2})s_{\alpha_{max}}&\mbox{ for }n\geq 3,\\ w_{0}(B_{n})=w_{0}(B_{n-2})s_{\alpha_{max}}s_{\alpha_{1}}&\mbox{ for }n\geq 4,\\ w_{0}(C_{n})=w_{0}(C_{n-1})s_{\alpha_{max}}&\mbox{ for }n\geq 3,\\ w_{0}(D_{n})=w_{0}(D_{n-2})s_{\alpha_{max}}s_{\alpha_{1}}&\mbox{ for }n\geq 6,\\ w_{0}(E_{6})=w_{0}(A_{5})s_{\alpha_{max}},\\ w_{0}(E_{7})=w_{0}(D_{6})s_{\alpha_{max}},\\ w_{0}(E_{8})=w_{0}(E_{7})s_{\alpha_{max}},\\ w_{0}(F_{4})=w_{0}(C_{3})s_{\alpha_{max}}.\end{array} \tag{1.1}\]
_In eq. (1.1), the reflection \(s_{\alpha_{max}}\) corresponds to the highest root \(\alpha_{max}\) in the root system \(\varPhi\)._
_Proof._ Let us consider, for example, the case \(W=W(F_{4})\), \(W^{\prime}=W(C_{3})\). By Proposition 10.1
\[w_{0}(F_{4})=s_{\alpha_{2}}s_{\alpha_{2}+2\alpha_{3}}s_{\alpha_{2}+2\alpha_{ 3}+2\alpha_{4}}s_{\alpha_{max}}.\]
The roots \(\alpha_{2}+2\alpha_{3}\) and \(\alpha_{2}+2\alpha_{3}+2\alpha_{4}\) are the highest roots for the root systems \(C_{2}\) and \(C_{3}\), see Fig. 1. Denote them as follows:
\[\alpha_{max}^{c2}:=\alpha_{2}+2\alpha_{3},\quad\alpha_{max}^{c3}:=\alpha_{2}+2 \alpha_{3}+2\alpha_{4},\]
although they differ from the definitions of \(\alpha_{max}^{c2}\) and \(\alpha_{max}^{c3}\) for the case \(C_{3}\) in Table 1.1. So,
\[w_{0}(F_{4})=s_{\alpha_{2}}s_{\alpha_{max}^{c2}}s_{\alpha_{max}^{c3}}s_{\alpha _{max}}=w_{0}(C_{3})s_{\alpha_{max}}.\]
Such a difference in definitions of the highest roots also takes place in other cases, see Remark 1.7. The definition of the highest roots of root subsystems depends on the root system in which this root is considered.
Other cases in eq. (1.1) are treated in a similar way.
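For instance, the relation \(w_{0}(C_{n})=w_{0}(C_{n-1})s_{\alpha_{max}}\) can be verified numerically in the standard realization of \(C_{3}\) (a sketch of our own, with simple roots \(\alpha_{1}=e_{1}-e_{2}\), \(\alpha_{2}=e_{2}-e_{3}\), \(\alpha_{3}=2e_{3}\) and highest root \(\alpha_{max}=2e_{1}\); all identifiers are ours). The factors are the mutually commuting reflections \(s_{2e_{1}},s_{2e_{2}},s_{2e_{3}}\): the pair \(s_{2e_{3}}s_{2e_{2}}\) is the longest element of the parabolic \(C_{2}\) subsystem on \(\{\alpha_{2},\alpha_{3}\}\), and appending \(s_{\alpha_{max}}=s_{2e_{1}}\) yields \(w_{0}(C_{3})=-\mathrm{id}\).

```python
def reflect(alpha, v):
    # s_alpha(v) = v - 2 (v, alpha)/(alpha, alpha) * alpha
    d = sum(x * y for x, y in zip(v, alpha))
    n2 = sum(x * x for x in alpha)
    return tuple(x - 2 * d / n2 * a for x, a in zip(v, alpha))

def apply_word(factors, v):
    for alpha in factors:
        v = reflect(alpha, v)
    return v

# C_3 in R^3: alpha_1 = e1-e2, alpha_2 = e2-e3, alpha_3 = 2e3, alpha_max = 2e1.
# w0 of the parabolic C_2 subsystem on {alpha_2, alpha_3} is s_{2e3} s_{2e2};
# composing with s_{alpha_max} = s_{2e1} gives w0(C_3), as in relation (1.1).
w0_c2 = [(0.0, 0.0, 2.0), (0.0, 2.0, 0.0)]
s_amax = [(2.0, 0.0, 0.0)]
basis = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
is_minus_id = all(
    apply_word(w0_c2 + s_amax, b) == tuple(-x for x in b) for b in basis
)
print(is_minus_id)  # True: w0(C_3) = w0(C_2) s_{alpha_max} = -id
```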
#### 1.2.3. Uniqueness of decomposition
**Definition 1.3**.: Let \(T=\{\tau_{1},\tau_{2},\ldots,\tau_{m}\}\) be the subset of distinct roots in \(\varPhi\). This set is said to be _max-orthogonal_ if
1. the roots of \(T\) are _mutually orthogonal_,
2. all non-simple roots in \(T\) form a _linearly ordered subset_ in \(\varPhi\).
3. each non-simple root \(\tau_{i}\) is the _highest root_ in some root subsystem \(J\subset\varPhi\) corresponding to a standard parabolic subgroup \(W_{J}\subset W\). The root \(\tau_{1}\) is the highest root in \(\varPhi\), i.e., \(\tau_{1}=\alpha_{max}\).
Note that each root subset used in decompositions of Theorem 1.1 is max-orthogonal.
**Theorem 1.4**.: _For any longest element \(w_{0}\), there exists a unique max-orthogonal subset \(\{\tau_{1},\tau_{2},\ldots,\tau_{m}\}\), where \(m\leq n\), such that_
\[w_{0}=\prod_{i=1}^{m}s_{\tau_{i}}. \tag{1.2}\]
**Definition 1.5**.: _The decomposition (1.2) corresponding to some max-orthogonal subset \(\{\tau_{1},\tau_{2},\ldots,\tau_{m}\}\) is said to be the max-orthogonal decomposition._
We will construct the max-orthogonal decomposition for each type of root system, see Tables 1.1 and 1.2.
_Proof of Theorem 1.4._ In eq. (1.1), for each relation, there exists a factor that connects two different longest elements. We refer to this factor as the _linking factor_. Denote it by \(\mathcal{L}\). There are two cases for the linking factor \(\mathcal{L}\):
1. \(\mathcal{L}=s_{\alpha_{max}}\), this holds for \(A_{n}\), \(C_{n}\), \(E_{n}\) and \(F_{4}\),
2. \(\mathcal{L}=s_{\alpha_{max}}s_{\alpha_{1}}\), this holds for \(B_{n}\) and \(D_{n}\).
Figure 1. Numbering of \(C_{3}\) vertices which is different from Bourbaki’s numbering
The uniqueness of the max-orthogonal decomposition (1.2) will be proved by induction on the length of the longest element.
(a) Let \(w_{0}(\varPhi)=w_{0}(\varPhi^{\prime})s_{\alpha_{max}}\). By definition of the max-orthogonal decomposition, the decomposition (1.2) of \(w_{0}(\varPhi)\) contains the factor \(s_{\alpha_{max}}\). By the induction hypothesis, \(w_{0}(\varPhi^{\prime})\) has a unique max-orthogonal decomposition. Then the max-orthogonal decomposition of \(w_{0}(\varPhi)\) is unique.
(b) Let \(w_{0}(\varPhi)=w_{0}(\varPhi^{\prime})s_{\alpha_{max}}s_{\alpha_{1}}\). In the root systems \(B_{n}\) and \(D_{n}\), the only simple root non-orthogonal to the highest root \(\alpha_{max}\) is \(\alpha_{2}\). Consider the root subset \(\Delta_{\alpha_{max}}\) consisting of roots orthogonal to \(\alpha_{max}\). The subset \(\Delta_{\alpha_{max}}\) consists of two mutually orthogonal subsets: \(\{\alpha_{1}\}\) and \(V(\alpha_{3},\ldots,\alpha_{n})\), the subset of roots spanned by the roots \(\{\alpha_{3},\ldots,\alpha_{n}\}\):
\[\Delta_{\alpha_{max}}=\{\alpha_{i}\in\varPhi\mid(\alpha_{i},\alpha_{max})=0\} =\{\alpha_{1}\}\oplus V(\alpha_{3},\ldots,\alpha_{n}), \tag{1.3}\]
see Fig. 2. Since \(s_{\alpha_{1}}\) is a factor in the max-orthogonal decomposition of \(w_{0}(\varPhi)\) given by (1.1) and \(\alpha_{1}\) is disconnected from \(V(\alpha_{3},\ldots,\alpha_{n})\), \(s_{\alpha_{1}}\) is a factor in any other max-orthogonal decomposition of \(w_{0}(\varPhi)\) as well. Thus, any max-orthogonal decomposition of \(w_{0}(\varPhi)\) contains \(s_{\alpha_{max}}s_{\alpha_{1}}\). Further, we apply induction as in case (a).
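The orthogonality pattern behind (1.3) is easy to confirm directly. A small sketch of our own, using the standard realization of \(B_{5}\) (simple roots \(\alpha_{i}=e_{i}-e_{i+1}\) for \(i<5\), \(\alpha_{5}=e_{5}\), highest root \(\alpha_{max}=e_{1}+e_{2}\); identifiers are ours), confirms that \(\alpha_{2}\) is the only simple root not orthogonal to \(\alpha_{max}\):

```python
def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

# B_5 realized in R^5: alpha_i = e_i - e_{i+1} for i = 1..4, alpha_5 = e_5,
# and the highest root alpha_max = e_1 + e_2.
n = 5
simple = []
for i in range(n - 1):
    a = [0] * n
    a[i], a[i + 1] = 1, -1
    simple.append(a)
simple.append([0] * (n - 1) + [1])      # alpha_5 = e_5
alpha_max = [1, 1] + [0] * (n - 2)      # e_1 + e_2

orthogonal = [i + 1 for i, a in enumerate(simple) if dot(a, alpha_max) == 0]
print(orthogonal)  # [1, 3, 4, 5]: every simple root except alpha_2
```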
The proof of uniqueness in Theorem 1.4 echoes the cascade of orthogonal roots constructed by B. Kostant and A. Joseph. It turns out that the subsets of highest roots in the max-orthogonal decomposition coincide with the cascade of orthogonal roots constructed by B. Kostant and A. Joseph for calculations in the universal enveloping algebra, see §1.4.
**Remark 1.6** (notations).: (i) We follow Bourbaki's numbering for simple roots in Dynkin diagrams, see Fig. 5.
(ii) Table 1.1 contains definitions of highest roots \(\alpha_{max}^{xi}\) (= \(\alpha_{max}^{x,i}\)), where \(x\) is one of the indices \(a,b,c,d,e,f,g\), and \(i\) is the number of vertices in the corresponding Dynkin diagram.
(iii) In Tables A.3 and A.4, for each root system \(\varPhi\), we list the highest roots of the root subsystems used in Table 1.1.
(iv) The reflection corresponding to \(\alpha_{max}\) is denoted by \(s_{\alpha_{max}}\).
(v) Sometimes we prefer to use the notation \(s_{i}\) instead of \(s_{\alpha_{i}}\), which is the same.
**Remark 1.7** (summands in \(\alpha_{max}^{xi}\)).: The indices of simple roots that appear as summands in \(\alpha_{max}^{a5}\) for the case \(E_{6}\) are different from the indices contained in \(\alpha_{max}^{a5}\) for the case \(A_{n}\), see Fig. 3.

The indices of simple roots that appear as summands in \(\alpha_{max}^{d4}\) (resp. \(\alpha_{max}^{d6}\)) for the case \(E_{7}\) are different from the indices contained in \(\alpha_{max}^{d4}\) (resp. \(\alpha_{max}^{d6}\)) for the case \(D_{n}\), see Fig. 3.
### Acknowledgments
I thank Valdemar Tsanov for pointing out papers [8], [14], as well as the connection between _max-orthogonal subsets_ from Theorem 1.4 and _Kostant's cascade_ construction, see SS1.4.
Figure 5. Bourbaki’s numbering of simple roots, [4]
\begin{tabular}{|c|c|c|} \hline Weyl group & Longest element \(w_{0}\) & \(l(w_{0})\) \\ \hline \(W(A_{n})\) & \(\prod\limits_{i=1}^{k}s_{\alpha_{max}^{a,2i}},\ \ \text{where}\ \ \alpha_{max}^{a,2i}:=\sum\limits_{j=k-i+1}^{k+i}\alpha_{j},\ \ n=2k;\) & \(\frac{n(n+1)}{2}\) \\
 & \(s_{\alpha_{k+1}}\prod\limits_{i=1}^{k}s_{\alpha_{max}^{a,2i+1}},\ \ \text{where}\ \ \alpha_{max}^{a,2i+1}:=\sum\limits_{j=k-i+1}^{k+i+1}\alpha_{j},\ \ n=2k+1.\) & \\ \hline \(W(B_{n})\) & \(s_{\alpha_{n}}\prod\limits_{i=1}^{n-2}(s_{\alpha_{i}}s_{\alpha_{max}^{b,n-i+1}})\ \ \text{for}\ n=2k+1,\) & \(n^{2}\) \\
 & where \(\alpha_{max}^{bi}:=\alpha_{n-i+1}+2\sum\limits_{j=n-i+2}^{n}\alpha_{j}.\) & \\ \hline \(W(C_{n})\) & \(s_{\alpha_{n}}\prod\limits_{i=1}^{n-1}s_{\alpha_{max}^{c,n-i+1}},\ \ \text{where}\ \ \alpha_{max}^{c,n-i+1}:=\alpha_{n}+2\sum\limits_{j=i}^{n-1}\alpha_{j}.\) & \(n^{2}\) \\ \hline \(W(D_{n})\) & \(\prod\limits_{i=1}^{n-2}(s_{\alpha_{i}}s_{\alpha_{max}^{d,n-i+1}})\ \ \text{for}\ n=2k+1,\) & \(n(n-1)\) \\
 & where \(\alpha_{max}^{di}:=\alpha_{n-i+1}+2\sum\limits_{j=n-i+2}^{n-2}\alpha_{j}+\alpha_{n-1}+\alpha_{n}\) for \(i\geq 4,\) & \\
 & \(\alpha_{max}^{d3}:=\alpha_{max}^{a3}:=\alpha_{n-2}+\alpha_{n-1}+\alpha_{n}.\) & \\ \hline \(W(E_{6})\) & \(s_{\alpha_{4}}s_{\alpha_{max}^{a3}}s_{\alpha_{max}^{a5}}s_{\alpha_{max}},\ \text{where}\) & 36 \\
 & \(\alpha_{max}^{a3}:=\alpha_{3}+\alpha_{4}+\alpha_{5},\ \ \alpha_{max}^{a5}:=\alpha_{1}+\alpha_{3}+\alpha_{4}+\alpha_{5}+\alpha_{6},\) & \\
 & \(\alpha_{max}=\alpha_{1}+2\alpha_{2}+2\alpha_{3}+3\alpha_{4}+2\alpha_{5}+\alpha_{6}.\) & \\ \hline \(W(E_{7})\) & \(s_{\alpha_{2}}s_{\alpha_{3}}s_{\alpha_{5}}s_{\alpha_{7}}s_{\alpha_{max}^{d4}}s_{\alpha_{max}^{d6}}s_{\alpha_{max}},\) & 63 \\
 & where \(\alpha_{max}^{d4}:=\alpha_{2}+\alpha_{3}+2\alpha_{4}+\alpha_{5},\) & \\
 & \(\alpha_{max}^{d6}:=\alpha_{2}+\alpha_{3}+2\alpha_{4}+2\alpha_{5}+2\alpha_{6}+\alpha_{7},\) & \\
 & \(\alpha_{max}=\alpha_{max}^{e7}:=2\alpha_{1}+2\alpha_{2}+3\alpha_{3}+4\alpha_{4}+3\alpha_{5}+2\alpha_{6}+\alpha_{7}.\) & \\ \hline \(W(E_{8})\) & \(s_{\alpha_{2}}s_{\alpha_{3}}s_{\alpha_{5}}s_{\alpha_{7}}s_{\alpha_{max}^{d4}}s_{\alpha_{max}^{d6}}s_{\alpha_{max}^{e7}}s_{\alpha_{max}},\ \text{where}\) & 120 \\
 & \(\alpha_{max}=2\alpha_{1}+3\alpha_{2}+4\alpha_{3}+6\alpha_{4}+5\alpha_{5}+4\alpha_{6}+3\alpha_{7}+2\alpha_{8}.\) & \\ \hline \(W(F_{4})\) & \(s_{\alpha_{2}}s_{\alpha_{2}+2\alpha_{3}}s_{\alpha_{2}+2\alpha_{3}+2\alpha_{4}}s_{\alpha_{max}},\) & 24 \\
 & where \(\alpha_{max}=2\alpha_{1}+3\alpha_{2}+4\alpha_{3}+2\alpha_{4}.\) & \\ \hline \(W(G_{2})\) & \(s_{\alpha_{1}+\alpha_{2}}s_{\alpha_{2}+3\alpha_{1}}\) & 6 \\ \hline \end{tabular}
Table 1.1: Longest elements in Weyl groups, see Tables A.3 and A.4.
See Remark 1.7 on definitions of \(\alpha_{max}^{a5}\), \(\alpha_{max}^{d4}\) and \(\alpha_{max}^{d6}\).
### B.Kostant and A.Joseph: The cascade of orthogonal roots
#### 1.4.1. The cascade construction
A subset \(\{\beta_{1},\beta_{2},\ldots,\beta_{r}\}\subset\Phi\) is said to be a _strongly orthogonal_ set of roots if \(\beta_{i}\pm\beta_{j}\) is not a root for all pairs \(\{i,j\}\). Denote by \(\Delta_{\lambda}\) the subset of roots orthogonal to \(\lambda\). The sequence of roots obtained by taking the highest root \(\alpha_{max}\) of \(\Phi\), then the highest roots of the components of \(\Delta_{\alpha_{max}}\), and so on (see SS1.4.2), is a strongly orthogonal set. In [10, p.5], Joseph refers to a private communication with Kostant, where the latter notes that: _"...any orthogonal set of roots determines a strongly orthogonal set (by taking sums and differences) and any maximal strongly orthogonal set is unique up to W_". In his subsequent paper [11], Joseph finds maximal such sets for each root system and uses them for calculations in the universal enveloping algebra \(U(\mathfrak{g})\), where \(\mathfrak{g}\) is a semisimple Lie algebra. Apparently, these maximal subsets were first presented in [11, Table III]. Today they are known as _Kostant's cascade_.
Using the cascade construction (see [15, SS1.1]), Kostant and Joseph, independently of each other and by very different methods, obtained a number of structure theorems for the center of \(U(\mathfrak{n})\), where \(\mathfrak{n}\) is the span of the positive root spaces in \(\mathfrak{g}\). In [8], Dimitrov and Tsanov obtained a complete list of homogeneous hypercomplex structures on the compact Lie groups using root subsets called _stems_, which were later recognized as the cascade constructed by Kostant and Joseph. The cascade is also used in [18] by Lipsman and Wolf for constructing certain elements in the symmetric algebra \(S(\mathfrak{g})\) and by
\begin{table}
\begin{tabular}{|l|c|c|c|} \hline Root system & Max-orthogonal set & \(n\) & \(l_{a}(w_{0})\) \\ \hline \(A_{n}\) & \(\alpha_{max}^{a2}<\alpha_{max}^{a4}<\cdots<\alpha_{max}^{a,2k}\) & \(n=2k\) & \(\frac{n}{2}\) \\
 & \(\alpha_{k+1}<\alpha_{max}^{a3}<\alpha_{max}^{a5}<\cdots<\alpha_{max}^{a,2k+1}\) & \(n=2k+1\) & \(\frac{n+1}{2}\) \\ \hline \(B_{n}\) & \(\alpha_{max}^{b2}<\alpha_{max}^{b4}<\cdots<\alpha_{max}^{bn}\), & \(n=2k\) & \(n\) \\
 & simple roots: \(\alpha_{1},\alpha_{3},\ldots,\alpha_{n-1}\); & & \\
 & \(\alpha_{max}^{b3}<\alpha_{max}^{b5}<\cdots<\alpha_{max}^{bn}\), & \(n=2k+1\) & \\
 & simple roots: \(\alpha_{1},\alpha_{3},\ldots,\alpha_{n-2},\alpha_{n}\). & & \\ \hline \(C_{n}\) & \(\alpha_{n}<\alpha_{max}^{c2}<\alpha_{max}^{c3}<\cdots<\alpha_{max}^{cn}\). & any & \(n\) \\ \hline \(D_{n}\) & \(\alpha_{max}^{d4}<\alpha_{max}^{d6}<\cdots<\alpha_{max}^{d,n-2}<\alpha_{max}^{dn}\), & \(n=2k\) & \(n\) \\
 & simple roots: \(\alpha_{1},\alpha_{3},\ldots,\alpha_{n-3}\) and \(\alpha_{n-1},\alpha_{n}\); & & \\
 & \(\alpha_{max}^{d3}<\alpha_{max}^{d5}<\cdots<\alpha_{max}^{d,n-2}<\alpha_{max}^{dn}\), & \(n=2k+1\) & \(n-1\) \\
 & simple roots: \(\alpha_{1},\alpha_{3},\ldots,\alpha_{n-4},\alpha_{n-2}\). & & \\ \hline \(E_{6}\) & \(\alpha_{4}<\alpha_{max}^{a3}<\alpha_{max}^{a5}<\alpha_{max}\). & 6 & 4 \\ \hline \(E_{7}\) & \(\alpha_{max}^{d4}<\alpha_{max}^{d6}<\alpha_{max}^{e7}\), & 7 & 7 \\
 & simple roots: \(\alpha_{2},\alpha_{3},\alpha_{5},\alpha_{7}\). & & \\ \hline \(E_{8}\) & \(\alpha_{max}^{d4}<\alpha_{max}^{d6}<\alpha_{max}^{e7}<\alpha_{max}^{e8}\), & 8 & 8 \\
 & simple roots: \(\alpha_{2},\alpha_{3},\alpha_{5},\alpha_{7}\). & & \\ \hline \(F_{4}\) & \(\alpha_{2}<\alpha_{max}^{c2}<\alpha_{max}^{c3}<\alpha_{max}\). & 4 & 4 \\ \hline \(G_{2}\) & \(\alpha_{1}<\alpha_{max}\). & 2 & 2 \\ \hline \end{tabular}
\end{table}
Table 1.2. Max-orthogonal set and absolute length function \(l_{a}(w_{0})\).
Panyushev in [25] for the classification of a certain class of parabolic subalgebras of \(\mathfrak{g}\).
#### 1.4.2. Example: the \(D_{8}\)-cascade
The \(D_{8}\)-cascade of orthogonal roots is constructed as follows:
(1) The first element of the cascade is the highest root \(\beta_{0}=\alpha_{max}(\varPhi)\).
(2) Consider the root subset \(\Delta_{\beta_{0}}\) consisting of roots orthogonal to \(\beta_{0}\):
\[\Delta_{\beta_{0}}=\{\alpha_{i}\in\varPhi\mid(\alpha_{i},\beta_{0})=0\}.\]
This subset splits into 2 connected components:
\[\Delta_{\beta_{0}}=\bigcup_{i_{1}}\Delta_{0i_{1}},\text{ where }\Delta_{00}=\{ \alpha_{1}\}\text{ and }\Delta_{01}=\varPhi(D_{6}).\]
see Fig. 6.
(3) For each \(\Delta_{0i_{1}}\), we recursively repeat steps (1) and (2). In other words, we find the highest root \(\beta_{0i_{1}}\) for each component \(\Delta_{0i_{1}}\) which splits into components \(\bigcup_{i_{2}}\Delta_{0i_{1}i_{2}}\) and so on. For details, see [11, SS2.2] or [18, SS3].
(4) For the construction of the \(D_{8}\)-cascade, see Fig. 6. The resulting \(D_{8}\)-cascade is as follows:
\[\beta_{0}=\alpha_{max}(D_{8}),\;\beta_{1}=\alpha_{1},\;\beta_{2}= \alpha_{max}(D_{6}),\;\beta_{3}=\alpha_{3},\] \[\beta_{4}=\alpha_{max}(D_{4}),\;\beta_{5}=\alpha_{5},\;\beta_{6}= \alpha_{7},\;\beta_{7}=\alpha_{8}.\]
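Steps (1)-(3) above are an algorithm, so the \(D_{8}\) example can be cross-checked mechanically. Below is a small Python sketch (not part of the source text; it assumes the standard realization of \(D_{n}\) roots as \(\pm e_{i}\pm e_{j}\) in \(R^{n}\)) that carries out the cascade recursion: split a subsystem into irreducible components by non-orthogonality, take the highest root of each component, and recurse on its orthogonal complement inside that component.

```python
from itertools import combinations

def d_roots(n):
    """All roots of D_n in the standard realization: +-e_i +- e_j, i < j."""
    roots = []
    for i, j in combinations(range(n), 2):
        for si in (1, -1):
            for sj in (1, -1):
                v = [0] * n
                v[i], v[j] = si, sj
                roots.append(tuple(v))
    return roots

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def height(v):
    # a generic positive functional with rapidly decreasing weights;
    # its maximizer over an irreducible subsystem is the highest root
    # for the induced choice of positive roots
    return sum(x * 2 ** (len(v) - i) for i, x in enumerate(v))

def components(roots):
    """Irreducible components of a subsystem: connectivity classes
    under non-orthogonality (orthogonal components never mix)."""
    pool, comps = set(roots), []
    while pool:
        comp, frontier = set(), {pool.pop()}
        while frontier:
            r = frontier.pop()
            comp.add(r)
            new = {s for s in pool if dot(r, s) != 0}
            pool -= new
            frontier |= new
        comps.append(comp)
    return comps

def cascade(roots):
    """Highest root of each component, then recurse on the roots
    orthogonal to it inside that component (steps (1)-(3))."""
    out = []
    for comp in components(roots):
        theta = max(comp, key=height)
        out.append(theta)
        out.extend(cascade([r for r in comp if dot(r, theta) == 0]))
    return out

d8_cascade = cascade(d_roots(8))
```

The recursion returns eight pairwise orthogonal roots, alternating the highest roots \(\alpha_{max}(D_{8-2i})\) with simple roots, in agreement with the example above.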
|
2306.10184 | The uniform supertrees with the extremal spectral radius | For a $hypergraph$ $\mathcal{G}=(V, E)$ consisting of a nonempty vertex set $V=V(\mathcal{G})$ and an edge set $E=E(\mathcal{G})$, its $adjacency$ $matrix$ $\mathcal {A}_{\mathcal{G}}=[(\mathcal {A}_{\mathcal{G}})_{ij}]$ is defined as $(\mathcal {A}_{\mathcal{G}})_{ij}=\sum_{e\in E_{ij}}\frac{1}{|e| - 1}$, where $E_{ij} = \{e \in E : i, j \in e\}$. The $spectral$ $radius$ of a hypergraph $\mathcal{G}$, denoted by $\rho(\mathcal {G})$, is the maximum modulus among all eigenvalues of $\mathcal {A}_{\mathcal{G}}$. In this paper, among all $k$-uniform ($k\geq 3$) supertrees with fixed number of vertices, the supertrees with the maximum, the second maximum and the minimum spectral radius are completely determined, respectively. | Guanglong Yu, Lin Sun | 2023-06-16T21:36:27Z | http://arxiv.org/abs/2306.10184v3

# The uniform supertrees with the extremal spectral radius
###### Abstract
For a \(hypergraph\)\({\cal G}=(V,E)\) consisting of a nonempty vertex set \(V=V({\cal G})\) and an edge set \(E=E({\cal G})\), its \(adjacency\ matrix\)\({\cal A}_{\cal G}=[({\cal A}_{\cal G})_{ij}]\) is defined as \(({\cal A}_{\cal G})_{ij}=\sum_{e\in E_{ij}}\frac{1}{|e|-1}\), where \(E_{ij}=\{e\in E:i,j\in e\}\). The \(spectral\ radius\) of a hypergraph \({\cal G}\), denoted by \(\rho({\cal G})\), is the maximum modulus among all eigenvalues of \({\cal A}_{\cal G}\). In this paper, among all \(k\)-uniform (\(k\geq 3\)) supertrees with fixed number of vertices, the supertrees with the maximum, the second maximum and the minimum spectral radius are completely determined, respectively.
**AMS Classification:** 05C50
**Keywords:** Spectral radius; supertree; hypergraph
## 1 Introduction
In the past twenty years, various connectivity hypermatrices (or tensors) have been defined and developed to explore spectral hypergraph theory [1]-[5], [9], [12]-[20], [24], [27]-[32], [35, 36]. Using different hypermatrices for general hypergraphs, many interesting spectral properties have been obtained, and many results of spectral graph theory have been extended to spectral hypergraph theory. A lot of interesting results have emerged and the spectra of hypergraphs have been further studied [6, 7, 8, 10, 21, 22, 25, 34], [39]-[42]. In [2], A. Banerjee introduced an adjacency matrix and used its spectrum to reveal some spectral and structural properties of hypergraphs. In this paper, we continue the study of the spectra of hypergraphs based on the adjacency matrix introduced in [2].
Now we recall some notations and definitions related to hypergraphs. For a set \(S\), we denote by \(|S|\) its cardinality. A \(hypergraph\)\({\cal G}=(V,E)\) consists of a nonempty vertex set \(V=V({\cal G})\) and an edge set \(E=E({\cal G})\), where each edge \(e\in E({\cal G})\) is a subset of \(V({\cal G})\) containing at least two vertices. The cardinality \(n=|V({\cal G})|\) is called the order of \({\cal G}\), and \(m=|E({\cal G})|\) is called its edge number. Denote by \(t\)-set a set of size (cardinality) \(t\). We say that a hypergraph \({\cal G}\) is \(uniform\) if every edge has the same size, and call it \(k\)-\(uniform\) if every edge has size \(k\) (i.e. every edge is a \(k\)-subset). A 2-uniform hypergraph is an ordinary graph, or graph for short.
For a hypergraph \({\cal G}\), we define \({\cal G}-e\)\(({\cal G}+e)\) to be the hypergraph obtained from \({\cal G}\) by deleting the edge \(e\in E({\cal G})\) (by adding a new edge \(e\) if \(e\notin E({\cal G})\)); for an edge subset \(B\subseteq E({\cal G})\), we define \({\cal G}-B\) to be the hypergraph obtained from \({\cal G}\) by deleting each edge \(e\in B\); for a vertex subset \(S\subseteq V({\cal G})\), we define \({\cal G}-S\) to be the hypergraph obtained from \({\cal G}\) by deleting all the vertices in \(S\) and deleting the edges incident with any vertex in \(S\). For two \(k\)-uniform hypergraphs \({\cal G}_{1}=(V_{1},E_{1})\) and \({\cal G}_{2}=(V_{2},E_{2})\), we say they are \(isomorphic\) if there is a bijection \(f\) from \(V_{1}\) to \(V_{2}\) and a bijection \(g\) from \(E_{1}\) to \(E_{2}\) that maps each edge \(\{v_{1},\)\(v_{2},\)\(\ldots\), \(v_{k}\}\) to \(\{f(v_{1}),\)\(f(v_{2}),\)\(\ldots\), \(f(v_{k})\}\).
In a hypergraph, two vertices are said to be \(adjacent\) if both of them are contained in an edge. Two edges are said to be \(adjacent\) if their intersection is not empty. An edge \(e\) is said to be \(incident\) with a vertex \(v\) if \(v\in e\). The \(neighbor\)\(set\) of vertex \(v\) in hypergraph \(\mathcal{G}\), denoted by \(N_{\mathcal{G}}(v)\), is the set of vertices adjacent to \(v\) in \(\mathcal{G}\). The \(degree\) of a vertex \(v\) in \(\mathcal{G}\), denoted by \(deg_{\mathcal{G}}(v)\) (or \(deg(v)\) for short), is the number of the edges incident with \(v\). For a hypergraph \(\mathcal{G}\), among all of its vertices, we denote by \(\Delta(\mathcal{G})\) (or \(\Delta\) for short) the \(maximal\)\(degree\), and denote by \(\delta(\mathcal{G})\) (or \(\delta\) for short) the \(minimal\)\(degree\) respectively. A vertex of degree \(1\) is called a \(pendant\)\(vertex\). A \(pendant\)\(edge\) is an edge with at most one vertex of degree more than one and other vertices in this edge being all pendant vertices.
In a hypergraph, a \(hyperpath\) of length \(q\) (\(q\)-\(hyperpath\)) is defined to be an alternating sequence of vertices and edges \(v_{1}e_{1}v_{2}e_{2}\cdots v_{q}e_{q}v_{q+1}\) such that (1) \(v_{1}\), \(v_{2}\), \(\ldots\), \(v_{q+1}\) are all distinct vertices; (2) \(e_{1}\), \(e_{2}\), \(\ldots\), \(e_{q}\) are all distinct edges; (3) \(v_{i}\), \(v_{i+1}\in e_{i}\) for \(i=1\), \(2\), \(\ldots\), \(q\); (4) \(e_{i}\cap e_{i+1}=\{v_{i+1}\}\) for \(i=1\), \(2\), \(\ldots\), \(q-1\); (5) \(e_{i}\cap e_{j}=\emptyset\) if \(|i-j|\geq 2\). If there is no discrimination, a hyperpath is sometimes written as \(e_{1}e_{2}\cdots e_{q-1}e_{q}\), \(e_{1}v_{2}e_{2}\cdots v_{q}e_{q}\) or \(v_{1}e_{1}v_{2}e_{2}\cdots v_{q}e_{q}\). A \(hypercycle\) of length \(q\) (\(q\)-\(hypercycle\)) \(v_{1}e_{1}v_{2}e_{2}\cdots v_{q-1}e_{q-1}v_{q}e_{q}v_{1}\) is obtained from a hyperpath \(v_{1}e_{1}v_{2}e_{2}\cdots v_{q-1}e_{q-1}v_{q}\) by adding a new edge \(e_{q}\) between \(v_{1}\) and \(v_{q}\), where \(e_{q}\cap e_{1}=\{v_{1}\}\), \(e_{q}\cap e_{q-1}=\{v_{q}\}\), and \(e_{q}\cap e_{j}=\emptyset\) if \(j\neq 1,q-1\) and \(|q-j|\geq 2\). The length of a hyperpath \(P\) (or a hypercycle \(C\)), denoted by \(L(P)\) (or \(L(C)\)), is the number of the edges in \(P\) (or \(C\)). A hypergraph \(\mathcal{G}\) is connected if there exists a hyperpath from \(v\) to \(u\) for all \(v,u\in V\), and \(\mathcal{G}\) is called \(acyclic\) if it contains no hypercycle.
Recall that a tree is an ordinary graph which is \(2\)-uniform, connected and acyclic. A \(supertree\) is similarly defined to be a hypergraph which is both connected and acyclic. Clearly, in a supertree, its each pair of the edges have at most one common vertex. Therefore, the edge number of a \(k\)-uniform supertree of order \(n\) is \(m=\frac{n-1}{k-1}\).
Let \(G=(V,E)\) be an ordinary graph (\(2\)-uniform). For every \(k\geq 3\), the kth power of \(G\), denoted by \(G^{k}=(V^{k},E^{k})\), is defined as the \(k\)-uniform hypergraph with the edge set \(E^{k}=\{e\cup\{v_{e_{1}},\,v_{e_{2}},\,\ldots,\,v_{e_{k-2}}\}:e\in E\}\) and the vertex set \(V^{k}=V\cup(\cup_{e\in E}\{v_{e_{1}},\,v_{e_{2}},\,\ldots,\,v_{e_{k-2}}\})\), where \(V\cap(\cup_{e\in E}\{v_{e_{1}},\,v_{e_{2}},\,\ldots,\,v_{e_{k-2}}\})=\emptyset\), \(\{v_{e_{1}},\,v_{e_{2}},\,\ldots,\,v_{e_{k-2}}\}\cap\{v_{f_{1}},\,v_{f_{2}},\, \ldots,\,v_{f_{k-2}}\}=\emptyset\) for \(e\neq f\), \(e,f\in E\). The kth power of an ordinary tree is called a \(hypertree\). Obviously, a hypertree is a supertree.
Denote by \(\mathcal{P}(n,k)\) the \(k\)-uniform hyperpath of order \(n\). A \(k\)-uniform \(superstar\) of order \(n\), denoted by \(\mathcal{S}^{*}(n,k)\) (see Fig. 1.1), is a supertree in which all edges intersect at just one common vertex. A \(k\)-uniform \(double\)\(hyperstar\) of order \(n\), denote by \(\mathcal{S}(n,k;l_{1},l_{2})\) where \(l_{1},l_{2}\geq 1\) (see Fig. 1.1), is a supertree obtained by attaching \(l_{1}\) pendant edges at vertex \(u_{1}\) of an edge \(e\), and attaching \(l_{2}\) pendant edges at the other vertex \(u_{2}\) of edge \(e\), where \(u_{1}\neq u_{2}\).
Fig. 1.1. \(\mathcal{S}^{*}(n,k)\) and \(\mathcal{S}(n,k;2,2)\)
Let \(E_{ij}=\{e\in E:i,j\in e\}\). The _adjacency matrix_\(\mathcal{A}_{\mathcal{G}}=[(\mathcal{A}_{\mathcal{G}})_{ij}]\) of a hypergraph \(\mathcal{G}\) is defined as
\[(\mathcal{A}_{\mathcal{G}})_{ij}=\sum_{e\in E_{ij}}\frac{1}{|e|-1}.\]
It is easy to see that \({\cal A}_{\cal G}\) is symmetric when the hypergraph \({\cal G}\) is undirected, and that \({\cal A}_{\cal G}\) is convenient for investigating the spectrum of a hypergraph even without the requirement of edge uniformity. The \(spectral\ radius\ \rho({\cal G})\) of a hypergraph \({\cal G}\) is defined to be the spectral radius \(\rho({\cal A}_{\cal G})\), i.e., the maximum modulus among all eigenvalues of \({\cal A}_{\cal G}\). In spectral theory of hypergraphs, the spectral radius is an index that attracts much attention due to its fine properties [4, 7, 8, 17, 20, 22, 25, 33, 35, 37, 41].
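For concreteness, the definition of \(\mathcal{A}_{\mathcal{G}}\) can be implemented directly; the Python sketch below (helper names are ours, not from the text) builds the matrix from an edge list and computes \(\rho\). For a single \(k\)-uniform edge, \(\mathcal{A}_{\mathcal{G}}=\frac{1}{k-1}(J-I)\) on \(k\) vertices, so \(\rho=1\).

```python
import numpy as np

def adjacency(n, edges):
    """(A)_{ij} = sum over edges e containing both i and j of 1/(|e| - 1)."""
    A = np.zeros((n, n))
    for e in edges:
        w = 1.0 / (len(e) - 1)
        for i in e:
            for j in e:
                if i != j:
                    A[i, j] += w
    return A

def spectral_radius(A):
    # A is real symmetric, so all eigenvalues are real
    return float(max(abs(np.linalg.eigvalsh(A))))

# a single 3-uniform edge {0,1,2}: off-diagonal entries 1/2, and rho = 1
rho_edge = spectral_radius(adjacency(3, [(0, 1, 2)]))
```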
We assume that the hypergraphs throughout this paper are simple, i.e. \(e_{i}\neq e_{j}\) if \(i\neq j\), and undirected. In this paper, among all \(k\)-uniform (\(k\geq 3\)) supertrees with a fixed number of vertices, the supertrees with the maximum, the second maximum and the minimum spectral radius are completely determined respectively, yielding the following results:
**Theorem 1.1**: _Let \({\cal G}\) be a \(k\)-uniform (\(k\geq 3\)) supertree of order \(n\). Then \(\rho({\cal G})\leq\rho({\cal S}^{*}(n,k))\) with equality if and only if \({\cal G}\cong{\cal S}^{*}(n,k)\)._
**Theorem 1.2**: _Let \({\cal G}\) be a \(k\)-uniform (\(k\geq 3\)) supertree of order \(n\) and with \(m({\cal G})\geq 3\) satisfying that \({\cal G}\not\cong{\cal S}^{*}(n,k)\). Then \(\rho({\cal G})\leq\rho({\cal S}(n,k;\frac{n-1}{k-1}-2,1))\) with equality if and only if \({\cal G}\cong{\cal S}(n,k;\frac{n-1}{k-1}-2,1)\)._
**Theorem 1.3**: _Let \({\cal G}\) be a \(k\)-uniform (\(k\geq 3\)) supertree of order \(n\). Then \(\rho({\cal P}(n,k))\leq\rho({\cal G})\) with equality if and only if \({\cal G}\cong{\cal P}(n,k)\)._
**Corollary 1.4**: _Suppose \(T^{k}\) (\(k\geq 3\)) of order \(n\) is the kth power of ordinary tree \(T\). Then_
_(1) \(\rho(T^{k})\leq\rho({\cal S}^{*}(n,k))\) with equality if and only if \(T^{k}\cong{\cal S}^{*}(n,k)\)._
_(2) \(\rho({\cal P}(n,k))\leq\rho(T^{k})\) with equality if and only if \(T^{k}\cong{\cal P}(n,k)\)._
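The orderings in Theorems 1.1-1.3 can be illustrated numerically on a small instance. The sketch below (our own helper code, with \(k=3\) and \(m=4\) edges, hence \(n=9\)) compares \(\mathcal{S}^{*}(9,3)\), \(\mathcal{S}(9,3;2,1)\) and \(\mathcal{P}(9,3)\). For the superstar, the eigenvalue equations at the center and at a leaf give \(2\rho^{2}-\rho-4=0\), i.e. \(\rho(\mathcal{S}^{*}(9,3))=(1+\sqrt{33})/4\).

```python
import numpy as np

def rho(n, edges):
    """Spectral radius of the adjacency matrix defined in Section 1."""
    A = np.zeros((n, n))
    for e in edges:
        w = 1.0 / (len(e) - 1)
        for i in e:
            for j in e:
                if i != j:
                    A[i, j] += w
    return float(max(abs(np.linalg.eigvalsh(A))))

n = 9  # k = 3, m = 4, so n = m(k-1) + 1
star = [(0, 1, 2), (0, 3, 4), (0, 5, 6), (0, 7, 8)]    # S*(9,3)
dstar = [(0, 1, 2), (0, 3, 4), (0, 5, 6), (2, 7, 8)]   # S(9,3;2,1)
path = [(0, 1, 2), (2, 3, 4), (4, 5, 6), (6, 7, 8)]    # P(9,3)

r_star, r_dstar, r_path = rho(n, star), rho(n, dstar), rho(n, path)
```

By Theorems 1.1-1.3, one should observe \(\rho(\mathcal{P}(9,3))<\rho(\mathcal{S}(9,3;2,1))<\rho(\mathcal{S}^{*}(9,3))\).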
The layout of this paper is as follows: Section 2 introduces some basic knowledge and working lemmas; Section 3 presents our results.
## 2 Preliminary
For the arguments that follow, we need some preparations. For a hypergraph \({\cal G}\) with vertex set \(\{v_{1},\,v_{2},\,\ldots,\,v_{n}\}\), a vector on \({\cal G}\) is a vector \(X=(x_{v_{1}},x_{v_{2}},\ldots,x_{v_{n}})^{T}\in R^{n}\) whose entry \(x_{v_{i}}\) is mapped to vertex \(v_{i}\) for \(1\leq i\leq n\).
From [26], by the famous Perron-Frobenius theorem, for \({\cal A}_{\cal G}\) of a connected uniform hypergraph \({\cal G}\) of order \(n\), we know that there is a unique positive eigenvector \(X=(x_{v_{1}},\,x_{v_{2}},\,\ldots,\,x_{v_{n}})^{T}\in R^{n}_{++}\) (\(R^{n}_{++}\) denotes the set of positive real vectors of dimension \(n\)) corresponding to \(\rho({\cal G})\), where \(\sum_{i=1}^{n}x_{v_{i}}^{2}=1\) and each entry \(x_{v_{i}}\) is mapped to the vertex \(v_{i}\) for \(1\leq i\leq n\). We call such an eigenvector \(X\) the \(principal\ eigenvector\) of \({\cal G}\).
Let \(A\) be an irreducible nonnegative \(n\times n\) real matrix (with every entry being real number) with spectral radius \(\rho(A)\). The following extremal representation (Rayleigh quotient) will be useful:
\[\rho(A)=\max_{X\in R^{n},X\neq 0}\frac{X^{T}AX}{X^{T}X},\]
and if a vector \(X\) satisfies that \(\frac{X^{T}AX}{X^{T}X}=\rho(A)\), then \(AX=\rho(A)X\).
**Lemma 2.1**: _Let \(A\) be an irreducible nonnegative square real matrix with order \(n\) and spectral radius \(\rho\), \(Y\in(R^{n}_{+}\setminus\{0\}^{n})\) be a nonnegative vector (\(R^{n}_{+}\) means the set of nonnegative real vectors of dimension \(n\), \(\{0\}^{n}=\{(0,0,\ldots,0)^{T}\}\)). If \(AY\geq\rho Y\), then \(AY=\rho Y\)._
**Proof.** By the extremal representation above, \(\frac{Y^{T}AY}{Y^{T}Y}\leq\rho\). On the other hand, \(AY\geq\rho Y\) and \(Y\geq 0\) give \(Y^{T}AY\geq\rho Y^{T}Y\). It follows that \(\frac{Y^{T}AY}{Y^{T}Y}=\rho\), and therefore \(AY=\rho Y\). Thus the result follows. This completes the proof.
**Lemma 2.2**: **[**38**]** _Let \(A\) be an irreducible nonnegative square symmetric real matrix with order \(n\) and spectral radius \(\rho\), \(Y\in(R_{+}^{n}\setminus\{0\}^{n})\) be a nonnegative vector. If there exists \(r\in R_{+}\) such that \(AY\leq rY\), then \(\rho\leq r\). Similarly, if there exists \(r\in R_{+}\) such that \(AY\geq rY\), then \(\rho\geq r\)._
## 3 Main results
Let \(X\) be an eigenvector of a connected \(k\)-uniform hypergraph \({\cal G}\). For simplicity, we let \(x_{e}=\sum_{i<j,v_{i},v_{j}\in e}x_{v_{i}}x_{v_{j}}\) for an edge \(e=\{v_{1},\,v_{2},\,\ldots,\,v_{k}\}\).
**Lemma 3.1**: _Let \(e_{1}=\{v_{1,1},\,v_{1,2},\,\ldots,\,v_{1,k}\}\), \(e_{2}=\{v_{2,1},\,v_{2,2},\,\ldots,\,v_{2,k}\}\), \(\ldots\), \(e_{j}=\{v_{j,1},\,v_{j,2},\,\ldots,\,v_{j,k}\}\) be some edges in a connected \(k\)-uniform hypergraph \({\cal G}\); \(v_{u,1}\), \(v_{u,2}\), \(\ldots\), \(v_{u,t}\) be vertices in \({\cal G}\) that \(t<k\). For \(1\leq i\leq j\), \(\{v_{u,1},v_{u,2},\ldots,v_{u,t}\}\nsubset
Combining Lemma 3.2 with hypergraphs, we naturally get the following corollary.
**Corollary 3.3**: _For a connected hypergraph \(\mathcal{G}\), we have \(\delta\leq\rho(\mathcal{G})\leq\Delta\) with either one equality if and only if \(\mathcal{G}\) is regular, where \(\delta\) is the minimum degree, \(\Delta\) is the maximum degree._
Using Lemma 3.2, we can get an improvement for Lemma 2.2.
**Lemma 3.4**: _Let \(A\) be an irreducible nonnegative square symmetric real matrix with order \(n\) and spectral radius \(\rho\), \(y\in R_{++}^{n}\) be a positive vector. If there exists \(r\in R_{+}\) such that \(Ay\leq ry\), then \(\rho\leq r\) with equality if and only if \(Ay=ry\). Similarly, if there exists \(r\in R_{+}\) such that \(Ay\geq ry\), then \(\rho\geq r\) with equality if and only if \(Ay=ry\)._
**Proof.** Using Lemma 2.2 gets that \(\rho\leq r\) if \(Ay\leq ry\); \(\rho\geq r\) if \(Ay\geq ry\). Next we prove the conclusion for \(\rho=r\).
We first prove that, under the condition \(Ay\leq ry\), \(\rho=r\) if and only if \(Ay=ry\). Suppose \(y=(y_{1},y_{2},\ldots,y_{n})^{T}\) and let
\[B=\left(\begin{array}{cccc}\frac{1}{y_{1}}&0&\cdots&0\\ 0&\frac{1}{y_{2}}&\cdots&0\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\cdots&\frac{1}{y_{n}}\end{array}\right)A\left(\begin{array}{cccc}y_{1}&0&\cdots&0\\ 0&y_{2}&\cdots&0\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\cdots&y_{n}\end{array}\right).\]
Denote by \(\rho(B)\) the spectral radius of \(B\). Note that the eigenvalues of \(B\) are the same as the eigenvalues of \(A\), so \(\rho=r\) means \(\rho(B)=r\). Note that \(Ay\leq ry\) means that every row sum of \(B\) is at most \(r\); by Lemma 3.2, \(\rho(B)=r\) then forces all row sums of \(B\) to equal \(r\), which implies that \(Ay=ry\). As a result, under the condition \(Ay\leq ry\), if \(\rho=r\), then \(Ay=ry\). Conversely, if \(Ay=ry\), then all row sums of \(B\) equal \(r\), and then \(\rho(B)=r=\rho\).
In the same way, we get that \(\rho=r\) if and only if \(Ay=ry\) under the condition that \(Ay\geq ry\). This completes the proof. \(\Box\)
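The diagonal similarity \(B=D^{-1}AD\) used in this proof can be sanity-checked numerically: \(B\) has the same spectrum as \(A\), and its \(i\)-th row sum equals \((Ay)_{i}/y_{i}\). A short sketch on a random instance (names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((5, 5))
A = A + A.T                     # symmetric with positive entries, hence irreducible
y = rng.random(5) + 0.5         # a positive vector

D = np.diag(y)
B = np.linalg.inv(D) @ A @ D    # the similarity transform from the proof
row_sums = B.sum(axis=1)        # i-th row sum equals (A y)_i / y_i
```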
**Lemma 3.5**: _(1) Suppose \(c>0\), \(d>0\), \(a-c>0\), \(b-d>0\). If \(\frac{a}{b}\geq\frac{c}{d}\), then \(\frac{a-c}{b-d}\geq\frac{a}{b}\) with equality if and only if \(\frac{a}{b}=\frac{c}{d}\). Moreover, if \(\frac{a}{b}>\frac{c}{d}\), then \(\frac{a-c}{b-d}>\frac{a}{b}\)._
_(2) Suppose \(c>0\), \(d>0\), \(a-c>0\), \(b-d>0\). If \(\frac{a}{b}\geq\frac{c}{d}\), then \(\frac{a+c}{b+d}\leq\frac{a}{b}\) with equality if and only if \(\frac{a}{b}=\frac{c}{d}\). Moreover, if \(\frac{a}{b}>\frac{c}{d}\), then \(\frac{a+c}{b+d}<\frac{a}{b}\)._
_(3) Suppose \(\frac{a}{b}\geq 1\), \(b>c>0\). Then \(\frac{a-c}{b-c}\geq\frac{a}{b}\)._
**Proof.** (1) From \(\frac{a}{b}\geq\frac{c}{d}\), it follows that \(ab-bc\geq ab-ad\), which yields \(\frac{a-c}{b-d}\geq\frac{a}{b}\). In the same way, we get that \(\frac{a-c}{b-d}>\frac{a}{b}\) if \(\frac{a}{b}>\frac{c}{d}\). Then (1) follows.
(2) is proved as (1). (3) is a corollary following from (1). This completes the proof. \(\Box\)
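The inequalities in parts (1) and (2) are elementary mediant-type facts; a randomized spot-check (our own test harness, not from the text) is below.

```python
import random

# draw c, d > 0 and then a > c, b > d; keep the draws satisfying
# a/b >= c/d and test both conclusions of Lemma 3.5(1)-(2),
# with a tiny tolerance for floating-point noise
random.seed(1)
checks = []
for _ in range(1000):
    c = random.uniform(0.1, 5.0)
    d = random.uniform(0.1, 5.0)
    a = c + random.uniform(0.1, 5.0)
    b = d + random.uniform(0.1, 5.0)
    if a / b >= c / d:
        checks.append((a - c) / (b - d) >= a / b - 1e-12)
        checks.append((a + c) / (b + d) <= a / b + 1e-12)
all_hold = all(checks)
```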
**Lemma 3.6**: _Let \(\mathcal{G}\) be a hypergraph with spectral radius \(\rho\), \(e_{0}\), \(e_{1}\), \(e_{2}\) be three edges in \(\mathcal{G}\) with \(e_{0}=\{v_{1}\), \(v_{2}\), \(\ldots\), \(v_{k-1}\), \(v_{k}\}\), satisfying that \(deg_{\mathcal{G}}(v_{2})=deg_{\mathcal{G}}(v_{3})=\cdots deg_{\mathcal{G}}(v_{k -1})=1\) (\(k\geq 3\)), \(e_{1}\cap e_{0}=\{v_{1}\}\), \(e_{2}\cap e_{0}=\{v_{k}\}\) (see Fig. 3.1.). Let \(X\) be the principal eigenvector of hypergraph \(\mathcal{G}\). Then \(x_{v_{2}}=x_{v_{3}}=\cdots=x_{v_{k-1}}=\frac{x_{v_{1}}+x_{v_{k}}}{(k-1)\rho-(k -3)}<\min\{x_{v_{1}},x_{v_{k}}\}\)._
**Proof.** For \(2\leq i\leq k-1\), we prove \(x_{v_{i}}<\min\{x_{v_{1}},x_{v_{k}}\}\) by contradiction. Suppose that \(\min\{x_{v_{1}},x_{v_{k}}\}=x_{v_{1}}\), and \(x_{v_{z}}\geq\min\{x_{v_{1}},x_{v_{k}}\}\) for some \(2\leq z\leq k-1\). Let \(e_{1}^{{}^{\prime}}=(e_{1}\setminus\{v_{1}\})\cup\{v_{z}\}\) and \(\mathcal{G}_{1}=\mathcal{G}-e_{1}+e_{1}^{{}^{\prime}}\). Using Lemma 3.1, we get \(\rho(\mathcal{G}_{1})>\rho(\mathcal{G})\). But this contradicts \(\rho(\mathcal{G}_{1})=\rho(\mathcal{G})\), which holds because \(\mathcal{G}_{1}\cong\mathcal{G}\). As a result, for \(2\leq i\leq k-1\), it follows that \(x_{v_{i}}<\min\{x_{v_{1}},x_{v_{k}}\}\).
Note that \(\rho x_{v_{2}}=\frac{1}{k-1}(x_{v_{1}}+x_{v_{3}}+\sum_{i=4}^{k}x_{v_{i}})\) and \(\rho x_{v_{3}}=\frac{1}{k-1}(x_{v_{1}}+x_{v_{2}}+\sum_{i=4}^{k}x_{v_{i}})\). It follows that \((\rho+\frac{1}{k-1})(x_{v_{2}}-x_{v_{3}})=0\). Note that \(\rho>1\) by Corollary 3.3. Then we get \(x_{v_{2}}=x_{v_{3}}\). Proceeding like this, we get that \(x_{v_{2}}=x_{v_{3}}=\cdots=x_{v_{k-1}}\). Thus from \(\rho x_{v_{2}}=\frac{1}{k-1}((k-3)x_{v_{3}}+x_{v_{1}}+x_{v_{k}})\), it follows that \(x_{v_{2}}=\frac{x_{v_{1}}+x_{v_{k}}}{(k-1)\rho-(k-3)}\). Thus the result follows. This completes the proof. \(\Box\)
Similar to Lemma 3.6, we get the following Lemma 3.7.
**Lemma 3.7**: _Let \(\mathcal{G}\) be a hypergraph with spectral radius \(\rho\), \(e=\{u,\)\(v_{1}\), \(v_{2}\), \(\ldots\), \(v_{k-1}\}\) be a pendant edge in \(\mathcal{G}\) (\(k\geq 2\)), where \(deg_{\mathcal{G}}(u)\geq 2\). Then in the principal eigenvector \(X\) of \(\mathcal{G}\), \(x_{v_{1}}=x_{v_{2}}=\cdots=x_{v_{k-1}}=\frac{x_{u}}{(k-1)\rho-(k-2)}<x_{u}\)._
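Lemma 3.7 is easy to check numerically. For the superstar \(\mathcal{S}^{*}(7,3)\) with center \(u\), the eigenvalue equations at the center and at a pendant vertex give \(\rho=3/2\) exactly, and every pendant entry of the principal eigenvector equals \(x_{u}/((k-1)\rho-(k-2))=x_{u}/2\); the sketch below (helper code ours) confirms this.

```python
import numpy as np

# superstar S*(7,3): three 3-uniform edges through the center vertex 0
n, k = 7, 3
edges = [(0, 1, 2), (0, 3, 4), (0, 5, 6)]
A = np.zeros((n, n))
for e in edges:
    for i in e:
        for j in e:
            if i != j:
                A[i, j] += 1.0 / (k - 1)

vals, vecs = np.linalg.eigh(A)   # eigenvalues in ascending order
rho = vals[-1]                   # = spectral radius (A is nonnegative symmetric)
x = np.abs(vecs[:, -1])          # principal eigenvector, made positive

# pendant entries predicted by the formula x_u / ((k-1)*rho - (k-2))
predicted = x[0] / ((k - 1) * rho - (k - 2))
```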
Fig. 3.2. \(\mathcal{G}_{0}\), \(\mathcal{G}_{1}\), \(\mathcal{G}_{2}\)
**Lemma 3.8**: _Suppose \(\mathcal{G}\) is a connected hypergraph with spectral radius \(\rho\) and principal eigenvector \(X\). \(e_{1}\), \(e_{2}\), \(e_{3}\), \(e_{4}\) are edges in \(\mathcal{G}\), where \(|e_{1}|,|e_{2}|,|e_{3}|,|e_{4}|\geq 3\), \(e_{1}\cap e_{2}=\{v_{1}\}\), \(e_{1}\cap e_{4}=\{v_{2}\}\), \(e_{2}\cap e_{3}=\{v_{3}\}\), \(deg_{\mathcal{G}}(v_{1})=deg_{\mathcal{G}}(v_{2})=deg_{\mathcal{G}}(v_{3})=2\), \(deg_{\mathcal{G}}(v)=1\) for \(v\in(e_{1}\cup e_{2})\setminus\{v_{1},v_{2},v_{3}\}\)._
_(1) Let \(e^{{}^{\prime}}\subset(e_{1}\cup e_{2})\) satisfy that \(\{v_{2},v_{3}\}\subseteq e^{{}^{\prime}}\), \(e^{{}^{\prime}}\notin E(\mathcal{G})\). Let \(\mathcal{G}_{0}=\mathcal{G}-e_{1}-e_{2}+e^{{}^{\prime}}\) and \(t=|e^{{}^{\prime}}|\) (see Fig. 3.2)._
_(1.1) If \(t\geq\max\{|e_{1}|,|e_{2}|\}\), \(x_{v_{1}}\geq x_{v_{2}}\), \(x_{v_{1}}\geq x_{v_{3}}\), then \(\rho(\mathcal{G}_{0})\leq\rho(\mathcal{G})\) with equality if and only if \(|e^{{}^{\prime}}|=|e_{1}|=|e_{2}|\) and \(x_{v_{1}}=x_{v_{2}}=x_{v_{3}}\). Moreover, if \(t>\max\{|e_{1}|,|e_{2}|\}\), \(x_{v_{1}}\geq x_{v_{2}}\), \(x_{v_{1}}\geq x_{v_{3}}\), then \(\rho(\mathcal{G}_{0})<\rho(\mathcal{G})\)._
_(1.2) If \(t\leq\max\{|e_{1}|,|e_{2}|\}\), \(x_{v_{1}}\leq x_{v_{2}}\), \(x_{v_{1}}\leq x_{v_{3}}\), then \(\rho(\mathcal{G}_{0})\geq\rho(\mathcal{G})\) with equality if and only if \(|e^{{}^{\prime}}|=|e_{1}|=|e_{2}|\) and \(x_{v_{1}}=x_{v_{2}}=x_{v_{3}}\). Moreover, if \(t<\max\{|e_{1}|,|e_{2}|\}\), \(x_{v_{1}}\leq x_{v_{2}}\), \(x_{v_{1}}\leq x_{v_{3}}\), then \(\rho(\mathcal{G}_{0})>\rho(\mathcal{G})\)._
_(2) Let \(e^{{}^{\prime}}_{1}=(e_{1}\setminus\{v_{1}\})\cup\{u\}\), \(e^{{}^{\prime}}_{2}=(e_{2}\setminus\{v_{1}\})\cup\{u\}\), \(e^{{}^{\prime}}=\{v_{1}\), \(u_{1}\), \(u_{2}\), \(\ldots\), \(u_{t-2}\), \(u\}\) where \(u\notin V(\mathcal{G})\), \(u_{i}\notin V(\mathcal{G})\) for \(1\leq i\leq t-2\), \(\mathcal{G}_{1}=\mathcal{G}-e_{1}+e^{{}^{\prime}}_{1}+e^{{}^{\prime}}\), \(\mathcal{G}_{2}=\mathcal{G}-e_{2}+e^{{}^{\prime}}_{2}+e^{{}^{\prime}}\) (see Fig. 3.2)._
_(2.1) If \(t\leq\min\{|e_{1}|,|e_{2}|\}\), \(x_{v_{1}}\geq x_{v_{2}}\), \(x_{v_{1}}\geq x_{v_{3}}\), then \(\rho(\mathcal{G}_{1})\geq\rho(\mathcal{G})\), \(\rho(\mathcal{G}_{2})\geq\rho(\mathcal{G})\) with either equality holding if and only if \(|e^{{}^{\prime}}|=|e_{1}|=|e_{2}|\), \(x_{v_{1}}=x_{v_{2}}=x_{v_{3}}=x_{u}\) and \(x_{z}=x_{w}\) for \(z,w\in(e_{1}\cup e_{2}\cup e^{{}^{\prime}})\setminus\{v_{1},v_{2},u\}\). Moreover, if \(t<\min\{|e_{1}|,|e_{2}|\}\), \(x_{v_{1}}\geq x_{v_{2}}\), \(x_{v_{1}}\geq x_{v_{3}}\), then \(\rho(\mathcal{G}_{1})>\rho(\mathcal{G})\), \(\rho(\mathcal{G}_{2})>\rho(\mathcal{G})\)._
_(2.2) If \(t\geq\max\{|e_{1}|,|e_{2}|\}\), \(x_{v_{1}}\leq x_{v_{2}}\), \(x_{v_{1}}\leq x_{v_{3}}\), then \(\rho(\mathcal{G}_{1})\leq\rho(\mathcal{G})\), \(\rho(\mathcal{G}_{2})\leq\rho(\mathcal{G})\) with either equality holding if and only if \(|e^{{}^{\prime}}|=|e_{1}|=|e_{2}|\), \(x_{v_{1}}=x_{v_{2}}=x_{v_{3}}=x_{u}\) and \(x_{z}=x_{w}\) for \(z,w\in(e_{1}\cup e_{2}\cup e^{{}^{\prime}})\setminus\{v_{1},v_{2},u\}\). Moreover, if \(t>\max\{|e_{1}|,|e_{2}|\}\), \(x_{v_{1}}\leq x_{v_{2}}\), \(x_{v_{1}}\leq x_{v_{3}}\), then \(\rho(\mathcal{G}_{1})<\rho(\mathcal{G})\), \(\rho(\mathcal{G}_{2})<\rho(\mathcal{G})\)._
**Proof.** (1.1) Suppose \(e_{1}=\{v_{1}\), \(v_{\alpha(1,1)}\), \(v_{\alpha(1,2)}\), \(\ldots\), \(v_{\alpha(1,j_{1}-2)}\), \(v_{2}\}\) and \(e_{2}=\{v_{1}\), \(v_{\alpha(2,1)}\), \(v_{\alpha(2,2)}\), \(\ldots\), \(v_{\alpha(2,j_{2}-2)}\), \(v_{3}\}\). By Lemma 3.6, we have \(x_{v_{\alpha(1,w)}}=x_{v_{\alpha(1,z)}}<\min\{x_{v_{1}},x_{v_{2}}\}\) for \(1\leq w<z\leq j_{1}-2\), and \(x_{v_{\alpha(2,w)}}=x_{v_{\alpha(2,z)}}<\min\{x_{v_{1}},x_{v_{3}}\}\) for \(1\leq w<z\leq j_{2}-2\). Let \(Y\) be a vector on \(\mathcal{G}_{0}\) satisfying that
\[\left\{\begin{array}{ll}y_{v}=\min\{x_{z}:z\in(e_{1}\cup e_{2})\setminus\{v_ {1},v_{2},v_{3}\}\},&\quad v\in e^{{}^{\prime}}\setminus\{v_{2},v_{3}\}\\ y_{v}=x_{v},&\quad others.\end{array}\right.\]
Note that \(|e^{{}^{\prime}}|\geq\max\{|e_{1}|,|e_{2}|\}\), \(x_{v_{1}}\geq x_{v_{2}}\), \(x_{v_{1}}\geq x_{v_{3}}\). Without loss of generality, suppose \(\min\{x_{v}:v\in(e_{1}\cup e_{2})\}=x_{v_{\alpha(2,1)}}\). For \(v\in(e^{{}^{\prime}}\setminus\{v_{2},v_{3}\})\), noting that \(deg_{\mathcal{G}_{0}}(v)=1\) and \(x_{v_{\alpha(2,1)}}<\min\{x_{v_{1}},x_{v_{3}}\}\), we have
\[(\mathcal{A}_{\mathcal{G}_{0}}Y)_{v} =\frac{(t-3)y_{v}+y_{v_{2}}+y_{v_{3}}}{t-1}=\frac{(t-3)x_{v_{\alpha(2,1)}}+x_{v_{2}}+x_{v_{3}}}{t-1}\] \[=\frac{(j_{2}-3)x_{v_{\alpha(2,1)}}+x_{v_{2}}+x_{v_{3}}+(t-j_{2})x_{v_{\alpha(2,1)}}}{j_{2}-1+t-j_{2}}\] \[\leq\frac{(j_{2}-3)x_{v_{\alpha(2,1)}}+x_{v_{1}}+x_{v_{3}}+(t-j_{2})x_{v_{\alpha(2,1)}}}{j_{2}-1+t-j_{2}}\] \[\leq\frac{(j_{2}-3)x_{v_{\alpha(2,1)}}+x_{v_{1}}+x_{v_{3}}}{j_{2}-1}\quad\text{(by Lemma 3.5(2))}\] \[=\rho x_{v_{\alpha(2,1)}}=\rho y_{v}.\]
In the same way, we get
\[(\mathcal{A}_{\mathcal{G}_{0}}Y)_{v_{2}} =\frac{(t-2)x_{v_{a(2,1)}}+y_{v_{3}}}{t-1}=\frac{(t-2)x_{v_{a(2,1 )}}+x_{v_{3}}}{t-1}\leq\rho x_{v_{2}}=\rho y_{v_{2}};\] \[(\mathcal{A}_{\mathcal{G}_{0}}Y)_{v_{3}} =\frac{(t-2)x_{v_{a(2,1)}}+y_{v_{2}}}{t-1}=\frac{(t-2)x_{v_{a(2,1 )}}+x_{v_{2}}}{t-1}\leq\rho x_{v_{3}}=\rho y_{v_{3}};\]
for \(v\in(V(\mathcal{G}_{0})\setminus e^{{}^{\prime}})\), \((\mathcal{A}_{\mathcal{G}_{0}}Y)_{v}=(\mathcal{A}_{\mathcal{G}}X)_{v}=\rho y_{v}\). By Lemma 2.2, it follows that \(\rho(\mathcal{G}_{0})\leq\rho(\mathcal{G})\). Note that \(Y\) is positive. Combining Lemma 3.4, we find that if \(\rho(\mathcal{G}_{0})=\rho(\mathcal{G})\), then \(\mathcal{A}_{\mathcal{G}_{0}}Y=\rho Y\). Thus it follows that \(|e^{{}^{\prime}}|=|e_{1}|=|e_{2}|\) and \(x_{v_{1}}=x_{v_{2}}=x_{v_{3}}\). Conversely, if \(|e^{{}^{\prime}}|=|e_{1}|=|e_{2}|\) and \(x_{v_{1}}=x_{v_{2}}=x_{v_{3}}\), then it can be checked as above that for \(v\in(e^{{}^{\prime}}\setminus\{v_{2},v_{3}\})\), \((\mathcal{A}_{\mathcal{G}_{0}}Y)_{v}=\rho y_{v}\); \((\mathcal{A}_{\mathcal{G}_{0}}Y)_{v_{2}}=\rho y_{v_{2}}\); \((\mathcal{A}_{\mathcal{G}_{0}}Y)_{v_{3}}=\rho y_{v_{3}}\); for \(v\in(V(\mathcal{G}_{0})\setminus e^{{}^{\prime}})\), \((\mathcal{A}_{\mathcal{G}_{0}}Y)_{v}=(\mathcal{A}_{\mathcal{G}}X)_{v}=\rho y_{v}\). Thus it follows that \(\rho(\mathcal{G}_{0})=\rho(\mathcal{G})\).
If \(t>\max\{|e_{1}|,|e_{2}|\}\), \(x_{v_{1}}\geq x_{v_{2}}\), \(x_{v_{1}}\geq x_{v_{3}}\), then combining Lemma 3.5 and arguing as above, we get \(\rho(\mathcal{G}_{0})<\rho(\mathcal{G})\).
From the above, (1.1) follows as desired.
(1.2) Let \(Y\) be a vector on \(\mathcal{G}_{0}\) satisfying that
\[\left\{\begin{array}{ll}y(v)=\max\{x_{z}:z\in(e_{1}\cup e_{2})\setminus\{v_ {1},v_{2},v_{3}\}\},&\quad v\in e^{{}^{\prime}}\setminus\{v_{2},v_{3}\}\\ y(v)=x_{v},&\quad others.\end{array}\right.\]
Then (1.2) is proved as (1.1).
(2.1) For both \(\mathcal{G}_{1}\) and \(\mathcal{G}_{2}\), let \(Y\) be a vector satisfying that
\[\left\{\begin{array}{ll}y(u)=x_{v_{1}}\\ y(v)=\max\{x_{z}:z\in(e_{1}\cup e_{2})\setminus\{v_{1},v_{2},v_{3}\}\},&\quad v \in e^{{}^{\prime}}\setminus\{v_{1},u\}\\ y(v)=x_{v},&\quad others.\end{array}\right.\]
Then (2.1) is proved as (1.1).
(2.2) For both \(\mathcal{G}_{1}\) and \(\mathcal{G}_{2}\), let \(Y\) be a vector satisfying that
\[\left\{\begin{array}{ll}y(u)=x_{v_{1}}\\ y(v)=\min\{x_{z}:z\in(e_{1}\cup e_{2})\setminus\{v_{1},v_{2},v_{3}\}\},&\quad v \in e^{{}^{\prime}}\setminus\{v_{1},u\}\\ y(v)=x_{v},&\quad others.\end{array}\right.\]
Then (2.2) is proved as (1.1).
Thus the result follows. This completes the proof. \(\Box\)
**Lemma 3.9**: _Let \(e\) be a new edge not contained in the connected hypergraph \({\cal G}\). Let \({\cal G}^{{}^{\prime}}={\cal G}+e\). If \({\cal G}^{{}^{\prime}}\) is also connected, then \(\rho({\cal G}^{{}^{\prime}})>\rho({\cal G})\)._
**Proof.** Let \(X\) be the principal eigenvector of \({\cal G}\), and let \(Y\) be a vector on \({\cal G}^{{}^{\prime}}\) satisfying that
\[\left\{\begin{array}{ll}y_{v}=x_{v},&\quad v\in V({\cal G})\\ \\ y_{v}=0,&\quad others.\end{array}\right.\]
Then \(Y^{T}{\cal A}_{{\cal G}^{{}^{\prime}}}Y-X^{T}{\cal A}_{{\cal G}}X\geq 0\), \(Y^{T}Y=X^{T}X\). It follows that \(\rho({\cal G}^{{}^{\prime}})\geq\rho({\cal G})\). Suppose that \(\rho({\cal G}^{{}^{\prime}})=\rho({\cal G})\). Then \(\rho({\cal G}^{{}^{\prime}})=Y^{T}{\cal A}_{{\cal G}^{{}^{\prime}}}Y=X^{T}{ \cal A}_{{\cal G}}X=\rho({\cal G})\), and then \(Y\) is a principal eigenvector of \({\cal G}^{{}^{\prime}}\). If there exists \(y_{v}=0\), then we get a contradiction because the principal eigenvector of \({\cal G}^{{}^{\prime}}\) is positive by Perron-Frobenius theorem.
Next suppose that \(Y\) is positive. Then no coordinate of \(Y\) is zero, so \(V({\cal G}^{{}^{\prime}})=V({\cal G})\) and \(Y=X\). Denote by \(e=\{v_{1},\,v_{2},\,\ldots,\,v_{k}\}\). Then \(e\subseteq V({\cal G})\), and
\[\rho({\cal G}^{{}^{\prime}})y_{v_{1}}=({\cal A}_{{\cal G}^{{}^{\prime}}}Y)_{v _{1}}=({\cal A}_{{\cal G}^{{}^{\prime}}}X)_{v_{1}}+\frac{1}{k-1}\sum_{i=2}^{k} x_{v_{i}}=\rho({\cal G})x_{v_{1}}+\frac{1}{k-1}\sum_{i=2}^{k}x_{v_{i}}>\rho({\cal G })x_{v_{1}},\]
which contradicts \(\rho({\cal G}^{{}^{\prime}})=\rho({\cal G})\). As a result, we get that \(\rho({\cal G}^{{}^{\prime}})>\rho({\cal G})\). This completes the proof. \(\Box\)
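Lemma 3.9 is easy to spot-check numerically. The adjacency operator used throughout these proofs acts as \((\mathcal{A}_{\mathcal{G}}X)_{v}=\sum_{e\ni v}\frac{1}{|e|-1}\sum_{u\in e\setminus\{v\}}x_{u}\), i.e., the symmetric matrix with entries \(\sum_{e\supseteq\{u,v\}}\frac{1}{|e|-1}\). The sketch below is our own illustration (the small hypergraph is an arbitrary choice, not taken from the paper): it builds this matrix for a 3-uniform hypergraph and checks that adding a new edge strictly increases the spectral radius.

```python
import itertools
import numpy as np

def adjacency(n, edges):
    """A[u][v] = sum over edges containing both u and v of 1/(|e|-1)."""
    A = np.zeros((n, n))
    for e in edges:
        w = 1.0 / (len(e) - 1)
        for u, v in itertools.combinations(e, 2):
            A[u, v] += w
            A[v, u] += w
    return A

def spectral_radius(n, edges):
    # symmetric nonnegative matrix: largest eigenvalue = spectral radius
    return max(np.linalg.eigvalsh(adjacency(n, edges)))

# A small connected 3-uniform hypergraph on 6 vertices.
edges = [(0, 1, 2), (2, 3, 4), (4, 5, 0)]
rho_before = spectral_radius(6, edges)
rho_after = spectral_radius(6, edges + [(1, 3, 5)])  # add a new edge e
print(rho_before, rho_after)
```

The strict increase is exactly what the Rayleigh-quotient argument in the proof predicts, since the principal eigenvector is positive.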
Denote by \({\cal G}({\cal D}v;p,q;v_{p+q}{\cal H})\) the \(k\)-uniform connected hypergraph obtained from a \(k\)-uniform hypergraph \({\cal D}\) and a \(k\)-uniform hypergraph \({\cal H}\) by adding a pendant path \(P_{1}\) of length \(p\) at the vertex \(v\) of \({\cal D}\), and adding a path \(P_{2}\) of length \(q\) between the vertex \(v\) and the vertex \(v_{p+q}\) of \({\cal H}\), where \({\cal D}\) and \({\cal H}\) are disjoint, \(V(P_{1})\cap V({\cal D})=\{v\}\), \(V(P_{2})\cap V({\cal D})=\{v\}\), \(V(P_{2})\cap V({\cal H})=\{v_{p+q}\}\) (see two examples in Fig. 3.3). In particular, if \({\cal H}=v_{p+q}\) is a single vertex, we write \({\cal G}({\cal D}v;p,q;v_{p+q})\) for \({\cal G}({\cal D}v;p,q;v_{p+q}{\cal H})\) for short.
Fig. 3.3. Two examples of \({\cal G}({\cal D}v;p,q;v_{p+q}{\cal H})\).
**Lemma 3.11**: _Let \(X\) be the principal eigenvector of \({\cal G}({\cal D}v_{0};0,t;v_{t})\), let \(x_{v_{p}}=\max\{x_{v_{i}}:0\leq i\leq t\}\), and let \(q=t-p\). Then_
\((1)\) _\(p\leq t-1\)._
_Moreover, we have_
\((2)\) _if \(p>0\), then \(x_{v_{i}}\leq x_{v_{i+1}}\) for \(0\leq i\leq p-1\), \(x_{v_{i}}\geq x_{v_{i+1}}\) for \(p\leq i\leq t-1\); if \(p=0\), then \(x_{v_{i}}\geq x_{v_{i+1}}\) for \(0\leq i\leq t-1\)._
\((3)\) _if \(p>0\) and there exists \(\omega\leq p\) and \(\eta\leq q\) such that \(x_{v_{p-\omega}}\geq x_{v_{p+\eta}}\), then_
\((3.1)\) _if \(\omega\leq\eta\), then \(x_{v_{p-\omega+i}}\geq x_{v_{p+\eta-i}}\) for \(0\leq i\leq\omega\), \(x_{v_{a(p-\omega+i,1)}}\geq x_{v_{a(p+\eta-i+1,1)}}\) for \(1\leq i\leq\omega\)._
\((3.2)\) _if \(\omega\geq\eta\), then \(x_{v_{p-\omega+i}}\geq x_{v_{p+\eta-i}}\) for \(0\leq i\leq\eta-1\), \(x_{v_{j}}=x_{v_{p}}\) for \(p-\omega+\eta\leq j\leq p\), and \(x_{v_{a(p-\omega+i,1)}}\geq x_{v_{a(p+\eta-i+1,1)}}\) for \(1\leq i\leq\eta\)._
\((4)\) _if \(p>0\) and there exists \(\omega\leq p\) and \(\eta\leq q\) such that \(x_{v_{p-\omega}}\leq x_{v_{p+\eta}}\), then_
\((4.1)\) _if \(\omega\leq\eta\), then \(x_{v_{p-\omega+i}}\leq x_{v_{p+\eta-i}}\) for \(0\leq i\leq\omega-1\), \(x_{v_{j}}=x_{v_{p}}\) for \(p+1\leq j\leq p+\eta-\omega\), and \(x_{v_{a(p-\omega+i,1)}}\leq x_{v_{a(p+\eta-i+1,1)}}\) for \(1\leq i\leq\omega\)._
\((4.2)\) _if \(\omega\geq\eta\), then \(x_{v_{p-\omega+i}}\leq x_{v_{p+\eta-i}}\) for \(0\leq i\leq\eta\), \(x_{v_{a(p-\omega+i,1)}}\leq x_{v_{a(p+\eta-i+1,1)}}\) for \(1\leq i\leq\eta\)._
\((5)\) _if \(p>0\) and there exists \(\omega\leq p\), \(\eta\leq q\) such that \(x_{v_{p-\omega}}=x_{v_{p+\eta}}\), then_
\((5.1)\) _if \(\omega\leq\eta\), then \(x_{v_{p-\omega+i}}=x_{v_{p+\eta-i}}\) for \(0\leq i\leq\omega-1\), \(x_{v_{j}}=x_{v_{p}}\) for \(p+1\leq j\leq p+\eta-\omega\), and \(x_{v_{a(p-\omega+i,1)}}=x_{v_{a(p+\eta-i+1,1)}}\) for \(1\leq i\leq\omega\)._
\((5.2)\) _if \(\omega\geq\eta\), then \(x_{v_{p-\omega+i}}=x_{v_{p+\eta-i}}\) for \(0\leq i\leq\eta-1\), \(x_{v_{j}}=x_{v_{p}}\) for \(p-\omega+\eta\leq j\leq p\), \(x_{v_{a(p-\omega+i,1)}}=x_{v_{a(p+\eta-i+1,1)}}\) for \(1\leq i\leq\eta\)._
**Proof.** (1) By Lemma 3.7, it follows that \(x_{v_{t}}<x_{v_{t-1}}\). Thus \(p\leq t-1\).
\((2)\) Using Lemma 3.6, we get that for \(1\leq i\leq t-1\),
\[x_{v_{a(i,1)}}=x_{v_{a(i,2)}}=\cdots=x_{v_{a(i,k-2)}}<\min\{x_{v_{i-1}},x_{v_{i}}\}.\]
Note that \({\cal G}({\cal D}v_{0};0,t;v_{t})\) is uniform and \(p\leq t-1\).
**Case 1**\(p>0\).
**Claim.** \(x_{v_{0}}\leq x_{v_{1}}\leq x_{v_{2}}\leq\cdots\leq x_{v_{p}}\). If \(p=1\), this claim holds naturally.
For \(p\geq 2\), we prove this claim by contradiction. Suppose that \(v_{z}\) \((0\leq z\leq p-1)\) is the first vertex from \(v_{0}\) to \(v_{p-1}\) such that \(x_{v_{z}}>x_{v_{z+1}}\). Then there exists \(z+1\leq\zeta\leq p-1\) such that \(x_{v_{z}}>x_{v_{z+1}}\geq\cdots\geq x_{v_{\zeta}}\leq x_{v_{\zeta+1}}\). Let \(e^{{}^{\prime}}=\{v_{\zeta},\)\(u_{1},\)\(u_{2},\)\(\ldots,\)\(u_{k-2},\)\(u\}\), \(e^{{}^{\prime}}_{\zeta}=(e_{\zeta}\setminus\{v_{\zeta}\})\cup\{u\}\), where \(u\notin V({\cal G})\), \(u_{i}\notin V({\cal G})\) for \(1\leq i\leq k-2\), \({\cal G}_{1}={\cal G}-e_{\zeta}+e^{{}^{\prime}}_{\zeta}+e^{{}^{\prime}}\), \({\cal G}_{2}={\cal G}_{1}-\{v_{a(t,1)},\)\(v_{a(t,2)},\)\(\ldots,\)\(v_{a(t,k-2)},\)\(v_{t}\}\). By Lemma 3.8 and Lemma 3.9, we get that \(\rho({\cal G}_{2})<\rho({\cal G}_{1})\leq\rho({\cal G})\), which contradicts \(\rho({\cal G})=\rho({\cal G}_{2})\) because \({\cal G}_{2}\cong{\cal G}\). Thus our claim holds.
In the same way, it is proved that \(x_{v_{p}}\geq x_{v_{p+1}}\geq x_{v_{p+2}}\geq\cdots\geq x_{v_{t-1}}\). Combining Lemma 3.7, we get that \(x_{v_{p}}\geq x_{v_{p+1}}\geq x_{v_{p+2}}\geq\cdots\geq x_{v_{t-1}}>x_{v_{t}}\).
**Case 2**\(p=0\). As in Case 1, it is proved that \(x_{v_{i}}\geq x_{v_{i+1}}\) for \(0\leq i\leq t-1\). Thus \((2)\) follows.
\((3)\) If \(\omega\leq\eta\), we let \(Y\) be a vector on \({\cal G}({\cal D}v_{0};0,t;v_{t})\) satisfying that
\[\left\{\begin{array}{ll}y_{v_{p-\omega+i}}=\max\{x_{v_{p-\omega+i}},x_{v_{p+ \eta-i}}\}&0\leq i\leq\eta;\\ y_{v_{a(p-\omega+i,j)}}=\max\{x_{v_{a(p-\omega+i,1)}},x_{v_{a(p+\eta-i+1,1)}}\}& 1\leq i\leq\omega,1\leq j\leq k-2;\\ y_{v}=x_{v}&others.\end{array}\right.\]
As in the proof of Lemma 3.8, it is proved that \({\cal A}({\cal G}({\cal D}v_{0};0,t;v_{t}))Y\geq\rho({\cal G}({\cal D}v_{0};0,t;v_{t}))Y\). Using Lemma 2.1, we get \({\cal A}({\cal G}({\cal D}v_{0};0,t;v_{t}))Y=\rho({\cal G}({\cal D}v_{0};0,t;v_{t}))Y\). Note that \({\cal G}({\cal D}v_{0};0,t;v_{t})\) is connected. Then \({\cal A}({\cal G}({\cal D}v_{0};0,t;v_{t}))\) is irreducible. Consequently, the dimension of the eigenspace of the eigenvalue \(\rho({\cal G}({\cal D}v_{0};0,t;v_{t}))\) is one, so \(Y=lX\) for some \(l>0\). Then \((3.1)\) follows.
\((3.2)\), \((4.1)\) and \((4.2)\) are proved in the same way. \((5)\) follows from \((3)\) or \((4)\).
This completes the proof.
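The monotonicity statements of Lemma 3.11 can also be inspected numerically. The example below is our own illustration (the hypergraph \(\mathcal{D}\) and the path length are arbitrary choices, and \(p=0\) here because the eigenvector peaks at the attachment vertex); it uses the same \(1/(k-1)\) adjacency normalization as in the proofs above and checks that the principal eigenvector strictly decreases along a pendant path.

```python
import itertools
import numpy as np

def principal_eigenvector(n, edges, k=3):
    A = np.zeros((n, n))
    for e in edges:
        for u, v in itertools.combinations(e, 2):
            A[u, v] += 1.0 / (k - 1)
            A[v, u] += 1.0 / (k - 1)
    vals, vecs = np.linalg.eigh(A)
    return np.abs(vecs[:, -1])  # Perron vector is positive up to sign

# D: two edges through v0 = 0; pendant path v0 -e1- v1 -e2- v2 -e3- v3,
# where v1 = 6, v2 = 8, v3 = 10 and 5, 7, 9 are interior path vertices.
edges = [(0, 1, 2), (0, 3, 4),
         (0, 5, 6), (6, 7, 8), (8, 9, 10)]
x = principal_eigenvector(11, edges)
path = [x[0], x[6], x[8], x[10]]  # eigenvector values at v0, v1, v2, v3
print(path)
```

Since the mass of \(\mathcal{D}\) sits at \(v_{0}\), the maximum over path vertices is attained at \(v_{0}\) and the values decrease toward the pendant end, matching item (2) with \(p=0\).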
**Lemma 3.12**: \(L_{0},L_{1},L_{2},\ldots,L_{f}\) _(\(f\geq 1\)) are positive integers satisfying \(L_{1}\leq L_{2}\leq\cdots\leq L_{f}\leq L_{0}-1\) and \(\sum_{i=0}^{f}L_{i}=t\). For an integer \(\mu>0\), if \(t-\mu\geq L_{0}\), then there exists some \(1\leq j\leq f\) such that \(t-\mu>L_{1}+\cdots+L_{j}\), but \(L_{0}+L_{1}+\cdots+L_{j}>t-\mu\)._
**Proof.** If \(\sum_{i=1}^{f}L_{i}<t-\mu\), then taking \(j=f\) gives \(t-\mu>L_{1}+\cdots+L_{f}\) and \(L_{0}+L_{1}+\cdots+L_{f}=t>t-\mu\) since \(\mu>0\). Thus the result holds.
Suppose \(\sum_{i=1}^{f}L_{i}\geq t-\mu\). Note that \(t-\mu\geq L_{0}\) and \(L_{1}\leq L_{2}\leq\cdots\leq L_{f}\leq L_{0}-1\). Then \(f\geq 2\) now, and there exists some \(1\leq g\leq f-1\) such that \(L_{1}+\cdots+L_{g}+L_{g+1}\geq t-\mu\), but \(L_{1}+\cdots+L_{g}<t-\mu\). Note that \(L_{g+1}\leq L_{0}-1\). Then it follows that \(L_{0}+L_{1}+\cdots+L_{g}>t-\mu\), but \(t-\mu>L_{1}+\cdots+L_{g}\). This completes the proof. \(\Box\)
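Because Lemma 3.12 is purely combinatorial, it can also be verified by exhaustive search over small instances; the following sketch (our own check, not part of the paper) enumerates all admissible tuples \((L_{0},L_{1},\dots,L_{f})\) and values of \(\mu\) within a small range and confirms that the claimed index \(j\) always exists.

```python
import itertools

def lemma_3_12_holds(L, mu):
    """L = (L_0, L_1, ..., L_f); return True if some 1 <= j <= f satisfies
    t - mu > L_1 + ... + L_j  and  L_0 + L_1 + ... + L_j > t - mu."""
    t = sum(L)
    for j in range(1, len(L)):
        prefix = sum(L[1:j + 1])
        if t - mu > prefix and L[0] + prefix > t - mu:
            return True
    return False

checked = 0
for f in (1, 2, 3):
    for L in itertools.product(range(1, 6), repeat=f + 1):
        L0, rest = L[0], L[1:]
        # hypotheses: L_1 <= ... <= L_f <= L_0 - 1
        if any(rest[i] > rest[i + 1] for i in range(len(rest) - 1)):
            continue
        if rest[-1] > L0 - 1:
            continue
        t = sum(L)
        for mu in range(1, t + 1):
            if t - mu >= L0:  # hypothesis of the lemma
                assert lemma_3_12_holds(L, mu)
                checked += 1

print("instances checked:", checked)
```

Every instance in this range satisfies the lemma, consistent with the two-case proof above.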
**Lemma 3.13**: _\(\mathcal{D}\) is a \(k\)-uniform hypergraph where \(v_{0}\in V(\mathcal{D})\). Both \({\bf P}=v_{0}e_{1}v_{1}e_{2}v_{2}\cdots e_{t}v_{t}\) and \({\bf P}_{0}=v_{0}\tilde{e}_{1}u_{1}\tilde{e}_{2}u_{2}\cdots\tilde{e}_{s}u_{s}\) are \(k\)-uniform hyperpaths where \(\tilde{e}_{1}=\{v_{0}\), \(v_{\varphi(1,1)}\), \(v_{\varphi(1,2)}\), \(\ldots\), \(v_{\varphi(1,k-2)}\), \(u_{1}\}\), \(V(\mathcal{D})\cap V({\bf P})=\{v_{0}\}\), \(V(\mathcal{D})\cap V({\bf P}_{0})=\{v_{0}\}\). \({\bf P}_{1}\), \({\bf P}_{2}\), \(\ldots\), \({\bf P}_{f}\) (\(1\leq f\leq k-2\)) are \(k\)-uniform hyperpaths attached respectively at vertices \(v_{\varphi(1,1)}\), \(v_{\varphi(1,2)}\), \(\ldots\), \(v_{\varphi(1,f)}\) in \(\tilde{e}_{1}\) satisfying \(1\leq L({\bf P}_{1})\leq L({\bf P}_{2})\leq\cdots\leq L({\bf P}_{f})\leq L({\bf P}_{0})-1\), \(\sum_{i=0}^{f}L({\bf P}_{i})=t\), \(V(\mathcal{D})\cap V({\bf P}_{i})=\emptyset\) for \(1\leq i\leq f\). The \(k\)-uniform hypergraph \(\mathcal{G}_{1}\) is the \(\mathcal{G}(\mathcal{D}v_{0};0,t;v_{t})\) consisting of \(\mathcal{D}\) and \({\bf P}\); the \(k\)-uniform hypergraph \(\mathcal{G}_{2}\) consists of \(\mathcal{D}\) and \({\bf P}_{0}\), \({\bf P}_{1}\), \({\bf P}_{2}\), \(\ldots\), \({\bf P}_{f}\) (see Fig. 3.5). Then \(\rho(\mathcal{G}_{1})<\rho(\mathcal{G}_{2})\)._
**Proof.** For brevity, we denote by \(L_{i}=L({\bf P}_{i})\) for \(0\leq i\leq f\). Without loss of generality, we suppose \(f=3\) next.
In \(\mathcal{G}_{1}\), denote by \(e_{i}=\{v_{i-1}\), \(v_{a(i,1)}\), \(v_{a(i,2)}\), \(\ldots\), \(v_{a(i,k-2)}\), \(v_{i}\}\) for \(1\leq i\leq t-1\), \(e_{t}=\{v_{t-1}\), \(v_{a(t,1)}\), \(v_{a(t,2)}\), \(\ldots\), \(v_{a(t,k-2)}\), \(v_{a(t,k-1)}\}\) where \(v_{a(t,k-1)}=v_{t}\). Assume that in \(\mathcal{D}\), the edges incident with \(v_{0}\) are \(\varepsilon_{1}\), \(\varepsilon_{2}\), \(\ldots\), \(\varepsilon_{p}\). Let \(X\) be the principal eigenvector of \(\mathcal{G}_{1}\), and \(x_{v_{p}}=\max\{x_{v_{i}}:0\leq i\leq t\}\). By Lemma 3.11, we know that \(p\leq t-1\). By Lemma 3.6 and Lemma 3.7, we know that \(x_{v_{a(i,j)}}=x_{v_{a(i,z)}}<\min\{x_{v_{i-1}},x_{v_{i}}\}\) for \(1\leq j,z\leq k-2\) where \(1\leq i\leq t-1\), and \(x_{v_{a(t,j)}}=x_{v_{a(t,z)}}\) for \(1\leq j,z\leq k-1\). By Lemma 3.11, we know that if \(p>0\), then \(x_{v_{i}}\leq x_{v_{i+1}}\) for \(0\leq i\leq p-1\); if \(p\geq 0\), then \(x_{v_{i}}\geq x_{v_{i+1}}\) for \(p\leq i\leq t-1\).
**Case 1**\(p>0\).
**Subcase 1.1**\(t-p\geq L_{0}\) (see Fig. 3.6). By Lemma 3.12, there exists \(1\leq j\leq 3\) such that \(t-p>L_{1}+\cdots+L_{j}\), but \(L_{0}+L_{1}+\cdots+L_{j}>t-p\). Without loss of generality, we suppose \(j=2\). Now \(t-L_{1}-L_{2}>p\), \(t-L_{0}-L_{1}-L_{2}<p\). Note that \(L_{3}=t-L_{0}-L_{1}-L_{2}\). For brevity and convenience, we let \(\xi=t-L_{1}\), \(\eta=t-L_{1}-L_{2}\).
**Subcase 1.1.1**\(x_{v_{\eta}}\geq x_{v_{L_{3}}}\). By Lemma 3.11, we know that \(x_{v_{\eta-1}}\geq x_{v_{L_{3}+1}}\).
**Subcase 1.1.1.1**\(x_{v_{0}}\leq x_{v_{a_{(\eta,1)}}}\), \(x_{v_{L_{3}}}\leq x_{v_{a_{(\eta,2)}}}\), \(x_{v_{\xi}}\leq x_{v_{a_{(\eta,3)}}}\). Let \(\varepsilon^{{}^{\prime}}_{i}=(\varepsilon_{i}\setminus\{v_{0}\})\cup\{v_{a_{(\eta,1)}}\}\) for \(1\leq i\leq\eta\),
\(e^{{}^{\prime}}_{L_{3}}=(e_{L_{3}}\setminus\{v_{L_{3}}\})\cup\{v_{a_{(\eta,2)}}\}\), \(e^{{}^{\prime}}_{\xi+1}=(e_{\xi+1}\setminus\{v_{\xi}\})\cup\{v_{a_{(\eta,3)}}\}\); \(\mathcal{S}_{i}=\sum_{v\in(\varepsilon_{i}\setminus\{v_{0}\})}x_{v}\) for \(i=1\), \(2\), \(\ldots\), \(\eta\), \(\mathcal{S}_{L_{3}}=\sum_{v\in(e_{L_{3}}\setminus\{v_{L_{3}}\})}x_{v}\), \(\mathcal{S}_{\xi+1}=\sum_{v\in(e_{\xi+1}\setminus\{v_{\xi}\})}x_{v}\).
Let
\[\mathcal{G}^{{}^{\prime}}_{1}=\mathcal{G}_{1}-\sum_{i=1}^{\eta}\varepsilon_{i} +\sum_{i=1}^{\eta}\varepsilon^{{}^{\prime}}_{i}-e_{L_{3}}+e^{{}^{\prime}}_{L_ {3}}-e_{\xi+1}+e^{{}^{\prime}}_{\xi+1}.\]
Then
\[X^{T}\mathcal{A}(\mathcal{G}^{{}^{\prime}}_{1})X-X^{T}\mathcal{A}(\mathcal{G} _{1})X=\frac{2}{k-1}\{(x_{v_{a_{(\eta,1)}}}-x_{v_{0}})\sum_{i=1}^{\eta} \mathcal{S}_{i}+(x_{v_{a_{(\eta,2)}}}-x_{v_{L_{3}}})\mathcal{S}_{L_{3}}+(x_{v_ {a_{(\eta,3)}}}-x_{v_{\xi}})\mathcal{S}_{\xi+1}\}\geq 0.\]
It follows that \(\rho(\mathcal{G}^{{}^{\prime}}_{1})\geq\rho(\mathcal{G}_{1})\). Suppose \(\rho(\mathcal{G}^{{}^{\prime}}_{1})=\rho(\mathcal{G}_{1})\). Then \(\rho(\mathcal{G}^{{}^{\prime}}_{1})=X^{T}\mathcal{A}(\mathcal{G}^{{}^{\prime}} _{1})X=X^{T}\mathcal{A}(\mathcal{G}_{1})X=\rho(\mathcal{G}_{1})\). Hence \(X\) is also the principal eigenvector of \(\mathcal{G}^{{}^{\prime}}_{1}\) and \(\mathcal{A}(\mathcal{G}^{{}^{\prime}}_{1})X=\mathcal{A}(\mathcal{G}_{1})X\). But a contradiction comes immediately because \((\mathcal{A}(\mathcal{G}^{{}^{\prime}}_{1})X)_{v_{a_{(\eta,1)}}}>(\mathcal{A} (\mathcal{G}_{1})X)_{v_{a_{(\eta,1)}}}\). Thus it follows that \(\rho(\mathcal{G}^{{}^{\prime}}_{1})>\rho(\mathcal{G}_{1})\). Note that \(\mathcal{G}^{{}^{\prime}}_{1}\cong\mathcal{G}_{2}\). Then \(\rho(\mathcal{G}_{2})>\rho(\mathcal{G}_{1})\).
**Subcase 1.1.1.2**\(x_{v_{0}}>x_{v_{a_{(\eta,1)}}}\), \(x_{v_{L_{3}}}\leq x_{v_{a_{(\eta,2)}}}\), \(x_{v_{\xi}}\leq x_{v_{a_{(\eta,3)}}}\). Let \(e^{{}^{\prime}}_{1}=(e_{1}\setminus\{v_{0}\})\cup\{v_{a_{(\eta,1)}}\}\), \(e^{{}^{\prime}}_{\eta}=(e_{\eta}\setminus\{v_{a_{(\eta,1)}}\})\cup\{v_{0}\}\), \(e^{{}^{\prime}}_{L_{3}}=(e_{L_{3}}\setminus\{v_{L_{3}}\})\cup\{v_{a_{(\eta,2)}}\}\), \(e^{{}^{\prime}}_{\xi+1}=(e_{\xi+1}\setminus\{v_{\xi}\})\cup\{v_{a_{(\eta,3)}}\}\); \(\mathcal{S}_{1}=\sum_{v\in(e_{1}\setminus\{v_{0}\})}x_{v}\), \(\mathcal{S}_{\eta}=\sum_{v\in(e_{\eta}\setminus\{v_{a_{(\eta,1)}}\})}x_{v}\), \(\mathcal{S}_{L_{3}}=\sum_{v\in(e_{L_{3}}\setminus\{v_{L_{3}}\})}x_{v}\), \(\mathcal{S}_{\xi+1}=\sum_{v\in(e_{\xi+1}\setminus\{v_{\xi}\})}x_{v}\). Note that \(x_{v_{\eta}}\geq x_{v_{L_{3}}}\geq x_{v_{0}}\) and \(x_{v_{\eta-1}}\geq x_{v_{L_{3}+1}}\). Using Lemma 3.6, we get \(x_{v_{a_{(1,j)}}}\leq x_{v_{a_{(L_{3}+1,j)}}}\leq x_{v_{a_{(\eta,j)}}}<\min\{x_{v_{\eta-1}},x_{v_{\eta}}\}\) for \(1\leq j\leq k-2\). As a result, it follows that \(\mathcal{S}_{1}\leq\mathcal{S}_{\eta}\).
Let
\[\mathcal{G}^{{}^{\prime}}_{1}=\mathcal{G}_{1}-e_{1}+e^{{}^{\prime}}_{1}-e_{\eta }+e^{{}^{\prime}}_{\eta}-e_{L_{3}}+e^{{}^{\prime}}_{L_{3}}-e_{\xi+1}+e^{{}^{ \prime}}_{\xi+1}.\]
Then
\[X^{T}\mathcal{A}(\mathcal{G}^{{}^{\prime}}_{1})X-X^{T}\mathcal{A}(\mathcal{G}_{1})X =\frac{2}{k-1}\{(x_{v_{0}}-x_{v_{a_{(\eta,1)}}})\mathcal{S}_{\eta}-(x_{v_{0}}-x_{v_{a_{(\eta,1)}}})\mathcal{S}_{1}+(x_{v_{a_{(\eta,2)}}}-x_{v_{L_{3}}})\mathcal{S}_{L_{3}}+(x_{v_{a_{(\eta,3)}}}-x_{v_{\xi}})\mathcal{S}_{\xi+1}\}\geq 0.\]
Thus it follows that \(\rho(\mathcal{G}^{{}^{\prime}}_{1})\geq\rho(\mathcal{G}_{1})\). Note that \((\mathcal{A}(\mathcal{G}^{{}^{\prime}}_{1})X)_{v_{\eta}}>(\mathcal{A}( \mathcal{G}_{1})X)_{v_{\eta}}\) and \(\mathcal{G}^{{}^{\prime}}_{1}\cong\mathcal{G}_{2}\). As Subcase 1.1.1.1, we get that \(\rho(\mathcal{G}_{2})>\rho(\mathcal{G}_{1})\).
**Subcase 1.1.1.3**\(x_{v_{0}}\leq x_{v_{a_{(\eta,1)}}}\), \(x_{v_{L_{3}}}>x_{v_{a_{(\eta,2)}}}\), \(x_{v_{\xi}}\leq x_{v_{a_{(\eta,3)}}}\). Let \(\varepsilon^{{}^{\prime}}_{i}=(\varepsilon_{i}\setminus\{v_{0}\})\cup\{v_{a_{( \eta,1)}}\}\) for \(1\leq i\leq\eta\),
\(e^{{}^{\prime}}_{\eta}=(e_{\eta}\setminus\{v_{a_{(\eta,2)}}\})\cup\{v_{L_{3}}\}\), \(e^{{}^{\prime}}_{L_{3}+1}=(e_{L_{3}+1}\setminus\{v_{L_{3}}\})\cup\{v_{a_{(\eta,2)} }\}\), \(e^{{}^{\prime}}_{\xi+1}=(e_{\xi+1}\setminus\{v_{\xi}\})\cup\{v_{a_{(\eta,3)}}\}\).
Let
\[\mathcal{G}^{{}^{\prime}}_{1}=\mathcal{G}_{1}-\sum_{i=1}^{\eta}\varepsilon_{i}+\sum_{i=1}^{\eta}\varepsilon^{{}^{\prime}}_{i}-e_{L_{3}+1}+e^{{}^{\prime}}_{L_{3}+1}-e_{\eta}+e^{{}^{\prime}}_{\eta}-e_{\xi+1}+e^{{}^{\prime}}_{\xi+1}.\]
As Subcase 1.1.1.1 and Subcase 1.1.1.2, we get that \(\rho(\mathcal{G}^{{}^{\prime}}_{1})>\rho(\mathcal{G}_{1})\), and \(\rho(\mathcal{G}_{2})>\rho(\mathcal{G}_{1})\).
**Subcase 1.1.2**\(x_{v_{\eta}}<x_{v_{L_{3}}}\). By Lemma 3.11, we know that \(x_{v_{\eta-1}}\leq x_{v_{L_{3}+1}}\). By considering the comparisons between \(x_{v_{0}}\) and \(x_{v_{a(L_{3}+1,1)}}\), between \(x_{v_{\xi}}\) and \(x_{v_{a(L_{3}+1,2)}}\), between \(x_{v_{0}}\) and \(x_{v_{a(L_{3}+1,3)}}\), as Subcase 1.1.1, we get that \(\rho(\mathcal{G}_{2})>\rho(\mathcal{G}_{1})\).
**Subcase 1.2**\(t-p<L_{0}\) (see Fig. 3.7). Let \(\omega=L_{1}+L_{2}\), \(\varphi=L_{1}+L_{2}+L_{3}+1\). By considering the comparisons between \(x_{v_{0}}\) and \(x_{v_{a(\varphi,1)}}\), between \(x_{v_{L_{1}}}\) and \(x_{v_{a(\varphi,2)}}\), between \(x_{v_{\omega}}\) and \(x_{v_{a(\varphi,3)}}\), as Subcase 1.1, we get that \(\rho(\mathcal{G}_{2})>\rho(\mathcal{G}_{1})\).
**Case 2**\(p=0\) (see Fig. 3.8). Let \(\kappa=L_{0}+L_{1}\), \(\varsigma=L_{0}+L_{1}+L_{2}\). By considering the comparisons between \(x_{v_{a(1,1)}}\) and \(x_{v_{L_{0}}}\), between \(x_{v_{a(1,2)}}\) and \(x_{v_{\kappa}}\), between \(x_{v_{a(1,3)}}\) and \(x_{v_{\varsigma}}\), as Case 1, we get that \(\rho(\mathcal{G}_{2})>\rho(\mathcal{G}_{1})\).
This completes the proof. \(\Box\)
**Proof of Theorem 1.3.** For \(k\)-uniform supertrees of order \(n\), using Lemma 3.10 and Lemma 3.13 repeatedly gets the result. This completes the proof. \(\Box\)
**Proof of Corollary 1.4.** Note that \(\mathcal{S}^{*}(n,k)\) is the \(k\)th power of the ordinary star \(S^{*}(\frac{n-1}{k-1}+1,2)\), \(\mathcal{P}(n,k)\) is the \(k\)th power of the ordinary path \(P(\frac{n-1}{k-1}+1,2)\). Then the result follows from Theorem 1.1 and Theorem 1.3. This completes the proof. \(\Box\)
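The \(k\)-th power construction in Corollary 1.4 replaces each edge of an ordinary graph with a \(k\)-element edge by padding it with \(k-2\) fresh vertices. The sketch below is an illustrative check of our own (not code from the paper; it reuses the \(1/(k-1)\) adjacency normalization from the proofs above): it builds the 3rd powers of an ordinary star and an ordinary path with the same number of edges and observes numerically that the path power has the smaller spectral radius, consistent with the extremal roles the star and path powers play in the corollary (Theorems 1.1 and 1.3 are not restated in this excerpt, so this is only a spot-check).

```python
import itertools
import numpy as np

def kth_power(graph_edges, k):
    """Replace each 2-edge {u, v} with {u, v} plus (k-2) fresh vertices."""
    nxt = max(v for e in graph_edges for v in e) + 1
    edges = []
    for (u, v) in graph_edges:
        extra = list(range(nxt, nxt + k - 2))
        nxt += k - 2
        edges.append(tuple([u, v] + extra))
    return edges, nxt  # (hyperedges, number of vertices)

def spectral_radius(edges, n, k):
    A = np.zeros((n, n))
    for e in edges:
        for u, v in itertools.combinations(e, 2):
            A[u, v] += 1.0 / (k - 1)
            A[v, u] += 1.0 / (k - 1)
    return max(np.linalg.eigvalsh(A))

k, m = 3, 4  # 3-uniform powers, 4 edges each
star = [(0, i) for i in range(1, m + 1)]   # ordinary star
path = [(i, i + 1) for i in range(m)]      # ordinary path
rho_star = spectral_radius(*kth_power(star, k), k)
rho_path = spectral_radius(*kth_power(path, k), k)
print(rho_path, rho_star)
```

Both power hypergraphs have \(m(k-1)+1=9\) vertices here, matching the order count \(n=m(k-1)+1\) implicit in the corollary.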
|
2305.09185 | Information Energy Ratio of XOR Logic Gate at Mesoscopic Scale | As the size of transistors approaches the mesoscopic scale, existing energy
consumption analysis methods exhibit various limits, especially when being
applied to describe the non-equilibrium information processing of transistors
at ultra-low voltages. The stochastic thermodynamics offers a theoretic tool to
analyze the energy consumption of transistor during the non-equilibrium
information processing. Based on this theory, an information energy ratio of
XOR gate composed of single-electron transistors is proposed at the mesoscopic
scale, which can be used to quantify the exchange between the information and
energy at XOR gates. Furthermore, the energy efficiency of the parity check
circuit is proposed to analyze the energy consumption of digital signal
processing systems. Compared with the energy efficiency of parity check circuit
adopting the 7 nm semiconductor process supply voltage, simulation results show
that the energy efficiency of the parity check circuit is improved by 266% when
the supply voltage is chosen at a specified value. | Xiaohu Ge, Muyao Ruan, Xiaoxuan Peng, Yong Xiao, Yang Yang | 2023-05-16T05:49:07Z | http://arxiv.org/abs/2305.09185v1 | # Information Energy Ratio of XOR Logic Gate at Mesoscopic Scale
###### Abstract
As the size of transistors approaches the mesoscopic scale, existing energy consumption analysis methods exhibit various limits, especially when being applied to describe the non-equilibrium information processing of transistors at ultra-low voltages. The stochastic thermodynamics offers a theoretic tool to analyze the energy consumption of transistor during the non-equilibrium information processing. Based on this theory, an information energy ratio of XOR gate composed of single-electron transistors is proposed at the mesoscopic scale, which can be used to quantify the exchange between the information and energy at XOR gates. Furthermore, the energy efficiency of the parity check circuit is proposed to analyze the energy consumption of digital signal processing systems. Compared with the energy efficiency of parity check circuit adopting the 7 nm semiconductor process supply voltage, simulation results show that the energy efficiency of the parity check circuit is improved by 266% when the supply voltage is chosen at a specified value.
## I Introduction
With the fast-growing deployment of the 5th-generation (5G) mobile communication systems, massive data need to be processed by digital signal processing circuits, whose energy consumption has increased quickly in 5G mobile communication systems [1]. Digital signal processing circuits are composed of three types of logic gates, i.e., AND, NOT and XOR gates, all of which are made of transistors [2]. Thanks to the recent advancement of circuit technologies, the size of transistors is now at the mesoscopic scale, e.g., sub-7 nanometers (nm). However, as sub-7 nm transistor technology is approached, digital logic circuits inevitably become more and more susceptible to thermal noise due to the aggressive voltage and gate-length scaling [3], especially at the mesoscopic scale. Traditional analytical methods of digital logic circuits take the thermal noise into account through phenomenological approaches. Hence, traditional analytical methods have difficulty analyzing mesoscopic-scale digital circuits governed by the thermal fluctuations of non-equilibrium information processing [4]. To overcome the limits of traditional analytical methods, the
2302.02077 | Cross-Frequency Time Series Meta-Forecasting | Meta-forecasting is a newly emerging field which combines meta-learning and
time series forecasting. The goal of meta-forecasting is to train over a
collection of source time series and generalize to new time series
one-at-a-time. Previous approaches in meta-forecasting achieve competitive
performance, but with the restriction of training a separate model for each
sampling frequency. In this work, we investigate meta-forecasting over
different sampling frequencies, and introduce a new model, the Continuous
Frequency Adapter (CFA), specifically designed to learn frequency-invariant
representations. We find that CFA greatly improves performance when
generalizing to unseen frequencies, providing a first step towards forecasting
over larger multi-frequency datasets. | Mike Van Ness, Huibin Shen, Hao Wang, Xiaoyong Jin, Danielle C. Maddix, Karthick Gopalswamy | 2023-02-04T03:22:16Z | http://arxiv.org/abs/2302.02077v1 | # Cross-Frequency Time Series Meta-Forecasting
###### Abstract
Meta-forecasting is a newly emerging field which combines meta-learning and time series forecasting. The goal of meta-forecasting is to train over a collection of source time series and generalize to new time series one-at-a-time. Previous approaches in meta-forecasting achieve competitive performance, but with the restriction of training a separate model for each sampling frequency. In this work, we investigate meta-forecasting over different sampling frequencies, and introduce a new model, the Continuous Frequency Adapter (CFA), specifically designed to learn frequency-invariant representations. We find that CFA greatly improves performance when generalizing to unseen frequencies, providing a first step towards forecasting over larger multi-frequency datasets.
## 1 Introduction
Time series forecasting is a classical statistical problem with practical applications in several fields, such as finance and business management [16]. Local statistical models such as ARIMA and ETS [9] have long been state-of-the-art for forecasting. In recent years, much effort has been put into matching the performance of local models with deep learning approaches, particularly when modeling several closely-related time series [17; 14; 3].
When less data is available in a target dataset, transfer learning from source to target data is often necessary to compete with local methods. One approach is meta-learning, or domain generalization, where a model is trained to generalize to new target domains after an initial training phase on source data. Recent work has shown that meta-learning for time series forecasting, or meta-forecasting, can achieve competitive performance with only local fine-tuning [7] or even with no fine-tuning [15]. Such approaches are _zero-shot_ forecasters, as they can forecast out-of-domain time series one-at-a-time without access to any related time series.
Almost all of the previous transfer learning works use the assumption that source and target data come from the same sampling frequency, e.g. hourly, daily, monthly, etc. We propose a different assumption: _all data is seasonal, but not necessarily from the same sampling frequency_. The typical seasonality associated with each sampling frequency then creates a correspondence between sampling frequency and signal frequency. As seen in Figure 1, seasonal time series of different signal frequencies appear very similar to humans, but are challenging for typical forecasting models to transfer between. Along with our data assumption, we consider a new task, _frequency generalization_, in which we task a model to generalize to _unseen_ frequencies during meta-test time. For successful frequency generalization, we propose a new model, the Continuous Frequency Adapter (CFA). As shown in Figure 1, CFA can forecast the correct frequency on data of new unseen frequencies, which other methods cannot. CFA
uses continuous domain adaptation [19] to enforce frequency-invariant hidden states, which is vital for frequency generalization. We summarize the novelty and contributions of this paper:
* We explore meta-learning over different sampling frequencies, which is previously unexplored. For this, we introduce a new task, _frequency generalization_, in which we task a model to generalize to time series of new sampling frequencies during test time.
* We develop a new model, CFA, which achieves better performance on frequency generalization than previous meta-learning models. CFA uses continuous domain adaptation [19] to adapt to new sampling frequencies, a technique new to the time series literature. Specifically, CFA uses signal frequencies obtained from a Fourier transform of the input time series to define continuous domain indices, a novel technique for time series domain generalization.
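The paper gives no implementation details at this point, so the following is only a plausible sketch of the last idea: estimate a window's dominant signal frequency with a Fourier transform and use it as a continuous domain index. The function name and the frequency convention (cycles per time step) are our own assumptions, not CFA's actual code.

```python
import numpy as np

def domain_index(window):
    """Continuous domain index: dominant signal frequency of the window,
    estimated from the magnitude spectrum (DC bin excluded)."""
    spectrum = np.abs(np.fft.rfft(window - window.mean()))
    freqs = np.fft.rfftfreq(len(window))   # cycles per time step
    return freqs[1:][np.argmax(spectrum[1:])]

t = np.arange(200)
window = np.sin(2 * np.pi * t / 20)        # seasonal series, period 20
freq = domain_index(window)
print(freq, 1.0 / freq)                    # dominant frequency and its period
```

Because seasonality ties sampling frequency to signal frequency under the paper's data assumption, an index like this varies continuously with the (possibly unseen) sampling frequency of a new series.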
Related WorkTransfer learning has been explored previously for time series through several subfields. Time series representation learning [20; 21] does self-supervised pretraining for time series data. Domain adaptation approaches [11; 8] learn models that can adapt from one source dataset to a different target dataset, often utilizing adversarial training. In few-shot learning [10], a model transfers knowledge by learning how to learn from a small support set of related time series.
Recently, a few papers have particularly addressed meta-forecasting. In [15], the popular forecasting model N-BEATS is shown to fit a meta-learning framework, and achieves competitive zero-shot performance. In Meta-GLAR [7], a local closed-form head is used to adapt global representations to new time series. Our work is most similar to these meta-learning approaches in that performance on the target dataset is evaluated in a zero-shot manner. Our model, however, takes inspiration primarily from [11] in its use of adversarial training and self-attention. We emphasize that all of the above cited papers only consider transferring between datasets of the same frequency, with the exception of [11] which considers domain adaptation opposed to the harder task of zero-shot meta-learning.
## 2 Problem Definition
Time series forecasting is the problem of predicting future observations, i.e. forecasting, given a past context window. That is, for some time series \((\mathbf{z}_{t})_{t>0}\), data samples are of the form
\[\mathbf{x}=\mathbf{z}_{1:\tau_{c}},\quad\mathbf{y}=\mathbf{z}_{\tau_{c}+1:\tau_{c}+\tau_{f}}\]
where \(\tau_{c}\) is the length of the context and \(\tau_{f}\) is the length of the forecast. A model \(f\) then estimates \(\mathbf{y}\) from \(\mathbf{x}\). If \(f\) has parameters \(\theta\), we aim to find the parameters \(\theta\) that minimize the forecasting loss, i.e.
\[\underset{\theta}{\text{argmin}}\,\mathbb{E}[L_{f}(f(\mathbf{x}),\mathbf{y};\theta)] \tag{1}\]
where \(L_{f}\) is a forecasting loss such as MSE.
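As a concrete illustration of this setup, the sketch below builds \((\mathbf{x},\mathbf{y})\) pairs from a series and evaluates MSE (pure NumPy; `make_windows` and the unit stride are illustrative choices, not specified in the text):

```python
import numpy as np

def make_windows(z, tau_c, tau_f):
    """Slice a series z into (context, forecast) pairs (x, y)."""
    pairs = []
    for t in range(len(z) - tau_c - tau_f + 1):
        x = z[t : t + tau_c]                   # context window, length tau_c
        y = z[t + tau_c : t + tau_c + tau_f]   # forecast window, length tau_f
        pairs.append((x, y))
    return pairs

def mse(y_hat, y):
    """Forecasting loss L_f as mean squared error."""
    return float(np.mean((np.asarray(y_hat) - np.asarray(y)) ** 2))

z = np.arange(10.0)
pairs = make_windows(z, tau_c=4, tau_f=2)   # 5 overlapping (x, y) pairs
```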
Figure 1: For frequency generalization, CFA outperforms LSTM. Both models are trained on synthetic source data with period length randomly sampled between [10; 20], and are zero-shot applied to synthetic target data with period length randomly sampled from [30; 40]. In the top plot, the LSTM model predicts the wrong seasonality, whereas CFA successfully adapts to the new seasonality. In the bottom plot, we see that CFA keys are invariant to frequency (color), but LSTM hidden states have different distributions for source and target frequencies ranges.
What distribution the expectation is taken over, i.e. what distribution \(\mathbf{z}\) comes from, depends on the nature of the forecasting problem. In this paper, we consider the _zero-shot_ framework, in which case \(\mathbf{z}\sim\mathcal{T}\) comes from some target distribution \(\mathcal{T}\), but we only have training data \((\mathbf{x}_{1},\mathbf{y}_{1}),\dots,(\mathbf{x}_{n},\mathbf{y}_{n})\sim\mathcal{S}\) from a source distribution \(\mathcal{S}\). This defines the problem of _meta-forecasting_, where the model \(f\) must meta-learn on \(\mathcal{S}\) with the goal of minimizing Equation 1 on \(\mathcal{T}\) with no additional training on \(\mathcal{T}\), where \(\mathcal{T}\) could be _any_ target distribution unseen during training.
In this paper, we focus on _frequency generalization_, which we define as meta-forecasting with \(\mathcal{S}\) and \(\mathcal{T}\) representing distinct frequencies. This is more challenging than the setting considered in previous meta-forecasting papers, in which \(\mathcal{T}\) only contains frequencies that are already seen in \(\mathcal{S}\). We do also explore this second, easier setting to compare to previous papers, and present the results in Appendix A.2.
Since minimizing Equation 1 on all potential \(\mathcal{T}\) is an ambitious and likely unrealistic goal, we restrict \(\mathcal{S}\) and \(\mathcal{T}\) to be distributions representing seasonal datasets. Under this assumption, different sampling frequencies correspond to different signal frequencies, and thus generalizing to new sampling frequencies corresponds to generalizing to new signal frequencies. This makes the frequency generalization problem realistic, since seasonal data of different signal frequencies appear quite similar to the human eye but are difficult for machine learning models to generalize between, as shown in Figure 1. Relaxing this assumption would be significantly more challenging, and we leave this to future work.
## 3 Continuous Frequency Adaptation
Most meta-forecasting models struggle to learn the frequency-invariant representations vital for meta-forecasting. This challenge is demonstrated in the bottom right portion of Figure 1, where an LSTM model learns hidden states whose distribution depends on the input signal frequency.
To overcome this challenge, we introduce the Continuous Frequency Adapter (CFA), which is specifically designed to learn frequency-invariant representations (see bottom left of Figure 1). CFA is primarily a self-attention network, and utilizes adversarial training to enforce frequency invariance in the attention keys and queries (but not the values). The model architecture is inspired by the Domain Adaptation Forecaster (DAF) [11], but uniquely uses continuous domain indices [19] obtained by a Fourier transform to generalize to unseen signal frequencies.
**CFA Architecture.** The CFA architecture is summarized in Figure 2. The encoder, self-attention, and decoder blocks are similar to the DAF architecture [11]. The encoder consists of a position-wise MLP followed by a series of 1D convolutional layers, and the decoder is a position-wise MLP. The self-attention block is a standard multi-head attention block as in transformer architectures [18]. The CFA discriminator is an MLP, like in DAF, but unlike DAF, our discriminator outputs a continuous response to match the continuous domain index (see next paragraph). Also unlike DAF, the encoder and decoder blocks are shared across all time series: because a continuous domain index can take infinitely many values, separate encoders and decoders for each possible index are not feasible.
Figure 2: CFA Architecture. The encoder takes in a context window and produces keys, queries, and values, which are used by the self-attention module to produce representations for each timestep in the forecast window, which the decoder uses to make forecasts. Meanwhile, the keys and queries are passed to the continuous discriminator, which predicts the top \(k\) frequencies of the context window sorted by FFT amplitude. Adversarial training via Equation 2 is used to learn good forecasts while also making keys and queries frequency-invariant.
The discriminator takes as input all keys and queries from the self-attention block, in order to enforce domain invariance in the keys and queries via adversarial training (see Equation 2). The motivation for this choice is that the keys and queries are used to generate the attention weights, which tell the model, for any given time point, which other time points are most relevant. For time series data, especially seasonal time series data, this importance weighting can be independent of the signal frequency, as the attention weights only need to capture the phase of the signal. Meanwhile, the values in the self-attention block are left independent of the discriminator, and thus can learn information that is dependent on the given time series, e.g. what the time series typically looks like at each phase.
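The split of roles described above can be seen directly in standard scaled dot-product attention, sketched here as a minimal single-head NumPy version (shapes and names are illustrative): only \(Q\) and \(K\) enter the attention weights, while \(V\) carries the content that is mixed according to them.

```python
import numpy as np

def softmax(a, axis=-1):
    a = a - a.max(axis=axis, keepdims=True)
    e = np.exp(a)
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention.

    Q, K: (T, d_k) -- only these determine the attention weights, which
    is why CFA's discriminator sees keys/queries but not values.
    V: (T, d_v) -- per-timestep content mixed according to the weights.
    """
    d_k = Q.shape[-1]
    weights = softmax(Q @ K.T / np.sqrt(d_k))  # (T, T), each row sums to 1
    return weights @ V, weights

rng = np.random.default_rng(0)
T, d = 6, 4
out, w = attention(rng.normal(size=(T, d)), rng.normal(size=(T, d)),
                   rng.normal(size=(T, d)))
```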
**Continuous Domain Generalization.** A key component of CFA is the use of continuous domain indices [19]. These continuous domain indices serve as labels for the discriminator, which takes as input the self-attention keys and queries and outputs a continuous domain index prediction. The domain indices are obtained via an FFT on the inputted context window to capture the signal frequencies of the time series. Specifically, we take the absolute value of the FFT outputs to obtain the amplitude corresponding to each frequency bin. We then select the top \(k\) frequencies, sorted by their amplitudes, and use their inverses (i.e. the period lengths) as the discriminator labels. We normalize the labels to be between 0 and 1 to stabilize the discriminator loss. For synthetic data we use \(k=1\), since the synthetic data is mostly sine waves without subfrequencies, and on real data we use \(k=2\).
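A minimal sketch of this domain-index computation (NumPy only; subtracting the mean before the FFT and normalizing by the window length are assumptions — the text only states that the labels are scaled to [0, 1]):

```python
import numpy as np

def domain_index(x, k=1, max_period=None):
    """Continuous domain index from an FFT of the context window.

    Takes |FFT| amplitudes, picks the top-k nonzero frequency bins by
    amplitude, and returns their period lengths (1/frequency),
    normalized to [0, 1].
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    amps = np.abs(np.fft.rfft(x - x.mean()))   # drop the DC component
    freqs = np.fft.rfftfreq(n)                 # cycles per timestep
    order = np.argsort(amps[1:])[::-1] + 1     # rank bins, skipping freq 0
    periods = 1.0 / freqs[order[:k]]           # top-k period lengths
    return periods / (max_period or n)         # normalize to [0, 1]

t = np.arange(200)
x = np.sin(2 * np.pi * t / 20)   # period-20 sine
idx = domain_index(x, k=1)       # normalized period: 20 / 200 = 0.1
```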
**Adversarial Loss.** CFA utilizes adversarial training via a typical minimax loss [6]. Let \(E\) be an encoder, \(F\) a forecasting decoder, and \(D\) a discriminator (for CFA, E generates the self-attention keys, queries, and values, and F produces forecasts using these self-attention inputs). The goal of CFA is to solve the following minimax problem:
\[\min_{E,F}\max_{D}\mathbb{E}[L_{f}(\mathbf{x};E,F)]-\lambda\mathbb{E}[L_{d}(\mathbf{x };E,D)] \tag{2}\]
where \(L_{f}\) is the forecasting loss and \(L_{d}\) is the discriminator loss. In words, \(D\) is trained to minimize \(L_{d}\), thereby training a strong discriminator, while \(E\) and \(F\) are trained to both produce good forecasts (i.e. minimize \(L_{f}\)) while maintaining hidden states that the strong discriminator cannot predict well (i.e. maximizing \(L_{d}\)). Such adversarial training allows the model to learn frequency-invariant discriminator inputs (keys and queries) while still producing good forecasts. In practice, the forecast/generative parameters (E, F) and the discriminative parameters (D) are updated in an alternating fashion, see Algorithm 1.
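The alternating minimax dynamics of Equation 2 can be illustrated on a toy scalar problem (the quadratic losses and numerical gradients below are purely illustrative stand-ins for the networks and losses in the paper): the generative parameter settles at its forecast optimum while the discriminator tracks it.

```python
import numpy as np

def num_grad(f, p, eps=1e-6):
    """Central-difference gradient of a scalar function at p."""
    g = np.zeros_like(p)
    for i in range(len(p)):
        e = np.zeros_like(p)
        e[i] = eps
        g[i] = (f(p + e) - f(p - e)) / (2 * eps)
    return g

# Toy scalar instantiation of Equation 2: theta_g plays the role of (E, F),
# theta_d the role of D.
L_f = lambda g: (g[0] - 2.0) ** 2            # forecast loss, minimized at g = 2
L_d = lambda g, d: (d[0] * g[0] - 1.0) ** 2  # discriminator loss

lam, lr = 0.5, 0.1
theta_g, theta_d = np.array([1.0]), np.array([1.0])
for _ in range(200):
    # Generative step: minimize L_f - lam * L_d (i.e. maximize L_d).
    theta_g = theta_g - lr * num_grad(
        lambda g: L_f(g) - lam * L_d(g, theta_d), theta_g)
    # Discriminative step: minimize L_d with the generator frozen.
    theta_d = theta_d - lr * num_grad(lambda d: L_d(theta_g, d), theta_d)
# Equilibrium: theta_g near 2 (forecast optimum), theta_d tracking 1/theta_g.
```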
**Training Procedure.** Since CFA is designed to learn frequency-invariant keys and queries, it is essential that CFA is trained over source data with varied frequencies. The multi-source training procedure is illustrated in Algorithm 1. The training works by sampling one batch from each of the source datasets, and updating the generative and discriminative parameters from the sum of the batch losses. The forecast/generative parameters (E, F) and the discriminative parameters (D) are updated in an alternating fashion, as typical for adversarial training.
```
1:Input: source datasets \(S_{1},\ldots S_{d}\), forecast loss \(L_{f}\), discriminator loss \(L_{d}\).
2: for epoch \(=1\) to \(E\) do
3:  for \(i=1\) to n_batches_per_epoch do
4: Sample \(\mathbf{x}_{j},\mathbf{y}_{j}\sim S_{j}\) for \(j=1,\ldots,d\)
5: Compute generative loss \(L_{j}=L_{f}(\mathbf{x}_{j},\mathbf{y}_{j})-\lambda L_{d}(\mathbf{x}_{j})\) for each \(j\)
6: Update generative parameters via \(L=L_{1}+\cdots+L_{d}\)
7: Compute discriminative loss \(L_{j}=L_{d}(\mathbf{x}_{j},\text{FFT}(\mathbf{x}_{j}))\) for each \(j\)
8: Update discriminative parameters via \(L=L_{1}+\cdots+L_{d}\)
9:endfor
10:endfor
```
**Algorithm 1** CFA Training Algorithm
## 4 Experiments
**Models.** For our experiments, we consider the following models.
* **Mean**: simple baseline that forecasts the mean from the context window.
* **LSTM**: an auto-regressive LSTM model similar to DeepAR [17].
* **NBEATS**: deep model with mostly linear layers and doubly-residual connections [14].
* **CFA**: Our model described in Section 3.
Since frequency generalization is a new task, some baselines cannot readily be adapted. For one, any method that requires separate modules for different sampling frequencies cannot be used, e.g. DAF [11], because it is impossible to train a new module for the target sampling frequency in the zero-shot regime. Additionally, we think it is critical on real data to have different forecasting lengths for different sampling frequencies, and thus require models that can forecast an arbitrary number of time steps ahead at test time. CFA and LSTM can easily do this since they are autoregressive forecasters, i.e. they use the previous forecasts to make each successive forecast. NBEATS, on the other hand, requires a fixed forecast length that cannot be adjusted at test time, and thus we do not use it as a baseline for real data experiments. We still consider NBEATS as a baseline for synthetic data generated with uniform forecast length, though, since NBEATS has been shown to be a strong meta-forecaster [15].
**Synthetic Data.** We generate synthetic time series data using sinusoidal curves with Gaussian noise and uniformly random period length (inverse of frequency), see Appendix A.1 for full details. We designate one period range for source data and one period range for target data. Models are trained on source data and applied zero-shot to target data, evaluated by mean squared error (MSE) in the forecast window. Results are shown in Table 1. Across all combinations of source and target period ranges, CFA is either the best model or within one standard deviation of the best model. As shown in Figure 1, CFA is able to learn good forecasts on the source data while maintaining frequency-invariant keys and queries, allowing CFA to generalize to new frequencies. In comparison, LSTM and NBEATS learn frequency-dependent signal, and thus fail to generalize to new frequencies, even failing to beat the simple mean baseline.
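A sketch of the synthetic generator described above (the amplitude, phase distribution, and noise level are assumptions — the exact settings live in Appendix A.1, which is not reproduced here):

```python
import numpy as np

def synth_series(n_series, length, period_range, noise_std=0.1, seed=0):
    """Sinusoids with Gaussian noise and uniformly random period lengths."""
    rng = np.random.default_rng(seed)
    lo, hi = period_range
    t = np.arange(length)
    periods = rng.uniform(lo, hi, size=n_series)         # one period per series
    phases = rng.uniform(0.0, 2 * np.pi, size=n_series)  # random phase (assumption)
    clean = np.sin(2 * np.pi * t[None, :] / periods[:, None] + phases[:, None])
    noisy = clean + rng.normal(0.0, noise_std, size=(n_series, length))
    return noisy, periods

# Source and target period ranges as in the experiment of Figure 1
source, p_src = synth_series(32, 120, (10, 20), seed=0)
target, p_tgt = synth_series(32, 120, (30, 40), seed=1)
```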
**Real Data.** We focus on real-world datasets that exhibit clear seasonality. We use the following datasets, each with a different sampling frequency: elec (hourly), aber (daily), tourism monthly (monthly), and tourism quarterly (quarterly). We load all datasets using GluonTS [1]. More information on each dataset, data preprocessing, and setup can be found in Appendix A.1. For each experiment, we designate one dataset as the target dataset and use the other 3 as the source datasets. We evaluate models by their Normalized Deviation (ND) [11] on the forecasting window. We do not run NBEATS because it requires equal context/forecast lengths across datasets, which we do not enforce for frequency generalization. The results are shown in Table 2. As was the case with synthetic data, CFA outperforms LSTM for frequency generalization.
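For reference, Normalized Deviation as commonly used in this literature is the sum of absolute forecast errors divided by the sum of absolute target values, aggregated over all series and timesteps; a minimal implementation (the aggregation convention is an assumption drawn from that standard usage):

```python
import numpy as np

def normalized_deviation(y_true, y_pred):
    """ND = sum|y - y_hat| / sum|y| over all series and forecast steps."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.abs(y_true - y_pred).sum() / np.abs(y_true).sum())

nd = normalized_deviation([[2.0, 4.0]], [[1.0, 4.0]])  # |1| / (2 + 4) = 1/6
```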
## 5 Conclusion
Previous meta-forecasting papers have shown strong performance, but only when training one model per sampling frequency. In this paper, we instead consider frequency generalization, i.e. generalizing to unseen frequencies, for which it is not possible to train one model per sampling frequency. While previous meta-forecasting models are not successful, our CFA model provides much improved
\begin{table}
\begin{tabular}{l l l l l l}
\hline \hline
Source Range & Target Range & Mean & CFA & LSTM & NBEATS \\
\hline
(10, 15) & (15, 20) & \(0.260\pm 0.002\) & \(\mathbf{0.088\pm 0.011}\) & \(0.312\pm 0.057\) & \(0.424\pm 0.017\) \\
 & (20, 25) & \(0.259\pm 0.003\) & \(\mathbf{0.190\pm 0.025}\) & \(0.456\pm 0.081\) & \(0.351\pm 0.007\) \\
 & (25, 30) & \(\mathbf{0.262\pm 0.003}\) & \(\mathbf{0.223\pm 0.041}\) & \(0.426\pm 0.028\) & \(0.320\pm 0.004\) \\
(15, 20) & (10, 15) & \(0.259\pm 0.003\) & \(\mathbf{0.071\pm 0.014}\) & \(0.388\pm 0.035\) & \(0.417\pm 0.011\) \\
 & (20, 25) & \(0.259\pm 0.003\) & \(\mathbf{0.055\pm 0.004}\) & \(0.185\pm 0.055\) & \(0.469\pm 0.011\) \\
 & (25, 30) & \(0.262\pm 0.003\) & \(\mathbf{0.082\pm 0.007}\) & \(0.499\pm 0.072\) & \(0.526\pm 0.012\) \\
(20, 25) & (10, 15) & \(\mathbf{0.259\pm 0.003}\) & \(\mathbf{0.209\pm 0.099}\) & \(0.497\pm 0.037\) & \(0.333\pm 0.005\) \\
 & (15, 20) & \(0.260\pm 0.002\) & \(\mathbf{0.066\pm 0.013}\) & \(0.282\pm 0.082\) & \(0.425\pm 0.008\) \\
 & (25, 30) & \(0.262\pm 0.003\) & \(\mathbf{0.064\pm 0.006}\) & \(0.134\pm 0.052\) & \(0.387\pm 0.020\) \\
(25, 30) & (10, 15) & \(\mathbf{0.259\pm 0.003}\) & \(\mathbf{0.257\pm 0.056}\) & \(0.442\pm 0.015\) & \(\mathbf{0.303\pm 0.006}\) \\
 & (15, 20) & \(\mathbf{0.260\pm 0.002}\) & \(\mathbf{0.213\pm 0.081}\) & \(0.533\pm 0.045\) & \(0.417\pm 0.007\) \\
 & (20, 25) & \(0.259\pm 0.003\) & \(\mathbf{0.069\pm 0.015}\) & \(0.177\pm 0.027\) & \(0.397\pm 0.021\) \\
\hline \hline
\end{tabular}
\end{table}
Table 1: Frequency generalization on synthetic data, measured as MSE of forecast. Source and target range indicate the range of uniformly random period lengths in the source and target data, respectively. For each source/target pair, the model is trained on source and applied zero-shot to target. Across all pairs of ranges, CFA has the best performance.
performance. This is an important first step towards building forecasting models robust to signal frequency, which could be trained over larger and less constrained datasets.
---

# Structure, Stability and Superconductivity of N-doped Lutetium Hydrides at kbar Pressures

Katerina P. Hilleke, Xiaoyu Wang, Dongbao Luo, Nisha Geng, Busheng Wang, Eva Zurek (arXiv:2303.15622, 27 March 2023)
###### Abstract
The structure of the material responsible for the room temperature and near ambient pressure superconductivity reported in an N-doped lutetium hydride [Nature, 615, 244 (2023)] has not been conclusively determined. Herein, density functional theory calculations are performed in an attempt to uncover what it might be. Guided by a range of strategies including crystal structure prediction and modifications of existing structure types, we present an array of Lu-N-H phases that are dynamically stable at experimentally relevant pressures. Although none of the structures found are thermodynamically stable, and none are expected to remain superconducting above \(\sim\)17 K at 10 kbar, a number of metallic compounds with _fcc_ Lu lattices - as suggested by the experimental X-ray diffraction measurements of the majority phase - are identified. The system whose calculated equation of states matches best with that measured for the majority phase is fluorite-type LuH\({}_{2}\), whose 10 kbar superconducting critical temperature was estimated to be 0.09 K using the Allen-Dynes modified McMillan equation.
## I Introduction
Heike Kamerlingh Onnes' 1911 discovery of mercury's entrance into a "new...superconductive state" at very low temperatures, where all electrical resistance vanished [1], marked the beginning of a quest: could such a state be observed at room temperature? Ever since, scientists have sought this "holy grail", steadily breaking through barriers such as the boiling point of liquid nitrogen [2], 100 K [3], then near 200 K [4; 5; 6]. The latter breakthrough can be directly traced back to Ashcroft's proposal that hydrogen-rich alloys, metallized at conditions of extreme pressure, albeit less extreme than those required to metallize pure hydrogen, would be high-temperature phonon-mediated superconductors [7]. It also marked a paradigm shift defined by a close synergy between theory and experiment, with computations either predicting the most promising superconducting phases or being instrumental in characterizing the synthesized compounds [8; 9; 10].
For the materials with the highest superconducting critical temperatures, \(T_{c}\)s, that were found, two things were true: they featured high hydrogen content and they required immense pressures - approaching those found in the center of the Earth (350 GPa) - for stability. One prominent class of these high-pressure high-temperature compounds is known as the "superhydrides". All of them are characterized by clathrate-like hydrogen-based lattices that encapsulate an electropositive metal atom, typically an alkaline earth or rare earth. Examples of compounds that have been both predicted and synthesized include CaH\({}_{6}\) (\(T_{c}\) = 210-215 K, 160-172 GPa) [11; 12], LaH\({}_{10}\) (\(T_{c}\) = 260 K, 200 GPa) [13; 14], YH\({}_{9}\) (\(T_{c}\) = 262 K, 182 GPa) [15], YH\({}_{6}\) (\(T_{c}\) = 224 K, 166 GPa) [16], and mixed La/Y ternary hydrides [17; 18].
Clearly, the most prominent metal atoms in these phases are yttrium and lanthanum, with supporting roles played by calcium, scandium, and other rare earths. However, most of the heavier lanthanide hydrides are not expected to be as promising because of the suppressive influence of the \(f\) electrons on superconductivity, with maximum \(T_{c}\)s decreasing rapidly once past La [19; 20]. As a result, the hydrides of lutetium received relatively little attention despite the fact that the filled \(4f\) shell of the metal is chemically unreactive rendering its electronic properties similar to those of Sc, Y and La... till now.
An early theoretical study generated a Lu-H convex hull using known polyhydride structures, finding LuH\({}_{4}\), LuH\({}_{6}\), LuH\({}_{9}\), and LuH\({}_{10}\) as being thermodynamically stable at various pressures up to 400 GPa [20]. Another identified a unique \(Imm\) structure for LuH\({}_{8}\) with an estimated \(T_{c}\) of 81-86 K at 300 GPa, based on a distorted version of the backbone of the \(Fm\bar{3}m\) LaH\({}_{10}\) phase [21]. A theoretical comparison between the hydrides of the rare earth elements with filled vs. unfilled \(f\)-states - Tm, Yb, and Lu, found LuH\({}_{n}\) (\(n\)=4-8, 10) phases either on or very near the Lu-H convex hull at relatively low pressures (less than 200 GPa) [22]. Notably, LuH\({}_{6}\), with the same \(Im\bar{3}m\) symmetry as CaH\({}_{6}\), had an estimated \(T_{c}\) of 273 K (matching the melting point of ice) at 100 GPa. The filled \(f\)-shells of Lu and Yb were suggested to confer a strong degree of phonon softening, thereby resulting in a high electron-phonon coupling. Finally, a theoretical investigation of trends in superconductivity proposed high-pressure \(Cc\) LuH\({}_{7}\) and \(C222\) LuH\({}_{12}\) phases, with the latter predicted to undergo a superconducting transition below 6.7 K at 150 GPa [19].
On the experimental side, a recent work reported the synthesis of a Lu hydride, suggested to be \(Pm\bar{3}n\) Lu\({}_{4}\)H\({}_{23}\), with a measured \(T_{c}\) of 71 K at 218 GPa [23]. This structure has previously been observed in experimental studies in the La-H [24], Ba-H [25], and Eu-H [26] systems.
Thus, with reported \(T_{c}\)s of the superhydrides reaching temperatures not uncommon for a typical winter-day in upstate New York, the focus of research changed to predicting and synthesizing materials that could maintain high \(T_{c}\)s, but at lower pressures, with the ultimate goal of realizing superconductivity at ambient temperature and pressure. As the structures and superconducting properties of the binary hydrides had been exhaustively searched with no such candidate found, computations turned towards predicting ternary hydrides that remained dynamically stable to pressures below 100 GPa [27; 28; 29], or boron-carbon analogues of the superhydrides that were
stable at 1 atm [30].
It was therefore quite exciting when a recent experimental manuscript reported superconductivity near room-temperature, \(T_{c}\) = 294 K, at a very moderate pressure of 10 kbar (1 GPa) in a nitrogen-doped lutetium hydride phase [31]. This pressure is low enough so that it becomes feasible to use pressure-quenching [32] to stabilize the material to ambient conditions, or to use careful strain engineering to achieve the desired superconductivity. Unfortunately, though a variety of techniques including X-ray diffraction (XRD), energy-dispersive X-ray measurements, elemental analysis and Raman spectroscopy were used to characterize the superconducting material, its composition and structure could not be fully resolved [31].
On the basis of the XRD and Raman analysis, the proposed room-temperature superconducting material (referred to as compound **A** by the authors) was indexed with space group \(Fm\bar{3}m\), and both compound **A** and a minor product, which was dubbed compound **B**, were suggested to consist of an _fcc_ Lu network with additional N and H uptake [31]. At pressures above \(\sim\)30 kbar, the superconducting compound **A** was found to undergo a pressure-induced transition to a non-superconducting structure involving a symmetry reduction of the Lu lattice to orthorhombic \(Immm\) symmetry. The superconducting compound was also observed to undergo a sequence of color changes corresponding to structural transitions as pressure was applied, from blue to pink (marking transition to the high-\(T_{c}\) superconductor), to red.
Follow-up studies have, however, suggested that this color change is derived in fact from pure LuH\({}_{2}\)[33; 34]. Experiments reported no evidence for superconductivity down to 1.5 K in LuH\({}_{2}\)[33], or in LuH\({}_{2\pm x}\)N\({}_{y}\) from ambient pressure to 6.3 GPa down to 10 K [34]. Moreover, DFT calculations [35] concluded that LuH\({}_{2}\) in the fluorite structure is the dominant phase of the parent nitrogen-doped superconductor, based on its computed thermodynamic and dynamic stability, optical properties and XRD pattern. A computational exploration of the Lu-N-H phase diagram found no ternary phases on the convex hull at pressures below 10 GPa, the binaries instead dominating, although a few ternary phases (Lu\({}_{20}\)H\({}_{2}\)N\({}_{17}\), Lu\({}_{2}\)H\({}_{2}\)N, LuH\({}_{5}\)N\({}_{2}\), Lu\({}_{3}\)H\({}_{6}\)N, and Lu\({}_{10}\)HN\({}_{8}\)) were within 100 meV/atom of the hull. A number of the identified phases were found to be derived from either H vacancies or N-doping of LuH\({}_{2}\)[36]. Another computational study did not find any thermodynamically stable Lu-N-H phases at 1 GPa and the highest \(T_{c}\) computed for N-doped \(Fm\bar{3}m\)-LuH\({}_{3}\) did not exceed 30 K [37].
Herein, we present a density functional theory (DFT) investigation of a series of structures in the Lu-N-H system that were either constructed via modification of known and theoretical prototype structures, via constrained and unconstrained crystal structure prediction (CSP) searches, or by a combination of these two methods. From the results of the unconstrained CSP runs we obtain a baseline against which to measure the enthalpies of constructed phases and to compare their properties. From constrained searches and artificially-constructed structures we begin to understand the motifs that contribute to dynamic stability at low pressures, and those which do not, allowing us to narrow the range of possible structures for further explorations into the Lu-N-H ternary system. The simulated X-ray diffraction patterns of the optimized phases and calculated equations of states are compared with available experimental data provided in Reference [31]. The highest superconducting critical temperature we find - 17 K at 10 kbar - was obtained for a CaF\({}_{2}\)-type LuNH phase that was far from thermodynamic stability.
## II Computational details
Precise geometry optimizations and electronic structure calculations were performed using DFT in conjunction with the Perdew-Burke-Ernzerhof (PBE) functional [38], as implemented in the Vienna _ab initio_ simulation package (VASP) [39; 40; 41]. The valence electrons of the hydrogen (H \(1s^{1}\)), nitrogen (N \(2s^{2}2p^{3}\)), and lutetium (Lu \(5p^{6}5d^{1}6s^{2}\)) atoms were simulated using plane wave basis sets with a cutoff energy of 600 eV. The core electrons were treated with the projector augmented wave (PAW) method [42]. Detailed tests of the inclusion of the \(4f\) electrons on the properties of select structures, as well as the convergence of the plane wave basis were performed and representative results are provided in the Supporting Information. The reciprocal space was sampled using a \(\Gamma\)-centered Monkhorst-Pack mesh [43], where the number of divisions along each reciprocal lattice vector was chosen such that the product of this number with the real-space lattice constant was 70 A for density of states calculations and 50 A for static calculations. To interrogate the dynamic stability of promising phases, phonon calculations were performed using the finite difference scheme, as implemented in the Phonopy software package [44; 45].
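The \(k\)-mesh rule stated above can be sketched as follows (rounding up is an assumption — the text only fixes the target product of the number of divisions and the real-space lattice constant):

```python
import math

def kpoint_divisions(lattice_constants, target=70.0):
    """Divisions per reciprocal lattice vector so that N_i * a_i >= target (in Angstrom)."""
    return [math.ceil(target / a) for a in lattice_constants]

# Example: cubic fluorite-type LuH2 with a of about 5.03 Angstrom
mesh_dos = kpoint_divisions([5.03, 5.03, 5.03], target=70.0)     # DOS calculations
mesh_static = kpoint_divisions([5.03, 5.03, 5.03], target=50.0)  # static calculations
```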
The electron-phonon coupling (EPC) calculations were performed using the Quantum Espresso (QE) package [46; 47] version 7.1 with the PBE functional. A plane wave basis set with a cutoff energy of 80 Ry was used, along with a charge density cutoff of 640 Ry for the valence electrons of hydrogen (H \(1s^{1}\)), nitrogen (N \(2s^{2}2p^{3}\)), and lutetium (Lu \(5s^{2}5p^{6}6s^{2}5d^{1}\)). The core electrons were treated with the PAW pseudopotentials generated using the PSLibrary package [48]. The \(k\)-point and \(q\)-point grids were selected to ensure the total electron-phonon-coupling (EPC) constant, \(\lambda\), was converged to within 0.05 at the desired Gaussian broadening width for each structure, as summarized in the Supporting Information.
The superconducting critical temperature (\(T_{\rm c}\)) was estimated using the Allen-Dynes modified McMillan equation [49]:
\[T_{\rm c}=\frac{\omega_{\rm ln}}{1.20}\exp\left[-\frac{1.04(1+\lambda)}{ \lambda-\mu^{*}(1+0.62\lambda)}\right], \tag{1}\]
in which the effective Coulomb potential, \(\mu^{*}\), was set to 0.1, the logarithmic average frequency \(\omega_{\rm ln}\) was obtained by
\[\omega_{\rm ln}=\exp\left(\frac{2}{\lambda}\int\frac{d\omega}{\omega}\alpha^{2 }F(\omega)\ln\omega\right), \tag{2}\]
and the electron phonon coupling constant, \(\lambda\), was evaluated by
\[\lambda=2\int d\omega\,\alpha^{2}F(\omega)/\omega. \tag{3}\]
The Eliashberg spectral function, \(\alpha^{2}F(\omega)\), was obtained from the QE calculations, and was also used to numerically solve the Eliashberg equations [50].
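Given a tabulated \(\alpha^{2}F(\omega)\), these quantities can be evaluated numerically; the sketch below uses trapezoidal integration (an implementation choice) and the standard factor of 2 in \(\lambda=2\int d\omega\,\alpha^{2}F(\omega)/\omega\):

```python
import numpy as np

def _trapz(y, x):
    """Trapezoidal rule (avoids np.trapz, which was removed in NumPy 2)."""
    y, x = np.asarray(y, float), np.asarray(x, float)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def allen_dynes_tc(omega, a2F, mu_star=0.1):
    """lambda, omega_ln, and T_c from a tabulated Eliashberg spectral function.

    T_c and omega_ln are returned in the units of `omega`; if omega is in
    meV, convert to kelvin with 1 meV = 11.604 K.
    """
    lam = 2.0 * _trapz(a2F / omega, omega)
    wlog = np.exp((2.0 / lam) * _trapz(a2F * np.log(omega) / omega, omega))
    tc = (wlog / 1.20) * np.exp(-1.04 * (1.0 + lam)
                                / (lam - mu_star * (1.0 + 0.62 * lam)))
    return lam, wlog, tc

# Narrow rectangular a2F centered at 10 (an Einstein-like mode), tuned so
# that lambda is close to 1; then omega_ln recovers the mode frequency.
omega = np.linspace(9.95, 10.05, 101)
a2F = np.full_like(omega, 50.0)
lam, wlog, tc = allen_dynes_tc(omega, a2F)
```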
The CSP searches were performed using the open-source evolutionary algorithm (EA) XtalOpt[51, 52, 53] version 12 [54]. The initial generation consisted of random symmetric structures created by the RandSpg algorithm [55]. Duplicate structures were identified via the XtalComp algorithm [56] and discarded from the breeding pool. Constrained XtalOpt searches were performed by determining the symmetry of the Lu atoms using Pymatgen [57] and only keeping those structures in the breeding pool that possessed an \(Fm\bar{3}m\) symmetry Lu sublattice. The parameters employed in the XtalOpt searches for the considered stoichiometries (number of formula units, pressures at which the EA searches were performed, and constraints employed) are provided in the Supporting Information.
## III Results
### Known Ambient Pressure Phases
Before we begin our theoretical investigation of novel Lu-N-H combinations that could be formed at mild pressures, let us review the structures and properties of the known LuH\({}_{x}\) and LuN phases. Unlike the high-pressure superhydrides, which bear little to no resemblance to the hydrides that are known at ambient conditions, the 1 atm LuN and LuH\({}_{x}\) phases may provide the key to the structure of Lu-N-H - or at least very good starting points - stemming from the relatively low pressures required to stabilize this ternary phase.
At ambient pressure, LuN assumes the rock-salt, or \(B1\), structure (Figure 1a), with the Lu atoms in the \(fcc\) configuration. A transition to the \(B2\) or CsCl phase has been predicted near 250 GPa [58]. Our PBE calculations, which likely underestimate the band gap, suggest semiconducting behavior at 1 atm with a gap of 0.23 eV. In compounds, lutetium typically adopts the +3 oxidation state, and its hydrides can incorporate vacancies or extra hydrogen atoms that go into the interstitial regions [59]. At 1 atm fluorite (CaF\({}_{2}\)) type LuH\({}_{x}\) is adopted when \(x=1.85-2.23\) (Figure 1(b)), usually resulting in a metallic phase. Increasing the hydrogen content to \(x=2.78-3\) yields a hexagonal semiconducting phase [59]. This \(P\bar{3}c1\) LuH\({}_{3}\) transitions to a cubic phase at \(\sim\)10 GPa (the AlFe\({}_{3}\) or \(D0_{3}\) structure type, Figure 1(c)), which can be stabilized at ambient pressure via milling [60]. Recently, superconductivity in \(Fm\bar{3}m\)-LuH\({}_{3}\) was measured with a \(T_{c}\) of 12.4 K at 122 GPa [61].
To validate the computational settings used, we compared the lattice constants of the known phases where the Lu atoms are found in the \(fcc\) arrangement: rock salt LuN (4.760 A [62]) and fluorite type LuH\({}_{2}\) (5.033 A [63]) with those of the optimized structures. The DFT lattice constants differed by only 0.17% and 0.28% from experiment, further supporting the choice of our computational parameters. These known ambient-pressure nitrides and hydrides of lutetium provide a basis that could be used to build models of the high-\(T_{c}\) superconducting phase reported in Ref. [31]. In fact, the similarity of the 1 atm lattice parameters of phase **A** (5.0289(4) A) and the (presumably non-superconducting) compound **B** (4.7529 A) with the known dihydride and nitride of lutetium, respectively, coupled with a comparison of the DFT-optimized unit cell parameters of several hypothetical and selected partially-doped versions of the known compounds, were used to assign possible compositions [31]. Phase **A** was tentatively assigned as LuH\({}_{3-\delta}\)N\({}_{\epsilon}\), with partial N substitution onto H sites in the cubic (high-pressure) LuH\({}_{3}\), and phase **B** as LuN\({}_{1-\delta}\)H\({}_{\epsilon}\), an H-doped variant of rock-salt LuN [31]. On the other hand, a recent theoretical manuscript proposed that CaF\({}_{2}\)-type LuH\({}_{2}\) is the parent structure of the superconducting phase, and compound **B** could be the rock-salt LuH structure, which is dynamically stable at 0 GPa [35].
### Newly Predicted Phases
The structures investigated herein were generated using a variety of procedures including _ab initio_ CSP techniques, as well as modification of known phases and compounds predicted using CSP. The advantage of CSP searches is that they can, freed from structural preconceptions, locate the low-lying configurations in a potential energy surface, whose complexity here is heightened by the inclusion of three elements. Such searches can be unconstrained, purely hunting down the lowest-enthalpy configurations given a certain stoichiometry. Constraining a search to structures containing a particular motif will narrow down the possible results, but could also miss out on even lower-enthalpy alternatives that do not align with the constraints.
To that end, a combination of both unconstrained and constrained CSP searches were carried out for the Lu-N-H system using the XtalOpt EA. From the former we can learn about the structural motifs that yield the most stability, and comparison with the latter informs us of the enthalpic cost associated with a specific structural feature. In addition,
Figure 1: Prototype Lu-N and Lu-H structures with _fcc_ Lu lattices: (a) NaCl-type LuN, (b) CaF\({}_{2}\)-type LuH\({}_{2}\), and (c) a high-pressure (hp) phase of LuH\({}_{3}\).
various structures were made "by hand" via modification of known prototypes or CSP generated structures that possess an _fcc_ Lu lattice. As we will soon see, a large structural variety is present amongst the dynamically stable phases that we found, highlighting the difficulties inherent in the computational prediction of metastable phases that could potentially be synthesized.
### Semiconductors:
Unconstrained XtalOpt searches for the lowest-enthalpy structures were performed for the Lu\({}_{3}\)NH\({}_{11}\) and Lu\({}_{4}\)NH\({}_{10,11}\) compositions at both 0 and 3 GPa, as well as for Lu\({}_{4}\)NH\({}_{6}\) and LuNH\({}_{2}\) at 0 GPa. These EA runs located a number of structurally diverse semiconducting phases with PBE band gaps that ranged from 1.1-2.1 eV; some are shown in Figure 2. A few of the predicted structures, including \(P2_{1}m\) LuNH\({}_{2}\), and two Lu\({}_{3}\)NH\({}_{11}\) phases - one with \(P1\) symmetry at 0 GPa and one with \(Cm\) symmetry at 3 GPa - possessed large empty regions. \(P2_{1}m\) LuNH\({}_{2}\) (Figure 2(a)) is, in fact, a fully 2D compound. At 0 GPa \(Pc\) Lu\({}_{4}\)NH\({}_{11}\) (Figure 2(b)) was also identified; it consists of layers of trigonal nets of Lu with H atoms in the resulting hexagonal channels, while the N atoms are arranged in zigzag chains oriented along the \(c\)-axis that weave into the Lu network (into the plane of the page).
Two of the structures from unconstrained searches - \(P1\) Lu\({}_{4}\)NH\({}_{10}\) (at 0 GPa) and a second \(Pc\) Lu\({}_{4}\)NH\({}_{11}\) structure (at 3 GPa; Figure 2(c)) - possessed Lu sublattices in slightly distorted _fcc_ arrangements. In \(P1\) Lu\({}_{4}\)NH\({}_{10}\), the N atoms go into some of the sites octahedrally coordinated by Lu, while some H atoms go into the tetrahedral interstices and the rest are scattered across the unit cell, resulting in the very low symmetry. For \(Pc\)-I Lu\({}_{4}\)NH\({}_{11}\) (Figure 2(c)), the N atoms go instead into the tetrahedral interstices and the hydrogen atoms take the octahedral and most of the remaining tetrahedral interstices. The _fcc_ Lu lattice is also preserved in a semiconducting \(Amm2\) compound with Lu\({}_{4}\)NH\({}_{9}\) stoichiometry (Figure 2(d)), which was produced not by CSP but by modifying the geometry of the high-pressure AlFe\({}_{3}\)-type LuH\({}_{3}\) compound. Here, H again partially occupies both tetrahedral and octahedral interstices in _fcc_ Lu, leaving 1/4 of the tetrahedral interstices empty and replacing 1/4 of the H atoms in the octahedral interstices by N.
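Many of these phases differ only in which interstices of the _fcc_ Lu lattice host N, H, or vacancies. The counting behind this bookkeeping is standard crystallography; the short sketch below (illustrative, not code from this work) enumerates the interstitial sites of a conventional _fcc_ cell in fractional coordinates:

```python
# Interstitial sites of a conventional fcc cell (fractional coordinates).
# A conventional fcc cell holds 4 metal atoms, 8 tetrahedral holes and
# 4 octahedral holes.

# fcc lattice sites (corner + face centers, reduced to the cell)
fcc = [(0, 0, 0), (0.5, 0.5, 0), (0.5, 0, 0.5), (0, 0.5, 0.5)]

# tetrahedral holes: all (1/4, 1/4, 1/4)-type positions
tetra = [(x, y, z)
         for x in (0.25, 0.75)
         for y in (0.25, 0.75)
         for z in (0.25, 0.75)]

# octahedral holes: body center plus one edge midpoint per edge direction
octa = [(0.5, 0.5, 0.5), (0.5, 0, 0), (0, 0.5, 0), (0, 0, 0.5)]

print(len(fcc), len(tetra), len(octa))  # 4 8 4
```

Filling all tetrahedral holes with H gives the CaF\({}_{2}\)-type LuH\({}_{2}\) stoichiometry; filling the octahedral holes as well gives the AlFe\({}_{3}\)-type LuH\({}_{3}\) described above.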
From these results, it is clear that a variety of geometric motifs can be found in the low-enthalpy Lu-N-H compounds, highlighting both the difficulty of homing in on a single structure and the utility of guidance from experimental data. The unit cell volumes of several of the systems identified via unconstrained CSP searches were too large for them to remain candidates for the putative superconducting phase. Importantly, because all of the aforementioned compounds were semiconducting, none of them can be superconductors. Our search continues, with inspiration taken from known experimental phases or CSP searches guided via constraints towards desired structural features - or both.
### Structures from Prototype Modification:
The relatively low pressures needed to stabilize the putative room-temperature superconducting phase highlight the importance of - and inspiration to be gleaned from - examining the ambient- and low-pressure compounds formed between Lu and either N or H. Notably, within most of these, the Lu atoms adopt the _fcc_ arrangement that has been suggested for the superconducting phase.
In addition to the ambient pressure \(B1\) mononitride, LuN (Figure 1(a)), we considered a hypothetical rock-salt monohydride, LuH (Figure 3(a)), and hypothetical zinc-blende (or \(B3\)) LuN and LuH phases (Figure 3(b,c)). To explore the potential of a solid solution between the two rock-salt phases, calculations were carried out on the unit cells shown in Figure 3(d). From these, only LuN\({}_{0.25}\)H\({}_{0.75}\) and LuN\({}_{0.5}\)H\({}_{0.5}\) were dynamically stable at 0 GPa. Similarly, solid solutions of zinc-blende LuN and LuH were optimized (Figure 3(e)) and from these LuH, LuN\({}_{0.5}\)H\({}_{0.5}\), LuN\({}_{0.75}\)H\({}_{0.25}\) and LuN were 0 GPa dynamically stable.
N/H substitution into the fluorite-type, or \(C1\), LuH\({}_{2}\) phase (Figure 1(b)), yielded another set of potential candidates (Figure 3(f)), with LuN\({}_{0.5}\)H\({}_{1.5}\) and LuNH being dynamically stable at 0 GPa. LuNH is a half-Heusler-like compound with equal amounts of N and H occupying the tetrahedral interstices of the Lu lattice. From the dynamically stable phases identified in this section, \(C1\) LuN\({}_{0.5}\)H\({}_{1.5}\) is weakly metallic under PBE-DFT, and thus likely in actuality to be a non-metal. The rest are metallic. Below, we will compare the pressure-volume relation calculated for the phases discussed in this section with the experimental results obtained for compounds **A** and **B**, and discuss the thermodynamic stability, electronic structure and potential for superconductivity in these prototype-based Lu-N-H phases.
Figure 2: Semiconducting Lu-N-H phases found using unconstrained evolutionary crystal structure searches and prototype modification (Lu\({}_{4}\)NH\({}_{9}\)).
### Structures Inspired by Evolutionary Searches:
Figure 4 illustrates a number of 0 GPa dynamically stable, metallic phases with _fcc_ Lu sublattices that were found in a variety of ways. The \(Fd\bar{3}m\) Lu\({}_{4}\)NH\({}_{7}\) phase (Figure 4(a)) was found in an unconstrained evolutionary search performed at 1 GPa. It can be constructed from a modified 2\(\times\)2\(\times\)2 supercell of CaF\({}_{2}\)-type LuH\({}_{2}\), in which 1/8 of the tetrahedral interstices of the Lu lattice are occupied by N rather than H. The distribution of the N atoms throughout the unit cell is in a diamond-like lattice. In this structure, the octahedral interstices of the Lu lattice are left empty. This structure belongs to the same family of phases illustrated in Figure 3(f), representing another N-substituted CaF\({}_{2}\)-type LuH\({}_{2}\) derivative. However, rather than being derived from prototype modification, it was located in an XtalOpt search and then served as a template to construct additional metastable phases. One of these, \(Fd\bar{3}m\) Lu\({}_{2}\)NH\({}_{5}\) (Figure 4(b)), was generated by placing H\({}_{2}\) units into some of the empty octahedral interstices of the Lu lattice, and replacing an additional H atom from Lu\({}_{4}\)NH\({}_{7}\) by N, so that the N atoms now trace out a _bcc_ network within the structure, leaving H\({}_{2}\) molecules lying along only half of the N-N contacts.
Another (incomplete, or prematurely terminated) XtalOpt search at 1 GPa identified \(P\bar{4}3m\) Lu\({}_{4}\)NH\({}_{6}\) (Figure 4(c)), which was chosen for further analysis and modification due to its dynamic stability, and the good match between its simulated XRD pattern and experiment. Like \(Fd\bar{3}m\) Lu\({}_{4}\)NH\({}_{7}\), \(P\bar{4}3m\) Lu\({}_{4}\)NH\({}_{6}\) is a variant of the CaF\({}_{2}\)-type LuH\({}_{2}\) structure, in which 1/8 of the tetrahedral
Figure 4: Crystal structures of various dynamically stable Lu-N-H phases obtained from a combination of CSP searches – some constrained – and subsequent modification. Lu\({}_{2}\)NH\({}_{5}\) and Lu\({}_{2}\)NH\({}_{3}\) had PBE band-gaps of 1.09 and 0.06 eV at 10 kbar.
Figure 3: Illustrations of hypothetical (a) rock-salt (\(B1\)) LuH, and zinc-blende (\(B3\)) (b) LuH and (c) LuN phases. (d) Rock-salt and (e) zinc-blende LuN\({}_{x}\)H\({}_{(1-x)}\), and (f) fluorite (\(C1\)) Lu(N\({}_{x}\)H\({}_{(1-x)}\))\({}_{2}\) solid solution models that were considered.
interstices of the Lu lattice are occupied by N rather than H and an additional 1/8 of the tetrahedral interstices are left empty. Rather than the diamond-like distribution of N atoms found in Lu\({}_{4}\)NH\({}_{7}\), the substituting N atoms and vacancies are arranged in a CsCl-type framework. Filling the vacancies in Lu\({}_{4}\)NH\({}_{6}\) with N atoms yields the \(Pn\bar{3}m\) Lu\({}_{2}\)NH\({}_{3}\) structure (Figure 4(d)).
In the above phases, N atoms were positioned in the tetrahedral interstices of an _fcc_ Lu framework, whereas in Lu\({}_{2}\)NH\({}_{5}\) the octahedral interstices were partially occupied by H\({}_{2}\) molecular units. In \(R3m\) Lu\({}_{4}\)NH\({}_{4}\), which was identified using an XtalOpt search carried out at 6 GPa where the Lu sublattice was constrained to maintain the \(Fm\bar{3}m\) space group, the N atoms are not found within the tetrahedral holes but instead lie on 1/4 of the octahedral holes of the Lu lattice (Figure 4(e)). The N atoms in Lu\({}_{4}\)NH\({}_{4}\) trace out a simple cubic arrangement, with their positions shifted slightly off of the center of the surrounding Lu\({}_{6}\) octahedra, while the tetrahedral interstices are half occupied by H and half are left empty. The remaining H atoms can be grouped into H@H\({}_{6}\) vertex-sharing octahedra.
The unconstrained 0 GPa XtalOpt searches that mainly uncovered the semiconducting compounds shown in Figure 2 also produced the metallic \(R3m\) Lu\({}_{4}\)NH\({}_{6}\) phase (Figure 4(f)). In this phase, the Lu-N and Lu-H interactions become separated, with layers of N@Lu\({}_{6}\) octahedra - in essence, slabs of \(B1\) LuN - interrupting a CaF\({}_{2}\)-type packing of Lu and H. Because this phase was found using an EA search that generated sufficient structures to explore the potential energy landscape, it was 136.1 meV/atom lower in enthalpy than the previously discussed \(P\bar{4}3m\) Lu\({}_{4}\)NH\({}_{6}\), and at 5 GPa this difference increased to 175 meV/atom. Perhaps this structure, with N-rich layers intercalated into a LuH\({}_{2}\) matrix, could hint at a strategy for inducing epitaxial strain on simple LuH\({}_{n}\) structures, thereby altering their electronic and mechanical properties from those of their parent.
## IV Properties: Stability, Equation of States, Electronic Structure, Superconductivity
**Thermodynamics:**
The thermodynamic stability of the new structures was investigated by calculating their formation enthalpies relative to the solid elemental phases as a function of pressure. The reference phases employed were Lu: \(\alpha\)-Sm (0-8 GPa [64]) and the hexagonal phase (9-10 GPa [65]); H\({}_{2}\): \(P6_{3}/m\) phase (0-10 GPa [66]); and N\({}_{2}\): \(\alpha\)-N\({}_{2}\) phase (0-7 GPa [67]) and \(\epsilon\)-N\({}_{2}\) phase (8-10 GPa [68]). Known experimental phases including fluorite type LuH\({}_{2}\), \(B1\) LuN, \(P\bar{3}c1\) LuH\({}_{3}\), and \(P2_{1}3\) NH\({}_{3}\) [69] were also considered.
The 0 GPa convex hull shown in Figure 5 illustrates that only the known structures are thermodynamically stable, and all of the previously discussed Lu-N-H compounds are thermodynamically unstable within the static lattice approximation. Up to 10 GPa only the known phases lie on the hull, while all others lie above it. A structure's thermodynamic stability can be characterized by its distance to the convex hull, which is listed in the Supporting Information, where it is also plotted as a function of pressure (for all compounds, regardless of their dynamic stability). Herein, we employ 70 meV/atom, which corresponds to the 90\({}^{th}\) percentile of the DFT-calculated metastability of all of the known inorganic crystalline materials [70], as a gauge to identify those structures that could potentially be synthesized. At 0 GPa only five structures - all found using our unconstrained crystal structure search - fall in this range. Of these, only \(R3m\) Lu\({}_{4}\)NH\({}_{6}\) was metallic.
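The first ingredient in these hull distances is the formation enthalpy per atom relative to the elemental reference phases. A minimal sketch of that bookkeeping for a ternary Lu\({}_{a}\)N\({}_{b}\)H\({}_{c}\) phase is shown below; all numerical energies are placeholders, not values from this work:

```python
# Formation enthalpy per atom of Lu_a N_b H_c relative to elemental solids:
#   dH = (H_compound - a*H_Lu - (b/2)*H_N2 - (c/2)*H_H2) / (a + b + c)
# A positive value means the phase lies above the elemental tie-line; the
# distance to the full convex hull is then measured against all competing
# phases, not just the elements.

def formation_enthalpy_per_atom(h_compound, n_lu, n_n, n_h, h_lu, h_n2, h_h2):
    """All enthalpies in eV; h_lu per atom, h_n2 and h_h2 per molecule."""
    h_ref = n_lu * h_lu + (n_n / 2) * h_n2 + (n_h / 2) * h_h2
    return (h_compound - h_ref) / (n_lu + n_n + n_h)

# hypothetical example for one formula unit of LuNH (placeholder energies)
dh = formation_enthalpy_per_atom(h_compound=-15.0, n_lu=1, n_n=1, n_h=1,
                                 h_lu=-4.5, h_n2=-16.6, h_h2=-6.8)
print(f"{dh:.3f} eV/atom")  # positive -> unstable w.r.t. the elements
```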
Let us now turn to the metallic phases with _fcc_ Lu lattices. For the rock-salt solid-solution family, phases with hydrogen concentrations ranging from 25-100% lay roughly within 150-250 meV/atom of the hull, with \(B1\) LuH as the lower boundary. For the zinc-blende solid-solution system this range expanded to 100-500 meV/atom, with \(B3\) LuH corresponding to the lower boundary as well. Doping fluorite LuH\({}_{2}\) with nitrogen rapidly destabilizes it: 25% nitrogen content raises the energy to \(\sim\)200 meV/atom above the convex hull, which rises to \(\sim\)550 meV/atom for a 50% composition, and to 1.1 eV/atom for 75% nitrogen content. The 0 GPa ternary convex hull plot shows that most of the low-enthalpy metastable structures are found at the bottom left-hand corner. This is because these are the only regions where full unconstrained CSP searches were performed, and survivorship bias may suggest that stable structures are most likely to appear there. It should be noted, however, that these stoichiometries were chosen because exploratory calculations suggested their volumes were likely to provide the best match
Figure 5: Convex hull at 0 GPa. Only dynamically stable structures within 300 meV/atom above the hull are shown. If multiple structures exist for the same stoichiometry, only the most stable structure is listed. Black dots represent thermodynamically stable phases on the hull, and the colored points are colored by their distance from the hull in meV/atom. Triangles: structures generated via evolutionary search. Boxes: structures from prototype modification. Circles: structures generated by inserting atoms into structures derived from EA searches.
with the experimental equation of states of compound **A**. This will be explored shortly below.
Assuming linear behavior of the enthalpy-pressure relation, we were able to estimate the pressure where the considered phases may become thermodynamically stable if the slope of the distance from the hull versus pressure is negative. This estimate does not take into account the dynamic stability, nor does it include temperature or effects arising from the zero point motion of the nuclei. The results suggest that \(B1\) LuN-LuH mixtures become favored at high pressures: LuH by \(\sim\)30 GPa, \(\sim\)50 GPa for LuN\({}_{0.25}\)H\({}_{0.75}\), about 70 GPa for LuN\({}_{0.5}\)H\({}_{0.5}\), and 80 GPa for LuN\({}_{0.75}\)H\({}_{0.25}\). The higher the hydrogen concentration, the lower the predicted stabilization pressure, with a lower boundary of 30 GPa for \(B1\) LuH. The slope of the other two solid solutions considered, \(B3\) and \(C1\) type, is positive, suggesting they will never be stabilized. Two further phases that could potentially be stabilized within the megabar range are \(R3m\) Lu\({}_{4}\)NH\({}_{6}\) (16 GPa) and \(P1\) Lu\({}_{4}\)NH\({}_{10}\) (34 GPa) because they are very close to the hull. The rest of the structures either possess a positive slope, or cannot be stabilized until at least 140 GPa.
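The linear extrapolation described here can be written in a few lines. The sketch below uses illustrative input points, not data from this work:

```python
# Linear extrapolation of a phase's distance above the convex hull versus
# pressure, to estimate the pressure at which it would reach the hull
# (assuming, as in the text, that the enthalpy-pressure relation stays linear).

def stabilization_pressure(p1, d1, p2, d2):
    """Pressure (GPa) where the hull distance (meV/atom) extrapolates to zero.

    Returns None when the slope is non-negative, i.e. the phase is never
    stabilized under this extrapolation.
    """
    slope = (d2 - d1) / (p2 - p1)
    if slope >= 0:
        return None
    return p1 - d1 / slope

# e.g. 50 meV/atom at 0 GPa falling to 30 meV/atom at 10 GPa -> hull at 25 GPa
print(stabilization_pressure(0.0, 50.0, 10.0, 30.0))  # 25.0
```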
**Equation of State and X-ray Diffraction Patterns:**
One of the key experimental observables guiding our choice of stoichiometries was the pressure-volume relation, or equation of state (EoS), of the majority phase presented in Reference [31], which was assigned tentatively as an \(Fm\bar{3}m\) structure with a LuH\({}_{3-\delta}\)N\({}_{\epsilon}\) stoichiometry (or compound **A**). Above \(\sim\)30 kbar a first-order structural phase transition with a \(\sim\)0.3% volume discontinuity was observed, suggesting that the metal lattice of the resulting non-superconducting phase distorted to the \(Immm\) space group. In Figure 6 we plot the EoS fits from Reference [31] for the majority phase, which were obtained for two pressure ranges. Choosing stoichiometries whose volumes matched well with experiment was initially non-intuitive because the effective radius of the metal atom changes substantially with its oxidation state, being largest for neutral Lu and smallest for Lu\({}^{3+}\).
From all of the phases we considered, both fluorite LuH\({}_{2}\) and zinc-blende LuH presented the best match with the experimental data below 40 kbar. At higher pressures, however, the volume of \(B3\) LuH was computed to become progressively smaller than the measured value for compound **A**. The good agreement with \(C1\) LuH\({}_{2}\), on the other hand, remained up to at least 80 GPa. At 0 GPa \(B3\) LuH was slightly larger than \(C1\) LuH\({}_{2}\), in-line with the general notion that the effective radius of Lu\({}^{+}\) is larger than that of Lu\({}^{2+}\). However, the volume of \(B3\) LuH shrinks much faster (with a slope that is similar to that of \(B1\) LuH) with increasing pressure as compared to that of \(C1\) LuH\({}_{2}\), while the volume of cubic LuH\({}_{3}\) shrinks at an even slower rate. Thus, the compressibility in these compounds appears to be dependent upon the repulsion exhibited between the ionic cores, with a larger number of H\({}^{-}\) anions resulting in a higher resistance to compression. Due to its larger ionic radius, N\({}^{3-}\) is less compressible than H\({}^{-}\). Since the computed EoS of cubic LuH\({}_{3}\) has a smaller slope than the EoS derived from experiment, and introduction of nitrogen will decrease the slope further, it could be expected that a compound with the LuH\({}_{3-\delta}\)N\({}_{e}\) stoichiometry that was proposed for compound **A** would not have a slope that coincides with the experimentally derived EoS.
Because the calculated EoS of fluorite LuH\({}_{2}\) across the whole pressure range yielded the best fit with the experimentally reported EoS, we employed the quasiharmonic approximation to obtain a temperature-dependent EoS. Fitting the resulting EoS using the Birch-Murnaghan method at 300 K yielded \(V_{0}\) (reference volume at \(P=0\)) of 31.85 Å\({}^{3}\), \(K_{0}\) (bulk modulus at \(P=0\)) of 922.8 kbar, and \(K_{0}^{\prime}\) (\(dK_{0}/dP\) at \(P=0\), dimensionless) of 3.7 (data for 0 and 100 K can be found in the SI). This compares well with the values presented in Ref. [31] obtained using fits to data collected below (above) 40 kbar of 31.74 (31.6) Å\({}^{3}\), 886.4 (900) kbar, and 4, respectively.
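For reference, the third-order Birch-Murnaghan form used in such fits gives the pressure directly from the fitted parameters. Below it is evaluated with the 300 K values quoted above; this is a sketch of the standard formula, not code from this work:

```python
# Third-order Birch-Murnaghan equation of state:
#   P(V) = (3/2) K0 [eta^(7/3) - eta^(5/3)] {1 + (3/4)(K0' - 4)[eta^(2/3) - 1]},
# with eta = V0/V. P carries the same unit as K0.

def birch_murnaghan_pressure(v, v0, k0, k0p):
    eta = v0 / v
    return 1.5 * k0 * (eta**(7/3) - eta**(5/3)) * (
        1 + 0.75 * (k0p - 4) * (eta**(2/3) - 1))

# 300 K fit for fluorite LuH2 quoted in the text:
v0, k0, k0p = 31.85, 922.8, 3.7   # Angstrom^3, kbar, dimensionless

print(birch_murnaghan_pressure(v0, v0, k0, k0p))            # 0.0 at V = V0
print(round(birch_murnaghan_pressure(0.97 * v0, v0, k0, k0p), 1))  # ~29.7 kbar at 3% compression
```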
To determine if the structures discussed here could yield XRD patterns similar to those observed in experiment, their simulated 0 GPa XRD patterns were generated, as was an XRD pattern for a model \(Fm\bar{3}m\) Lu cell whose lattice constant (\(a=5.029\) Å) was in-line with the refined unit cell suggested for superconducting compound **A** at 0 GPa (plots are provided in the Supporting Information). The PyXtal XRD Similarity tool [71] was used to assess the similarity between the simulated powder XRD patterns of the proposed structures and that of the model \(Fm\bar{3}m\) Lu cell. The strongest matches came from the experimental phases CaF\({}_{2}\)-type LuH\({}_{2}\) (0.9848) and AlFe\({}_{3}\)-type LuH\({}_{3}\) (0.9316), and from ZnS-type LuH (0.9962) - in-line with the volume of \(B3\) LuH adhering closely to the experimental EoS near 0 GPa. Of the N/H-doped NaCl, ZnS, and CaF\({}_{2}\)-type structures, the best XRD matches could be attributed to the ZnS-based phases, with the NaCl-based phases comparing most poorly. Of the phases directly obtained from XtalOpt searches or based on modifications of XtalOpt results, \(Fd\bar{3}m\) Lu\({}_{4}\)NH\({}_{7}\) and \(R3m\) Lu\({}_{4}\)NH\({}_{4}\) provided the best matches, although their enthalpies place them well above the convex hull in the pressure range of interest.
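As a sketch of where the strongest reflections of such an _fcc_ cell would fall, Bragg's law can be evaluated directly for the refined lattice constant \(a=5.029\) Å. The Cu K\(\alpha\) wavelength is our assumption here and is not specified in the text:

```python
import math

# Bragg angles 2*theta for a cubic cell: d_hkl = a / sqrt(h^2 + k^2 + l^2),
# sin(theta) = lambda / (2 d). For an fcc lattice, reflections with h, k, l
# all even or all odd are allowed.

def two_theta(h, k, l, a=5.029, wavelength=1.5406):  # Angstrom; Cu K-alpha assumed
    d = a / math.sqrt(h**2 + k**2 + l**2)
    return 2 * math.degrees(math.asin(wavelength / (2 * d)))

for hkl in [(1, 1, 1), (2, 0, 0), (2, 2, 0)]:
    print(hkl, round(two_theta(*hkl), 2))  # (111) falls near 2*theta = 30.8 deg
```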
**Electronic Structure and Superconductivity:**
Superconductivity has been measured in elemental Lu at pressures above \(\sim\)100 kbar, with \(T_{c}\) rising to \(\sim\)0.6 K near 160 kbar [72]. Adding hydrogen at mild pressures does little to improve the superconducting properties: superconductivity in LuH\({}_{2}\) was not observed down to 1.5 K at pressures as high as 7.7 GPa [33]. These recent experimental results are in agreement with our computed values at 10 kbar, obtained via the Allen-Dynes modified McMillan equation, which is thought to be appropriate for phonon-mediated superconductors with \(\lambda\lesssim\)1-1.5. As shown in Table 1, we found that the \(T_{c}\) of fluorite-type LuH\({}_{2}\) was \(\sim\)0.1 K, owing to a small \(\omega_{\text{ln}}\) combined with a modest \(\lambda=0.29\).
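The Allen-Dynes modified McMillan equation is simple enough to evaluate directly; plugging in the \(\lambda\) and \(\omega_{\text{ln}}\) values quoted in Table 1 (with \(\mu^{*}=0.1\)) roughly reproduces the listed \(T_{c}\)s. A minimal sketch:

```python
import math

# Allen-Dynes modified McMillan equation:
#   Tc = (omega_ln / 1.2) * exp[ -1.04 (1 + lam) / (lam - mu* (1 + 0.62 lam)) ]
# with omega_ln in kelvin; valid only when the denominator is positive.

def mcmillan_tc(lam, omega_ln_K, mu_star=0.1):
    denom = lam - mu_star * (1 + 0.62 * lam)
    if denom <= 0:
        return 0.0  # coupling too weak relative to the Coulomb pseudopotential
    return (omega_ln_K / 1.2) * math.exp(-1.04 * (1 + lam) / denom)

print(round(mcmillan_tc(0.29, 302), 2))  # CaF2-type LuH2: ~0.1 K
print(round(mcmillan_tc(0.78, 377), 1))  # CaF2-type LuNH: ~16.8 K
```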
To study the potential for superconductivity in ternary Lu-N-H compounds we performed EPC calculations for the previously discussed metallic phases that were dynamically stable at 10 kbar - the pressure at which the maximum \(T_{c}\) was observed in Reference [31]. Table 1 shows that though the \(T_{c}\)s of most of these phases (with the exception of LuN\({}_{0.5}\)H\({}_{1.5}\)) were predicted to surpass that of \(C1\) LuH\({}_{2}\), they do not even reach the boiling point of liquid nitrogen, in agreement with recent theoretical calculations that did not find any Lu-N-H phases with room temperature superconductivity [37].
The highest \(T_{c}\) compound we found, fluorite type LuNH, can be derived from LuH\({}_{2}\) by replacing 50% of the hydrogen atoms by nitrogen (Figure 3(f)). This chemical substitution dramatically increased the EPC, placing it in the realm of the ambient pressure conventional superconductor with the highest confirmed \(T_{c}\), MgB\({}_{2}\). However, the larger \(\lambda\) of 0.78 was attained at the cost of thermodynamic stability: while \(C1\) LuH\({}_{2}\) fell on the 10 kbar hull, LuNH was 564 meV/atom above the hull, suggesting it could never be made. The \(T_{c}\) of LuNH (\(\sim\)17 K) was estimated to be a factor of two smaller than that of MgB\({}_{2}\) with its strong covalent B-B bonds, whose motions, with frequencies around 600 cm\({}^{-1}\), yield an \(\omega_{\text{ln}}\) of 504 cm\({}^{-1}\) (or 725 K) [73]. As we shall soon see, in LuNH the EPC is relatively evenly distributed from the high frequency motions of the hydrogen vibrations to the very low frequency acoustic modes. Their \(\alpha^{2}F\)-weighted logarithmic average yields an \(\omega_{\text{ln}}\) of 257.2 cm\({}^{-1}\) (or \(\sim\)370 K). Numerical solution of the Eliashberg equations raised the \(T_{c}\) of LuNH only slightly to \(\sim\)18 K.
Let us examine the electronic structure of CaF\({}_{2}\)-type LuNH and its contributions to the EPC to better understand how these factors influence the \(T_{c}\). Replacing H by N in LuH\({}_{2}\) increases the density of states (DOS) at the Fermi level (\(E_{F}\)) by around 50% from 0.019 states/eV/Å\({}^{3}\) to 0.027 states/eV/Å\({}^{3}\), concomitantly increasing the \(T_{c}\). As shown in Figure 7, the major contributions to the DOS at \(E_{F}\) are the H \(1s\) and N \(2p\) states, with a negligible amount from the metal, indicative of a +3 oxidation state. The primitive cell of LuNH contains one formula unit, and as a result its conduction band is half-filled. The reaction LuNH\(+\frac{1}{2}\text{H}_{2}\rightarrow\text{LuN}+\text{H}_{2}\) is exothermic by 400 meV/atom; we would therefore expect the products of this reaction to be found in a CSP search for unit cells whose sizes approach infinity.
Pivoting to the phonon band structure in Figure 8, we observe that the large mass differences between the three elements split the bands neatly into three regions. The vibrational modes of lutetium lie mainly below 140 cm\({}^{-1}\) (the acoustic region), those of nitrogen between 380-470 cm\({}^{-1}\), and those of hydrogen above 660 cm\({}^{-1}\). It should be noted that due to the
\begin{table}
\begin{tabular}{l|c|c|c}
\hline \hline
Structure & \(\lambda\) & \(\omega_{\text{ln}}\) (K) & \(T_{c}\) (K) \\
\hline
CaF\({}_{2}\)-type LuH\({}_{2}\) & 0.29 & 302 & 0.09 \\
CaF\({}_{2}\)-type LuNH & 0.78 & 377 & 16.9 (18.3) \\
CaF\({}_{2}\)-type LuN\({}_{0.5}\)H\({}_{1.5}\) & 0.11 & 680 & 0.0 \\
\(R3m\) Lu\({}_{4}\)NH\({}_{4}\) & 0.64 & 151 & 4.2 \\
\(P\bar{4}3m\) Lu\({}_{4}\)NH\({}_{6}\) & 0.48 & 291 & 2.9 \\
\(Fd\bar{3}m\) Lu\({}_{4}\)NH\({}_{7}\) & 0.47 & 435 & 4.2 \\
\(R3m\) Lu\({}_{4}\)NH\({}_{6}\) & 0.29 & 306 & 0.12 \\
\hline \hline
\end{tabular}
\end{table}
Table 1: The electron phonon coupling, \(\lambda\), logarithmic average frequency, \(\omega_{\text{ln}}\), and superconducting critical temperature, \(T_{c}\), estimated using the Allen-Dynes modified McMillan equation with \(\mu^{\star}=0.1\) at 10 kbar for select Lu-N-H compounds. For LuNH the numerical solution of the Eliashberg equations was employed to obtain the value in brackets.
Figure 6: The DFT calculated pressure-volume relationship or equation of state (EoS) of the Lu-N-H phases considered in this study. The colored squares correspond to the specified structures and the open diamond, triangle and circles to various structures comprising the \(B1\), \(B3\) and \(C1\) solid solution series (see Figure 3), except for \(C1\) LuH\({}_{2}\) and \(B3\) LuH. The black lines represent the EoS fitted using the Birch-Murnaghan method for compound **A** using data from the pressure ranges \(0<P<40\) kbar (solid) and \(P>42.7\) kbar (dashed) [31].
extremely heavy mass of lutetium versus nitrogen and hydrogen (175 vs. 14 and 1 amu), lutetium moves roughly ten times slower than the hydrogen, and four times slower than the nitrogen. As a result, the atomic displacements of the nitrogen and hydrogen atoms along the low frequency acoustic modes are still significant.
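The factor-of-ten and factor-of-four statements follow from the harmonic \(\omega\propto m^{-1/2}\) scaling. A back-of-the-envelope check, assuming equal force constants for all three species:

```python
import math

# For equal force constants, vibrational frequency (and velocity at fixed
# kinetic energy) scales as 1/sqrt(mass), so the speed ratio between two
# species is sqrt(m_heavy / m_light).

m_lu, m_n, m_h = 175.0, 14.0, 1.0  # approximate atomic masses in amu

print(round(math.sqrt(m_lu / m_h), 1))  # Lu vs H: roughly an order of magnitude slower
print(round(math.sqrt(m_lu / m_n), 1))  # Lu vs N: roughly a factor of four slower
```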
Because of the separation of these vibrational modes, their contributions to the total EPC can be obtained: the acoustic modes contribute 41%, the nitrogen-active region 23%, and the hydrogen-active region 35%. The largest Lu-based contribution originates from the lower two acoustic phonon branches around the middle of the \(\Gamma\)-\(K\) path, and also around the \(L\) point. Visualization of these motions shows they result in the formation of N-Lu-H molecular fragments and a hexagonal-like Lu lattice. In the nitrogen-active region the largest EPC is found at the \(\Gamma\) point, resulting from the nitrogen atoms approaching lutetium to form N-H motifs. In the hydrogen-active region, the bands exhibit moderate EPC throughout, especially at several points where the modes are softened; visualization shows that these correspond to the motion of hydrogen atoms closer to lutetium to form H-Lu units.
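Such a mode-resolved decomposition amounts to splitting the Eliashberg integral \(\lambda = 2\int \alpha^{2}F(\omega)/\omega\,d\omega\) over frequency windows. The toy version below uses a synthetic, flat \(\alpha^{2}F\) (not the computed spectral function) and illustrates why the \(1/\omega\) weight lets the low-frequency Lu modes contribute so strongly:

```python
# lambda = 2 * integral( alpha2F(omega) / omega d_omega ); a mode-resolved
# decomposition just restricts the integral to frequency windows.

def lambda_in_window(omegas, a2f, lo, hi):
    """Trapezoidal 2*integral(a2f/omega) over intervals inside [lo, hi]."""
    total = 0.0
    for (w1, f1), (w2, f2) in zip(zip(omegas, a2f), zip(omegas[1:], a2f[1:])):
        if lo <= w1 and w2 <= hi:
            total += (f1 / w1 + f2 / w2) / 2 * (w2 - w1)
    return 2 * total

omegas = list(range(20, 1001, 20))             # frequency grid in cm^-1
a2f = [0.05 for _ in omegas]                   # flat alpha2F, purely illustrative
windows = [(0, 140), (380, 470), (660, 1000)]  # Lu-, N- and H-active regions
parts = [lambda_in_window(omegas, a2f, lo, hi) for lo, hi in windows]
print([round(p, 3) for p in parts])  # Lu window dominates despite equal a2f
```

Even with identical \(\alpha^{2}F\) in all three windows, the acoustic region contributes most, mirroring the 41% / 23% / 35% ordering found for LuNH.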
The \(\omega_{\text{ln}}\) of our Lu-N-H compounds ranged from \(\sim\)150 K (\(R3m\) Lu\({}_{4}\)NH\({}_{4}\)) to 680 K (CaF\({}_{2}\)-type LuN\({}_{0.5}\)H\({}_{1.5}\)). The absence of high frequency vibrations in these compounds, resulting from the low pressure and the absence of covalent bonds, suggests that higher \(\omega_{\text{ln}}\) are unlikely to be found in other Lu-N-H compounds at 10 kbar with _fcc_ Lu lattices. Generally speaking, the \(\omega_{\text{ln}}\) calculated for hydrogen and the high-\(T_{c}\) hydrides at extreme pressures is significantly higher, with values of 1200-1800 K not being uncommon. For those hydrides where comparable \(\omega_{\text{ln}}\) values have been calculated, room temperature superconductivity has only been predicted in phases with a very large EPC (e.g. \(Fmmm\) ThH\({}_{18}\) at 400 GPa, \(\omega_{\text{ln}}=\)568 K, \(\lambda=\)3.39, \(T_{c}=\)296 K) [74]. Therefore, we speculate that similar EPC constants are required for a Lu-N-H compound to be superconducting near room temperature, provided the mechanism is phonon-mediated.
## V Conclusions
Density functional theory calculations were performed to explore Lu-N-H containing compounds that could be (meta)stable in a pressure range of about 0-100 kbar (10 GPa). The computations were biased towards systems where the Lu atoms adopt an _fcc_ arrangement, because it was recently suggested that a compound with this structural feature could be responsible for the near-ambient superconducting critical temperature, \(T_{c}\), of 294 K reported at 10 kbar [31]. Based on the results of our calculations we conclude that:
* The Lu-N-H potential energy landscape, within the static lattice approximation and neglecting quantum nuclear and anharmonic effects, contains many local minima with _fcc_ Lu lattices. Other geometries, not explicitly considered here, could be generated via altering the N/H ratio of the solid-solution prototypes we discuss. Which of these structures are synthesizable, and which are kinetically and/or thermally stable and relatively chemically inert is currently unknown.
* None of the ternary compounds studied here are thermodynamically stable (e.g. they do not lie on the convex hull) up to 10 GPa at 0 K. Only the known binaries, LuH\({}_{2}\), LuH\({}_{3}\), LuN and NH\({}_{3}\), comprise the convex hull. Thermal effects and the role of configurational entropy on the thermodynamic stability are not known.
* From all of the phases considered here, the one whose equation of state (EoS) had the closest match with the fits to experimental data obtained for compound **A** was fluorite-type LuH\({}_{2}\), with errors smaller than 0.3% up to 80 kbar. EoS calculations on model compounds suggest that the previously proposed formula for compound **A**, LuH\({}_{3-\delta}\)N\({}_{\epsilon}\), would not have the same slope as that observed experimentally.
* XRD similarity indices for the compounds studied here compared to a pure _fcc_ Lu lattice with the experimental lattice constant indicated a fair match at 0 GPa for the binaries LuH\({}_{2}\), LuH\({}_{3}\), and ZnS-type LuH, as well as for N-substituted ZnS-type LuH, \(Fd\bar{3}m\) Lu\({}_{4}\)NH\({}_{7}\) and \(R3m\) Lu\({}_{4}\)NH\({}_{4}\).
Figure 8: Phonon dispersion curve and projected EPC constant (\(\lambda_{\mathbf{q}\nu}\)). Blue color indicates \(\lambda_{\mathbf{q}\nu}\) approaches 0, and red indicates \(\lambda_{\mathbf{q}\nu}\) approaches the maximum value of 0.36. The atom projected phonon density of states is illustrated, along with the total \(\lambda\) and the integral of \(\lambda(\omega)\) separated into regions comprising the Lu, N and H-based modes.
Figure 7: PBE band structure and projected densities of states of fluorite-type LuNH at 10 kbar.
* Many, though not all, of the investigated phases exhibited metallic behavior, and their density of states at the Fermi level (DOS at \(E_{\text{F}}\)) varied greatly. For example, the main contributions to the DOS at \(E_{\text{F}}\) for \(B1\) and \(B3\) LuH were the Lu \(d\) states; for LuH\({}_{2}\) the DOS at \(E_{\text{F}}\) was very small and mainly lutetium \(p\)-like, and for LuNH the main components arose from hydrogen \(s\) and nitrogen \(p\) states.
* The logarithmic average frequency, \(\omega_{\text{ln}}\), of the Lu-N-H compounds whose superconducting properties we studied ranged from \(\sim\)150-680 K, and the electron phonon coupling constants, \(\lambda\), varied between 0.1-0.8. Assuming conventional superconductivity, the \(T_{\text{c}}\)s of such compounds can be estimated using the Allen-Dynes modified McMillan equation. Under this approximation, and with \(\mu^{*}=0.1\) at 10 kbar, we obtain a \(T_{c}\) of 0.09 K for fluorite LuH\({}_{2}\). The highest \(T_{c}\) compound we found was fluorite-type LuNH with a \(T_{c}\) of 17 K.
Though we have not uncovered a Lu-N-H-containing phase with a superconducting critical temperature near what was recently reported in Reference [31], we believe our computations shed light on the structures that contain these elemental combinations at mild pressures. Future work will ascertain whether our choice of standard DFT parameters (gradient-corrected exchange functional, neglect of spin polarization and strong electron correlations, and inclusion of \(f\) electrons in the core) impacts our conclusions. Our work also highlights the complexity inherent in the computational search for phases that may be metastable with desired structural and property characteristics in multi-element _ab initio_ (or even machine-learning-assisted) crystal structure prediction.
## VI Acknowledgments
We are grateful to R. Dias for sharing experimental data, as well as G.W. Collins and R.J. Hemley for useful discussions. K.H. acknowledges the Chicago/DOE Alliance Center under Cooperative Agreement Grant No. DE-NA0003975, and N.G. the U.S. National Science Foundation (DMR-2132491). This material is based upon work supported by the U.S. Department of Energy, Office of Science, Fusion Energy Sciences funding the award entitled High Energy Density Quantum Matter under Award Number DE-SC0020340. Computations were carried out at the Center for Computational Research at the University at Buffalo ([http://hdl.handle.net/10477/79221](http://hdl.handle.net/10477/79221)).

# Efficient On-device Training via Gradient Filtering

Yuedong Yang, Guihong Li, Radu Marculescu

Published: 2023-01-01. arXiv: [2301.00330v2](http://arxiv.org/abs/2301.00330v2)
###### Abstract
Despite its importance for federated learning, continuous learning and many other applications, on-device training remains an open problem for EdgeAI. The problem stems from the large number of operations (e.g., floating point multiplications and additions) and memory consumption required during training by the back-propagation algorithm. Consequently, in this paper, we propose a new gradient filtering approach which enables on-device CNN model training. More precisely, our approach creates a special structure with fewer unique elements in the gradient map, thus significantly reducing the computational complexity and memory consumption of back propagation during training. Extensive experiments on image classification and semantic segmentation with multiple CNN models (e.g., MobileNet, DeepLabV3, UPerNet) and devices (e.g., Raspberry Pi and Jetson Nano) demonstrate the effectiveness and wide applicability of our approach. For example, compared to SOTA, we achieve up to 19\(\times\) speedup and 77.1% memory savings on ImageNet classification with only 0.1% accuracy loss. Finally, our method is easy to implement and deploy; over 20\(\times\) speedup and 90% energy savings have been observed compared to highly optimized baselines in MKLDNN and CUDNN on NVIDIA Jetson Nano. Consequently, our approach opens up a new direction of research with a huge potential for on-device training.1
Footnote 1: Code: [https://github.com/SLDGroup/GradientFilter-CVPR23](https://github.com/SLDGroup/GradientFilter-CVPR23)
## 1 Introduction
Existing approaches for on-device training are neither efficient nor practical enough to satisfy the resource constraints of edge devices (Figure 1). This is because these methods do not properly address a fundamental problem in on-device training, namely _the computational and memory complexity of the back-propagation (BP) algorithm_. More precisely, although architecture modification [6] and layer freezing [18, 20] can help skip BP for some layers, the complexity for the remaining layers stays high. Gradient quantization [4, 7] can reduce the cost of arithmetic operations but cannot reduce the number of operations (_e.g._, multiplications); thus, the speedup in training remains limited. Moreover, gradient quantization is not supported by existing deep-learning frameworks (e.g., CUDNN [9], MKLDNN [1], PyTorch [25] and Tensorflow [2]). To enable on-device training, two important questions must be addressed:
* _How can we reduce the computational complexity of back propagation through the convolution layers?_
* _How can we reduce the data required by the gradient computation during back propagation?_
In this paper, we propose _gradient filtering_, a new research direction, to address both questions. By addressing the first question, we reduce the computational complexity of training; by addressing the second question, we reduce the memory consumption.
In general, the gradient propagation through a convolution layer involves multiplying the gradient of the output variable with a Jacobian matrix constructed with data from either the input feature map or the convolution kernel. We aim at simplifying this process with the new gradient filtering approach proposed in Section 3. Intuitively, if the gradient map w.r.t. the output has the same value for all entries, then the computation-intensive matrix multiplication can be greatly simplified, and the data required to construct the Jacobian matrix can be significantly reduced. Thus, our gradient filtering can approximate the gradient w.r.t. the output by creating a new gradient map with a special (_i.e._, spatial) structure and fewer unique elements. By doing so, the gradient propagation through the convolution layers reduces to cheaper operations, while the data required (hence memory) for the forward propagation also lessens. Through this filtering process, we trade off the gradient precision against the computation complexity during BP. We note that gradient filtering does not necessarily lead to a worse precision, _i.e._, models sometimes perform better with filtered gradients when compared against models trained with vanilla BP.
In summary, our contributions are as follows:
* We propose _gradient filtering_, which reduces the computation and memory required for BP by more than two orders of magnitude compared to the exact gradient calculation.
* We provide a rigorous error analysis which shows that the errors introduced by the gradient filtering have only a limited influence on model accuracy.
* Our experiments with multiple CNN models and computer vision tasks show that we can train a neural network with significantly less computation and memory costs, with only a marginal accuracy loss compared to baseline methods. Side-by-side comparisons against other training acceleration techniques also suggest the effectiveness of our method.
* Our method is easy to deploy with highly optimized deep learning frameworks (_e.g._, MKLDNN [1] and CUDNN [9]). Evaluations on resource-constrained edge (Raspberry Pi and Jetson Nano) and high-performance devices (CPU/GPU) show that our method is highly suitable for real life deployment.
The paper is organized as follows. Section 2 reviews relevant work. Section 3 presents our method in detail. Section 4 discusses error analysis, computation and memory consumption. Experimental results are presented in Section 5. Finally, Section 6 summarizes our main contributions.
## 2 Related Work
**Architecture Modification:** Authors of [6] propose to attach small branches to the original neural network. During training, the attached branches and biases in the original model are updated. Though memory consumption is reduced, updating these branches still needs gradient propagation through the entire network; moreover, a large computational overhead for inference is introduced.
**Layer Freezing:** Authors of [18, 20] propose to only train parts of the model. [18] makes layer selection based on layer importance metrics, while [20] uses evolutionary search. However, the layers selected by all these methods are typically computationally heavy layers (_e.g._, the last few layers in ResNet [14]) which consume most of the resources. Thus, the speedup achieved by these approaches is limited.
**Gradient Quantization:** The methods in [3, 5] quantize the gradient after back-propagation, which means they cannot accelerate training on a single device. Work in [4, 7, 15, 17, 28, 33, 29] accelerates training by reducing the cost of every arithmetic operation. However, these methods do not reduce the number of operations, which is typically huge for SOTA CNNs, so their achievable speedup is limited. Also, none of these methods is supported by the popular deep learning frameworks [1, 2, 9, 25].
In contrast to the prior work, our method opens up a new research direction. More precisely, we reduce the number of computations and memory consumption required for training a single layer via gradient filtering. Thus, our method can be combined with any of the methods mentioned above. For example, in Section H in the Supplementary, we illustrate how our method can work together with the gradient quantization methods to enable a higher speedup.
## 3 Proposed Method
In this section, we introduce our gradient filtering approach to accelerate BP. To this end, we target the most computation and memory heavy operation, _i.e._, convolution (Figure 2(a)). Table 1 lists some symbols we use.
\begin{table}
\begin{tabular}{c|c} \hline \hline \(C_{x}\) & Number of channels of \(x\) \\ \hline \(W_{x},H_{x}\) & Width and height of \(x\) \\ \hline \(\theta\) & Convolution kernel \\ \hline \(\theta^{\prime}\) & Rotated \(\theta\), _i.e._, \(\theta^{\prime}=\text{rot180}(\theta)\) \\ \hline \(r\) & Patch size (\(r\times r\) ) \\ \hline \(g_{x},g_{y},g_{\theta}\) & Gradients w.r.t. \(x,y,\theta\) \\ \hline \(\tilde{g}_{y}\) & Approximated gradient \(g_{y}\) \\ \hline \(\tilde{x},\tilde{\theta}^{\prime}\) & Sum of \(x\) and \(\theta^{\prime}\) over spatial dimensions (height and width) \\ \hline \(x[n,c_{i},h,w]\) & Element for feature map \(x\) \\ & at batch \(n\), channel \(c_{i}\), pixel \((h,w)\) \\ \hline \(\theta[c_{o},c_{i},u,v]\) & Element for convolution kernel \(\theta\) \\ & at output channel \(c_{o}\), input channel \(c_{i}\), \\ & position \((u,v)\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Table of symbols we use.
Figure 1: Matrix of orthogonal directions for on-device training. “Arch” is short for “architecture”. Our approach opens up a new direction of research for on-device training for EdgeAI.
### Problem Setup
The computations for both forward and backward paths are shown in Figure 2(a). For the standard (vanilla) approach (upper Figure 2(a)), starting with input \(x\), the forward propagation convolves the input feature map \(x\) with kernel \(\theta\) and returns output \(y\), which is further processed by the other layers in the neural network (dotted arrow) until the loss value \(l\) is calculated. As shown in Figure 2(a), the BP of the convolution layer starts with the gradient map w.r.t. output \(y\) (\(g_{y}\)). The gradient w.r.t. input (\(g_{x}\)) is calculated by convolving \(g_{y}\) with the _rotated_ convolution kernel \(\theta^{\prime}\), _i.e._, \(g_{x}=g_{y}\ast\mathrm{rot180}(\theta)=g_{y}\ast\theta^{\prime}\), where \(\ast\) denotes convolution. The gradient w.r.t. the convolution kernel, namely \(g_{\theta}\), is calculated with the Frobenius inner product [16] between \(x\) and \(g_{y}\) at every kernel offset, _i.e._, \(g_{\theta}=x\star g_{y}\), where \(\star\) denotes cross-correlation.
The lower half of Figure 2(a) shows our method, where several changes are made: We introduce the gradient filter "\(\bigodot\)" after \(g_{y}\) to generate the approximate gradient for BP. Also, instead of using the accurate \(x\) and \(\theta^{\prime}\) values for gradient computation, we sum over spatial dimensions (height and width dimensions), _i.e._, \(\tilde{x}\) and \(\tilde{\theta^{\prime}}\), respectively. Finally, the convolution layer now multiplies the approximate gradient \(\tilde{g}_{y}\) with spatial kernel \(\tilde{\theta^{\prime}}\) instead of convolving with it to calculate \(\tilde{g}_{x}\). Figure 2(b) shows an example of gradient propagation with our gradient filter.
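As a sanity check on the vanilla identities above, the kernel gradient can be verified numerically. The NumPy sketch below (single channel, valid-mode correlation; the helper name is ours) computes the sliding Frobenius inner product between \(x\) and \(g_{y}\) and compares it against finite differences of the scalar loss \(l=\sum g_{y}\odot y\):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((5, 5))        # single-channel input feature map
theta = rng.standard_normal((3, 3))    # single-channel kernel

def xcorr_valid(a, b):
    """Valid-mode cross-correlation: slide b over a (the usual CNN 'conv')."""
    kh, kw = b.shape
    H, W = a.shape[0] - kh + 1, a.shape[1] - kw + 1
    out = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(a[i:i + kh, j:j + kw] * b)
    return out

y = xcorr_valid(x, theta)
g_y = rng.standard_normal(y.shape)     # upstream gradient w.r.t. y

# Frobenius inner product of x with g_y at every kernel offset (u, v):
# g_theta[u, v] = sum_{i,j} x[i+u, j+v] * g_y[i, j]
g_theta = xcorr_valid(x, g_y)

# finite-difference check on the scalar loss l = sum(g_y * y)
eps, num = 1e-6, np.zeros_like(theta)
for u in range(theta.shape[0]):
    for v in range(theta.shape[1]):
        t = theta.copy()
        t[u, v] += eps
        num[u, v] = (np.sum(g_y * xcorr_valid(x, t)) - np.sum(g_y * y)) / eps
assert np.allclose(g_theta, num, atol=1e-4)
```

Since the loss is linear in \(\theta\), the finite-difference quotient agrees with the analytic gradient up to floating-point rounding.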
### Preliminary Analysis
Consider the vanilla BP for convolution in Figure 2(a). Equation (1) shows the number of computations (#FLOPs) required to calculate \(g_{x}\) given \(g_{y}\):
\[\text{\#FLOPs}=2C_{x}C_{y}\cdot W_{y}H_{y}\cdot W_{\theta}H_{\theta} \tag{1}\]
The computation requirements in Equation (1) belong to three categories: number of channels, number of _unique elements_ per channel in the gradient map, and _kernel size_. Our method focuses on the last two categories.
**i. Unique elements:**\((W_{y}H_{y})\) represents the number of unique elements per channel in the gradient w.r.t. output variable \(y\) (\(g_{y}\)). Given the high-resolution images we use, this term is huge, so if we manage to reduce the number of unique elements in the spatial dimensions (height and width), the computations required are greatly reduced too.
**ii. Kernel size:**\((W_{\theta}H_{\theta})\) represents the number of unique elements in the convolution kernel. If the gradient \(g_{y}\) has some special structure, for example \(g_{y}=1_{H_{y}\times W_{y}}\cdot v\) (_i.e._, every element in \(g_{y}\) has the same value \(v\)), then the convolution can be simplified to \((\sum\theta^{\prime})v1_{H_{y}\times W_{y}}\) (with boundary elements ignored). With such a special structure, only one multiplication and \((W_{\theta}H_{\theta}-1)\) additions are required. Moreover, \(\sum\theta^{\prime}\) is independent of data so the result can be shared across multiple images until \(\theta\) gets updated.
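This simplification is easy to verify numerically. The short NumPy sketch below (illustrative values only; valid-mode convolution, so the ignored boundary elements never appear) convolves a constant gradient map with a random kernel, and every output entry indeed equals \((\sum\theta^{\prime})v\):

```python
import numpy as np

rng = np.random.default_rng(0)
theta_p = rng.standard_normal((3, 3))   # the (rotated) kernel theta'
v = 0.7
g_y = np.full((8, 8), v)                # structured gradient map: all entries = v

# valid-mode convolution of the constant map with theta'
kh, kw = theta_p.shape
H, W = g_y.shape[0] - kh + 1, g_y.shape[1] - kw + 1
g_x = np.zeros((H, W))
for i in range(H):
    for j in range(W):
        g_x[i, j] = np.sum(g_y[i:i + kh, j:j + kw] * theta_p[::-1, ::-1])

# one multiplication suffices: every entry equals (sum of theta') * v
assert np.allclose(g_x, theta_p.sum() * v)
```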
### Gradient Filtering
To reduce the number of unique elements and create the special structure in the gradient map, we apply the gradient filter after the gradient w.r.t. output (\(g_{y}\)) is provided. During the backward propagation, the gradient filter \(\bigodot\)_approximates_ the gradient \(g_{y}\) by spatially cutting the gradient map into \(r\times r\)-pixel patches and then replacing all elements in each patch with their average value (Figure 2(b)):
\[\tilde{g}_{y}[n,c_{o},h,w]=\frac{1}{r^{2}}\sum_{i=\lfloor h/r\rfloor r}^{ \lceil h/r\rceil r}\sum_{j=\lfloor w/r\rfloor r}^{\lceil w/r\rceil r}g_{y}[n,c_ {o},i,j] \tag{2}\]
Figure 2: (a) Computation procedures for the vanilla training method (upper) and our method (lower). (b) Example of gradient propagation with gradient filtering. Numbers in this example are chosen randomly for illustration purposes. In this case, the patch size selected for the gradient filter is \(2\times 2\). Thus, the \(4\times 4\) gradient map \(g_{y}\) is approximated by \(\tilde{g}_{y}\), which has four \(2\times 2\) patches with one unique value for each patch. Also, the input feature map \(x\) and the mirrored convolution kernel \(\theta^{\prime}\) are spatially summed into \(\tilde{x}\) and \(\tilde{\theta}^{\prime}\). Since \(\tilde{x}\) has fewer unique values than \(x\), memory consumption is reduced. Finally, with \(\tilde{g}_{y}\), \(\tilde{x}\) and \(\tilde{\theta}^{\prime}\), we compute the gradients w.r.t. the kernel and the input feature map with far fewer operations than the standard back-propagation method.
For instance in Figure 2(b), we replace the 16 distinct values in the gradient map \(g_{y}\) with 4 average values in \(\tilde{g}_{y}\). So given a gradient map \(g_{y}\) with \(N\) images per batch, \(C\) channels, and \(H\times W\) pixels per channel, the gradient filter returns a structured approximation of the gradient map containing only \(N\times C\times\lceil\frac{H}{r}\rceil\times\lceil\frac{W}{r}\rceil\) blocks, with _one unique value per patch_. We use this matrix of unique values to represent the approximate gradient map \(\tilde{g}_{y}\), as shown in Figure 2(b).
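For a single channel, Eq. (2) reduces to a patch-wise average. A minimal NumPy sketch (the helper name is ours; for brevity, the spatial dimensions are assumed divisible by \(r\)):

```python
import numpy as np

def gradient_filter(g, r):
    """Eq. (2) for one channel: replace every r x r patch of the gradient
    map g with its average. Assumes H and W are divisible by r."""
    H, W = g.shape
    means = g.reshape(H // r, r, W // r, r).mean(axis=(1, 3))
    # expand the one-value-per-patch matrix back to the original resolution
    return np.repeat(np.repeat(means, r, axis=0), r, axis=1)
```

Applied to a \(4\times 4\) map with \(r=2\), as in Figure 2(b), this yields four constant \(2\times 2\) patches; storing only the `means` matrix of unique values is what enables the memory savings discussed later in Section 4.3.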
### Back Propagation with Gradient Filtering
We describe now the computation procedure used after applying the gradient filter. Detailed derivations are provided in Supplementary Section B.
**Gradient w.r.t. input:** The gradient w.r.t. input is calculated by convolving \(\theta^{\prime}\) with \(g_{y}\) (Figure 2(a)). With the approximate gradient \(\tilde{g}_{y}\), this convolution simplifies to:
\[\tilde{g}_{x}[n,c_{i},h,w]=\sum_{c_{o}}\tilde{g}_{y}[n,c_{o},h,w]\odot\tilde{ \theta}^{\prime}[c_{o},c_{i}] \tag{3}\]
where \(\tilde{\theta}^{\prime}[c_{o},c_{i}]=\sum_{u,v}\theta^{\prime}[c_{o},c_{i},u,v]\) is the spatial sum of convolution kernel \(\theta\), as shown in Figure 2(b).
**Gradient w.r.t. kernel:** The gradient w.r.t. the kernel is calculated by taking the Frobenius inner product between \(g_{y}\) and the corresponding window of \(x\) at every kernel position \((c_{o},c_{i},u,v)\), namely:
\[g_{\theta}[c_{o},c_{i},u,v]=\sum_{n,i,j}x[n,c_{i},i+u,j+v]g_{y}[n,c_{o},i,j] \tag{4}\]
With the approximate gradient \(\tilde{g}_{y}\), the operation can be simplified to:
\[\tilde{g}_{\theta}[c_{o},c_{i},u,v]=\sum_{n,i,j}\tilde{x}[n,c_{i},i,j]\tilde{ g}_{y}[n,c_{o},i,j] \tag{5}\]
with \(\tilde{x}[n,c_{i},i,j]=\sum_{h=\lfloor i/r\rfloor r}^{\lceil i/r\rceil r}\sum _{w=\lfloor j/r\rfloor r}^{\lceil j/r\rceil r}x[n,c_{i},h,w]\). As shown in Figure 2(b), \(\tilde{x}[n,c_{i},i,j]\) is the spatial sum of \(x\) elements in the same patch containing pixel \((i,j)\).
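Putting Equations (2), (3) and (5) together, a batched NumPy sketch of the filtered backward pass might look as follows. The helper name and the assumption that \(H,W\) are divisible by \(r\) are ours, and stride-1 "same" convolution is assumed; a production implementation would also need to handle strides, padding and edge patches:

```python
import numpy as np

def approx_backward(x, g_y, theta, r):
    """Backward pass with gradient filtering (Eqs. 3 and 5).
    x:     (N, C_in, H, W) input feature map
    g_y:   (N, C_out, H, W) gradient w.r.t. output (stride-1 'same' conv)
    theta: (C_out, C_in, kh, kw) kernel; note that rotating theta does not
           change its spatial sum, so theta' and theta share tilde-theta'."""
    N, Ci, H, W = x.shape
    Co = theta.shape[0]
    # tilde-x: per-patch sums of x (what gets saved at forward time), and
    # tilde-g_y: per-patch means of g_y -- one value per r x r patch
    xs = x.reshape(N, Ci, H // r, r, W // r, r).sum(axis=(3, 5))
    gm = g_y.reshape(N, Co, H // r, r, W // r, r).mean(axis=(3, 5))
    theta_s = theta.sum(axis=(2, 3))            # spatial sum, (C_out, C_in)
    # Eq. (3): g_x ~ channel mixing by the spatially summed kernel
    g_x = np.einsum('nohw,oc->nchw', gm, theta_s)
    g_x = np.repeat(np.repeat(g_x, r, axis=2), r, axis=3)
    # Eq. (5): g_theta ~ inner product of patch sums and patch means,
    # shared across all (u, v) kernel positions
    g_theta_uv = np.einsum('nchw,nohw->oc', xs, gm)
    g_theta = np.broadcast_to(g_theta_uv[:, :, None, None], theta.shape).copy()
    return g_x, g_theta
```

Note that the heavy per-pixel convolution has been replaced by one small `einsum` per gradient, operating only on the \(\lceil H/r\rceil\times\lceil W/r\rceil\) matrices of unique values.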
## 4 Analyses of Proposed Approach
In this section, we analyze our method from three perspectives: gradient filtering approximation error, computation reduction, and memory cost reduction.
### Error Analysis of Gradient Filtering
We prove that the approximation error introduced by our gradient filtering is bounded during the gradient propagation. Without losing generality, we consider that all variables have only one channel, _i.e_., \(C_{x_{0}}=C_{x_{1}}=1\).
**Proposition 1**: _For any input-output channel pair \((c_{o},c_{i})\) in the convolution kernel \(\theta\), assuming the DC component has the largest energy value compared to all components in the spectrum2, then the signal-to-noise-ratio (SNR) of \(\tilde{g}_{x}\) is greater than SNR of \(\tilde{g}_{y}\)._
Footnote 2: As a reminder, the energy of a signal is the sum of energy of the DC component and the energy of its AC components.
**Proof:** We use \(G_{x},G_{y}\) and \(\Theta\) to denote the gradients \(g_{x},g_{y}\) and the convolution kernel \(\theta\) in the _frequency domain_; \(G_{x}[u,v]\) is the spectrum value at frequency \((u,v)\) and \(\delta\) is the 2D discrete Dirichlet function. To simplify the discussion, we consider only one patch of size \(r\times r\).
The gradient returned by the gradient filtering can be written as:
\[\tilde{g}_{y}=\frac{1}{r^{2}}\left(\mathbf{1}_{r\times r}\ast g_{y}\right) \tag{6}\]

where \(\ast\) denotes convolution and \(\mathbf{1}_{r\times r}\) is the all-ones averaging kernel. By applying the discrete Fourier transformation, Equation (6) can be rewritten in the frequency domain as:
\[\tilde{G}_{y}[u,v]=\frac{1}{r^{2}}\delta[u,v]G_{y}[u,v] \tag{7}\]
\(\tilde{g}_{y}\) is the approximation of \(g_{y}\) (_i.e_., the ground truth for \(\tilde{g}_{y}\) is \(g_{y}\)), and the SNR of \(\tilde{g}_{y}\) equals:
\[\begin{split}\text{SNR}_{\tilde{g}_{y}}&=\frac{ \sum_{(u,v)}G_{y}^{2}[u,v]}{\sum_{(u,v)}(G_{y}[u,v]-\frac{1}{r^{2}}\delta[u,v]G_ {y}[u,v])^{2}}\\ &=(1-\frac{2r^{2}-1}{r^{4}}\frac{G_{y}^{2}[0,0]}{\sum_{(u,v)}G_{y }^{2}[u,v]})^{-1}\end{split} \tag{8}\]
For the convolution layer, the approximate gradient w.r.t. the input \(x\) in the frequency domain is3:
Footnote 3: Because \(g_{y}\) is convolved with the **rotated** kernel \(\theta^{\prime}\), in the frequency domain, we use \(\Theta[-u,-v]\) instead of \(\Theta[u,v]\).
\[\begin{split}\tilde{G}_{x}[u,v]&=\Theta[-u,-v] \tilde{G}_{y}[u,v]\\ &=\frac{1}{r^{2}}\Theta[-u,-v]\delta[u,v]G_{y}[u,v]\end{split} \tag{9}\]
and its ground truth is:
\[G_{x}[u,v]=\Theta[-u,-v]G_{y}[u,v] \tag{10}\]
Similar to Equation (8), the SNR of \(\tilde{g}_{x}\) is:
\[\text{SNR}_{\tilde{g}_{x}}=(1-\frac{2r^{2}-1}{r^{4}}\frac{(\Theta[0,0]G_{y}[0, 0])^{2}}{\sum_{(u,v)}(\Theta[u,v]G_{y}[u,v])^{2}})^{-1} \tag{11}\]
Equation (11) can be rewritten as:
\[\begin{split}\frac{r^{4}(1-\text{SNR}_{\tilde{g}_{x}}^{-1})}{2r^ {2}-1}&=\frac{(\Theta[0,0]G_{y}[0,0])^{2}}{\sum_{(u,v)}(\Theta[-u,-v] G_{y}[u,v])^{2}}\\ &=\frac{G_{y}^{2}[0,0]}{\sum_{(u,v)}(\frac{\Theta[-u,-v]}{\Theta[ 0,0]}G_{y}[u,v])^{2}}\end{split} \tag{12}\]
Furthermore, the main assumption (_i.e_., the DC component dominates the frequency spectrum of \(\Theta\)) can be written as:
\[\Theta^{2}[0,0]/\text{max}_{(u,v)\neq(0,0)}\Theta^{2}[u,v]\geq 1 \tag{13}\]
that is, \(\forall(u,v),\frac{\Theta^{2}[-u,-v]}{\Theta^{2}[0,0]}\leq 1\); thus, by combining Equation (12) and Equation (13), we have:
\[\frac{G_{y}^{2}[0,0]}{\sum_{(u,v)}(\frac{\Theta[-u,-v]}{\Theta[0,0]}G_{y}[u,v]) ^{2}} \geq\frac{G_{y}^{2}[0,0]}{\sum_{(u,v)}(G_{y}[u,v])^{2}} \tag{14}\] \[\Leftrightarrow\frac{r^{4}(1-\text{SNR}_{\tilde{g}_{x}}^{-1})}{2 r^{2}-1} \geq\frac{r^{4}(1-\text{SNR}_{\tilde{g}_{y}}^{-1})}{2r^{2}-1}\]
which means that: \(\text{SNR}_{\tilde{g}_{x}}\geq\text{SNR}_{\tilde{g}_{y}}\). This completes our proof for error analysis. \(\blacksquare\)
In conclusion, as the gradient propagates through the network, the noise introduced by our gradient filter becomes weaker relative to the true gradient signal. This property ensures that the error in the gradient has only a limited influence on the quality of BP. We validate Proposition 1 later in the experimental section.
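Proposition 1 can also be checked numerically. The NumPy sketch below (our construction: single channel, one \(r\times r\) patch, circular convolution via FFT to match the DFT setting of the proof) uses an all-positive kernel, for which \(|\Theta[0,0]|\geq|\Theta[u,v]|\) and thus the DC-dominance assumption of Eq. (13) is guaranteed:

```python
import numpy as np

def snr(truth, approx):
    """Signal-to-noise ratio of an approximation, as in Eq. (8)."""
    return np.sum(truth ** 2) / np.sum((truth - approx) ** 2)

for seed in range(5):
    rng = np.random.default_rng(seed)
    r = 4
    g_y = rng.standard_normal((r, r))
    # all-positive kernel => DC dominates its spectrum, i.e. Eq. (13) holds
    theta = np.abs(rng.standard_normal((r, r))) + 1.0

    g_y_f = np.full_like(g_y, g_y.mean())    # filtered gradient, one patch
    Th, Gy, Gy_f = (np.fft.fft2(a) for a in (theta, g_y, g_y_f))
    # conj(Th) equals Theta[-u,-v] for real theta: convolution with the
    # rotated kernel theta'
    g_x = np.real(np.fft.ifft2(np.conj(Th) * Gy))
    g_x_f = np.real(np.fft.ifft2(np.conj(Th) * Gy_f))

    # the SNR does not degrade as the gradient propagates through the layer
    assert snr(g_x, g_x_f) >= snr(g_y, g_y_f)
```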
### Computation and Overhead Analysis
In this section, we analyze the computation required to compute \(g_{x}\), the gradient w.r.t. input \(x\). Figure 3 compares the computation required to propagate the gradient through this convolution layer under different patch sizes \(r\times r\). A patch size \(1\times 1\) means the vanilla BP algorithm, which we use as the baseline. As discussed in the preliminary analysis section (Section 3.2), two terms contribute to the computation savings: fewer unique elements in the gradient map and the structured gradient map.
**Fewer unique elements:** In vanilla BP, there are \(H_{y}W_{y}\) unique elements in the gradient map. After applying gradient filtering with a patch size \(r\times r\), the number of unique elements reduces to only \(\lceil\frac{H_{y}}{r}\rceil\lceil\frac{W_{y}}{r}\rceil\). This reduction contributes the most to the savings in computation (orange line in Figure 3).
**Structured Gradient Map:** By creating the structured gradient map, the convolution over the gradient map \(\tilde{g}_{y}\) is simplified to element-wise multiplication and channel-wise addition. Computation is thus reduced to \((H_{\theta}W_{\theta})^{-1}\) of its original value. For instance, the example convolution layer in Figure 3 uses a \(3\times 3\) convolution kernel, so around \(89\%\) of the computations are removed. The blue line in Figure 3 shows the #FLOPs after combining both methods. A greater reduction is expected when applying our method with larger convolution kernels. For instance, FastDepth [30] uses a \(5\times 5\) convolution kernel, so as much as a \(96\%\) reduction in computation can be achieved, in principle.
**Minimum Achievable Computation:** With the two reductions mentioned above, the computation required to propagate the gradient through the convolution layer is:
\[\text{\#FLOPs}(r)=\lceil\frac{H_{y}}{r}\rceil\lceil\frac{W_{y}}{r}\rceil C_ {x}(2C_{y}-1)+o(H_{y}W_{y}) \tag{15}\]
where \(o(H_{y}W_{y})\) is a constant term which is independent of \(r\) and negligible compared to \(H_{y}W_{y}\). When the patch is as large as the feature map, our method reaches the minimum achievable computation (blue dashed line in Figure 3):
\[\text{min}_{r}\text{\#FLOPs}(r)=2C_{x}C_{y}-C_{x}+o(H_{y}W_{y}) \tag{16}\]
In this case, each channel of the gradient map is represented with a single value, so the computation is controlled by the number of input and output channels.
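For concreteness, the two FLOP counts can be compared with a couple of throwaway helpers (our names; constant overhead terms of Eq. (15) omitted):

```python
import math

def bp_flops_vanilla(Cx, Cy, Hy, Wy, Kh, Kw):
    """Eq. (1): FLOPs to propagate the gradient through the layer."""
    return 2 * Cx * Cy * Hy * Wy * Kh * Kw

def bp_flops_filtered(Cx, Cy, Hy, Wy, r):
    """Leading term of Eq. (15): one unique value per r x r patch, and a
    structured gradient map that removes the kernel-size factor."""
    return math.ceil(Hy / r) * math.ceil(Wy / r) * Cx * (2 * Cy - 1)
```

For an illustrative 256-in/256-out \(56\times 56\) layer with a \(3\times 3\) kernel and \(r=4\), the ratio works out to roughly \(144\times\), consistent with the two-orders-of-magnitude reduction claimed in Section 1.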
**Overhead:** The overhead of our approach comes from approximating the feature map \(x\), gradient \(g_{y}\), and kernel \(\theta\). As the lower part of Figure 2(a) shows, the approximation for \(x\) is considered as part of forward propagation, while the other two as back propagation. Indeed, with the patch size \(r\), the ratio of forward propagation overhead is about \(1/(2C_{o}W_{\theta}H_{\theta})\), while the ratio of backward propagation overhead is about \((r^{2}-1)/(2C_{x})\).
Given the large number of channels and spatial dimensions in typical neural networks, both overhead values take less than 1% computation in the U-Net example above.
### Memory Analysis
As Figure 2(a) shows, the standard back propagation for a convolution layer relies on the input feature map \(x\), which needs to be stored in memory during forward propagation. Since every convolution layer requiring gradient for its kernel needs to save a copy of feature map \(x\), the memory consumption for storing \(x\) is huge. With our method, we simplify the feature map \(x\) to approximated \(\tilde{x}\), which has only \(\lceil\frac{H_{x}}{r}\rceil\lceil\frac{W_{x}}{r}\rceil\) unique elements for every channel. Thus, by saving only these unique values, our method achieves around \((1-\frac{1}{r^{2}})\) memory savings, overall.
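The per-channel activation-memory saving is straightforward to compute (helper name ours):

```python
import math

def activation_mem_savings(H, W, r):
    """Fraction of per-channel activation memory saved by storing only the
    per-patch sums (tilde-x) instead of the full feature map x (Sec. 4.3)."""
    unique = math.ceil(H / r) * math.ceil(W / r)
    return 1.0 - unique / (H * W)
```

For \(r=4\) and dimensions divisible by \(r\), this gives exactly \(1-1/16\approx 93.8\%\), matching the \((1-\frac{1}{r^{2}})\) figure above.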
Figure 3: Computation analysis for a specific convolution layer4. Minimum achievable computation is given in Equation (16). By reducing the number of unique elements, computations required by our approach drop to about \(1/r^{2}\) compared with the standard BP method. By combining it with structured gradient map, computations required by our approach drop further, getting very close to the theoretical limit.
## 5 Experiments
Our experimental section consists of theoretical and practical evaluations. Sections 5.2-5.4 show the theoretical advantages of our method on image classification and semantic segmentation tasks with implementation-agnostic metrics (_e.g._, accuracy, FLOPs). Then, in Section 5.5, we show how these theoretical advantages translate into practical advantages (_i.e.,_ speedup and memory savings) on real edge devices.
### Experimental Setup
**Classification:** Following [24], we split every dataset into two highly non-i.i.d. partitions with the same size. Then, we pretrain our models on the first partition with a vanilla training strategy, and finetune the model on the other partition with different configurations for the training strategy (_i.e._, with/without gradient filtering, hyper-parameters, number of convolution layers to be trained). More details (_e.g._, hyper-parameters) are in the Supplementary.
**Segmentation:** Models are pretrained on Cityscapes [11] by MMSegmentation [10]. Then, we calibrate and finetune these models with different training strategies on the augmented Pascal-VOC12 dataset following [8], which is the combination of Pascal-VOC12 [12] and SBD [13]. More details are included in the supplementary material.
**On-device Performance Evaluation:** For CPU performance evaluation, we implement our method with MKLDNN [1] (a.k.a. OneDNN) v2.6.0 and compare it with the convolution BP method provided by MKLDNN. We test on three CPUs, namely Intel 11900KF, Quad-core Cortex-A72 (Jetson Nano) and Quad-core Cortex-A53 (Raspberry Pi-3b). For GPU performance evaluation, we implement our method on CUDNN v8.2 [9] and compare with the BP
\begin{table}
\begin{tabular}{c c|c c c|c c|c c c}
**MobileNetV2**[27] & **\#Layers** & **Accuracy** & **FLOPs** & **Mem** & **ResNet-18**[14] & **\#Layers** & **Accuracy** & **FLOPs** & **Mem** \\ \hline No Finetuning & 0 & 4.2 & 0 & 0 & No Finetuning & 0 & 4.7 & 0 & 0 \\ \hline Vanilla & \begin{tabular}{c} All \\ \(2\) \\ \end{tabular} & \begin{tabular}{c} 75.1 \\ 63.1 \\ 46.2 \\ \end{tabular} & \begin{tabular}{c} 1.13G \\ 63.1 \\ 62.2 \\ \end{tabular} & \begin{tabular}{c} 24.33MB \\ 63.1 \\ 160.00M \\ \end{tabular} & \begin{tabular}{c} Vanilla \\ 245.00KB \\ 459.38KB \\ \end{tabular} & \begin{tabular}{c} All \\ 24.0 \\ \end{tabular} & \begin{tabular}{c} 73.1 \\ 70.4 \\ \end{tabular} & \begin{tabular}{c} 5.42G \\ 489.20M \\ 489.20M \\ 490.00KB \\ \end{tabular} &
\begin{tabular}{c} 8.33MB \\ 70.4 \\ \end{tabular} \\ \hline TinyTL [6] & N/A & 60.2 & 663.51M & 683.00KB & TinyTL [6] & N/A & 69.2 & 3.88G & 1.76MB \\ \hline
**Ours** & 2 & 63.1 & 39.27M & 80.00KB & **Ours** & 2 & 68.6 & 28.32M & 64.00KB \\
**Ours** & 4 & 63.4 & 53.96M & 150.00KB & **Ours** & 4 & 68.5 & 61.53M & 112.00KB \\ \hline
**MCUNet**[19] & **\#Layers** & **Accuracy** & **FLOPs** & **Mem** & **ResNet-34**[14] & **\#Layers** & **Accuracy** & **FLOPs** & **Mem** \\ \hline No Finetune & 0 & 4.1 & 0 & 0 & No Finetune & 0 & 0 & 0 \\ \hline Vanilla & \begin{tabular}{c} All \\ \(2\) \\ \end{tabular} & \begin{tabular}{c} 68.5 \\ 62.1 \\ 46 \\ \end{tabular} & \begin{tabular}{c} 231.67M \\ 62.9 \\ 64.9 \\ \end{tabular} & \begin{tabular}{c} 9.17MB \\ 62.9 \\ 33.71M \\ \end{tabular} & \begin{tabular}{c} 9.17MB \\ 322.505KB \\ 33.71M \\ \end{tabular} & \begin{tabular}{c} \\ 220.50KB \\ 312.38KB \\ \end{tabular} & \begin{tabular}{c} Vanilla \\ 22.3 \\ \end{tabular} & \begin{tabular}{c} - \\ 69.6 \\ 72.3 \\ \end{tabular} & \begin{tabular}{c} 0 \\ 1.12G \\ \end{tabular} &
\begin{tabular}{c} 1.17G \\ 392.00KB \\ 392.00KB \\ \end{tabular} \\ \hline TinyTL [6] & N/A & 53.1 & 148.01M & 571.5KB & TinyTL [6] & N/A & 72.9 & 8.03G & 2.95MB \\ \hline
**Ours** & 2 & 61.8 & 6.34M & 72.00KB & **Ours** & 2 & 68.6 & 28.32M & 64.00KB \\
**Ours** & 4 & 64.4 & 11.01M & 102.00KB & **Ours** & 4 & 70.6 & 64.07M & 128.00KB \\ \hline \end{tabular}
\end{table}
Table 2: Experimental results for ImageNet classification with four neural networks (MobileNet-V2, ResNet18/34, MCUNet). “#Layers” is short for “the number of _active_ convolutional layers”. For example, #Layers equal to 2 means that only the last two convolutional layers are trained. For memory consumption, we only consider the memory for the input feature \(x\). Strategy “No Finetuning” shows the accuracy on new datasets without finetuning the pretrained model. Since TinyTL [6] changes the architecture, “#Layers” is not applicable (N/A).
\begin{table}
\begin{tabular}{c c|c c|c c|c c|c c|c c}
**PSPNet**[32] & **\#Layers** & **GFLOPs** & **mIoU** & **mAcc** & **PSPNet-M**[32] & **\#Layers** & **GFLOPs** & **mIoU** & **mAcc** & **FCN**[21] & **\#Layers** & **GFLOPs** & **mIoU** & **mAcc** \\ \hline Calibration & 0 & 0 & 12.86 & 19.74 & Calibration & 0 & 0 & 14.20 & 20.46 & Calibration & 0 & 0 & 10.95 & 15.69 \\ \hline Vanilla & \begin{tabular}{c} All \\ \(5\) \\ \end{tabular} & \begin{tabular}{c} 166.5 \\ 5.0 \\ 5.0 \\ 5.0 \\ 15.0 \\ 10 \\ \end{tabular} & \begin{tabular}{c} 5.51 \\ 5.0 \\ 5.0 \\ 5.0 \\ 10.0 \\ \end{tabular} &
\begin{tabular}{c} 68.02 \\ 5.5 \\ 5.0 \\ 5.
method provided by CUDNN. We test on two GPUs, RTX 3090Ti and the edge GPU on Jetson Nano. Since both MKLDNN and CUDNN only support float32 BP, we test float32 BP only. Additionally, for the experiments on Jetson Nano, we record the energy consumption for CPU and GPU with the embedded power meter. More details (_e.g._, frequency) are included in the supplementary material.
### ImageNet Classification
Table 2 shows our evaluation results on the ImageNet classification task. As shown, our method significantly reduces the FLOPs and memory required for BP, with very little accuracy loss. For example, for ResNet34, our method achieves 18.9\(\times\) speedup with 1.7% accuracy loss when training four layers; for MobileNetV2, we get a 1.2% better accuracy with 3.0\(\times\) speedup and 3.1\(\times\) memory savings. These results illustrate the effectiveness of our method. On most networks, TinyTL has a lower accuracy while consuming more resources compared to the baseline methods.
### Semantic Segmentation
Table 3 shows our evaluation results on the augmented Pascal-VOC12 dataset. On a wide range of networks, our method consistently achieves significant speedup with marginal accuracy loss. For the large network UPerNet, our method achieves 229\(\times\) speedup with only 1% mIoU loss. For the small network PSPNet, our method speeds up training by 140\(\times\) with only 2.27% mIoU loss. This shows the effectiveness of our method on a dense prediction task.
### Hyper-Parameter Selection
Figure 4 shows our experimental results for ResNets under different hyper-parameter selections, _i.e._, the number of convolution layers and the patch size \(r\times r\) of the gradient filter. Of note, the y-axis (MFLOPs) in Figure 4 is on a log scale. More results are included in Supplementary Section G. We highlight three qualitative findings in Figure 4:
* For a similar accuracy, our method greatly reduces the number of operations (by 1 to 2 orders of magnitude), while for a similar number of computations, our method achieves higher accuracy (2% to 5% better).
This finding proves the effectiveness of our method.
* Given the number of convolution layers to be trained, the method with the more accurate gradient achieves better accuracy. Baseline (_i.e._, standard BP) uses the most accurate gradient, while Ours-R4 (BP with a gradient filter of patch size \(4\times 4\)) uses the least accurate gradient; thus, Baseline \(>\) Ours-R2 \(>\) Ours-R4.
This finding is intuitive since the more accurate method should introduce smaller noise to the BP, _e.g._, the gradient filtering with patch size \(2\times 2\) (Ours-R2) introduces less noise than with patch size \(4\times 4\) (Ours-R4). In Figure 5, we evaluate the relationship between accuracy and noise level introduced by gradient filtering. With a higher SNR (_i.e._, a lower noise level), a better accuracy is achieved.
* Given the number of computations, the less accurate method achieves better accuracy by training more layers, _i.e._, Ours-R4 \(>\) Ours-R2 \(>\) Baseline.
This finding suggests that for neural network training with relatively low computational resources, training more layers with less accurate gradients is preferable to training fewer layers with more accurate gradients.
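The gradient filtering compared here amounts to replacing each \(r\times r\) spatial patch of a gradient feature map with its patch average, so the approximate gradient has far fewer unique elements to propagate (Section 3). A minimal NumPy sketch of this idea; the function name and the divisibility assumption are ours, not the paper's:

```python
import numpy as np

def filter_gradient(grad, r):
    """Approximate a gradient feature map by replacing every r x r
    spatial patch with its patch average (illustrative sketch only)."""
    n, c, h, w = grad.shape
    assert h % r == 0 and w % r == 0, "sketch assumes divisible sizes"
    # Average over each r x r patch...
    g = grad.reshape(n, c, h // r, r, w // r, r)
    patch_mean = g.mean(axis=(3, 5), keepdims=True)
    # ...then broadcast the average back, so each patch holds one unique value.
    return np.broadcast_to(patch_mean, g.shape).reshape(n, c, h, w)

g = np.arange(16, dtype=float).reshape(1, 1, 4, 4)
approx = filter_gradient(g, 2)
```

With \(r=2\), the 4×4 toy gradient map above collapses to four unique values while its overall mean is preserved, which is the source of the FLOP savings reported for Ours-R2/R4.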
### On-device Performance Evaluation
Figure 6 and Table 4 show our evaluation results on real devices. More results are included in the Supplementary Section I. As Figure 6 shows, on CPU, most convolution layers achieve speedups over 20\(\times\) with less than 50% memory consumption for gradient filtering with patch size \(2\times 2\); for gradient filtering with patch size \(4\times 4\), the speedups are much higher, namely over 60\(\times\). On GPU, the speedup is a little lower, but still over 10\(\times\) and 25\(\times\), respectively. Furthermore, as Table 4 shows, our method saves over 95% energy for both CPU and GPU scenarios, which largely resolves one of the most important constraints on edge devices. All these experiments on real devices show that our method is practical for the real deployment of both high-performance and IoT applications.
Figure 4: Computation (#MFLOPs, log scale) and model accuracy [%] under different hyper-parameter selections. “Baseline” means vanilla BP; “Ours-R2/4” uses gradient filtering with patch size \(2\times 2\)/\(4\times 4\) during BP.
Figure 5: Relationship between accuracy and noise level introduced by the gradient filtering. As shown, accuracy increases as the SNR increases, _i.e._, as the noise level decreases.
### Main Assumption Verification
We now empirically verify the assumption that the DC component dominates the frequency spectrum of the convolution kernel (Section 4.1). To this end, we collect the energy ratio shown in Equation (13) from trained models published in Torchvision [23]. As Table 5 shows, for the convolution kernels in all these networks, we get a ratio greater than one, which means that the energy of the DC component is larger than the energy of all AC components. Thus, our assumption in Section 4.1 empirically holds in practice.
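Equation (13) itself is not reproduced in this excerpt; assuming the natural reading that DC energy is the squared magnitude of each kernel's zero-frequency DFT coefficient and AC energy is the remaining spectral energy, the check can be sketched as follows (the function name and kernel bank are illustrative):

```python
import numpy as np

def dc_ac_ratio(kernels):
    """Ratio of DC to AC spectral energy over a bank of k x k kernels.
    Sketch of one plausible reading of Eq. (13): DC energy is the squared
    magnitude of the zero-frequency 2-D DFT coefficient of each kernel."""
    spec = np.fft.fft2(kernels)        # 2-D spectrum over the last two axes
    energy = np.abs(spec) ** 2
    dc = energy[..., 0, 0].sum()       # zero-frequency bins of all kernels
    ac = energy.sum() - dc             # all remaining (AC) bins
    return dc / ac

rng = np.random.default_rng(0)
# Kernels with a strong mean (DC) component dominate their own spectrum.
smooth = np.ones((8, 3, 3)) + 0.1 * rng.standard_normal((8, 3, 3))
```

A near-constant kernel bank like `smooth` yields a ratio well above 1, while an alternating-sign (near-zero-mean) kernel yields a ratio below 1, matching the distinction the assumption relies on.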
## 6 Conclusions
In this paper, we have addressed the on-device model training for resource-constrained edge devices. To this end, a new gradient filtering method has been proposed to systematically reduce the computation and memory consumption for the back-propagation algorithm, which is the key bottleneck for efficient model training.
In Section 3, a new gradient filtering approach has been proposed to reduce the computation required for propagating gradients through the convolutional layers. The gradient filtering creates an approximate gradient feature map with fewer unique elements and a special structure; this reduces the computation by more than two orders of magnitude. Furthermore, we proved that the error introduced during back-propagation by our gradient filter is bounded so the influence of gradient approximation is limited.
Extensive experiments in Section 5 have demonstrated the efficiency and wide applicability of our method. Indeed, models can be finetuned with orders of magnitudes fewer computations, while having only a marginal accuracy loss compared to popular baseline methods.
**Acknowledgements:** This work was supported in part by the US National Science Foundation (NSF) grant CNS-2007284.
\begin{table}
\begin{tabular}{c|c|c} \hline \hline Device & Patch Size & Normalized Energy Cost [STD] \\ \hline Edge CPU & \(2\times 2\) & 4.13\% [0.61\%] \\ Edge CPU & \(4\times 4\) & 1.15\% [0.18\%] \\ \hline Edge GPU & \(2\times 2\) & 3.80\% [0.73\%] \\ Edge GPU & \(4\times 4\) & 1.22\% [1.10\%] \\ \hline \hline \end{tabular}
\end{table}
Table 4: Normalized energy consumption for BP with gradient filtering for different patch sizes. Results are normalized w.r.t. the energy cost of standard BP methods. For instance, for edge CPU with a \(4\times 4\) patch, only 1.15% of energy in standard BP is used. Standard deviations are shown within brackets.
\begin{table}
\begin{tabular}{l l|l l} \hline \hline Model & Ratio & Model & Ratio \\ \hline (Wide)ResNet18-152 & 1.462 & VGG(bn)11-19 & 1.497 \\ DenseNet121-201 & 2.278 & EfficientNet b0-b7 & 1.240 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Evaluation of the energy ratio defined in Equation (13) on models published in Torchvision. A ratio greater than 1 empirically verifies our assumption.
Figure 6: Speedup and normalized memory consumption results on multiple CPUs and GPUs under different test cases (_i.e._, different input sizes, numbers of channels, etc.). Detailed configurations of these test cases are included in the supplementary material. “R2” and “R4” mean using gradient filtering with \(2\times 2\) and \(4\times 4\) patch sizes, respectively. Our method achieves significant speedup with low memory consumption compared to all baseline methods. For example, on the Jetson CPU with patch size \(4\times 4\) (“Jetson-R4” in the top-left figure), our method achieves 114\(\times\) speedup with only 33% memory consumption for most test cases.
---

# Contrastive Label Disambiguation for Self-Supervised Terrain Traversability Learning in Off-Road Environments

Hanzhang Xue, Xiaochang Hu, Rui Xie, Hao Fu, Liang Xiao, Yiming Nie, Bin Dai

arXiv:2307.02871v1 | 2023-07-06 | http://arxiv.org/abs/2307.02871v1
###### Abstract
Discriminating the traversability of terrains is a crucial task for autonomous driving in off-road environments. However, it is challenging due to the diverse, ambiguous, and platform-specific nature of off-road traversability. In this paper, we propose a novel self-supervised terrain traversability learning framework, utilizing a contrastive label disambiguation mechanism. Firstly, weakly labeled training samples with pseudo labels are automatically generated by projecting actual driving experiences onto the terrain models constructed in real time. Subsequently, a prototype-based contrastive representation learning method is designed to learn distinguishable embeddings, facilitating the self-supervised updating of those pseudo labels. Through the iterative interaction between representation learning and pseudo label updating, the ambiguities in those pseudo labels are gradually eliminated, enabling the learning of platform-specific and task-specific traversability without any human-provided annotations. Experimental results on the RELLIS-3D dataset and our Gobi Desert driving dataset demonstrate the effectiveness of the proposed method.
## I Introduction
For autonomous driving, understanding the traversability of surrounding environments is one of the most fundamental and critical tasks. The majority of existing related works primarily concentrates on structured environments where the traversable regions are explicitly defined, and treat the traversability analysis as a binary classification task. Closely related tasks include road detection [1], ground segmentation [2], or free-space detection [3]. However, most of these approaches may not work well in complex off-road environments. There are two main reasons: Firstly, there is a high degree of similarity between traversable and non-traversable regions in some off-road environments; Secondly, countless terrain types and irregular terrain shapes in off-road environments present intricate possibilities for traversability, and it is challenging to analyze them with a unified rule.
Recently, several researchers [4, 5] have attempted to employ supervised semantic segmentation approaches to obtain semantic information for each region in off-road environments, and to analyze traversability by establishing mapping relationships between different semantic categories and traversability. Although these methods achieved impressive results on some specific datasets, they are difficult to adapt to previously unseen environments because of the inherent ambiguity of traversability in off-road environments. On the one hand, it is challenging to define semantic categories and their corresponding traversability in diverse off-road environments without ambiguity. On the other hand, traversability itself is also platform-related and task-related; different platforms or autonomous tasks may yield different traversability results. Furthermore, these supervised learning approaches also require exhausting human labor for manual annotation of training samples. Re-annotating a tremendous amount of data each time a new environment is encountered is unaffordable and unsustainable.
Bearing the purpose of rapidly learning platform-specific and task-specific traversability in new off-road environments without any human-provided annotations, we shift from directly defining traversability or semantic categories in off-road environments to understanding traversability by learning from demonstration. When an unmanned ground vehicle (UGV) encounters an unknown off-road environment or a new autonomous task, a large amount of weakly labeled data can
be automatically generated by simply driving the UGV for a short distance with the assistance of a human driver (shown as the middle figure in Fig. 1). As derived from actual driving experiences, these weakly labeled data are platform-specific and task-specific. They consist of scarce positive samples (actually traversed regions) and numerous unlabeled samples, which can be employed as training data in the problem of self-supervised traversability learning. Recently, although a few approaches such as positive-unlabeled learning [6] or anomaly detection [7] have been applied to address this problem, they possess limited ability to discriminate traversability in off-road environments with high similarity. Furthermore, common forms of input data for this problem include images [8] or single-frame LiDAR scan [9]. Images are sensitive to illumination changes, and single-frame LiDAR scan is sparse and prone to noises. Consequently, neither can provide a stable and robust representation of off-road environments, directly affecting the stability of traversability learning.
To address these challenges, we propose a novel self-supervised terrain traversability learning framework. After generating stable, complete, and accurate terrain models in real time using our previous work [10], automatic data annotation is conducted in those constructed terrain models based on actual driving experiences. Those actually traversed regions are assigned determined positive labels, while the remaining regions are assigned candidate pseudo labels. Inspired by the impressive progress of partial label learning (PLL) [11], a prototype-based contrastive representation learning method, with the aid of a local window based transformer encoder, is designed to learn distinguishable embeddings for updating those candidate pseudo labels, and the refined pseudo labels in turn facilitate representation learning. Through the iterative interaction between representation learning and label updating, the ambiguities associated with those pseudo labels are gradually eliminated, enabling the learning of specific traversability in off-road environments. This learning process can be referred to as contrastive label disambiguation.
To demonstrate the effectiveness of the proposed method, we conduct experiments on both the publicly available RELLIS-3D dataset [12] and a Gobi Desert driving dataset collected by our own UGV. Experimental results show that the proposed method can learn specific traversability from human-selected driving routes in a self-supervised manner.
The rest of this paper is organized as follows. Section II discusses some related works. Section III provides detailed information about the proposed method. Experimental results on both the RELLIS-3D dataset and our Gobi Desert driving dataset are presented in Section IV. Finally, Section V summarizes the conclusions.
## II Related Work
Traversability analysis plays a crucial role in autonomous driving and has garnered significant attention in recent years. In the existing literature, most approaches treat traversability analysis as a binary classification task. One common method projects point cloud or RGB images onto a 2D Bird's Eye View (BEV) grid map and extracts geometric features [13] or appearance features [14] for traversability classification. Another kind of approaches determines traversability by estimating terrain models of local environments, such as Gaussian process regression [15], Bayesian generalized kernel inference [16], and B-spline surface [17]. With the rise of deep learning, convolutional neural networks (CNNs) have also been utilized for end-to-end traversable region detection by using RGB images [3], point cloud [18], or a combination of both [1] as input. Although these binary classification approaches work well in structured environments, their suitability for complex off-road environments is limited.
It is more suitable to adopt semantic mapping for traversability analysis in off-road environments, which allows for distinction of semantic information among different regions. Some methods [4, 19] perform fine-grained semantic mapping to assign a detailed semantic label (such as dirt roads, grass, bushes, etc.) to each region by using semantic segmentation networks. The traversability can be further analyzed through mapping relationships between semantic categories and traversability. Some other works [5, 20, 21] argue that fine-grained semantic information is not necessary for autonomous navigation and propose solutions based on coarse-grained semantic mapping. In these approaches, different regions are segmented by traversability levels. Although these semantic mapping approaches achieve good results in some specific off-road environments, they face some intractable challenges. Firstly, it is difficult to define uniform and unambiguous semantic categories or semantic-traversability mapping relationships that are suitable for all off-road environments. Additionally, these methods heavily rely on supervised labels and the burden of manually re-annotating pixel-level or grid-level semantic labels each time a new environment is encountered proves to be impractical, thus limiting the practical application of these approaches.
Recently, there has been a growing interest in self-supervised learning methods for traversability analysis. The physical experiences of the UGV, rather than human-provided annotations, are used to automatically label the training data. These methods can be divided into two categories. The first type utilizes on-board proprioceptive sensors to measure signals that directly reflect information about terrain traversability. Commonly used sensors include the Inertial Measurement Unit (IMU) [22, 23, 24], force-torque sensors [25], or acoustic sensors [26]. Then, traversability-related signals are used to annotate data from exteroceptive sensors (such as images or point cloud), and the generated weakly labeled data is employed as training samples for self-supervised learning of traversability classification [23, 24, 25, 26] or regression [22, 24, 25]. The second category of methods obtains labeled data directly from the driving experiences of the UGV. Vehicle trajectories are projected into image space or point cloud space; successful and failed traverses provide positive and negative traversability labels, respectively. Some works [6, 8, 27] utilize only positive samples annotated by the footprints of the UGV for learning traversability in a positive-unlabeled learning manner. Other works try
to enhance performance by incorporating negative samples, for example, using LiDAR-based obstacle detection algorithms to annotate obstacle regions as negative samples [28, 29, 30]. However, these approaches can not distinguish those non-traversable regions that are not obstacles. Additionally, Chavez-Garcia et al. [31] employ a simulation system for self-learning traversability estimation, labeling regions where the UGV gets stuck as negative samples. Bae et al. [9] introduce a small amount of manually labeled support data to provide negative samples, resulting in better performance with reduced labor costs.
## III The Proposed Approach
In this paper, a self-supervised traversability learning framework is proposed, treating this problem as a contrastive label disambiguation task. The pipeline of the proposed framework is illustrated in Fig. 2, consisting of four core modules: a spatial-temporal 3D terrain modeling module, an automated label generation module, a local window based transformer encoder, and a prototype-based contrastive representation learning module.
### _Spatial-temporal 3D Terrain Modeling_
To provide a stable, complete, and objective representation of off-road environments and overcome the limitations of sparse and noise-prone point cloud data, a spatial-temporal terrain modeling approach proposed in our previous work [10] is applied to generate dense 3D terrain models in real-time. In this approach, a normal distributions transform (NDT) mapping technique is first utilized to recursively fuse information from consecutive LiDAR scans into a global grid map. The elevation of each observed grid cell is modeled as a normal distribution \(\mathcal{N}(\hat{\mu},\hat{\Sigma})\). Subsequently, a bilateral filtering-aided Bayesian generalized kernel (BGK) inference approach is employed to infer a predicted elevation distribution \(\mathcal{N}\left(\mu,\Sigma\right)\) for each grid cell, thus producing a dense and stable elevation map. Furthermore, various terrain features can be calculated by considering the geometric connectivity properties between adjacent grid cells.
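The exact NDT fusion rule is given in [10]; as an illustrative stand-in, the per-cell observed elevation statistics (\(\hat{\mu}\), \(\hat{\Sigma}\), and \(\delta_{z}\)) can be maintained recursively with a standard online (Welford) update. The class and method names below are our own, not the paper's:

```python
import numpy as np

class ElevationCell:
    """Recursive elevation statistics for one grid cell (illustrative;
    the paper's NDT mapping in [10] defines the actual update)."""
    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0
        self.z_min, self.z_max = np.inf, -np.inf

    def update(self, z):
        # Welford's online update of the normal-distribution parameters.
        self.n += 1
        delta = z - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (z - self.mean)
        self.z_min, self.z_max = min(self.z_min, z), max(self.z_max, z)

    @property
    def var(self):
        return self.m2 / self.n if self.n else 0.0

    @property
    def delta_z(self):      # maximum-minimum observed elevation difference
        return self.z_max - self.z_min

cell = ElevationCell()
for z in [1.0, 1.2, 0.8, 1.0]:   # elevations from consecutive LiDAR scans
    cell.update(z)
```

Running one such accumulator per grid cell yields the observed \(\mathcal{N}(\hat{\mu},\hat{\Sigma})\) and \(\delta_{z}\); the BGK-inferred predicted distribution \(\mathcal{N}(\mu,\Sigma)\) would be computed on top of these, as described above.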
After constructing the 3D terrain models, a multi-channel terrain feature map \(\mathbf{F}\) can be generated by projecting various terrain features onto the 2D grid map (as shown in Fig. 3). The features contained within each grid cell include: (1) The mean \(\hat{\mu}\) and variance \(\hat{\Sigma}\) of the observed elevation distribution; (2) The mean \(\mu\) and variance \(\Sigma\) of the predicted elevation distribution; (3) The maximum-minimum observed elevation difference \(\delta_{z}\); (4) The normal angle \(\theta_{n}\); (5) The average concavity angle \(\bar{\theta}_{c}\)[32] between the 4-neighboring grid cells.
### _Automated Label Generation_
To provide training samples without any human-provided annotations, vehicle trajectories are utilized to annotate terrain patches in the proposed method. The process of the automated label generation is illustrated in Fig. 4.
A vehicle trajectory is defined as positions of four contact points \(\mathbf{P}=\{P_{lf},P_{rf},P_{lr},P_{rr}\}\) between four vehicle wheels and the ground plane. The past or future vehicle trajectory at timestamp \(\tau\) can be transformed into the local body coordinate system \(\{B\}\) at current timestamp \(t\) by:
\[\mathbf{P}_{\tau,t}^{B}=\left(\mathbf{T}_{t}^{WB}\right)^{-1}\cdot\mathbf{T}_{\tau}^{WB} \cdot\mathbf{P}^{B}\,, \tag{1}\]
where \(\mathbf{P}_{\tau,t}^{B}\) represents the transformed vehicle trajectory. \(\mathbf{P}^{B}\) denotes the local position of the contact points \(\mathbf{P}\), which can be measured by a simple calibration process. \(\mathbf{T}_{\tau}^{WB}\) denotes the transformation matrix from the global coordinate system \(\{W\}\) into the local body coordinate system \(\{B\}\) at timestamp \(\tau\), and it is estimated by using an online pose estimation module proposed in our previous work [33].
Fig. 2: The pipeline for the proposed self-supervised traversability learning framework.
Fig. 3: An illustration of the multi-channel terrain feature map. Some feature channels are visualized in the bottom. The magnitude of \(\delta_{z}\), \(\hat{\Sigma}\), \(\mu\), \(\theta_{n}\), and \(\bar{\theta}_{c}\) for each cell is visualized by different colors.
Fig. 4: The process of the automated label generation. Past and future vehicle trajectories (denoted as yellow circles) are first projected into the current feature map. Four projected points from each trajectory are then connected to form a quadrilateral, those cells lying within the quadrilateral are annotated as positive cells (colored by green). The red square indicates a positive terrain patch with a determined positive label, while the blue square is an unlabeled terrain patch with a candidate pseudo label.
The transformed vehicle trajectory set \(\big\{\mathbf{P}_{\tau,t}^{B}\big\}_{\tau\in[t_{p},t_{f}]}\) (\([t_{p},t_{f}]\) denotes the valid time interval) is projected onto the terrain feature map \(\mathbf{F}_{t}\) generated at the current timestamp \(t\). Four projected points from each vehicle trajectory are connected to form a quadrilateral in \(\mathbf{F}_{t}\). As shown in Fig. 4, those grid cells lying within the quadrilateral are considered as actually traversed regions and are annotated as positive cells \(G_{p}\), while the remaining grid cells are unlabeled cells \(G_{u}\). Then, weakly annotated terrain patches can be extracted. Each terrain patch \(\mathbf{x}_{i}\) consists of \(M\times M\) grid cells (\(M\) is an odd number), and its pseudo label \(\mathbf{y}_{i}\) is determined by the annotation of its central grid cell \(G_{i,c}\). \(\mathbf{y}_{i}\) is represented as a \(K\)-dimensional label vector:
\[\mathbf{y}_{i}=\begin{cases}\underbrace{[1\quad 0\quad 0\quad\cdots\quad 0]}_{K}&(G_{i,c}=G_{p})\\ \underbrace{[1\quad 1\quad 1\quad\cdots\quad 1]}_{K}&(G_{i,c}=G_{u})\end{cases}\,, \tag{2}\]
where \(K\) is the total number of traversability categories.
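A sketch of the label assignment in Eq. (2), under the assumption (standard in partial label learning) that unlabeled patches receive the full candidate set, which normalizes to the uniform vector used to initialize \(\mathbf{y}_{n}\) in Section III-D; the function name is illustrative:

```python
import numpy as np

def pseudo_label(is_traversed, K):
    """Candidate pseudo label for a terrain patch (sketch of Eq. (2)).
    Positive patches get a one-hot label for the traversable class;
    unlabeled patches get the full candidate set over all K classes."""
    if is_traversed:                 # central cell in G_p
        y = np.zeros(K)
        y[0] = 1.0
    else:                            # central cell in G_u
        y = np.ones(K)
    return y

K = 4
y_pos = pseudo_label(True, K)
y_unl = pseudo_label(False, K)
y_n = y_unl / y_unl.sum()            # initial normalized label (uniform 1/K)
```

Normalizing the all-ones candidate vector gives the uniform \(1/K\) initialization, so an unlabeled patch starts with no preference among traversability categories.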
### _Local Window Based Transformer Encoder_
For traversability analysis, it is helpful to effectively utilize terrain information from spatially adjacent terrain patches since traversability itself is a spatially-related concept. However, it is challenging to determine the optimal range of supported spatial neighborhoods. To address this issue and extract more representative embeddings, a local window based transformer encoder \(\mathbf{f}\) inspired by the Swin Transformer [34] is introduced in this subsection.
An illustration of the proposed encoder \(\mathbf{f}\) is shown in Fig. 5. The input terrain feature map \(\mathbf{F}\) is partitioned into a sequence of non-overlapping local windows, and each local window is further split into \(W\times W\) terrain patches. All terrain patches are automatically labeled using the approach introduced in Section III-B. A local window is treated as a whole and fed into the \(\mathbf{f}\). Each flatten terrain patch contained in the local window is treated as a token. In \(\mathbf{f}\), a linear embedding layer is applied to the input flatten tokens to project them to a \(D\)-dimensional embedding \(\mathbf{E}\in\mathbb{R}^{W^{2}\times D}\). \(\mathbf{E}\) is then fed into eight transformer blocks. The shape of the output embedding for each transformer block remains unchanged. Each transformer block consists of a local window based multi-head self-attention (LW-MSA) module, followed by a 2-layer MLP module. A Layer Normalization (LN) layer is applied before each LW-MSA and MLP module, and a residual connection is applied after each LW-MSA and MLP module. The whole process of the proposed encoder \(\mathbf{f}\) can be formulated as:
\[\mathbf{Z}^{0} =\mathbf{E}\,, \tag{3}\] \[\mathbf{\hat{Z}}^{l} =\text{{LW-MSA}}\left(\text{{LN}}\left(\mathbf{Z}^{l-1}\right)\right) +\mathbf{Z}^{l-1}\,,\] \[\mathbf{Z}^{l} =\text{{MLP}}\left(\text{{LN}}\left(\mathbf{\hat{Z}}^{l}\right)\right) +\mathbf{\hat{Z}}^{l}\,,\]
where \(\mathbf{\hat{Z}}^{l}\in\mathbb{R}^{W^{2}\times D}\) and \(\mathbf{Z}^{l}\in\mathbb{R}^{W^{2}\times D}\) represent the output embedding vectors of the LW-MSA module and the MLP module in the \(l\)-th transformer block, respectively. The final extracted embedding for each terrain patch \(\mathbf{x}_{i}\) is \(\mathbf{z}_{i}^{\text{g}}\in\mathbb{R}^{D}\), which is simply denoted as \(\mathbf{z}_{i}\) in the subsequent contents.
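The block of Eq. (3) can be sketched as follows. For brevity this illustration uses a single attention head and a ReLU MLP of equal widths, which are simplifications of the actual LW-MSA and 2-layer MLP modules; all weight matrices here are toy placeholders:

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    mu = x.mean(-1, keepdims=True)
    var = x.var(-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def softmax(x):
    e = np.exp(x - x.max(-1, keepdims=True))
    return e / e.sum(-1, keepdims=True)

def transformer_block(z, wq, wk, wv, w1, w2):
    """One pre-LN transformer block as in Eq. (3), single-head for brevity:
    self-attention over the W^2 tokens of a local window, then a 2-layer MLP,
    each preceded by LayerNorm and followed by a residual connection."""
    h = layer_norm(z)
    q, k, v = h @ wq, h @ wk, h @ wv
    attn = softmax(q @ k.T / np.sqrt(k.shape[-1]))   # (W^2, W^2) attention
    z = z + attn @ v                                 # LW-MSA + residual
    h = layer_norm(z)
    z = z + np.maximum(h @ w1, 0) @ w2               # MLP (ReLU) + residual
    return z

rng = np.random.default_rng(0)
W2, D = 9, 16                     # 3x3 window of tokens, embedding dim 16
z = rng.standard_normal((W2, D))
params = [0.1 * rng.standard_normal((D, D)) for _ in range(5)]
out = transformer_block(z, *params)
```

Stacking eight such blocks, as the encoder \(\mathbf{f}\) does, keeps the \((W^{2}, D)\) shape of the window embedding unchanged throughout.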
### _Prototype-based Contrastive Representation Learning_
Inspired by [11], a prototype-based contrastive representation learning approach is proposed to learn discriminative embeddings for self-supervised traversability learning. The overall process of this approach is illustrated in Fig. 6.
In this approach, a given local window is first processed by separate query encoder \(\mathbf{f}_{q}\) and key encoder \(\mathbf{f}_{k}\), respectively, generating a query embedding \(\mathbf{z}_{q}\) and a key embedding \(\mathbf{z}_{k}\) for each token. Only the parameters \(\mathbf{\theta}_{q}\) of \(\mathbf{f}_{q}\) are updated by back-propagation, while the parameters \(\mathbf{\theta}_{k}\) of \(\mathbf{f}_{k}\) are momentum updated by \(\mathbf{\theta}_{q}\):
\[\mathbf{\theta}_{k}=m_{\theta}\cdot\mathbf{\theta}_{k}+(1-m_{\theta})\cdot\mathbf{\theta}_ {q}\,, \tag{4}\]
where \(m_{\theta}\) is a momentum coefficient for updating encoder. The query embedding \(\mathbf{z}_{q}\) is then fed into an MLP-based classifier \(\mathbf{f}_{c}\). By combining the output of \(\mathbf{f}_{c}\) with the pseudo label \(\mathbf{y}\) of the token corresponding to \(\mathbf{z}_{q}\), a masked predicted label \(\tilde{y}_{q}\) can be generated by:
\[\tilde{y}_{q}=\operatorname*{arg\,max}_{j\in[1,K]}\left[\mathbf{f}_{c}^{j}\left( \mathbf{z}_{q}\right)\cdot\mathbf{y}\right]\,, \tag{5}\]
where \(\mathbf{f}_{c}^{j}\left(\mathbf{z}_{q}\right)\) denotes the \(j\)-th component of the output vector \(\mathbf{f}_{c}\left(\mathbf{z}_{q}\right)\in\mathbb{R}^{K}\).
Fig. 5: An illustration of the local window based transformer encoder. The input terrain feature map is first partitioned into a sequence of local windows, and each local window is further split into several terrain patches. Then, a linear embedding layer is applied on each input patch, followed by eight transformer blocks. The right figure shows the composition of each transformer block.
Fig. 6: The overall process of the prototype-based contrastive representation learning approach.
An embedding queue \(\mathbf{Q}_{e}\) and a predicted label queue \(\mathbf{Q}_{l}\) are maintained to store the recently encoded key embeddings and their corresponding predicted labels. The embeddings and labels of the latest tokens are enqueued, and the same number of the oldest embeddings and labels are dequeued to ensure a fixed queue size. For a token \(\mathbf{x}\) with a predicted label \(\tilde{y}_{q}\), its positive embeddings can be selected from \(\mathbf{Q}_{e}\). Specifically, any embedding \(\mathbf{z}^{\prime}\) in \(\mathbf{Q}_{e}\) with the same predicted label as \(\tilde{y}_{q}\) is selected as a positive embedding, while the remaining embeddings are considered as negative embeddings. After the positive/negative embeddings selection, the per-token contrastive loss \(\mathcal{L}_{\mathrm{cont}}\left(\mathbf{x}\right)\) can be defined as:
\[\mathcal{L}_{\mathrm{cont}}\left(\mathbf{x}\right)=\frac{-1}{|\mathbf{A}\left(\mathbf{x} \right)|}\sum\limits_{\mathbf{z}^{+}\in\mathbf{A}\left(\mathbf{x}\right)}\log\frac{\exp \left(\mathbf{z}_{q}^{T}\mathbf{z}^{+}/\tau\right)}{\sum\limits_{\mathbf{z}_{j}\in\mathbf{Q}_ {e}}\exp\left(\mathbf{z}_{q}^{T}\mathbf{z}_{j}/\tau\right)}\,, \tag{6}\]
where \(\tau\) is a temperature hyper-parameter, and \(|\mathbf{A}\left(\mathbf{x}\right)|\) denotes the total number of positive samples in the positive embedding set \(\mathbf{A}\left(\mathbf{x}\right)\).
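A sketch of the per-token loss in Eq. (6); the two-entry queue below is a hypothetical stand-in for \(\mathbf{Q}_{e}\) and \(\mathbf{Q}_{l}\), with positives selected by predicted-label match as described above:

```python
import numpy as np

def per_token_contrastive_loss(z_q, queue_e, queue_l, y_pred, tau=0.07):
    """Per-token contrastive loss of Eq. (6) (illustrative sketch).
    Positives are queue embeddings whose predicted label matches y_pred;
    every queue entry appears in the denominator."""
    logits = queue_e @ z_q / tau            # z_q^T z_j / tau for all j
    log_denom = np.log(np.exp(logits).sum())
    pos = logits[queue_l == y_pred]         # positive set A(x)
    if pos.size == 0:
        return 0.0
    return float(-(pos - log_denom).mean())

z_q = np.zeros(8)
z_q[0] = 1.0                                # unit query embedding
queue_e = np.stack([z_q, -z_q])             # one aligned, one opposed key
queue_l = np.array([1, 0])                  # predicted labels in Q_l
loss = per_token_contrastive_loss(z_q, queue_e, queue_l, y_pred=1)
```

When the positive key is aligned with the query the loss is close to zero; swapping the labels so the positive is the opposed key makes the loss large, which is what pulls same-class embeddings together.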
To ensure the generation of discriminative embeddings, a high-quality classifier is required for accurate positive/negative embeddings selection. However, improving the performance of the classifier solely through the contrastive loss is challenging due to the inherent ambiguity of pseudo labels. To alleviate this problem, \(K\) prototype vectors \(\mathbf{\Psi}=\left\{\mathbf{\psi}_{c}\right\}_{c=1:K}\) are created for incremental updating of the pseudo labels. Each prototype serves as a representative embedding for a group of similar embeddings. During training, \(\mathbf{\psi}_{c}\) is momentum updated by those query embeddings \(\mathbf{z}_{q}\) whose predicted labels \(\tilde{y}_{q}\) belong to class \(c\), and the update process can be expressed as:
\[\mathbf{\psi}_{c}=\frac{m_{p}\cdot\mathbf{\psi}_{c}+(1-m_{p})\cdot\mathbf{z}_{q}}{\|m_{p} \cdot\mathbf{\psi}_{c}+(1-m_{p})\cdot\mathbf{z}_{q}\|_{2}}\,, \tag{7}\]
where \(m_{p}\) is a momentum coefficient for updating the prototypes, \(\|\cdot\|_{2}\) denotes L2-norm of a vector.
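Eq. (7) is a momentum blend followed by an L2 re-normalisation, so the prototype stays on the unit sphere. A minimal sketch with toy values (the function name is ours):

```python
import numpy as np

def update_prototype(psi_c, z_q, m_p=0.99):
    """Eq. (7): blend the query embedding into prototype c,
    then re-normalise to unit L2 norm."""
    v = m_p * psi_c + (1.0 - m_p) * z_q
    return v / np.linalg.norm(v)

psi = np.array([1.0, 0.0])   # current prototype
z = np.array([0.0, 1.0])     # new query embedding assigned to this class
psi_new = update_prototype(psi, z)
```

Because \(m_{p}\) is close to 1, each update moves the prototype only slightly toward the new embedding, which keeps it stable across batches.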
After the prototype updating, the pseudo label updating process is performed. An initial normalized vector \(\mathbf{y}_{n}=\frac{\mathbf{y}}{\sum_{i=1}^{K}y_{i}}\) is assigned to each token based on its pseudo label \(\mathbf{y}\) in the first batch. Then, an indicator vector \(\mathbf{\xi}\in\mathbb{R}^{K}\) is computed by comparing the similarity between \(\mathbf{z}_{q}\) and \(\mathbf{\Psi}\), and \(\mathbf{y}_{n}\) is momentum updated by:
\[\mathbf{y}_{n} =m_{l}\cdot\mathbf{y}_{n}+(1-m_{l})\cdot\mathbf{\xi}\,, \tag{8}\] \[\mathbf{\xi}^{c} =\begin{cases}1&\text{if }c=\operatorname*{arg\,max}_{j\in[1,K]} \left(\mathbf{z}_{q}^{T}\cdot\mathbf{\psi}_{j}\right)\\ 0&\text{else}\end{cases}\,, \tag{9}\]
where \(m_{l}\) is a momentum coefficient for updating pseudo labels, and \(\mathbf{\xi}^{c}\) denotes the \(c\)-th component of \(\mathbf{\xi}\). \(\mathbf{y}_{n}\) is considered as the refined pseudo label, and is utilized for calculating the per-token cross-entropy loss \(\mathcal{L}_{\mathrm{cls}}\left(\mathbf{x}\right)\) as:
\[\mathcal{L}_{\mathrm{cls}}\left(\mathbf{x}\right)=\sum_{j=1}^{K}-\mathbf{y}_{n}^{j} \cdot\log\left(\mathbf{f}_{c}^{j}\left(\mathbf{z}_{q}\right)\right)\,. \tag{10}\]
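The label refinement of Eqs. (8)-(9) and the cross-entropy of Eq. (10) can be sketched together. The toy prototypes, soft label, and classifier output below are our own illustration, not values from the paper:

```python
import numpy as np

def refine_label(y_n, z_q, prototypes, m_l=0.99):
    """Eqs. (8)-(9): one-hot indicator from the most similar
    prototype, blended into the running soft label."""
    xi = np.zeros(len(prototypes))
    xi[int(np.argmax(prototypes @ z_q))] = 1.0
    return m_l * y_n + (1.0 - m_l) * xi

def cross_entropy(y_n, probs, eps=1e-12):
    """Eq. (10): per-token cross-entropy against the refined label."""
    return float(-(y_n * np.log(probs + eps)).sum())

protos = np.eye(3)                   # 3 toy prototypes on the axes
y_n = np.array([0.5, 0.5, 0.0])      # ambiguous initial soft label
z_q = np.array([0.0, 1.0, 0.0])      # embedding nearest prototype 1
y_n = refine_label(y_n, z_q, protos)
loss = cross_entropy(y_n, np.array([0.2, 0.7, 0.1]))
```

Each update nudges a small amount of mass (here \(1-m_{l}=0.01\)) toward the prototype-indicated class, which is how the ambiguity of the initial pseudo label is gradually resolved.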
In the training process, the MLP-based classifier and the query encoder are jointly trained, and the overall loss function is:
\[\mathcal{L}_{\mathrm{sum}}=\mathcal{L}_{\mathrm{cls}}+\lambda\mathcal{L}_{ \mathrm{cont}}\,, \tag{11}\]
where \(\lambda\) is a weight used for balancing \(\mathcal{L}_{\mathrm{cls}}\) and \(\mathcal{L}_{\mathrm{cont}}\).
In summary, the proposed prototype-based contrastive representation learning approach consists of two components that mutually reinforce each other. The discriminative embeddings learned from contrastive learning enhance the quality of positive/negative embedding selection, while the refined pseudo labels in turn improve the performance of contrastive representation learning. Through the iterative interaction of prototype updating and pseudo label updating, the ambiguities associated with the pseudo labels are gradually eliminated, leading to the understanding of the specific traversability.
## IV Experimental Results
### _Experimental Datasets_
To evaluate the proposed method, experiments are conducted on two off-road datasets: the publicly available RELLIS-3D [12] dataset and a Gobi Desert driving dataset collected by our UGV. The data collection platforms and some typical scenes of both datasets are shown in Fig. 7.
The RELLIS-3D dataset consists of five sequences of LiDAR frames collected in a rugged off-road environment using a Warthog all-terrain UGV. The UGV is equipped with an Ouster OS1 LiDAR and a Vectornav VN-300 inertial navigation system. Each LiDAR frame is point-wise annotated with 20 different semantic classes (such as grass, fence, tree, barrier, etc.). Additionally, ground-truth pose for each frame is provided by a high-precision Simultaneous Localization and Mapping (SLAM) system. For our experiments, we select 50 key-frames from sequence 01 for training, 200 random frames from the remaining frames of sequence 01 for validation, and all 2059 frames from sequence 04 for quantitative and qualitative testing.
In our Gobi Desert driving dataset, LiDAR frames were collected in a Gobi desert scene. Our UGV is equipped with a Robosense RS-Ruby128 LiDAR and a StarNeto XW-GI7660 GNSS/INS system. High-frequency 6-degree of freedom (DoF) poses with centimeter-level accuracy can be obtained by using an online pose estimation module proposed in our previous work [33]. For our experiments, we select 100 key-frames for training, and 1900 frames for qualitative testing.
Fig. 7: Data collection platforms and typical scenes of RELLIS-3D dataset (the top figures) and our Gobi Desert driving dataset (the bottom figures).
### _Evaluation Metrics_
To quantitatively evaluate the performance of the proposed method, we utilize the annotations from the RELLIS-3D dataset to generate ground-truth traversability maps. The semantic categories are grouped into three traversability levels (traversable, non-traversable, and risky) based on their travel costs. In the process of ground-truth generation, several annotated LiDAR frames are first assembled by using the provided ground-truth poses. The merged dense point cloud is then projected onto a 2D grid map, and the traversability of each grid cell is determined by the semantic labels of the projected points. If all the projected points within a grid cell have labels such as "grass", "puddle", "asphalt", or "concrete", it is considered as a traversable cell; if the labels of all projected points are "bush" or "fence", it is considered as a risky cell; otherwise, it is considered as a non-traversable cell.
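The per-cell labeling rule above can be written as a small helper. The label vocabulary here is taken from the quoted examples and is illustrative, not the full RELLIS-3D class list:

```python
# label sets taken from the rule described in the text (illustrative)
TRAVERSABLE = {"grass", "puddle", "asphalt", "concrete"}
RISKY = {"bush", "fence"}

def cell_traversability(point_labels):
    """Assign a traversability level to one grid cell from the
    semantic labels of the points projected into it."""
    labels = set(point_labels)
    if labels and labels <= TRAVERSABLE:   # all points traversable
        return "traversable"
    if labels and labels <= RISKY:         # all points bush/fence
        return "risky"
    return "non-traversable"

# e.g. a cell containing only grass and puddle points is traversable
level = cell_traversability(["grass", "grass", "puddle"])
```

Note that a mixed cell (e.g. grass plus tree points) falls through to "non-traversable", matching the "otherwise" clause of the rule.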
Given the ground-truth traversability maps, we evaluate the traversability analysis results by two performance metrics widely used for semantic segmentation: Pixel Accuracy (PA) and mean Intersection over Union (mIoU). PA measures the proportion of correctly classified grid cells in the prediction results, and mIoU calculates the degree of overlap between the ground-truth and prediction results. Both metrics provide a quantitative measure of the grid-level prediction accuracy.
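Both metrics are standard and easy to compute on label maps. A minimal sketch (our own toy 2x2 maps; classes absent from both maps are skipped in the mIoU average, one common convention):

```python
import numpy as np

def pixel_accuracy(pred, gt):
    """Fraction of grid cells whose predicted level matches the ground truth."""
    return float((pred == gt).mean())

def mean_iou(pred, gt, num_classes):
    """Mean per-class intersection-over-union over classes present
    in either the prediction or the ground truth."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))

gt = np.array([[0, 0], [1, 2]])      # toy 2x2 traversability maps
pred = np.array([[0, 1], [1, 2]])
pa = pixel_accuracy(pred, gt)        # 3 of 4 cells correct
miou = mean_iou(pred, gt, 3)
```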
### _Implementation Details_
In our experiments, we set the resolution of each grid cell to \(0.2\mathrm{m}\times 0.2\mathrm{m}\), and the map size is set to \(40\mathrm{m}\times 40\mathrm{m}\). Each terrain patch consists of \(11\times 11\) (\(M=11\)) grid cells, and each local window comprises \(10\times 10\) (\(W=10\)) terrain patches. The dimensionality \(D\) of the embedding is set to 32. The lengths of the embedding queue \(\mathbf{Q}_{e}\) and the predicted label queue \(\mathbf{Q}_{l}\) are kept as 81920. The momentum coefficients \(m_{\theta}\) and \(m_{p}\) are set to 0.999 and 0.99, respectively. The initial momentum coefficient \(m_{l}\) is set to 0.99, and its value decays polynomially after the initial 10 epochs. For hyper-parameters, we set \(\tau\) to 0.07, and \(\lambda\) to 0.5. During training, we use Stochastic Gradient Descent (SGD) as the optimizer, with a weight decay of \(1e^{-5}\), a momentum of 0.9, and an initial learning rate of 0.02. The network is trained for 50 epochs on a NVIDIA RTX A6000 GPU, with an exponentially decayed learning rate.
### _Ablation Studies_
#### Iv-D1 Prototype Num
To evaluate how the number of prototypes \(K\) affects the performance of the proposed method, an ablation study is conducted with varying \(K\). The quantitative experimental results are presented in Fig. 8. It can be found that increasing \(K\) boosts the model's performance up to \(K=4\); beyond that, the performance decreases and tends to converge. Therefore, we choose \(K=4\) as the optimal number of prototypes for our subsequent experiments.
Furthermore, we also conduct visualization to gain insights into the generated prototypes and their semantic meaning. First, we visualize the traversability classification results (Fig. 9(b)). Notably, the proposed method automatically divides the "bushes" category into "tall bushes" and "low bushes", resulting in finer semantic categories compared to the original annotations (Fig. 9(a)). Subsequently, we employ t-SNE [35] visualization to explore the embedding space (Fig. 9(c)). We observe that well-separated clusters are generated in the embedding space. Each cluster represents a specific semantic category, and can be represented by a prototype. Based on these visualizations, we can interpret the semantic meaning of each prototype. In the subsequent traversability analysis, the grass category corresponds to traversable regions, the tree category corresponds to non-traversable regions, and both low bushes and tall bushes are considered risky regions.
#### Iv-D2 Input data
To verify the validity of the input terrain feature map \(\mathbf{F}\) in the proposed method, we conduct an ablation study by varying the forms of input data. In LiDAR-based traversability analysis approaches, a common input data format is the BEV grid map [6, 28]. In this ablation study, we consider two common variations of BEV grid maps: the single LiDAR scan BEV (S-BEV) and the multiple LiDAR scans BEV (M-BEV). The S-BEV is generated from a single LiDAR scan, while the M-BEV is formed by fusing multiple LiDAR scans. The S-BEV and M-BEV are applied as two forms of input data in the proposed framework for comparative analysis. The quantitative experimental results are shown in Table I. The results indicate that the model achieves the worst performance when using S-BEV as the input data. This can be attributed to the sparse nature of a single LiDAR scan, which may fail to provide stable and complete representations of the local environment. Although the model's performance improves significantly when using M-BEV as the input data, there still exists a performance gap compared to using \(\mathbf{F}\) as the input data. The reason is that \(\mathbf{F}\) contains richer information compared to M-BEV.
Fig. 8: Performance comparison results of the ablation study with varying number of prototypes \(K\).
Fig. 9: (a) visualizes the semantic annotation information of a raw LiDAR scan. (b) shows the traversability classification result. (c) is the 2D t-SNE visualization of the generated embeddings. Different colors represent different semantic categories.
#### V-B3 Encoder Network
To evaluate the validity of the proposed local window based transformer (LW-Transformer), two commonly used backbone networks (AlexNet and ResNet-18) are employed as encoders in the proposed framework for comparative analysis. The results in Table I show that the model's performance decreases when using AlexNet or ResNet-18 as encoders. The primary reason is that traversability is a spatially related concept: the traversability of a terrain patch depends not only on the patch itself, but also on the neighboring terrain patches within a certain range. The self-attention mechanism incorporated in the LW-Transformer enables it to capture the implicit spatial dependencies between adjacent terrain patches. This capability is crucial for accurate traversability analysis. In contrast, CNN-based encoder networks lack the modeling of spatial dependencies, which results in performance degradation.
#### V-B4 Loss Function
To validate the impact of the loss functions \(\mathcal{L}_{\mathrm{cont}}\) and \(\mathcal{L}_{\mathrm{cls}}\) in the prototype-based contrastive representation learning, we conduct an ablation study by considering each loss function individually. The results presented in Table I clearly indicate that the model's performance decreases significantly when utilizing only \(\mathcal{L}_{\mathrm{cont}}\) or \(\mathcal{L}_{\mathrm{cls}}\) as the loss function. This finding validates the necessity of using a joint loss function that combines \(\mathcal{L}_{\mathrm{cont}}\) and \(\mathcal{L}_{\mathrm{cls}}\) as Eq. (11) in the proposed method.
### _Comparative Experiments_
We compare the proposed method with two recent LiDAR-based traversability analysis approaches. The first one [10] is a rule-based approach that estimates the travel cost for each region by using the constructed 3D terrain models. It determines traversability based on some cost thresholds derived from vehicle trajectories. The second approach is a self-supervised learning based off-road drivable area extraction network (ORDAE-Net) [28]. ORDAE-Net segments the environments into obstacle regions, traversable regions, and grey regions using vehicle paths and auto-generated obstacle labels. Fig. 10 shows some qualitative comparison results, and the quantitative results are presented in Table II. The results in Fig. 10 show that the rule-based approach can roughly distinguish the overall shape of regions with different traversability, but the results often contain a significant amount of noise. The ORDAE-Net excels in detecting non-traversable regions but struggles to distinguish traversable regions from similar risky regions. In contrast, the proposed method demonstrates superior capability in distinguishing regions with varying traversability. The quantitative results in Table II further support the superiority of the proposed method. It significantly surpasses the other two approaches in terms of both PA and mIoU.
## V Concluding Remarks
In this paper, we present a novel terrain traversability learning method that leverages a contrastive label disambiguation strategy to learn platform-specific and task-specific traversability in a self-supervised manner, without any human-provided annotations. To achieve this, a prototype-based contrastive representation learning approach is designed to learn discriminative embeddings by using weakly labeled terrain patches obtained from actual driving experiences. Through the iterative interaction between prototype updating and pseudo label updating, the ambiguities of the pseudo labels are gradually eliminated, and the specific traversability can be learned. Experimental results on both the RELLIS-3D dataset and our Gobi Desert driving dataset have demonstrated the effectiveness of the proposed method. In future work, we aim to address the limitations of using LiDAR as the sole sensing modality by incorporating visual and proprioceptive modalities to capture richer terrain features.
## Acknowledgment
This work was supported by the National Natural Science Foundation of China under No. 61790565 and No. 61803380.
|
2308.03332 | Improving Deep Attractor Network by BGRU and GMM for Speech Separation | Deep Attractor Network (DANet) is the state-of-the-art technique in speech
separation field, which uses Bidirectional Long Short-Term Memory (BLSTM), but
the complexity of the DANet model is very high. In this paper, a simplified and
powerful DANet model is proposed using Bidirectional Gated neural network
(BGRU) instead of BLSTM. The Gaussian Mixture Model (GMM) other than the
k-means was applied in DANet as a clustering algorithm to reduce the complexity
and increase the learning speed and accuracy. The metrics used in this paper
are Signal to Distortion Ratio (SDR), Signal to Interference Ratio (SIR),
Signal to Artifact Ratio (SAR), and Perceptual Evaluation Speech Quality (PESQ)
score. Two speaker mixture datasets from TIMIT corpus were prepared to evaluate
the proposed model, and the system achieved 12.3 dB and 2.94 for SDR and PESQ
scores respectively, which were better than the original DANet model. Other
improvements were 20.7% and 17.9% in the number of parameters and time
training, respectively. The model was applied on mixed Arabic speech signals
and the results were better than that in English. | Rawad Melhem, Assef Jafar, Riad Hamadeh | 2023-08-07T06:26:53Z | http://arxiv.org/abs/2308.03332v1 | # Improving Deep Attractor Network by BGRU and GMM for Speech Separation
###### Abstract
Deep Attractor Network (DANet) is the state-of-the-art technique in the speech separation field, which uses Bidirectional Long Short-Term Memory (BLSTM), but the complexity of the DANet model is very high. In this paper, a simplified and powerful DANet model is proposed using a Bidirectional Gated Recurrent Unit (BGRU) network instead of BLSTM. The Gaussian Mixture Model (GMM) rather than k-means was applied in DANet as a clustering algorithm to reduce the complexity and increase the learning speed and accuracy. The metrics used in this paper are Signal to Distortion Ratio (SDR), Signal to Interference Ratio (SIR), Signal to Artifact Ratio (SAR), and Perceptual Evaluation of Speech Quality (PESQ) score. Two-speaker mixture datasets from the TIMIT corpus were prepared to evaluate the proposed model, and the system achieved 12.3 dB and 2.94 for SDR and PESQ scores respectively, which were better than the original DANet model. Other improvements were 20.7% and 17.9% in the number of parameters and training time, respectively. The model was applied on mixed Arabic speech signals and the results were better than those in English.
Keywords: attractor network; speech separation; gated recurrent units. DOI: 10.11916/j.issn.1005-9113.2019044
## 0 Introduction
Isolating each speech signal from a mixture in a noisy environment is an easy task for humans but a difficult one for machines. The problem is called the "cocktail party" problem, and its solution is useful in various applications, such as automatic meeting transcription, automatic captioning for audio/video recordings (e.g., YouTube), applications that need human-machine interaction (e.g., the Internet of Things (IoT)), and advanced hearing aids.
Cocktail party was formalized by Cherry E C in 1953[1], and many solutions were proposed. Most of the proposed solutions tried to mimic actions of human ears by filtering or extracting auditory properties from mixture, and then grouping T-F bins of the same speaker. These methods belong to Computational Auditory Speech Analysis ( CASA) [2], but CASA methods are not enough for analysis. Another approach for multi-speaker separation is non-negative Matrix Factorization ( NMF )[3], which decomposes spectrogram matrix into two matrices ( templates and activations). By using activations and non-negative dictionaries, the separated signal can be approximated. Statistical methods were also utilized as solutions for speech separation, such as Independent Component Analysis ( ICA) [4], which assumes that speech signal and interference signal are statistically independent, so the separation can be conducted by maximizing the independence. However, ICA only works for overdetermined environment, while cocktail party is an issue in underdetermined environment[5].
Deep learning performs significantly better than other methods and has been applied in speech enhancement and separation [6, 7, 8] as well as in music separation [10, 11, 12]. However, deep learning has two obstacles [13], i.e., a fixed number of outputs and the permutation of sources. Training a neural network to separate \(n\) signals does not work for any number that differs from \(n\), thus resulting in a fixed number of outputs. The permutation of sources occurs due to the order of sources at the outputs of the network. Training the network on two different orders of sources will increase the training error and lead to a convergence problem. The difficulty of "permutation of sources"
can be solved through Permutation Invariant Training (PIT) [13] by calculating the error between the separated signal and the targets and choosing the target corresponding to the minimum error. However, PIT can only remove the permutation obstacle, while the problem of a fixed number of outputs remains. Deep clustering (DC) [14] outperforms PIT in solving the problems of "fixed number of outputs" and "permutation of sources". The main idea of DC is to produce a new space of embeddings, which has desirable features that can make speaker segmentation much easier. Each T-F bin of the spectrogram corresponds to an embedding vector, so the embedding space is dense and able to show the T-F bins in a separable way. A clustering algorithm is applied to the embeddings to get each speaker, so it is feasible to vary the number of clusters and thus the outputs of the network. In this way, DC can solve the problem of a fixed number of outputs. The loss function of DC is the Frobenius norm between the affinity matrix of the embeddings and the affinity matrix of the target binary masks, so the permutation problem disappears since the affinity matrices are not affected by the order. The trouble with DC is that it is not an end-to-end system, because mask generation is done separately after the neural network [12]. Deep Attractor Network (DANet) [12] builds on the DC algorithm, but after extracting the embeddings, central points of each speaker's embeddings are created, which are called "attractors". By calculating the distance between each embedding vector and the attractors, the mask for each speaker is generated. DANet uses a reconstruction error comparing reconstructed and ideal signals, so it provides an end-to-end system. The disadvantage of DANet is its complexity: it takes a long time in training and a relatively long time in estimating the masks.
Gated Recurrent Units (GRU) [15] are a newer version of the recurrent neural network (RNN), which has achieved better results than LSTM in many cases [16]. In this paper, a new version of DANet is proposed, which is less complex and more accurate. Embeddings are created by a Bidirectional GRU instead of a BLSTM, which makes the model less complex; therefore, the neural network can be trained on a normal workstation. The Gaussian Mixture Model (GMM) clustering algorithm was employed instead of k-means for a more accurate model.
The rest of the paper is organized as follows. Speech separation problem is introduced in Section 1. In Section 2, the proposed system is explained. The experimental results are discussed in Section 3.
## 1 Single Channel Speech Separation Problem
The single channel speech separation problem is defined as follows:
Estimate the signals \(\boldsymbol{s}_{i}(t)\), \(i=1,2,\cdots,N\), given only the signal \(\boldsymbol{x}(t)\), where \(\boldsymbol{x}(t)\) is the mixture of the \(N\) speech signals, \(\boldsymbol{x}(t)=\sum\limits_{i=1}^{N}\boldsymbol{s}_{i}(t)\).
To write \(\boldsymbol{x}(t)\) in the time-frequency (T-F) domain, the Short-Time Fourier Transform (STFT) is calculated as
\[\boldsymbol{X}(f,t)=\sum\limits_{i=1}^{N}\boldsymbol{S}_{i}(f,t) \tag{1}\]
where \(\boldsymbol{X}\), \(\boldsymbol{S}_{i}\) are the Fourier transforms of \(\boldsymbol{x}\) and \(\boldsymbol{s}_{i}\), respectively.
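Because the STFT is linear, the mixture spectrogram in Eq. (1) is exactly the sum of the source spectrograms. The sketch below checks this with sine waves standing in for speech and a minimal Hann-windowed STFT helper of our own (not a library routine; frame and hop sizes are arbitrary):

```python
import numpy as np

def stft(x, frame_len=256, hop=128):
    """Minimal STFT (Hann window, no padding): returns (freq, time)."""
    win = np.hanning(frame_len)
    frames = [x[i:i + frame_len] * win
              for i in range(0, len(x) - frame_len + 1, hop)]
    return np.fft.rfft(np.array(frames), axis=1).T

fs = 8000
t = np.arange(fs) / fs
s1 = np.sin(2 * np.pi * 220 * t)   # stand-ins for two speech signals
s2 = np.sin(2 * np.pi * 330 * t)
x = s1 + s2                        # single-channel mixture x(t)
X, S1, S2 = stft(x), stft(s1), stft(s2)
```

Windowing and the Fourier transform are both linear, so \(X\) equals \(S_1+S_2\) up to floating-point rounding, which is what makes T-F masking a sensible separation strategy.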
## 2 The Proposed Model
The proposed method is very similar to DANet [12], except that the hidden layers are BGRU rather than BLSTM, and GMM replaces the k-means algorithm.
### GRU
Fig. 1 shows the architecture of LSTM and GRU [17].
RNN can use hidden states to process sequences of inputs, and there are three categories of RNN:
1) Vanilla RNN: It is the simplest type of RNN, which suffers from the vanishing/exploding gradient problem during training.
2) LSTM: It first appeared in 1997 and has three gates, i.e., input, forget, and output. It can solve the vanishing/exploding gradient problem, but with high complexity.
3) GRU: It was introduced in 2014 and is considered the simplest version of LSTM, replacing the forget and input gates with an update gate.
LSTM has achieved satisfactory results in speech processing tasks, but its complexity remains a problem, so the network needs more data to learn. Recently, GRU has become a strong competitor of LSTM; in some cases both have nearly the same results, but GRU outperforms the standard LSTM most of the time [16]. Merging the input and forget gates into one gate combines the hidden state with the cell state, which is one of the reasons for the superiority of GRU.
### GMM
k-means is a special case of GMM, but it is only suitable when the clusters are spherical. The biggest limitation of k-means is probably that each cluster has the same diagonal covariance matrix. GMM is more flexible and generally more accurate than k-means. In this paper, it is proposed that each cluster resulting from GMM has its own general covariance matrix.
Fig. 2 shows the flow chart of the whole method. The steps can be summarized as preprocessing, training phase, and testing phase.
### The proposed model using BGRU and GMM
#### 2.2.1 Preprocessing
1) Calculate the magnitude of the spectrogram of the mixture using STFT as a flattened vector \(\boldsymbol{X}\in\mathbf{R}^{1\times FT}\) to be the input for the neural network.
2) Build an ideal binary mask for each speaker using a Wiener-filter-like mask as
\[\text{WFM}_{i,ft}=\frac{\big{|}s_{i,ft}\big{|}^{2}}{\sum\limits_{j=1}^{N}\big{|}s_{j,ft}\big{|}^{2}} \tag{2}\]
\[\boldsymbol{m}_{i}=\begin{cases}1\,,&\text{WFM}_{i,ft}>\tau\\ 0\,,&\text{otherwise}\end{cases} \tag{3}\]
where \(\boldsymbol{m}_{i}\in\mathbf{R}^{1\times FT}\).
Choose \(\tau=0.5\) as a threshold. Ideal masks will be used in training phase only.
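Eqs. (2)-(3) amount to a per-bin power ratio followed by a threshold, which can be implemented in a few lines of NumPy. The toy 2x2 "spectrograms" below are our own illustration:

```python
import numpy as np

def ideal_binary_masks(specs, tau=0.5):
    """Eqs. (2)-(3): Wiener-filter-like power ratios per T-F bin,
    thresholded at tau to give one binary mask per speaker."""
    power = np.abs(np.array(specs)) ** 2            # shape (N, F, T)
    wfm = power / power.sum(axis=0, keepdims=True)  # Eq. (2)
    return (wfm > tau).astype(float)                # Eq. (3)

S1 = np.array([[3.0, 0.1], [0.2, 2.0]])   # toy magnitude spectrograms
S2 = np.array([[0.1, 2.0], [3.0, 0.1]])
m1, m2 = ideal_binary_masks([S1, S2])
```

With \(\tau=0.5\) and two speakers, each T-F bin is assigned to exactly one speaker, so the masks partition the spectrogram.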
#### 2.2.2 Training phase
1) Generate the embedding space \(\boldsymbol{V}\) using four BGRU layers and a fully connected layer, where each T-F bin of the magnitude spectrogram is mapped to a \(K\)-dimensional vector.
\[\boldsymbol{V}=f(\boldsymbol{X})\ \,\quad\boldsymbol{V}\in\mathbf{R}^{ \kappa\times FT} \tag{4}\]
The fully connected layer is used to map the spectrogram into the dense embedding space.
2) Form attractor \(\boldsymbol{a}_{i}\in\mathbf{R}^{1\times K}\) for each cluster by calculating the weighted average of embeddings as
\[\boldsymbol{a}_{i}=\frac{\boldsymbol{m}_{i}\,\boldsymbol{\cdot}\,\boldsymbol{V}^{\text{T}}}{\sum\limits_{f,t}\boldsymbol{m}_{i}}\,,\quad i=1\,,2\,,\cdots,N \tag{5}\]
3)Measure the distance \(\boldsymbol{d}_{i}\) between each embedding vector and the attractors as
\[\boldsymbol{d}_{i}=\boldsymbol{a}_{i}\boldsymbol{V}\,\ i=1\,,2\,,\cdots,N \tag{6}\]
where \(\boldsymbol{d}_{i}\in\mathbf{R}^{1\times FT}\). By normalizing the distance, each mask will be estimated by sigmoid function as
\[\hat{\boldsymbol{m}_{i}}=\text{sigmoid}\,(\boldsymbol{d}_{i}) \tag{7}\]
4) Update the weights of the network by minimizing the reconstruction error as follows;
\[L=\frac{1}{N}\sum\limits_{i}\ \big{\|}\boldsymbol{X}\odot(\boldsymbol{m}_{i} \ -\hat{\boldsymbol{m}_{i}})\ \big{\|}_{2}^{2} \tag{8}\]
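Training steps 2)-4), i.e., Eqs. (5)-(8), reduce to a few matrix operations. The sketch below substitutes random embeddings for the BGRU output; all shapes are toy values of our own choosing (in the paper \(K=20\) and \(FT\) is the number of T-F bins):

```python
import numpy as np

rng = np.random.default_rng(1)
F_T, K, N = 6, 4, 2                     # flattened T-F bins, embedding dim, speakers
V = rng.normal(size=(K, F_T))           # embeddings standing in for Eq. (4)
X = np.abs(rng.normal(size=F_T))        # mixture magnitude
m = np.zeros((N, F_T))                  # ideal binary masks (toy partition)
m[0, :3] = 1.0
m[1, 3:] = 1.0

# Eq. (5): attractors = mask-weighted average of the embeddings
a = (m @ V.T) / m.sum(axis=1, keepdims=True)        # shape (N, K)
# Eq. (6): similarity of every embedding to each attractor
d = a @ V                                           # shape (N, F_T)
# Eq. (7): sigmoid turns similarities into soft masks
m_hat = 1.0 / (1.0 + np.exp(-d))
# Eq. (8): reconstruction error between ideal and estimated masked spectra
L = (np.linalg.norm(X * (m - m_hat), axis=1) ** 2).mean()
```

In the real model \(L\) is backpropagated through the BGRU; here it simply shows that the whole pipeline after the encoder is differentiable matrix algebra.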
#### 2.2.3 Testing phase
1)Calculate the phase of spectrogram;
2)Generate the embedding using the previous trained model;
3)Cluster the last embedding using GMM;
4)Find the attractors, which are the center of the clusters;
5)Estimate the mask for each speaker by calculating the distance \(\boldsymbol{d}_{i}\) following Step 3 in the Training Phase;
6) Reconstruct the speech signal for each speaker by multiplying the magnitude of the spectrogram by the corresponding estimated mask, and then applying the inverse STFT using the phase of the mixture calculated in Step 1.
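At test time the ideal masks are unavailable, so the attractors come from clustering the embeddings (steps 3)-5) above). A sketch using scikit-learn's `GaussianMixture` on toy 2-D embeddings with two well-separated speakers (our own construction; the paper uses \(K=20\) dimensions):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
# toy embedding matrix: rows are T-F bins, two clear speaker clusters
V = np.vstack([rng.normal(-3.0, 0.3, size=(50, 2)),
               rng.normal(+3.0, 0.3, size=(50, 2))])

gmm = GaussianMixture(n_components=2, covariance_type="full",
                      random_state=0).fit(V)
labels = gmm.predict(V)                # step 3: cluster the embeddings
attractors = gmm.means_                # step 4: cluster centres = attractors
d = V @ attractors.T                   # step 5: similarity to each attractor
masks = 1.0 / (1.0 + np.exp(-d))       # sigmoid mask per speaker, as in Eq. (7)
```

Using `covariance_type="full"` gives each cluster its own general covariance matrix, matching the modification proposed in this paper.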
## 3 Experimental Results
### Network Architecture
The network is similar to that of DANet, but differs in the way of extracting embeddings and in the
clustering algorithm. The dimension of the input feature is 129. The optimizer is ADAM with a learning rate starting at \(10^{-3}\), halved if the validation error does not decrease within 3 epochs. The number of epochs was chosen as 150. Four bi-directional GRU layers were used, each with 600 units. The dimension of the embedding vector was set to 20.
Then, the architecture of the network was changed to check its effects. Table 4 shows the separation results using BLSTM network with k-means and BGRU network with k-means algorithm.
Third, the effect of using BGRU with GMM instead of BLSTM with k-means was studied, and the proposed model was compared with DANet. The two models should be trained on the same dataset for a fair comparison, but WSJ0-2mix is not free. Thus, the baseline DANet model, which consists of BLSTM and k-means, was trained on TIMIT-2mix, with all parameters as given in Ref. [12]. In this case, the comparison is convincing because the two models were trained on the same TIMIT-2mix dataset.
As can be seen in Table 5, the proposed model, which depends on BGRU network with GMM, outperformed the DANet model.
The results of our model can be seen clearly in Fig. 4, where Fig. 4(a) and Fig. 4(b) show the two separated speakers in the time domain and in the spectrogram domain, respectively.
The system is language independent, because it can work on Arabic although it was trained on English. It relies on features of the human voice and does not depend on language. According to the study, the separation of Arabic mixtures was better than that of English ones, as performance depends on the speed of talking, and speaking in Arabic is often slower than in English. Table 6 shows the performance of the proposed model on Arabic and English mixtures.
### Discussion
Table 1 reveals how the complexity is reduced by using GRU. GRU is increasingly used and is much simpler than LSTM. The reason is that LSTM has three gates and an internal memory (cell state), while GRU only has two gates and no cell state, which requires less computational power and allows faster training. Therefore, GRU can be used to form really deep networks.
As shown in Table 3, the separation accuracy was
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Metrics & SDR ( dB) & SIR ( dB) & SAR ( dB) & PESQ \\ \hline Arabic & 13.20 & 21.00 & 14.80 & 3.11 \\ English & 12.30 & 19.20 & 13.60 & 2.94 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Comparison of separation results between Arabic and English mixtures
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Metrics & SDR ( dB) & SIR ( dB) & SAR ( dB) & PESQ \\ \hline BLSTM+k-means & 9.30 & 15.80 & 11.40 & 2.07 \\ BGRU+k-means & 11.80 & 18.10 & 12.90 & 2.71 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Comparison of [ BLSTM+k-means, BGRU+k-means] in terms of accuracy of separation using TIMIT-2mix dataset
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Metrics & SDR ( dB) & SIR ( dB) & SAR ( dB) & PESQ \\ \hline BLSTM+k-means & 9.30 & 15.80 & 11.40 & 2.07 \\ BLSTM+GMM & 10.70 & 16.90 & 11.50 & 2.50 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Comparison of [ BLSTM + k-means, BLSTM + GMM] in terms of accuracy of separation using TIMIT-2mix dataset
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Metrics & SDR ( dB) & SIR ( dB) & SAR ( dB) & PESQ \\ \hline DANet & 9.30 & 15.80 & 11.40 & 2.07 \\ The proposed model & 12.30 & 19.20 & 13.60 & 2.94 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Comparison of DANet with the proposed model in terms of accuracy of separation using TIMIT-2mix dataset
improved by GMM, and BGRU was more useful in separating speech (Table 4). In Table 5, the BGRU network with GMM was better than BLSTM with k-means. The architecture of the network and the clustering algorithm had different influences on increasing the accuracy.
In our task ( i.e., speech separation), according to the experimental results, GRU performed better than LSTM, and the reasons for the superiority of GRU over LSTM in speech separation are as follows:
1) GRU does not limit the amount of information added to the cell in each time step, whereas in LSTM this is controlled by the forget gate, which sometimes leads to the loss of some features.
2) GRU exposes the output (hidden state) in its entirety, unlike LSTM, which limits the output with hyperbolic tangent and sigmoid functions. This helps the GRU network find patterns more easily.
3) It is possible for GRU to capture the overall sequence, which makes it more powerful in classification and clustering tasks, while the complexity of LSTM (more gates and a cell state) makes it able to learn complicated relationships between words, beyond classification and clustering tasks.
In general, the performance of k-means was not better than GMM because of its assumption that all clusters are spherical, as determined by the covariance. In our case, it is not necessary for each cluster to have a spherical shape. GMM is much more flexible in terms of cluster covariance than k-means. Due to the covariance parameters, the clusters can take on any shape rather than being restricted to spheres. k-means is actually a special case of GMM in which the covariance of each cluster along all dimensions approaches zero.
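This point can be demonstrated with scikit-learn: on a tight cluster next to a wide one, the implicit equal-spherical-covariance assumption of k-means misassigns points in the wide cluster's tail, while a full-covariance GMM learns a separate covariance per component. The data and values below are our own toy construction, not results from the paper:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
tight = rng.normal([0.0, 0.0], 0.3, size=(300, 2))   # small, compact cluster
wide = rng.normal([5.0, 0.0], 2.0, size=(300, 2))    # large, diffuse cluster
X = np.vstack([tight, wide])
y = np.repeat([0, 1], 300)

def accuracy(pred, true):
    acc = float((pred == true).mean())
    return max(acc, 1.0 - acc)   # cluster labels are defined up to permutation

km_acc = accuracy(KMeans(n_clusters=2, n_init=10,
                         random_state=0).fit_predict(X), y)
gmm = GaussianMixture(n_components=2, covariance_type="full",
                      random_state=0).fit(X)
gm_acc = accuracy(gmm.predict(X), y)
```

The k-means decision boundary is the perpendicular bisector of the two centroids, so wide-cluster points in its tail fall on the tight cluster's side; the GMM decision uses each component's own fitted covariance and avoids most of these errors.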
## 4 Conclusions
In this work, a new version of DANet was proposed, which is less complex and more accurate, using BGRU for generating embeddings and GMM with a general covariance matrix for each cluster. These modifications to the DANet structure improved the learning speed and separation accuracy, making it possible to train the neural network on a normal PC instead of a high-performance one. It was found that the proposed system is language independent, but it performed better on Arabic than on English. Another advantage of this study is that a new challenging dataset, TIMIT-2mix, is proposed, which may be an alternative to WSJ0-2mix.
## Acknowledgement
The authors would like to thank Dr. Jonathan Le Roux of Mitsubishi Electric Research Lab for his valuable advice.
|

2301.10123 | Inducing Point Allocation for Sparse Gaussian Processes in High-Throughput Bayesian Optimisation | Henry B. Moss, Sebastian W. Ober, Victor Picheny | 2023-01-24T16:43:29Z | http://arxiv.org/abs/2301.10123v2

# Inducing Point Allocation for Sparse Gaussian Processes in High-Throughput Bayesian Optimisation
###### Abstract
Sparse Gaussian processes are a key component of high-throughput Bayesian optimisation (BO) loops; however, we show that existing methods for allocating their inducing points severely hamper optimisation performance. By exploiting the quality-diversity decomposition of determinantal point processes, we propose the first inducing point allocation strategy designed specifically for use in BO. Unlike existing methods which seek only to reduce global uncertainty in the objective function, our approach provides the local high-fidelity modelling of promising regions required for precise optimisation. More generally, we demonstrate that our proposed framework provides a flexible way to allocate modelling capacity in sparse models and so is suitable for a broad range of downstream sequential decision making tasks.
## 1 Introduction
Countless design tasks in science, industry and machine learning can be formulated as high-throughput optimisation problems, as characterised by access to substantial evaluation budgets and an ability to make large batches of evaluations in parallel. Prominent examples include high-throughput screening within drug discovery (Hernandez-Lobato et al., 2017), DNA sequencing, and experimental design pipelines, where automation allows researchers to efficiently oversee thousands of scientific experiments, field tests and simulations through sensor arrays and cloud compute resources (Kandasamy et al., 2018). However, such design tasks tend to have large search spaces and multi-modal optimisation landscapes such that, even under large optimisation budgets, only a small proportion of candidate solutions can ever be evaluated, and often only with significant levels of observation noise. Consequently, most existing optimisation routines are unsuitable, as brute-force methods require too many evaluations.
Bayesian optimisation (BO, see Shahriari et al., 2016, for a review) has surfaced as the _de facto_ approach for solving noisy black-box optimisation tasks under restricted evaluation budgets, with numerous successful applications across the empirical sciences and industry. However, vanilla BO relies on Gaussian processes (GPs, Rasmussen and Williams, 2006), which incur a significant computational overhead for each individual optimisation step. This cost becomes increasingly unwieldy as data volumes increase, making it unsuitable for the high-throughput tasks motivated above.
Several ways to scale up BO with large data volumes have been explored, including using local models (Eriksson et al., 2019) or neural networks (Hernandez-Lobato et al., 2017). Among those alternatives, sparse GPs (Titsias, 2009) are particularly attractive as they dramatically reduce the computational cost of GPs and have enabled BO to be applied to a range of applications including molecular search (Griffiths and Hernandez-Lobato, 2020), laser optimisation (McIntire et al., 2016), model optimisation (Nickson et al., 2014), alloy design (Yang et al., 2021), and risk-averse optimisation (Picheny et al., 2022).
Figure 1: A toy problem showing two sparse GP surrogate models, one with its inducing points chosen using an existing method (top) and the other using one of our proposed BO-specific methods (bottom) that focuses modelling resources into promising areas of the search space. Our model is better suited for assisting BO find this function’s minima.
In a nutshell, sparse GPs replace the full set of observations by a smaller representative set of pseudo-observations referred to as _inducing points_. The choice of the inducing point locations has a critical influence on the behaviour of the model, as it encodes local expressivity. However, existing approaches for inducing point allocation (IPA) focus purely on regression tasks, i.e., the global accuracy of models, and so sacrifice high-fidelity (local) modelling of promising regions which is required, as confirmed by our experiments, for effective optimisation (Figure 1). For this reason, there is a need for BO-specific IPA strategies; however, to our knowledge, no such methods exist in the literature.
Our contributions can be summarised as follows:
1. We demonstrate that existing IPA strategies do not support high-precision BO.
2. We introduce the use of quality-diversity decomposed DPPs as an IPA, allowing the trade-off of an IPA's diversity against an underlying preference.
3. We propose a guide for practical BO-specific IPA methods along with several specific recommendations.
4. We show that our methods out-perform established baselines across synthetic and real-world high-throughput optimisation and active learning tasks.
## 2 Background
Bayesian Optimisation.BO is a highly data-efficient method for finding the optima of a smooth function \(f:\mathcal{X}\to\mathds{R}\). By using a probabilistic surrogate model, typically a GP, coupled with a data acquisition strategy, evaluations are focused into promising areas of the search space \(\mathcal{X}\), allowing identification of good solutions within heavily constrained evaluation budgets.
Popular examples of data acquisition strategies include expected improvement (EI, Jones et al., 1998), knowledge gradient (Frazier et al., 2008), entropy search (Hennig and Schuler, 2012), or Thompson sampling (TS, Kandasamy et al., 2018). While our framework is not specific to any acquisition strategy, we focus mainly on Thompson sampling, a simple yet effective strategy that evaluates the maxima (minima) of random samples from the surrogate model when performing black-box maximisation (minimisation). TS is an obvious choice for high-throughput BO due to its natural ability to handle highly parallelised optimisation resources, e.g. for molecular search (Hernandez-Lobato et al., 2017) or distributed computing (Kandasamy et al., 2018). Moreover, Vakili et al. (2021) have recently shown that the decoupled TS approach of Wilson et al. (2020) can provide a drastic efficiency gain over traditional TS without significant impact on regret performance.
Gaussian Processes.GP models are a popular choice as surrogate models for BO, as they combine flexibility with reliable uncertainty estimates. A GP can be defined as an infinite collection of random variables, any finite number of which are distributed according to a multivariate Gaussian (Rasmussen and Williams, 2006). Consider a dataset \(\mathcal{D}=(X,\textbf{y})\) consisting of \(N\) input-output pairs \((\textbf{x}_{n},y_{n})\), where \(\textbf{x}\in\mathcal{X}\) and \(y\in\mathds{R}\). In Gaussian process regression, we model this dataset as a noisy realization of a latent function,
\[y_{n}=f(\textbf{x}_{n})+\epsilon_{n},\quad\epsilon\sim\mathcal{N}(0,\sigma^{2}),\]
where we have given \(f\) a GP prior, \(f\sim\mathcal{GP}(\mu_{0}(\cdot),k(\cdot,\cdot))\), and \(\sigma^{2}\) is the noise variance. \(\mu_{0}:\mathcal{X}\to\mathds{R}\) is the (prior) mean function, whereas \(k:\mathcal{X}\times\mathcal{X}\to\mathds{R}\) is a positive semidefinite covariance function or kernel; taken together, these are sufficient to fully describe the GP prior, which states that \(f(X)\sim\mathcal{N}(\mu_{0}(X),K_{X})\), where we have defined \(K_{X}\coloneqq[k(\textbf{x},\textbf{x}^{\prime})]_{\textbf{x},\textbf{x}^{ \prime}\in\mathcal{X}}\) (abusing the notation slightly). For notational simplicity, we henceforth assume the mean function to be zero. By conditioning on the observed data, we can compute the exact posterior \(p(f|\textbf{y})\) as a GP with mean and covariance functions
\[\mu(\textbf{x}) =\textbf{k}_{X}(\textbf{x})^{T}(K_{X}+\sigma^{2}I_{N})^{-1} \textbf{y} \tag{1}\] \[\Sigma(\textbf{x},\textbf{x}^{\prime}) =k(\textbf{x},\textbf{x}^{\prime})-\textbf{k}_{X}(\textbf{x})^{T} (K_{X}+\sigma^{2}I_{N})^{-1}\textbf{k}_{X}(\textbf{x}^{\prime}),\]
where we have defined \(\textbf{k}_{X}\coloneqq[k(\textbf{x}^{\prime},\textbf{x})]_{\textbf{x}^{ \prime}\in\mathcal{X}}\) and the identity matrix \(I_{N}\in\mathds{R}^{N\times N}\). While we can compute the exact posterior predictive using these equations, in practice we are often limited to using small datasets, as computing the required \((K_{X}+\sigma^{2}I_{N})^{-1}\) requires \(O(N^{3})\) computational complexity and \(O(N^{2})\) memory.
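Eq. 1 translates directly into a few lines of numpy. The sketch below is illustrative (the RBF kernel, its lengthscale, and the toy data are assumptions, not the paper's setup) and uses a linear solve in place of the explicit inverse for numerical stability:

```python
import numpy as np

def rbf(A, B, ls=0.2):
    """RBF kernel matrix between row-stacked inputs A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls ** 2)

def gp_posterior(X, y, Xstar, noise=1e-8):
    """Posterior mean and covariance of Eq. 1 at test points Xstar."""
    K = rbf(X, X) + noise * np.eye(len(X))   # K_X + sigma^2 I_N
    Ks = rbf(X, Xstar)                       # columns are k_X(x*)
    mu = Ks.T @ np.linalg.solve(K, y)
    cov = rbf(Xstar, Xstar) - Ks.T @ np.linalg.solve(K, Ks)
    return mu, cov

X = np.linspace(0, 1, 5)[:, None]
y = np.sin(6 * X[:, 0])
mu, cov = gp_posterior(X, y, X)              # predict back at the training inputs
```

With near-zero noise the posterior mean interpolates the observations, while far from the data the predictive variance reverts to the prior variance of 1.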
Sparse Variational Gaussian Processes.To mitigate the computational cost of GP modelling and allow for larger datasets, sparse variational approaches (Titsias, 2009; Hensman et al., 2013) have been developed. Instead of conditioning on the \(N\) training points, sparse GPs learn a set of \(M<<N\)_inducing variables_\(\textbf{u}\in\mathds{R}^{M}\), defined at _inducing locations_\(Z=\{\textbf{z}_{m}\}_{m=1}^{M},\textbf{z}_{m}\in\mathcal{X}\), so that \(\textbf{u}=f(Z)\). By defining an approximate posterior over the inducing variables \(q(\textbf{u})=\mathcal{N}(\textbf{u},\textbf{m},S)\) with variational parameters \(\textbf{m}\in\mathds{R}^{M},S\in\mathds{R}^{M\times M}\), we can simultaneously learn the inducing locations and variational parameters by maximizing the _evidence lower bound (ELBO)_:
\[\mathcal{L}=\mathbb{E}_{q(\textbf{f})}\left[\log p(\textbf{y}|\textbf{f})\right] -\mathrm{KL}(q(\textbf{u})||p(\textbf{u})),\]
where \(q(\textbf{f})=\mathcal{N}(\textbf{f};\mu_{\textbf{f}},\Sigma_{\textbf{f}})\) is the approximate posterior over the function values defined at the data points implied by conditioning on **u**,
\[\mu_{\textbf{f}} =\textbf{k}_{Z}(X)^{T}K_{Z}^{-1}\textbf{m}\] \[\Sigma_{\textbf{f}} =K_{X}+\textbf{k}_{Z}(X)^{T}K_{Z}^{-1}(S-K_{Z})K_{Z}^{-1}\textbf{ k}_{Z}(X),\]
where we have defined \(\textbf{k}_{Z}(\cdot)\) and \(K_{Z}\) analogously to \(\textbf{k}_{X}(\cdot)\) and \(K_{X}\), respectively. We refer to this model as the _sparse variational Gaussian process (SVGP)_. The SVGP model requires \(O(M^{2}\tilde{N})\) computational complexity and \(O(M\tilde{N})\) memory, where \(\tilde{N}\) is the size of a minibatch, a significant saving over the exact GP.
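Given fixed inducing locations \(Z\) and variational parameters \((\mathbf{m}, S)\), the approximate posterior \(q(\mathbf{f})\) above is cheap to evaluate. A minimal numpy sketch of the predictive equations (kernel, lengthscale, and the toy values of \(Z\), \(\mathbf{m}\), \(S\) are assumptions; no ELBO training is performed):

```python
import numpy as np

def rbf(A, B, ls=0.3):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls ** 2)

def svgp_predict(Xstar, Z, m, S):
    """Mean and covariance of q(f) at Xstar, from q(u) = N(m, S)."""
    Kz = rbf(Z, Z) + 1e-8 * np.eye(len(Z))   # K_Z with jitter
    Kzx = rbf(Z, Xstar)                      # k_Z(X*)
    A = np.linalg.solve(Kz, Kzx)             # K_Z^{-1} k_Z(X*)
    mu = A.T @ m
    cov = rbf(Xstar, Xstar) + A.T @ (S - Kz) @ A
    return mu, cov

Z = np.linspace(0, 1, 4)[:, None]
m = np.sin(6 * Z[:, 0])
S = 1e-4 * np.eye(4)                         # a nearly deterministic q(u)
mu, cov = svgp_predict(Z, Z, m, S)           # predict back at Z itself
```

A quick sanity check of the equations: predicting at the inducing locations themselves with a near-deterministic \(q(\mathbf{u})\) should return \(\mathbf{m}\) as the mean and \(S\) as the covariance.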
While the inducing locations can be learned according to the ELBO along with the model hyperparameters and variational parameters, Burt et al. (2020) argues that this yields a very challenging high-dimensional and non-convex optimization task with a complicated dependence structure that is difficult to solve, converges slowly and often provides sub-optimal models. Whereas for regression this may be allowable with sufficient computational resources, for BO we must be able to reliably and quickly fit models, and therefore it would be preferable to allocate \(Z\)_a priori_ and keep them fixed. Moreover, optimizing \(Z\) according to the ELBO encourages the inducing points to approximate the posterior globally (Matthews et al., 2016), which we will argue is wasteful for BO applications. Therefore, we focus the remainder of our work on methods for inducing point allocation (IPA), which we will use to set the inducing points at the start of each BO step. We start by describing prior work for IPA, which focuses on regression, before moving to our BO-specific IPA contributions.
## 3 Inducing Point Allocation for Regression
Existing IPA strategies include taking a random subset of the data, sampling uniformly across the problem's search space, or using centroids obtained by running a K-means algorithm on the data (Hensman et al., 2013). The remainder of this Section details the recent DPP-based method of Burt et al. (2019), laying out important groundwork for our proposed BO-specific IPA strategies.
**Determinantal Point Processes.** For regression tasks, a meaningful criterion for IPA would be to have the points spread as uniformly as possible across the input data \(X\). It would also be sensible to have a criterion that takes the kernel and its hyperparameters into account. Burt et al. (2020) showed that one way of achieving these is by using an \(M\)-determinantal point process (\(M\)-DPP, Kulesza and Taskar, 2012). An \(M\)-DPP chooses the \(M\) points in \(Z\) by sampling them from the data \(X\) with probability proportional to the determinant of the Gram matrix \(K_{Z}\):
\[\mathds{P}(\mathcal{Z}=Z)\propto\big{|}K_{Z}\big{|}. \tag{2}\]
Notice that this criterion meets our two criteria described above: 1) if two points are close together in \(Z\), the determinant will typically be small since the kernel will have high covariance for those points, giving the \(M\)-DPP repulsive properties so that the selected points have a uniform spread, and 2) the determinant clearly depends on the kernel. Using results from the \(M\)-DPP literature, Burt et al. (2020) was able to show that sampling inducing points in this way from a DPP will lead to a small expected KL divergence between approximate and true posteriors, \(KL[q(f)||p(f|\mathbf{y})]\). Moreover, these results have recently been used to prove regret bounds in BO for sparse GP methods (Vakili et al., 2021).
**Conditional Variance Reduction.** In practice, sampling from a DPP is computationally expensive. Therefore, Burt et al. (2020) suggests finding the _maximum a posteriori (MAP)_ estimate of a DPP, i.e., finding the set of inducing points \(Z\) with maximum probability according to Eq. 2. While exact MAP estimation of a DPP is known to be NP-hard (Ko et al., 1995), Chen et al. (2018) provides an algorithm for approximate MAP estimation in \(O(M^{2}N)\), which Burt et al. (2020) uses in practice. This algorithm greedily builds its set of points \(Z\) by choosing the \(j^{th}\) point from \(X\setminus Z_{1:j-1}\) as
\[\mathbf{z}_{j}=\operatorname*{argmax}_{\mathbf{z}\in X\setminus Z_{1:j-1}} \big{|}K_{Z_{1:j-1}\bigcup\{\mathbf{z}\}}\big{|}. \tag{3}\]
Interestingly, this DPP-based IPA strategy (3) is equivalent to greedily building a set of inducing points by maximising the posterior predictive variance of a noise-free GP model \(f\sim\mathcal{GP}(0,k)\) conditioned on previously selected observations, i.e., choosing
\[\mathbf{z}_{j}=\operatorname*{argmax}_{\mathbf{z}\in X}\sigma_{j-1}(\mathbf{ z}), \tag{4}\]
where \(\sigma_{j-1}^{2}(\mathbf{z})=k(\mathbf{z},\mathbf{z})-\mathbf{k}_{Z_{1:j-1}}(\mathbf{z})^{T}K_{Z_{1:j-1}}^{-1}\mathbf{k}_{Z_{1:j-1}}(\mathbf{z})\) is the _conditional variance_ of the GP (see Hening and Garnett, 2016; Burt et al., 2019, for detailed description and discussion). Therefore, we refer to this method of selection as conditional variance reduction (CVR), as it selects the datapoint with the highest conditional variance as the next inducing point, in hopes that this variance will be reduced.
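The equivalence of the determinant rule (3) and the conditional-variance rule (4) follows from the Schur-complement identity \(|K_{Z_{1:j-1}\cup\{\mathbf{z}\}}| = |K_{Z_{1:j-1}}|\,\sigma_{j-1}^2(\mathbf{z})\). The sketch below implements the greedy CVR loop and checks that identity numerically (the RBF kernel and random candidate set are illustrative assumptions):

```python
import numpy as np

def rbf(A, B, ls=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls ** 2)

def cvr_select(X, M):
    """Greedy conditional-variance reduction (Eq. 4) over candidates X."""
    chosen = []
    for _ in range(M):
        if not chosen:
            var = np.ones(len(X))            # k(z, z) = 1 for this RBF
        else:
            Z = X[chosen]
            Kz = rbf(Z, Z) + 1e-10 * np.eye(len(Z))
            Kzx = rbf(Z, X)
            # diag of k^T K_Z^{-1} k for every candidate at once
            var = 1.0 - np.einsum('mn,mn->n', Kzx, np.linalg.solve(Kz, Kzx))
        # already-chosen points have ~zero variance, so argmax avoids them
        chosen.append(int(np.argmax(var)))
    return chosen

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(40, 2))
idx = cvr_select(X, 5)
```

Each greedy step is \(O(NM^2)\) in this naive form; incremental Cholesky updates bring the whole loop down to the \(O(NM^2)\) total quoted in the text.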
## 4 Inducing Point Allocation for Bayesian Optimisation
BO typically requires updating the surrogate model(s) at each step to leverage the latest information available, so it makes sense to include updating the inducing point locations (see Algorithm 1). In the case of CVR, which requires a kernel, Vakili et al. (2021) use the kernel fitted during the previous BO step. Unfortunately, as we will demonstrate across all our experimental results, regression-inspired IPA strategies are not satisfactory for use within BO loops. While a level of global accuracy is needed to prevent the re-investigation of areas already identified as sub-optimal, Figure 2 shows that accurate modelling in promising areas is necessary to allow the precise identification of the optimum. For a more formal intuition into the unsuitability of existing IPA strategies see Appendix A. For these reasons we now propose DPP-based IPA strategies that are able to change the relative trade-off of local and global modelling capabilities.
We now provide the primary contribution of this work -- a general method for IPA suitable for down-stream decision making tasks. Unlike existing IPA strategies, our proposed methods ensure the model focuses its resources on promising (local) areas of the space whilst maintaining a sufficiently accurate global model.
### A General IPA Formulation
**Quality-Diversity Decomposition.** Although CVR only leverages the repulsive properties of DPPs, it is also possible, through a convenient reparameterisation, to encode a notion of the quality of the sampled points. Consider the DPP defined as in (2) but with \(K_{Z}\) replaced by
\[L_{Z}=\left[q(\mathbf{z}_{i})k(\mathbf{z}_{i},\mathbf{z}_{j})q(\mathbf{z}_{j}) \right]_{(\mathbf{z}_{i},\mathbf{z}_{j})\in Z\times Z}, \tag{5}\]
where \(q:\mathcal{X}\rightarrow\mathds{R}\), i.e., we observe \(Z\) with probability \(\mathds{P}(\mathcal{Z}=Z)\propto\left\lvert L_{Z}\right\rvert\). In our case, we can choose \(q:\mathcal{X}\rightarrow\mathds{R}^{+}\) so that it can be seen as a _quality function_, designed to provide large values for points lying in promising areas of the space and low values elsewhere. Indeed, due to the decomposition
\[|L_{Z}|=|K_{Z}|\cdot\prod_{i=1}^{M}q(\mathbf{z}_{i})^{2}, \tag{6}\]
as derived in Section 3.1 of Kulesza and Taskar (2012), it is clear that a particular \(Z\) will occur with high probability only if it contains points that have large quality scores (as measured by \(q(\mathbf{z}_{i})\)) **and** have a diverse spread (as measured by \(|K_{Z}|\)), see Figure 3. Hence, this constitutes an intuitive tool for building IPAs well-suited to the demands of BO (see Figure 2(d) for a demonstration, and Appendix A for a more formal justification).
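The quality-diversity factorisation (6) is easy to verify numerically, since \(L_Z = \operatorname{diag}(q)\,K_Z\,\operatorname{diag}(q)\) and the determinant of a product factorises (the kernel, lengthscale, and random points below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
Z = rng.uniform(0, 1, size=(6, 2))
d2 = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
K = np.exp(-0.5 * d2 / 0.5 ** 2)              # RBF similarity kernel K_Z
q = rng.uniform(0.5, 2.0, size=6)             # positive quality scores q(z_i)

L = np.diag(q) @ K @ np.diag(q)               # Eq. 5
lhs = np.linalg.det(L)
rhs = np.linalg.det(K) * np.prod(q) ** 2      # Eq. 6
```

Because \(\det(\operatorname{diag}(q)) = \prod_i q(\mathbf{z}_i)\), the two diagonal factors contribute the squared product of quality scores, leaving \(|K_Z|\) to measure diversity alone.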
Figure 2: The top row shows (a) 250 available training data points (green) alongside (b,c,d) three different 25 point IPAs (red) chosen for a function minimisation task. Existing approaches which (b) use the centroids from a k-means clustering of the available data or (c) use the CVR strategy provide balanced coverage of the whole search space. In contrast, our IMP-DPP strategy (d) focuses modelling resources into promising central areas. The bottom rows show expected improvement acquisition functions evaluated in the promising region according to (a) an exact GP trained on all available data and those (b,c,d) arising from SVGPs with the IPAs above. Of the SVGPs, only our proposed IMP-DPP’s acquisition function agrees with the exact GP.

Figure 3: 25 elements (red) chosen from 250 candidates (green) by a DPP with (left) constant and (right) locally varying quality functions (background colour).

**Greedy (Approximate) Maximisation.** Given a particular \(q\), we can simply apply the same greedy algorithm used by CVR, just with \(K_{Z}\) replaced by \(L_{Z}\), to efficiently build a set of BO-specific inducing points as an \(O(NM^{2})\) approximate MAP estimate of the DPP implied by Eq. 5. Conveniently, this resulting MAP estimate has an intuitive interpretation, as specified in Theorem 1, with proof in Appendix B.
**Theorem 1**.: _Suppose inducing points \(\mathcal{Z}\) are distributed according to a DPP with similarity kernel \(k:\mathcal{X}\times\mathcal{X}\to\mathds{R}\) and quality function \(q:\mathcal{X}\to\mathds{R}\), i.e., \(\mathds{P}(\mathcal{Z}=Z)\propto\left|L_{Z}\right|\). Then, according to the greedy approximation, the \(j^{th}\) component of the MAP estimate of \(\mathcal{Z}\) is given by_
\[\mathsf{z}_{j}=\operatorname*{argmax}_{\mathsf{z}\in X}\;q(\mathsf{z})\sigma_ {j-1}(\mathsf{z}), \tag{7}\]
_where \(\sigma_{j-1}^{2}(\mathsf{z})\) is the conditional variance of the noise-free GP model conditioned on the already selected points \(Z_{1:j-1}\) (cf. Eq. 4)._
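Theorem 1 gives a drop-in modification of the CVR loop: each candidate's conditional standard deviation is simply weighted by its quality score. A numpy sketch (the RBF kernel, random candidates, and centre-peaked quality values are illustrative assumptions; with \(q \equiv 1\) it reduces to plain CVR):

```python
import numpy as np

def rbf(A, B, ls=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls ** 2)

def quality_dpp_select(X, q, M):
    """Greedy MAP of the quality-weighted DPP: argmax q(z) * sigma_{j-1}(z)."""
    chosen = []
    for _ in range(M):
        if not chosen:
            sigma = np.ones(len(X))          # prior std is 1 for this RBF
        else:
            Z = X[chosen]
            Kz = rbf(Z, Z) + 1e-10 * np.eye(len(Z))
            Kzx = rbf(Z, X)
            var = 1.0 - np.einsum('mn,mn->n', Kzx, np.linalg.solve(Kz, Kzx))
            sigma = np.sqrt(np.clip(var, 0.0, None))
        chosen.append(int(np.argmax(q * sigma)))   # Eq. 7
    return chosen

rng = np.random.default_rng(2)
X = rng.uniform(0, 1, size=(60, 2))
q = np.exp(-4 * np.linalg.norm(X - 0.5, axis=1))   # favour the centre
idx = quality_dpp_select(X, q, 8)
```

The first pick is the highest-quality candidate (all conditional variances start equal); later picks trade quality against repulsion from already-selected points, concentrating inducing points where \(q\) is large while keeping them spread out.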
### Choosing a Quality Function
While any quality function can be used, in practice \(q\) should be carefully chosen to deliver the right quality-diversity trade-off. Intuitively from Eq. 5, the relative amplitudes of the quality function and the similarity kernel are key to this trade off, so for effective IPA, \(q\) must be chosen to complement \(k\) (rather than dominate or be dominated by it).
In addition, we propose the following four properties to guide our choice of the quality functions:
* **Discriminative:**\(q\) should return large values in areas of the space that are worthwhile modelling whilst providing smaller contrasting values elsewhere. This means high values in regions expected to be close to the optimum and/or with large predictive uncertainty.
* **Informative:**\(q(\mathbf{z})\) should encode our current knowledge about the objective function \(f\) at \(\mathbf{z}\), which is available through the already-collected evaluations \(y_{i}=f(\mathbf{x}_{i})\) and/or the surrogate model(s) of the previous BO step.
* **Shift invariance:** the resulting IPA should be invariant to adding an offset to the data. Given a GP model, adding an offset should only affect its mean function and leave the kernel unchanged, which means that the quality function must also be insensitive to shift.
* **Scale invariance:** the resulting IPA should be invariant to linear re-scaling of the data. Given a GP model, a multiplicative factor on the data may result in a multiplicative factor on the kernel. Hence, for eq. 7 to deliver identical results, we need \(q\) to be invariant to re-scaling, up to a multiplicative (positive) constant.
### A Linear Quality Measure
We propose here a simple and intuitive choice for the quality function that shows strong empirical performance (see Section 6). Many other choices are possible; for example, we also derived a quality function based on information-theoretic considerations. Although well-motivated, we found this entropy-based approach to be less effective than the simpler choice presented below, likely due to the computational approximations it requires. To streamline our exposition, the derivation and results of the information-theoretic approach are deferred to Appendix C.
**Noise-free evaluations.** A natural quality function (for a single-objective maximisation problem) that satisfies the four above-mentioned properties is the following linear function of \(y_{i}\):
\[q_{\text{Lin}}(\mathbf{z}_{i})=y_{i}-\hat{f},\qquad\text{with }\hat{f}=\min_{i}y_{i}. \tag{8}\]
A linear rescaling of the data will change \(q\) by a multiplicative factor only, and subtracting \(\hat{f}\) makes it positive and shift invariant. Furthermore, \(\hat{f}\) ensures the discriminative property, i.e. that \(q\) is zero at the worst observation and largest at the best.
**Noisy evaluations.** For problems with large observation noise, \(y_{i}\) can give misleading estimates of \(f(\mathbf{z}_{i})\) and so it is unwise to use (8). However, in these settings, we can make use of the previous BO step's surrogate model \(\mathcal{M}_{n-1}\) and instead calculate the expected value of \(q_{\text{Lin}}\). Additionally, to ensure positivity, we swap the linear function for the (piece-wise linear) Rectified Linear Unit (ReLU), yielding the quality function
\[q_{\text{IMP}}(\mathbf{z}_{i})=\mathds{E}_{f\sim\mathcal{M}_{n-1}}\left[\max( f(\mathbf{z}_{i})-\hat{f},0)\right], \tag{9}\]
where the baseline is now the minimal predicted value of the objective function, i.e. \(\hat{f}=\min_{\mathbf{z}\in D_{n}}\mu_{n-1}(\mathbf{x})\) for \(\mu_{n-1}(\cdot)\) the posterior mean of \(\mathcal{M}_{n-1}\). Note that (9) takes the form of the well-known Expected Improvement (EI), just with a modified baseline, and so can be calculated in closed form (see Jones et al., 1998).
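Since \(f(\mathbf{z}_i)\) is Gaussian under the surrogate, (9) has the familiar expected-improvement identity \(q_{\text{IMP}} = (\mu-\hat f)\Phi(u) + \sigma\varphi(u)\) with \(u = (\mu-\hat f)/\sigma\), for posterior mean \(\mu\) and standard deviation \(\sigma\). A quick Monte Carlo check of that closed form (the particular \(\mu\), \(\sigma\), \(\hat f\) values are arbitrary):

```python
import numpy as np
from math import erf, sqrt, pi, exp

def q_imp(mu, sigma, f_hat):
    """Closed-form E[max(f - f_hat, 0)] for f ~ N(mu, sigma^2)."""
    u = (mu - f_hat) / sigma
    Phi = 0.5 * (1 + erf(u / sqrt(2)))           # standard normal CDF
    phi = exp(-0.5 * u ** 2) / sqrt(2 * pi)      # standard normal pdf
    return (mu - f_hat) * Phi + sigma * phi

mu, sigma, f_hat = 0.3, 0.7, -0.2
exact = q_imp(mu, sigma, f_hat)

rng = np.random.default_rng(0)
samples = rng.normal(mu, sigma, size=2_000_000)
mc = np.maximum(samples - f_hat, 0.0).mean()     # Monte Carlo estimate of Eq. 9
```

The Monte Carlo estimate agrees with the closed form to within sampling error, so evaluating (9) over all candidates costs only two Gaussian special-function calls per point.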
Reassuringly, the performance of (9) is robust to the specific choice of baseline, with Appendix D showing negligible performance differences when using the minima or mean of the predicted objective function values, or even when using a softplus relaxation of the ReLU. However, significantly tightening the baseline to be the maximum of the objective function (as typically used by EI acquisition function) yields a dramatic drop in performance. Indeed, EI is not designed to discern between all our collected points, only to help identify where there could be new maxima.
### Beyond Single Objective BO
The quality function described above is designed for single-objective BO problems. However, our approach based on the quality-diversity decomposition of DPPs is a general way to ensure that sparse models are accurate in the areas where they will be used and so may apply to a much larger variety of optimisation problems, including those with constraints, multiple objectives, and more generally active learning problems such as level set estimation. The quality function should be tailored to each problem: for instance in level-set estimation, the important regions to model are not regions where the output value is maximal, but where it is close to the targeted level. In Section 6 we demonstrate such extensions.
## 5 Related Work
**Alternative Sparse Surrogate Models.** Three other formulations of sparse GPs have been used in BO loops. Firstly, McIntire et al. (2016) propose a compelling modification of sparse online GPs (Csató and Opper, 2002), where they up-weight promising areas of the feature space (as measured by the expected improvement of candidate evaluations). However, online GPs, which see only a single pass of the data, provide worse approximations than SVGPs, which have multiple chances to learn from each datapoint. Moreover, due to its requirement of \(N\) individual challenging optimisations for each individual model fit, this approach is unsuitable for the high-throughput scenarios tackled in this paper (and consequently was only tested by McIntire et al. (2016) on problems with \(M=30\) and \(N=60\)). Another way to alleviate the cost of GP inference is by approximating the spectral density of its kernel (Lázaro-Gredilla et al., 2010). However, spectral approximations are not appropriate for BO as they seek to preserve global structure and, as such, have no way of providing local high-fidelity modelling. Indeed, applying spectral GPs to BO requires expensive and heuristic modifications to its loss function (Yang et al., 2021) and even then fails to match the performance of exact GPs. In contrast to these two alternatives, our proposed approach retains the state-of-the-art computational complexity and performance of SVGPs. Finally, Maddox et al. (2021) (with similar work by Chang et al. (2022)) propose the OVC method for the fast conditioning of SVGPs, allowing efficient calculation of popular look-ahead acquisition functions, albeit those outside of the high-throughput domain.
**Additional uses of DPPs in BO.** Outside of IPA, DPPs are also commonly used in the context of batch BO, where the goal is to recommend diverse collections of points. Prominent examples include the approaches of Kathuria et al. (2016); Dodge et al. (2017) and Nava et al. (2022), as well as Moss et al. (2021) where, similarly to our ENT-DPP, an information-theoretic motivation is used to inform the construction of the DPP's diversity and quality terms. DPPs have also been used in high-dimensional BO (Wang et al., 2017) to sample diverse subsets of the available search space dimensions.
**Scalable BO via Local Models.** A popular alternative approach for BO under large evaluation budgets is to use multiple cheaper local models in lieu of a single expensive global model (Gramacy and Apley, 2015; Rulliere et al., 2018; Cole et al., 2021, 2022). Particularly powerful BO routines employing local models like TURBO of (Eriksson et al., 2019) and, for multi-objective BO, MORBO (Daulton et al., 2022) are ideal for applications where the only goal is to find a reasonable solution because global modelling (and optimisation) is challenging (e.g., for high-dimensional optimisation problems). However, the local models built by TURBO are not always useful in settings where the goal is to collect data that allows the building of a useful final model, e.g., in the active learning applications we consider below or when we need a rough understanding of global behaviour to ensure global convergence.
## 6 Experimental Results
We now provide an empirical evaluation of our proposed IPA framework across a suite of high-throughput BO problems using the open-source BO library Trieste (Berkeley et al., 2022). We then illustrate the general applicability of our IPA framework, by demonstrating how quality functions can be designed for multi-objective and active learning problems. Additional experimental details are contained in our appendices. Implementations of our IPAs are contained within the Trieste (Berkeley et al., 2022) and BoTorch (Balandat et al., 2020) libraries.
### Single Objective Optimisation
For clarity, all our synthetic experiments follow the same setup. We consider an SVGP model with either \(M=250\) or \(500\) inducing points using either 1) our proposed IPA strategy with the improvement-based quality function (9) which we call IMP-DPP, 2) the CVR of Burt et al. (2019) (see Section 3), 3) choosing the centroids of a K-means clustering of the data, and 4) choosing inducing points spread uniformly across the search space. SVGP models are fit using an Adam optimiser with a learning rate of \(0.1\), an early stopping criterion with a patience of \(50\), and a schedule that halves the learning rate on plateau with a patience of \(10\).
For all DPP-based IPAs we follow the thoroughly tested approach of Burt et al. (2020) and Vakili et al. (2021) and use the kernel of the previous BO step's model to allocate the IPA and then refit the kernel when training the current BO step's model on the new data and the chosen IPA. We allocate a total evaluation budget of \(N=5{,}000\) evaluations split across \(50\) BO steps in batches of \(100\) points. When the total number of queried points is less than the desired number of inducing points (e.g. for the first 4 optimisation steps when \(M=500\)), we use just the \(N\) available training points as our IPA. Subsequent batches are collected using the decoupled Thompson sampling scheme presented in Vakili et al. (2021). We use \(100\) random Fourier features to build a Fourier representation of samples and maximise each using an L-BFGS optimiser starting from the best of a random sample of \(10{,}000\) points. BO using an exact GP model is included as a baseline; however, we can report only the first \(10\) optimisation steps, after which it became prohibitively expensive (i.e., for \(N>1{,}000\)).
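The Fourier-feature function samples underpinning decoupled Thompson sampling can be sketched in a few lines: for an RBF kernel, random features \(\varphi(\mathbf{x}) = \sqrt{2/F}\cos(\mathbf{w}^\top\mathbf{x}+b)\) satisfy \(\mathds{E}[\varphi(\mathbf{x})^\top\varphi(\mathbf{x}')] \approx k(\mathbf{x},\mathbf{x}')\), and a random linear combination of them is a cheap, everywhere-evaluable prior draw (a sketch only; the posterior sampler of Wilson et al. (2020) adds an update term omitted here, and \(F\), the lengthscale, and the candidate grid are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
F, d, ls = 5000, 2, 0.5                       # num features, input dim, lengthscale
W = rng.standard_normal((F, d)) / ls          # spectral frequencies of the RBF kernel
b = rng.uniform(0, 2 * np.pi, size=F)

def features(X):
    """Random Fourier feature map, shape (n, F)."""
    return np.sqrt(2.0 / F) * np.cos(X @ W.T + b)

theta = rng.standard_normal(F)                # weights of one prior draw

def prior_sample(X):
    """One fixed draw f ~ GP(0, k), evaluable at arbitrary inputs."""
    return features(X) @ theta

X = rng.uniform(0, 1, size=(100, d))
f = prior_sample(X)
x_best = X[np.argmax(f)]                      # a Thompson-sampling candidate
```

Because the sample is an explicit finite-basis function, it can be handed to a gradient-based optimiser such as L-BFGS, which is exactly how each Thompson-sampling batch member is maximised above.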
Figure 4 demonstrates optimisation performance across the 4d Shekel, 5d Michalewicz, 5d Ackley, 6d Hartmann, and 4d Rosenbrock functions (see Appendix E for definitions), where we have contaminated the evaluations of each with Gaussian noise of variance \(0.01\), except for the easier Hartmann where we consider a larger variance of \(0.1\). Note that we re-scaled these baselines so that they have a variance of \(1.0\) (under random samples across the search space) and so these noise levels are large, resulting in challenging optimisation tasks. Unsurprisingly, greater performance is achieved when using a larger number of inducing points for all the considered methods, except for the easier Rosenbrock function, where all methods perform equally well. For the Shekel and Michalewicz functions, only IMP-DPP achieves precise optimisation, even when using just \(M=250\) inducing points. For the Michalewicz function, IMP-DPP with \(M=500\) provides a dramatic improvement over the other methods. In contrast, on the Hartmann function we see all \(M=500\) approaches, as well as IMP-DPP with \(M=250\), achieve comparable optimisation performance. In addition to improved optimisation performance, we show (in Appendix F) that our SVGP-based approaches incur significantly lower computational overheads than exact GPs. Moreover, unlike the exact GP, the SVGP approaches maintain a constant overhead as BO progresses.
Interestingly, for some of the more challenging functions considered in Figure 4, the exact GP leads to optimisation that gets stuck in local minima, whereas the SVGP approaches are able to fully converge. We hypothesise that SVGPs have an advantage in these non-stationary settings as they are able to ignore promising yet not optimal areas of the space that would otherwise mislead the algorithm -- a helpful consequence of their limited modelling resources. Similar behaviour is noted by Maddox et al. (2021) when also using SVGPs for online decision making.
### Active Learning
To demonstrate the generality of our proposed IPA framework, we now depart from single-objective BO and instead consider an active learning task inspired by Balandat et al. (2020). We wish to learn which spatial locations in Nigeria have rates of the malaria-causing parasite _Plasmodium falciparum_ over a critical threshold.
We model the occurrence of a breach in the critical threshold at location \(\mathbf{x}\) through a Bernoulli likelihood \(y_{\mathbf{x}}|f\sim\mathcal{B}(\Phi(f(\mathbf{x})))\), where \(f\) denotes a latent sparse GP with \(50\) inducing points and \(\Phi:\mathds{R}\rightarrow[0,1]\) is the inverse probit function (see Hensman et al., 2015, for details). Starting from a random initial design of \(100\) evaluations, we then use the BALD acquisition function of Houlsby et al. (2011) to sequentially improve our model over \(10\) data acquisition steps, each time collecting evaluations at \(100\) informative locations and then updating the classification surrogate models.

Figure 4: Results are averaged over \(50\) runs and we report the mean and its \(95\%\) confidence intervals for the simple regret of the maximiser of the posterior mean across previously queried points. Our proposed IMP-DPP is the only IPA strategy that provides consistently high performance.
As the performance of the classifier is determined by the accuracy of its classification boundary (i.e., where \(f\approx 0\)), it is natural to consider a quality function that encourages the placing of inducing points where \(|f|\) is small. To this end, we consider the active learning quality function
\[q_{\text{AL}}(\mathbf{z})=\mathds{E}_{f}\left[\hat{f}-|f(\mathbf{z})|\right], \tag{10}\]
where \(\hat{f}=\max(|\max(f)|,|\min(f)|)\) is the largest absolute value obtained by the latent GP. This quality function has maximal score at \(f=0\), i.e., the level set of the latent GP corresponding to the classification boundary. Figure 5 demonstrates the benefit of using this custom quality function to drive IPA in the considered active learning problem.
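A Monte Carlo estimate of Eq. (10) requires only posterior samples of the latent GP at candidate inducing locations. The sketch below uses stand-in random draws in place of real posterior samples; the variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_candidates = 200, 50
f_samples = rng.normal(size=(n_samples, n_candidates))  # stand-in posterior samples

# f_hat = max(|max f|, |min f|), computed per posterior sample.
f_hat = np.maximum(np.abs(f_samples.max(axis=1)),
                   np.abs(f_samples.min(axis=1)))          # shape (n_samples,)

# q_AL(z) = E_f[ f_hat - |f(z)| ], averaged over posterior samples.
q_al = (f_hat[:, None] - np.abs(f_samples)).mean(axis=0)   # shape (n_candidates,)

# Candidates where the latent GP is near zero (the classification
# boundary) receive the highest quality scores.
best_candidate = int(np.argmax(q_al))
```

Since \(\hat{f}\geq|f(\mathbf{z})|\) for every sample, the estimated quality is non-negative, and its maximiser is simply the candidate with the smallest expected \(|f|\), i.e., the one closest to the boundary.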
### Multi-objective Optimisation
In multi-objective optimisation (MOO) we seek to find high-performing solutions according to \(K\) (\(\geq 2\)) competing objective functions \(f^{1}(\mathbf{x}),\ldots,f^{K}(\mathbf{x})\). In these tasks, where improvements in one objective may harm another, the ability to characterise trade-offs between these competing objectives becomes crucial. Consequently, multi-objective optimisation corresponds to finding the so-called Pareto set which contains all locations representing optimal trade-offs, i.e., those that cannot be perturbed to yield an improved score in a single objective without a deterioration in the score of another objective (see Emmerich (2005) for an introduction). Therefore, when using sparse models as surrogate models for MOO, it is no longer sufficient to focus modelling resources into the "best" areas of the space; rather, we want to focus coverage around the Pareto front. We therefore consider the quality function
\[q_{\text{HV}}(\mathbf{z})=\mathds{E}_{f_{1},\ldots,f_{K}}\left[\prod_{k=1}^{K}\max(f_{k}(\mathbf{z})-\hat{f}_{k},0)\right], \tag{11}\]
where \(f_{k}\) represents the model of the \(k^{th}\) objective function and \(\hat{f}_{k}\) its minimal value, i.e., we consider a product of our single objective quality functions. As Eq. 11 can be interpreted as the Hyper-Volume (HV) of the set containing all the previously collected points that are dominated by \(\mathbf{z}\), we refer to the IPA resulting from this quality function as HV-DPP. We allocate inducing points for each model separately but use the same shared quality function (that uses information from all the models) to encourage the allocation of points along the Pareto front. Although \(q_{\text{HV}}\) has a strong bias for points in the central area of the front, it is fast to evaluate and we found it adequate for enabling effective high-throughput BO. Future work will build a more sophisticated quality function that provides an even focus along the whole Pareto front.
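Eq. (11) is likewise cheap to estimate by Monte Carlo given posterior samples of each objective model. A minimal sketch, again with stand-in random draws in place of real posterior samples and illustrative names:

```python
import numpy as np

rng = np.random.default_rng(2)
n_samples, n_candidates, n_objectives = 100, 30, 2

# Stand-in posterior samples for each objective: shape (K, S, M).
f = rng.normal(size=(n_objectives, n_samples, n_candidates))
f_hat = f.min(axis=(1, 2), keepdims=True)   # per-objective minimal value

# Hyper-volume-style product of clipped improvements, averaged over samples.
q_hv = np.clip(f - f_hat, 0.0, None).prod(axis=0).mean(axis=0)  # shape (M,)
```

The same `q_hv` scores can then supply the quality term of every model's DPP, concentrating each model's inducing points near the (estimated) Pareto front.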
**Synthetic benchmark.** Figure 6 demonstrates high-throughput optimisation of a noisy variant of the 4-dimensional ZDT3 problem (see Appendix E for a problem description). We start with 100 random evaluations and use the Chebyshev scalarisation acquisition function described by Paria et al. (2020) to collect 50 batches of 100 evaluations for the sparse methods (each with 100 inducing points), and 10 batches for the exact GP. As is standard practice in multi-objective optimisation, we measure performance in terms of the difference between the hyper-volume dominated by the true Pareto optimal front and the one found by BO.
Figure 5: Incidence threshold breaches predicted by two surrogate models, each fine-tuned over \(10\) steps of high-throughput active learning. (a) Performance is evaluated across a randomly sampled held-out test set. (b) CVR fails to accurately learn the complex classification boundary and obtains an accuracy of only \(79\%\). (c) In contrast, our proposed IPA focuses inducing points (red dots) along the classification boundary, yielding an improved model with an accuracy of \(89\%\).

**Real-world problem.** For our final example, we turn to the problem of designing an effective yet light-weight automotive heat exchanger (radiator), as considered by Paleyes et al. (2022) (see Appendix E for a full description). This challenging 9d problem has two objectives and three constraints, so requires five surrogate models, each of which will need an IPA. For the constraint models, we use the \(q_{\text{AL}}\) quality function (as presented for the active learning task) and for the objective models we use the \(q_{\text{HV}}\) quality function. We start with \(100\) random evaluations and use Paleyes et al. (2022)'s HIPPO acquisition function to allocate \(10\) batches of \(100\) further evaluations. Figure 6 shows that an SVGP surrogate model using our proposed IPA and only \(100\) inducing points is able to find a comparable Pareto front to an expensive exact GP. Moreover, in Appendix F, we provide wall-clock timings for these experiments, demonstrating that the SVGP incurs order-of-magnitude lower optimisation overheads than the exact GP.
## 7 Conclusions and Further Work
We have proposed the first BO-specific methods for selecting the locations of inducing points in sparse GPs. By exploiting the quality-diversity decomposition of DPPs, we are able to dramatically improve DPP-based IPA, transforming what is often, in the context of BO, a poorly performing IPA (the conditional variance reduction of Burt et al., 2019) into the best (our IMP-DPP). Moreover, we have shown that our approach provides a general framework for ensuring that sparse GPs are accurate in key areas, and so has applications across a range of down-stream tasks.
In future work we will apply our BO-specific IPAs to real-world problems where sparse GPs are already being used, e.g., quantile optimisation (Torossian et al., 2020). We will also investigate their applicability to other inducing point-based methods that are also used in decision making loops, like deep GPs (Damianou and Lawrence, 2013). Moreover, our SVGPs could also be applied to high-dimensional optimisation problems by extending single-model trust region approaches (Diouane et al., 2022) to support large optimisation budgets. Finally, note that our proposed IPA does not require Euclidean input spaces (unlike standard SVGP formulations which optimise inducing point locations using gradient descent). Therefore, we also wish to use our proposed scalable surrogate models to enable high-throughput versions of active learning loops over discrete structures that can be modelled with GPs, e.g., genes (Moss et al., 2020) and molecules (Moss and Griffiths, 2020; Thawani et al., 2020; Griffiths et al., 2022; Rankovic et al., 2022).
|
2310.06000 | Towards Replication-Robust Data Markets | Despite widespread adoption of machine learning throughout industry, many
firms face a common challenge: relevant datasets are typically distributed
amongst market competitors that are reluctant to share information. Recent
works propose data markets to provide monetary incentives for collaborative
machine learning, where agents share features with each other and are rewarded
based on their contribution to improving the predictions of others. These
contributions are determined by their relative Shapley value, which is computed
by treating features as players and their interactions as a characteristic
function game. However, in its standard form, this setup further provides an
incentive for agents to replicate their data and act under multiple false
identities in order to increase their own revenue and diminish that of others,
restricting their use in practice. In this work, we develop a
replication-robust data market for supervised learning problems. We adopt
Pearl's do-calculus from causal reasoning to refine the characteristic function
game by differentiating between observational and interventional conditional
probabilities. By doing this, we derive Shapley value-based rewards that are
robust to this malicious replication by design, whilst preserving desirable
market properties. | Thomas Falconer, Jalal Kazempour, Pierre Pinson | 2023-10-09T12:56:24Z | http://arxiv.org/abs/2310.06000v3 | # Incentivizing Data Sharing for Energy Forecasting: Analytics Markets with Correlated Data
###### Abstract
Reliably forecasting uncertain power production is beneficial for the social welfare of electricity markets by reducing the need for balancing resources. Describing such forecasting as an _analytics task_, the current literature proposes _analytics markets_ as an incentive for data sharing to improve accuracy, for instance by leveraging spatio-temporal correlations. The challenge is that, when used as input features for forecasting, correlated data complicates the market design with respect to the revenue allocation, as the value of overlapping information is inherently combinatorial. We develop a correlation-aware analytics market for a wind power forecasting application. To allocate revenue, we adopt a Shapley value-based attribution policy, framing the features of agents as players and their interactions as a characteristic function game. We illustrate that there are multiple options to describe such a game, each having causal nuances that influence market behavior when features are correlated. We argue that no option is _correct_ in a general sense, but that the decision hinges on whether the market should address correlations from a _data-centric_ or _model-centric_ perspective, a choice that can yield counter-intuitive allocations if not considered carefully by the market designer.
## 1 Introduction
In electricity markets, agents with uncertain power production can benefit social welfare by sharing their data to make more accurate forecasts (e.g., through collaborative analytics (Li et al., 2020)) and thus preserve the economic viability of balancing the system (Dvorkin Jr et al., 2019). This is especially true for wind power producers -- by leveraging data that is distributed (i.e., both geographically and proprietorially) agents can better their predictions by harnessing spatio-temporal correlations between sites (Tastu et al., 2010). However, in practice, such altruistic sharing of information amongst market competitors is likely to be hindered by privacy concerns or perceived conflicts of interest. Instead, data can be _commoditized_ within a market-based framework, with remuneration used as an incentive for data sharing (Bergemann and Bonatti, 2019).
Amongst the first proposals for such a framework were _data markets_, which facilitate direct purchasing of raw data through bilateral transactions (Balazinska et al., 2011). On the surface, this offers a convenient way to procure data from others, however pricing raw data in these markets can be challenging as the value it brings to the buyer depends on the specific analytics task at hand (Cong et al., 2022). With this in mind, if one instead acknowledges that data is indeed typically procured to enhance capabilities in some downstream analytics task, as is typically true for energy forecasting applications, the commodity need not be the data itself, but rather the improvement in such capabilities it can provide (Agarwal et al., 2019). This is the motivation behind _analytics markets_ -- data of distributed agents is used to enhance an analytics task without the need to directly transfer raw data, preserving privacy by design (Pinson et al., 2022). The market revenue is then a function of the enhanced capabilities provided, and the value this brings to the owner of the task.
For the market revenue to be allocated _fairly_, each dataset owned by a distributed agent should be remunerated based on its marginal contribution to the enhancement of the analytics task (e.g., improved forecast accuracy). However, this can be challenging to quantify when these datasets are correlated. In data markets, where datasets are valued sequentially, correlations can even reduce social welfare, with agents eventually selling their data for less than their initial valuation as their information becomes redundant (Acemoglu et al., 2022). Whilst this is not the case in our proposed analytics market (i.e., valuation occurs in parallel, hence one agent cannot intentionally undercut another), the value of overlapping information is inherently combinatorial, thereby difficult to compute. To address this challenge, recent work proposes to leverage concepts from cooperative game theory, framing features as players and their interactions as a characteristic function game (Ghorbani and Zou, 2019). For many practitioners, the Shapley value (Shapley, 1997) is the solution concept of choice for such a game, allocating each player its expected marginal contribution towards a set of other players, satisfying a collection of axioms that, in our context, yield several desirable market properties by design (i.e., individual rationality, zero-element and truthfulness, symmetry, linearity and budget balance) (Agarwal et al., 2019). However, representing an analytics task as a cooperative game is not straightforward in general, with each approach having causal nuances that influence market behavior when features are correlated.
Our contribution is the development of a _correlation aware_ analytics market with a choice of Shapley-value based attribution policies that explicitly take into account the overlapping information. We explore the intricacies of our proposed attribution policies for a case where several agents contribute to enhancing the accuracy of a wind power forecasting task -- however our framework can be used for any related application exposed to linearly correlated data. We argue that the choice of attribution policy depends on the way in which the market designer intends to remunerate causal effects, which we label as either _data-centric_ or _model-centric_, each suited to a particular context. We conclude that correlated data will yield counter-intuitive allocations if this choice is not considered carefully by the market designer.
The remainder of this paper is structured as follows: Section 2 introduces our adopted mechanism
design. Section 3 assesses the theoretical implications of our proposed methods. Section 4 then illustrates our findings using both synthetic and real-world data. Finally, Section 5 gathers a set of conclusions and perspectives for future work.
## 2 Preliminaries
We assume a stylised electricity market setup where agents are required to notify the system operator of their expected production in a forward stage, specifically one hour ahead of delivery, for which they receive a fixed price per unit of energy. In real-time, they receive a penalty for deviations from the scheduled production, thus their downstream revenue is an explicit function of forecast accuracy, and will determine their valuation for a marginal improvement in model fitting, denoted by \(\lambda\in\mathbb{R}_{+}\), which we assume to have been learnt through some preliminary analyses.
### Market Overview
We frame the analytics task as a regression model and the learning process used to infer its parameters, that is, our attention is centered on so-called _regression markets_ (Pinson et al., 2022). This framework builds upon prior work on data acquisition from both strategic (Dekel et al., 2010) and privacy-conscious (Cummings et al., 2015) agents. The owner of the regression model is characterized by their private valuation for a marginal improvement in the predictive performance, which sets the price for the distributed agents, who propose their own data as features and are eventually remunerated based on their respective marginal contributions.
### Market Agents
These comprise a set of wind power producers, \(\mathcal{A}\), one of which, \(c\in\mathcal{A}\), is the _central agent_ seeking to improve their forecasts, whilst the remaining agents \(a\in\mathcal{A}_{-c}\) are _support agents_, who propose data as features, whereby \(\mathcal{A}_{-c}=\mathcal{A}\setminus\{c\}\). Let \(y_{a,t}\in\mathbb{R}_{+}\) be the power recorded by each producer \(a\in\mathcal{A}\) at time \(t\), perceived to be a sample from the stochastic process \(\{Y_{a,t}\}_{\forall t}\). We write \(\mathbf{x}_{\mathcal{I},t}\) as the vector of all observations used as features at time \(t\), indexed by the ordered set \(\mathcal{I}\), such that each subset of agents \(\mathcal{B}\subseteq\mathcal{A}\) owns a subset \(\mathcal{I}_{\mathcal{B}}\subseteq\mathcal{I}\) of indices. We further let \(\mathcal{I}_{a}\) denote the subset owned by a particular agent. Finally, the set of input-output pairs observed up until time \(t\) owned by a particular subset of agents is denoted by \(\mathcal{D}_{\mathcal{I}_{\mathcal{B}},t}=\{\mathbf{x}_{\mathcal{I}_{\mathcal{B}},t^{\prime}},y_{c,t^{\prime}}\}_{\forall t^{\prime}\leq t}\).
### Regression Framework
Since we are predominantly interested in examining market outcomes, rather than competing with state-of-the-art forecasting methods, we consider only a very short-term lead time (i.e., 1-hour ahead), thereby permitting a fairly simple time-series analysis. That being said, the mechanism design adopted here readily allows more complex regression models for those aiming to capture the intricacies of wind power production, for instance the bounded extremities of the power curve (Pinson, 2012).
To model the target signal, \(Y_{c,t}\), we adopt a Bayesian regression framework, formulating the likelihood as a deviation from a deterministic mapping under an independent additive Gaussian noise process, the variance of which is treated as a hyperparameter. The mapping, \(f\), is a linear interpolant parameterized by a vector of coefficients, \(\mathbf{w}\), and represents the conditional expectation of the target signal -- we adopt an _Auto-Regressive with eXogenous input_ model with a maximum lag of \(\Delta\), such that the expectation of the likelihood corresponding to the _grand coalition_ (i.e., using all available input features) at any particular time step can be decomposed as follows:
\[f(\mathbf{x}_{t},\mathbf{w})=\underbrace{w_{0}+\sum_{\delta=1}^{\Delta}w_{c,\delta}\,y_{c,t-\delta}}_{\text{terms belonging to the central agent}}+\underbrace{\sum_{a\in\mathcal{A}_{-c}}\sum_{\delta=1}^{\Delta}w_{a,\delta}\,y_{a,t-\delta}}_{\text{terms belonging to the support agents}}. \tag{1}\]
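A minimal sketch of how the design matrix behind Eq. (1) could be assembled: an intercept column, \(\Delta\) lags of the central agent's production, and \(\Delta\) lags of each support agent's production. The synthetic series, agent names, and the `lags` helper are illustrative, not part of the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(3)
T, delta = 50, 2
y_c = rng.uniform(size=T)                    # central agent's production
y_support = {"a1": rng.uniform(size=T),      # support agents' production
             "a2": rng.uniform(size=T)}

def lags(series, delta):
    """Columns [y_{t-1}, ..., y_{t-delta}] for t = delta, ..., T-1."""
    return np.column_stack([series[delta - d:len(series) - d]
                            for d in range(1, delta + 1)])

# Intercept, central agent's lags, then each support agent's lags.
X = np.column_stack([np.ones(T - delta), lags(y_c, delta)]
                    + [lags(y, delta) for y in y_support.values()])
target = y_c[delta:]  # one-hour-ahead regression target
```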
### Market Clearing
Once the data is collected and the valuation of the central agent is revealed, the market is then cleared by a non-profit market operator who is not the same
as the task owner. We consider a two-stage (i.e., in-sample and out-of-sample) batch market, as in Pinson et al. (2022). We do, however, relax the assumption that features are independent, but still assume that any owned by the support agents that are redundant (i.e., highly correlated with those owned by the central agent) are removed via the detailed feature selection process. A key step in the market clearing procedure is parameter inference -- to mitigate bias we opt for a centred isotropic (i.e., uninformative) Gaussian prior, which is conjugate for our likelihood, resulting in a tractable Gaussian posterior that summarizes our updated beliefs, which, for a particular subset of agents is given by
\[p(\mathbf{w}_{\mathcal{I}_{\mathcal{B}}}|\mathcal{D}_{\mathcal{I}_{\mathcal{B}},t})=\frac{p(\mathcal{D}_{\mathcal{I}_{\mathcal{B}},t}|\mathbf{w}_{\mathcal{I}_{\mathcal{B}}})\,p(\mathbf{w}_{\mathcal{I}_{\mathcal{B}}}|\mathcal{D}_{\mathcal{I}_{\mathcal{B}},t-1})}{\int p(\mathcal{D}_{\mathcal{I}_{\mathcal{B}},t},\mathbf{w}_{\mathcal{I}_{\mathcal{B}}})\,\mathrm{d}\mathbf{w}_{\mathcal{I}_{\mathcal{B}}}},\ \forall t, \tag{2}\]
where recall \(\mathcal{D}_{\mathcal{I}_{\mathcal{B}},t}\) is the set of input-output pairs observed up until time \(t\). The market revenue is then a function of the exogenous valuation, \(\lambda\), and the extent to which fitting of the model is improved, which we measure using the negative logarithm of the posterior predictive likelihood (i.e., the convolution of the likelihood with the posterior), denoted by \(\ell_{t}=-\log[p(y_{c,t}|\mathbf{x}_{t})]\), \(\forall t\). We can obtain an estimate of the expected value using a batch of observations for each subset of features, such that the market revenue is given by \(\pi=\lambda\left\{\mathbb{E}[\ell_{t}]_{\mathcal{I}_{c}}-\mathbb{E}[\ell_{t}]_{\mathcal{I}}\right\}\).
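The clearing computation can be sketched end-to-end for a toy two-agent example: fit the conjugate Gaussian posterior for each coalition of features, score each coalition with the average negative log posterior predictive likelihood, and price the difference. All data and hyperparameter values below are synthetic and illustrative.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
T, lam, noise_var, prior_var = 200, 10.0, 0.25, 10.0
x_c = rng.normal(size=T)                 # central agent's own feature
x_a = rng.normal(size=T)                 # a support agent's feature
y = 0.5 * x_c + 0.8 * x_a + rng.normal(0, np.sqrt(noise_var), T)

def neg_log_pred(X, y):
    """Average negative log posterior predictive likelihood (in-sample)."""
    S_inv = np.eye(X.shape[1]) / prior_var + X.T @ X / noise_var
    S = np.linalg.inv(S_inv)                 # posterior covariance
    m = S @ X.T @ y / noise_var              # posterior mean of the weights
    mu = X @ m                               # predictive mean
    var = noise_var + np.einsum("ij,jk,ik->i", X, S, X)  # predictive variance
    return -norm.logpdf(y, mu, np.sqrt(var)).mean()

loss_central = neg_log_pred(x_c[:, None], y)             # central agent only
loss_grand = neg_log_pred(np.column_stack([x_c, x_a]), y)  # grand coalition
revenue = lam * (loss_central - loss_grand)  # support agent's data adds value
```

Because the support agent's feature genuinely drives the target here, the grand coalition attains a lower loss and the market revenue is positive.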
### Revenue Allocation
To allocate market revenue amongst support agents, we define an attribution policy based on the Shapley value. Let \(v:\mathcal{C}\in\mathcal{P}(\mathcal{I}_{-c})\mapsto\mathbb{R}\) be a characteristic function that maps the power set of indices of the features owned by support agents to a real-valued scalar (i.e., the set \(\mathcal{C}\) represents a coalition in the cooperative game). Thus, the Shapley value of the \(i\)-th feature, \(\forall i\in\mathcal{I}_{-c}\), is given by
\[\phi_{i}=\frac{1}{|\mathcal{I}_{-c}|}\sum_{\mathcal{C}\in\mathcal{P}(\mathcal{I}_{-c}\setminus\{i\})}\binom{|\mathcal{I}_{-c}|-1}{|\mathcal{C}|}^{-1}\left[v(\mathcal{C})-v(\mathcal{C}\cup\{i\})\right]. \tag{3}\]
As with the objective function, evaluating Eq. (3) for a subset of features is perceived as an estimate of its expected value, and thus the market revenue is allocated in a manner such that the payment received by each of the support agents is given by \(\pi_{a}=\sum_{i\in\mathcal{I}_{a}}\lambda\mathbb{E}[\phi_{i}]\), \(\forall a\in\mathcal{A}_{-c}\).
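For small games, Eq. (3) can be computed exactly by enumerating the power set. The sketch below uses a toy, hand-specified loss table (purely illustrative) and follows the convention that a feature's contribution is the reduction in loss it brings when joining a coalition.

```python
from itertools import combinations
from math import comb

def shapley(indices, v):
    """Exact Shapley values of a loss-style value function v."""
    n = len(indices)
    phi = {}
    for i in indices:
        rest = [j for j in indices if j != i]
        total = 0.0
        for size in range(n):
            for coal in combinations(rest, size):
                # marginal reduction in loss when i joins coalition `coal`
                total += (v(set(coal)) - v(set(coal) | {i})) / comb(n - 1, size)
        phi[i] = total / n
    return phi

# Toy loss over three support features (hypothetical values).
value = {frozenset(): 1.0, frozenset("a"): 0.6, frozenset("b"): 0.7,
         frozenset("c"): 0.9, frozenset("ab"): 0.5, frozenset("ac"): 0.55,
         frozenset("bc"): 0.65, frozenset("abc"): 0.5}
phi = shapley(["a", "b", "c"], lambda c: value[frozenset(c)])
```

By the efficiency axiom, the payments sum to the total loss reduction \(v(\emptyset)-v(\mathcal{I}_{-c})\), which is what underpins budget balance in the market.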
### Challenges
Further to data valuation, the Shapley value has garnered significant attention within the machine learning community, emerging as the _de-facto_ method for interpreting predictions (Lundberg and Lee, 2017). However, calculating Shapley values requires evaluating the objective function using subsets of features, which is not that straightforward in general -- once trained, machine learning models typically require an input vector containing a value for each feature to avoid matrix dimension mismatch. To address this issue, the characteristic function must define a _lift_ of the original objective (Merrill et al., 2019) in order to simulate removal of features (Covert et al., 2021).
Specifically, recall that our objective function, \(\ell\), is related to the mapping \(f:\mathbb{R}^{|\mathcal{I}|}\mapsto\mathbb{R}\) described in Eq. (1), and is therefore itself only defined in \(\mathbb{R}^{|\mathcal{I}|}\). To calculate Shapley values, we need to associate a set of values for each \(\mathbf{x}_{t}\in\mathbb{R}^{|\mathcal{I}|}\), one for each of the \(2^{|\mathcal{I}|}\) subsets of input features. Accordingly, we lift the objective function to the space of all subsets of features by formulating the characteristic function mapping as \(v(\ell,\mathbf{x}_{t},\mathcal{C}):\mathbb{R}^{|\mathcal{I}|}\times 2^{|\mathcal{I}|}\mapsto\mathbb{R}\), \(\forall\mathcal{C}\). Hence, for the grand coalition, \(v(\ell,\mathbf{x}_{t},\mathcal{I})=\mathbb{E}[\ell_{\mathcal{I},t}|\mathbf{X}_{t}=\mathbf{x}_{t}]\), where \(\mathbf{X}_{t}\) is a multivariate random variable from which the features are perceived to be sampled. For a particular feature, the Shapley value is therefore not generally well-defined, since there exist many methods to formulate such a lift (Sundararajan and Najmi, 2020). Accordingly, in the next section we explore the following: (i) how to formulate and compute such a lift within a linear regression setup, (ii) what impact different lifts have on the revenue allocation in relation to causality, and (iii) how each lift influences the financial risk exposure of the support agents.
## 3 Problem Definition
Commonly adopted lifts can broadly be categorized as either _observational_ or _interventional_, differing only
in the functional form of the characteristic function that underpins the cooperative game. The former is often seen in work pertinent to regression markets (e.g., Agarwal et al., 2019; Pinson et al., 2022), whilst the latter often appears as an approximation of the former for interpreting model predictions (Lundberg and Lee, 2017). In the following, we explore the intricacies of each in the context of correlated data.
### Lift Formulations
The observational lift uses the _observational conditional expectation_, defined as the expectation of the objective over the conditional density of the out-of-coalition features, given that those in the coalition, as well as those owned by the central agent, take on their observed values, such that
\[v^{\text{obs}}(\ell,\mathbf{x}_{t},\mathcal{C})=\mathbb{E}\left[\ell_{t}\,\middle|\,\mathbf{X}_{\mathcal{C}^{\prime},t}=\mathbf{x}_{\mathcal{C}^{\prime},t}\right],\ \forall\mathcal{C}, \tag{4}\]
where \(\mathcal{C}^{\prime}=\mathcal{I}_{c}\cup\mathcal{C}\). In a similar vein, the interventional lift uses the _interventional conditional expectation_, whereby the features in the specific coalition are manually fixed to their observed values, thereby intentionally manipulating the data generating process, which we can express mathematically using Pearl's _do_-calculus (Pearl, 2012) such that
\[v^{\text{int}}(\ell,\mathbf{x}_{t},\mathcal{C})=\mathbb{E}\left[\ell_{t}\,\middle|\,do(\mathbf{X}_{\mathcal{C}^{\prime},t}=\mathbf{x}_{\mathcal{C}^{\prime},t})\right],\ \forall\mathcal{C}. \tag{5}\]
As an illustration, consider two correlated random variables, \(X\) and \(Y\), with the causal relationship in Figure 1. Suppose we observe a value \(X=x\); then the observational conditional distribution describes _the distribution of \(Y\) given that \(X\) is observed to take on the value \(x\)_, given by \(p(y|x)=p(x,y)/p(x)\). The interventional conditional distribution instead describes _the distribution of \(Y\) given that we artificially set the value of \(X\) to \(x\)_, denoted \(p(y|do(x))\), which we obtain by assuming that \(Y\) remains distributed according to the original data generating process. Graphically, an intervention removes all edges going into the corresponding variable. Consequently, we get that \(p(y|do(x))=p(y|x)\) but \(p(x|do(y))=p(x)\); that is, the distribution of \(y\) under the _intervention_ \(X=x\) is equivalent to the distribution of \(y\) _conditioned_ on \(X=x\). Yet, under the intervention \(Y=y\), \(x\) and \(y\) become disconnected (i.e., independent), hence \(y\) has no effect on \(x\), which is simply sampled from its marginal distribution.
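This asymmetry can be checked numerically for a hypothetical structural instance of Figure 1, \(X\sim\mathcal{N}(0,1)\), \(Y=X+\varepsilon\) with \(\varepsilon\sim\mathcal{N}(0,1)\):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200_000
x = rng.normal(size=n)
y = x + rng.normal(size=n)

# p(y | x=2) vs p(y | do(x=2)): both have mean 2, since X has no parents.
e_y_given_x = y[np.abs(x - 2.0) < 0.05].mean()
e_y_do_x = (2.0 + rng.normal(size=n)).mean()   # simulate do(X=2)

# p(x | y=2) has mean 1 here (regression of X on Y), but under do(Y=2)
# the edge into Y is removed and X keeps its marginal distribution.
e_x_given_y = x[np.abs(y - 2.0) < 0.05].mean()
e_x_do_y = x.mean()                            # do(Y=2) leaves X untouched
```

Conditioning and intervening agree for the cause (\(X\)) but disagree for the effect (\(Y\)): observing \(Y=2\) shifts our beliefs about \(X\), whereas forcing \(Y=2\) does not.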
Formally, in our market context, the difference between Eq. (4) and Eq. (5) is that dependencies amongst features in and out of the coalition are broken in the latter, as observing \(\mathbf{X}_{\mathcal{C}^{\prime},t}=\mathbf{x}_{\mathcal{C}^{\prime},t}\) would theoretically change the distribution of the remaining features if the random variables were, for instance, connected through latent confounding effects. Instead, by intervening on the features within the coalition, the distribution of the out-of-coalition features is unaffected, such that \(v^{\text{int}}(\ell,\mathbf{x}_{t},\mathcal{C})\) is calculated as
\[\int\mathbb{E}\left[\ell_{t}\,\middle|\,\mathbf{X}_{\mathcal{C}^{\prime},t}=\mathbf{x}_{\mathcal{C}^{\prime},t},\,\mathbf{X}_{\bar{\mathcal{C}},t}=\mathbf{x}_{\bar{\mathcal{C}},t}\right]p(\mathbf{x}_{\bar{\mathcal{C}},t})\,\mathrm{d}\mathbf{x}_{\bar{\mathcal{C}},t}, \tag{6}\]

where \(\bar{\mathcal{C}}=\mathcal{I}_{-c}\setminus\mathcal{C}\) (i.e., \(\mathcal{I}\setminus\mathcal{C}^{\prime}\)).
### Computing Lifts
Typically, the decision of which lift to use is driven by their relative computational expenditure (Lundberg and Lee, 2017) -- in general, evaluating the conditional expectation of the objective function is intractable, necessitating complex and costly methods for approximation (Covert et al., 2021), whereas cheap and relatively simple algorithms exist to _intervene_ on the features (Sundararajan and Najmi, 2020). Whilst the most suitable method for evaluating the conditional expectation is widely disputed (Chen et al., 2022), one such method merely requires training separate models for each subset of features; if each model is optimal with respect to the objective, then this is equivalent to marginalizing out features using their conditional distribution (Covert et al., 2021).
Although this indeed scales poorly to high-dimensional datasets, since we do not intend to propose a novel method for approximation, this approach is deemed sufficient for our case study, particularly given our restriction to linear regression models. Similarly, one can evaluate the interventional conditional expectation of the objective function for linear regression models by imputing, or even removing completely, the features not present in a coalition.

Figure 1: Causal graph indicating a direct effect between two random variables, \(X\) and \(Y\).
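For a linear model with jointly Gaussian features, the two lifts can be contrasted directly, since the conditional distribution of the out-of-coalition feature is available in closed form. The sketch below (all values illustrative) estimates both expectations of the model output for the coalition \(\{x_{1}\}\) by Monte Carlo, with standardised features of correlation \(\rho\).

```python
import numpy as np

rng = np.random.default_rng(6)
w = np.array([1.0, 1.0])          # linear model f(x) = w @ x
rho, x1 = 0.9, 1.5                # feature correlation and observed x1
n = 100_000

# Interventional lift: x2 is drawn from its *marginal* N(0, 1).
x2_marg = rng.normal(size=n)
v_int = (w[0] * x1 + w[1] * x2_marg).mean()

# Observational lift: x2 is drawn from its *conditional* N(rho*x1, 1-rho^2).
x2_cond = rng.normal(rho * x1, np.sqrt(1 - rho**2), size=n)
v_obs = (w[0] * x1 + w[1] * x2_cond).mean()
```

With \(\rho=0\) the two coincide; with strong correlation the observational lift additionally credits \(x_{1}\) with \(x_{2}\)'s (indirect) effect, here a gap of \(w_{2}\rho x_{1}\).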
### Causal Nuances
Notice that Eq. (4) and Eq. (5) are in fact equivalent when the input features are independent -- we calculate \(v^{\text{obs}}(\ell,\mathbf{x}_{t},\mathcal{C})\) by replacing the marginal density of the out-of-coalition features, \(p(\mathbf{x}_{\mathcal{I}\setminus\mathcal{C}^{\prime},t})\), with the conditional \(p(\mathbf{x}_{\mathcal{I}\setminus\mathcal{C}^{\prime},t}|\mathbf{x}_{\mathcal{C}^{\prime},t})\) in Eq. (6), and the two densities would be equal in such a case. However, the same cannot be said in the presence of multicollinearity. With reference to causal inference, the interventional conditional expectation disregards causal effects _between features_, and is thus only able to capture _direct effects_, neglecting _root causes_ with strong _indirect effects_ (Heskes et al., 2020). Accordingly, this lift will be more effective at crediting features on which the model has an explicit algebraic dependence, albeit with the possibility of model evaluation on points outwith the true data manifold when the assumption of independence is violated (Frye et al., 2020). We can visualize this with an illustration as in Figure 2, for which we generated training data by sampling two features from a standard Gaussian distribution. When independent, intervening on either feature yields samples that remain within the original manifold. However, with correlated data, there is a greater probability of extrapolating further away from the training distribution, where the model has not been trained and behaviour is unknown.
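This extrapolation effect is easy to reproduce numerically: breaking the dependence of one feature on the other (a common interventional-style imputation) pushes points far outside the training distribution's 0.99 level set when the features are strongly correlated. A small demonstration with illustrative values:

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(7)
n, rho = 10_000, 0.95
cov = np.array([[1.0, rho], [rho, 1.0]])
train = rng.multivariate_normal([0.0, 0.0], cov, size=n)

# "Intervene" on X1: replace its column with an independent shuffle,
# preserving its marginal but severing the correlation with X2.
intervened = train.copy()
intervened[:, 0] = rng.permutation(train[:, 0])

# Squared Mahalanobis distance w.r.t. the *training* covariance.
prec = np.linalg.inv(cov)
def mahal2(z):
    return np.einsum("ij,jk,ik->i", z, prec, z)

thresh = chi2.ppf(0.99, df=2)                       # 0.99 level set
frac_train = (mahal2(train) > thresh).mean()        # ~0.01 by construction
frac_intervened = (mahal2(intervened) > thresh).mean()
```

By construction only about 1% of training points exceed the threshold, whereas a large fraction of the intervened points do, i.e., the model is queried well off the manifold it was fit on.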
In contrast, the observational lift will attribute features in proportion to indirect effects, ensuring contributions honor the data manifold (Aas et al., 2021). Some work argues this to be counter-intuitive, as features not explicitly used by the model have the possibility of receiving non-zero attribution (Sundararajan and Najmi, 2020), and that Lundberg and Lee (2017) were mistaken to convey Eq. (5) only as a cheap approximation of Eq. (4) (Janzing et al., 2020). Whilst this dispute has been used as an argument to reject the general use of Shapley values for interpretability in machine learning (Kumar et al., 2020), the choice between the observational and interventional lifts can in fact be viewed as application dependent, hinging on whether one wants to be _true to the data_ or _true to the model_, respectively (Chen et al., 2020) (i.e., the trade-offs of each approach are context-specific). We note that both of these lifts preserve the axioms of the original Shapley value, and subsequently the market properties provided, albeit in expectation.
### Model-Centric Perspective
In our framework, the predictive performance of the regression model out-of-sample is contingent upon the availability of features that were used to train the model in-sample, which, in practice, requires data of the support agents to be streamed continuously in a timely fashion, particularly for an online setup (Pinson et al., 2022). If the stream of one or more of these features were to be interrupted, the efficacy of the forecast provided to the central agent would likely drop, the extent of which would relate not to any root causes or indirect effects regarding the data generating process, but rather solely to the magnitude of direct effects. Ergo, in a market with an attribution policy based on the interventional Shapley value, hereafter referred to as _model-centric_, larger payments would be made to support agents who own features to which the predictive performance of the model is most sensitive, providing an incentive for additional efforts to decrease the chance of their data being unavailable. This resembles availability payments in electricity markets, whereby assets are remunerated simply for being available in times of need.

Figure 2: Illustration of the possibility for model evaluation on points outwith the true data manifold. The green and blue lines represent the level sets within which the 0.99 quantile of the training data lies when features are independent and correlated, respectively. The green lines represent the data extrapolated as a result of intervening on \(X_{1}\) (solid) and \(X_{2}\) (dashed).
### Data-Centric Perspective
We refer to markets based on the observational Shapley value as _data-centric_ -- continuing the previous example, it would be unclear as to whether comparatively larger payments made in this market are a consequence of features having a sizeable impact on predictive performance, or merely a result of indirect effects through those that do. On the other hand, one may argue that payments made by the data-centric market are more _fair_, accounting for the fact that a feature having only indirect effects does not necessarily diminish its propensity to increase predictive performance in the absence of its counterpart with a direct effect.
That being said, another perspective to consider is that of replication, and robustness thereto, whereby a support agent replicates its data, acting under multiple identities to maximize revenue (Han et al., 2023). Several data-centric mechanism designs have been proposed to remedy this issue, most of which come at a cost, for instance the proposal in Agarwal et al. (2019) sacrifices budget balance and remains exposed to spiteful agents (i.e., those which are willing to sacrifice their own revenue in pursuit of minimizing that of other agents). In contrast, since the dependency between the original feature and the replication induces only an indirect effect, model-centric mechanisms are robust to replication by design.
### Financial Risks
The last point we consider is model behaviour outwith the data manifold. From statistical learning theory, it is known that whilst multicollinearity does not systematically bias the posterior mean, the variance of the coefficients is inflated as it becomes difficult to distinguish between correlated features. Near-perfect correlations will ultimately reduce the determinant of the covariance matrix of the data to zero (i.e., towards singularity), with such ill-conditioning making the required matrix inversion highly sensitive to small changes in the data, which in turn can lead to numerical instabilities. This then distorts the posterior mean, especially when the number of observations is limited. For the observational lift, given a separate model is trained for each coalition, attributions are not dependent on behaviour outside the data distribution, and are thus not subject to such variance inflation; however, the same cannot be said for the interventional case.
To illustrate this variance inflation, in our Gaussian framework with an uninformative prior, we can write the posterior variance of the \(i\)-th coefficient as, \(\sigma^{2}(w_{i})=\kappa_{i}/(\xi|\mathcal{D}_{t}|)\), where \(\xi\) is the intrinsic noise precision of the target, and \(\kappa_{i}\) is the variance inflation factor, a measure of the extent to which the posterior variance increases by virtue of multicollinearity, given by
\[\kappa_{i}=\mathbf{e}_{i}^{\top}\bigg{(}\sum_{r^{\prime}\leq t}\mathbf{x}_{r^{ \prime}}^{\top}\mathbf{x}_{r^{\prime}}\bigg{)}^{-1}\mathbf{e}_{i},\;\forall i \in\mathcal{I}, \tag{7}\]
where \(\mathbf{e}_{i}\) is the unit vector corresponding to the index of the coefficient. The variance inflation factor has a lower bound of 1 (i.e., when the covariance matrix of the data is full rank), but has no upper bound, such that \(\kappa_{i}\to\infty,\forall i\), as the determinant tends to zero, and therefore the estimated variance too tends to infinity with increasing multicollinearity. We shall now derive the link between the coefficients and the Shapley values and examine how this variance inflation impacts the payments received by the support agents. From a variance-decomposition perspective one can readily show that the Shapley value for the \(i\)-th feature is equivalent to the variance in the target signal that it explains, such that, \(\phi_{i}=w_{i}^{2}\,\sigma^{2}(X_{i})\), which approximates the interventional Shapley value when features are correlated.
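As a rough numerical illustration of Eq. (7), a sketch with two standardized Gaussian features (the sample size and correlation values are assumptions; only the ratio between the two settings matters, not the absolute scale):

```python
import math, random

random.seed(1)

def gram_inverse_diag(rows):
    """Diagonal of (sum_t x_t x_t^T)^{-1} for two features, i.e. kappa_i in Eq. (7)."""
    a = sum(x1 * x1 for x1, _ in rows)
    b = sum(x1 * x2 for x1, x2 in rows)
    d = sum(x2 * x2 for _, x2 in rows)
    det = a * d - b * b          # shrinks towards 0 under near-perfect collinearity
    return d / det, a / det

def sample_features(rho, n=2000):
    # two standardized Gaussian features with correlation rho
    out = []
    for _ in range(n):
        z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
        out.append((z1, rho * z1 + math.sqrt(1 - rho**2) * z2))
    return out

k_indep = gram_inverse_diag(sample_features(0.0))[0]
k_coll = gram_inverse_diag(sample_features(0.99))[0]
print(k_coll / k_indep)  # variance inflation grows sharply with correlation
```

With \(\rho=0.99\) the diagonal of the inverse Gram matrix, and hence the posterior variance of the coefficient, is inflated by roughly \(1/(1-\rho^{2})\) relative to the independent case.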
Given that in our Bayesian regression analysis the posterior distribution is Gaussian, the Shapley value for each feature will follow a noncentral Chi-squared distribution with one degree of freedom.
For a particular feature, the probability density function of the Shapley value can be written in closed-form as

\[\begin{split}\frac{1}{\sigma^{2}(X_{i})\sigma^{2}(w_{i})}p(\phi_{i})= \\ \sum_{k=0}^{\infty}\frac{e^{-\eta/2}(\eta/2)^{k}}{k!}\chi^{2}(1+2k), \forall i,\end{split} \tag{8}\]
where the noncentral Chi-squared distribution is seen to simply be a Poisson-weighted mixture of central chi-squared distributions, \(\chi^{2}(\cdot)\), with noncentrality \(\eta=\mathbb{E}[w_{i}]^{2}/\sigma^{2}(w_{i})\). Since we know the moment generating function for such a mixture, we derive the variance as follows:
\[\sigma^{2}(\phi_{i})=2\,\sigma^{4}(X_{i})\,\sigma^{2}(w_{i})\big{(}\sigma^{2}(w_{i})+2\,\mathbb{E}[w_{i}]^{2}\big{)},\;\forall i\in\mathcal{I}. \tag{9}\]
This implies that the variance of the attribution, and subsequently the payment, for any given feature is a quadratic function of both the magnitude and the variance of the corresponding coefficient, and therefore of the variance inflation induced by multicollinearity. Such variance inflation would manifest as a large standard error with respect to the _true_ coefficients, with the possibility for each element to take on much larger values and even change sign. Imputing a feature would therefore both inflate the variance of the predictive distribution (i.e., by virtue of the inflated posterior variance) and possibly bias the mean considerably, which in turn would reduce the predictive likelihood and distort the contribution estimated for the feature.
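This quadratic dependence can be checked numerically. For \(w\sim\mathcal{N}(\mu,s^{2})\), the variance of \(\phi=w^{2}\sigma^{2}(X)\) is \(2\sigma^{4}(X)s^{2}(s^{2}+2\mu^{2})\), a standard property of scaled noncentral Chi-squared variates. A minimal Monte-Carlo sketch with toy values:

```python
import random

random.seed(2)
MU, S, VAR_X = 1.5, 0.4, 1.0   # toy posterior mean/std and feature variance

# phi = w^2 * Var(X) with w ~ N(MU, S^2): a scaled noncentral Chi-squared variate
samples = [(random.gauss(MU, S) ** 2) * VAR_X for _ in range(200000)]
mean = sum(samples) / len(samples)
var_mc = sum((s - mean) ** 2 for s in samples) / len(samples)

var_closed = 2 * VAR_X**2 * S**2 * (S**2 + 2 * MU**2)
print(var_mc, var_closed)       # the two estimates should agree closely
```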
### Other Considerations
Finally, we note that attributions similar to those derived using Eq. (4) can be obtained using regularization methods in combination with Eq. (5); for instance, Elastic Net priors (Zou and Hastie, 2005) can average out the coefficients of correlated features, but can require sophisticated hyperparameter tuning. Similarly, an orthogonal projection would eliminate dependencies altogether, but may have interpretability consequences (Johnson, 2000). Distorted attributions resulting from the model-centric market could in theory be remedied by the _zero-Shapley_ or _absolute-Shapley_ proposed in Liu (2020), but it is unclear at present what impact this would have on the market properties.
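The orthogonal-projection remedy mentioned above amounts to residualizing one feature on another (a single Gram-Schmidt step); a minimal sketch with illustrative values:

```python
def residualize(x2, x1):
    """Project out x1 from x2 (one Gram-Schmidt step), removing their dependency."""
    beta = sum(a * b for a, b in zip(x1, x2)) / sum(a * a for a in x1)
    return [b - beta * a for a, b in zip(x1, x2)]

x1 = [1.0, 2.0, 3.0, 4.0]
x2 = [1.1, 1.9, 3.2, 3.8]      # nearly collinear with x1
x2_perp = residualize(x2, x1)
print(sum(a * b for a, b in zip(x1, x2_perp)))  # ~0: orthogonal by construction
```

The projected feature no longer shares variance with \(x_{1}\), which removes the collinearity but, as noted, changes what the resulting coefficients mean.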
## 4 Experimental Analysis
To demonstrate our findings, we design two case studies1, the first of which is simulation-based, that is, we construct synthetic datasets to allow us to explicitly define the causal structure. In the second case study, we use hourly wind generation data from sites in South Carolina (USA) to demonstrate the impact of the two attribution policies on the robustness of the market to replication.
Footnote 1: Our code is available at: [https://github.com/tdfalc/regression-markets](https://github.com/tdfalc/regression-markets)
### Synthetic Data
Consider a scenario whereby the market comprises just three agents, \(\mathcal{A}=\{a,b,c\}\), with agent \(c\) as the central agent and the remaining being support agents. The data available in the market are observations from the stochastic processes that correspond to the wind power outputs at each site owned by respective agents, each following standard Gaussian distributions. We let the measurements of agents \(a\) and \(b\) be correlated, considering the causal graph in Figure 3.
Each of the available inputs are used as features to predict the wind power output at the site owned by agent \(c\) at time \(t+1\); \(w_{1}\) and \(w_{2}\) are the _true_ coefficients of the data generating process, which additionally includes a noise term. In Figure 3, observe
Figure 3: Causal graph indicating direct effects (solid lines) and indirect effects (dashed lines) between random variables associated with each market agent in \(\mathcal{A}=\{a,b,c\}\)
that the measurement of agent \(a\) at time \(t\) has a direct effect on the target (i.e., proportional to \(w_{2}\)), and as a result agent \(b\) has an indirect effect (i.e., proportional to \(\rho w_{2}\)). As there is no actual edge between the measurement of agent \(b\) and the target, its direct effect is zero. Likewise, any indirect effect from the measurement of agent \(a\) necessarily passes through that of agent \(b\), and hence must also be zero. We simulate a vector autoregressive (VAR) process such that the resulting dataset includes first-order linear autocorrelations (i.e., each agent's data is correlated with its own signal at the previous time step), given as follows:
\[\mathbf{y}_{t}=\begin{bmatrix}0.1&0.9&0.0\\ 0.0&0.6&0.0\\ 0.0&0.0&0.7\end{bmatrix}\mathbf{y}_{t-1}+\boldsymbol{\eta}_{t},\;\forall t, \tag{10}\]
where the first, second and third indices of \(\mathbf{y}_{t}\in\mathbb{R}^{3}\) correspond to the measurements of agent \(c\), \(a\) and \(b\), respectively. The dependency between the data of agents \(a\) and \(b\) is captured in the additive Gaussian noise term, \(\boldsymbol{\eta}_{t}\), in which we set \(\rho=0.9\). We perform a Monte-Carlo simulation by which we re-run the market clearing procedure \(10^{3}\) times, each with a new sample of \(10^{3}\) datapoints, recording the normalized revenue allocation (i.e., with respect to the value of the grand coalition) derived for both observational and interventional conditional expectations. Additionally, we intervene on the features by measuring the increase in the objective function when each of the support agents measurements are mean-imputed. In Figure 4, we plot the empirical average of both the normalized revenue allocations and the normalized increase in the objective function after the intervention is carried out.
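The simulation above can be sketched directly (we assume unit-variance Gaussian innovations with correlation \(\rho=0.9\) between the noise of agents \(a\) and \(b\); the covariance structure of \(\boldsymbol{\eta}_{t}\) beyond \(\rho\) is our assumption):

```python
import math, random

random.seed(3)
A = [[0.1, 0.9, 0.0],   # transition matrix of Eq. (10): indices ordered c, a, b
     [0.0, 0.6, 0.0],
     [0.0, 0.0, 0.7]]
RHO = 0.9               # dependency between the noise of agents a and b

def noise():
    # eta_t: correlated Gaussian innovations (unit variances assumed)
    e_c, e_a, z = random.gauss(0, 1), random.gauss(0, 1), random.gauss(0, 1)
    e_b = RHO * e_a + math.sqrt(1 - RHO**2) * z
    return [e_c, e_a, e_b]

def simulate(steps=5000):
    y, out = [0.0, 0.0, 0.0], []
    for _ in range(steps):
        e = noise()
        y = [sum(A[i][j] * y[j] for j in range(3)) + e[i] for i in range(3)]
        out.append(y)
    return out

ys = simulate()
a, b = [y[1] for y in ys], [y[2] for y in ys]
ma, mb = sum(a) / len(a), sum(b) / len(b)
cov = sum((x - mb if False else (x - ma)) * (v - mb) for x, v in zip(a, b)) / len(a)
sa = math.sqrt(sum((x - ma) ** 2 for x in a) / len(a))
sb = math.sqrt(sum((v - mb) ** 2 for v in b) / len(b))
r = cov / (sa * sb)
print(r)  # strong cross-dependency between a and b, inherited from the noise
```

The measurements of agents \(a\) and \(b\) end up strongly correlated even though neither enters the other's autoregressive dynamics, i.e., the dependency is carried entirely by the noise term.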
As expected, the model-centric market allocates the majority of the revenue to agent \(a\), that is, only the feature with a direct effect on the target. In contrast, given the relatively large correlation between this feature and that of agent \(b\), the data-centric market splits the revenue proportionally to both direct and indirect effects, such that both agents receive near-equal revenue. Neither of these allocations is necessarily _correct_; rather, it is a matter of interpretation which is deemed most appropriate. One could argue that the observational allocation is more fair, as if agent \(a\) were not present during the in-sample market stage, agent \(b\) could still provide much of the uplift in predictive performance. However, if we now consider the increase in the objective function when either of the two features is mean-imputed, we see that despite the dependency between the features, that of agent \(b\) has no effect on the prediction itself, as only that with a direct effect is algebraically included in the resultant model. This aligns with the allocation provided by the model-centric market, and hence if the designer wishes to incentivize agents that own features most important to the model itself to guarantee availability, the interventional conditional expectation would be the lift of choice.
Lastly, we acknowledge that the previous simulation only considers the influence of each market design on the expected revenue, \(\int\pi_{a}p(\pi_{a})\,d\pi_{a},\forall a\in\mathcal{A}_{-c}\), not the variability, or rather the financial risks exhibited by the support agents. We perform a similar Monte-Carlo simulation, but this time vary the dependency \(\rho\), recording the in-sample revenue of each support agent. We evaluate the risk, measured using the expected shortfall (i.e., the conditional value-at-risk), \(-1/\tau\int_{\pi_{a}\leq q_{\tau}(\pi_{a})}\pi_{a}p(\pi_{a})d\pi_{a},\forall a \in\mathcal{A}_{-c}\), where \(q_{\tau}(\cdot)\) is the quantile with nominal level \(\tau\).
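The expected shortfall can be estimated empirically as the negated average of the worst \(\tau\)-fraction of revenue outcomes; a minimal sketch of this estimator (the revenue values are illustrative):

```python
def expected_shortfall(revenues, tau=0.05):
    """Negated average revenue over the worst tau-fraction of outcomes (a loss)."""
    ordered = sorted(revenues)
    k = max(1, int(tau * len(ordered)))   # number of tail samples kept
    return -sum(ordered[:k]) / k

revenues = [float(r) for r in range(-5, 15)]   # 20 toy revenue outcomes
print(expected_shortfall(revenues, tau=0.10))  # → 4.5 (mean of -5 and -4, negated)
```

A positive expected shortfall therefore corresponds to a conditional value-at-risk that is a loss for the agent, as discussed below.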
Figure 4: Empirical average of the normalized increase in \(\ell\) after intervening on each feature (orange), and the normalized revenue allocations for both the model-centric (blue) and data-centric (pink) markets.

In Figure 5, we present the expected shortfall exhibited by agent \(b\) as a function of \(\rho\), where we set \(\tau=0.05\) and additionally set \(\lambda=0.1\) USD per time step and per unit improvement in \(\ell\). For the data-centric market, we see that the financial risks decrease with \(\rho\) due to the increasing indirect effect of the feature on the target, that is, as \(\rho\to 1\), we get that \(\pi_{b}/(\pi_{a}+\pi_{b})\to 0.5\) as expected. Since the observational Shapley value is only subject to sampling uncertainty and not exposed to variance inflation, in this case the increased revenue lowers the financial risk. In contrast, the expected shortfall resulting from the model-centric market increases with \(\rho\) to values \(\geq 0\), thus the conditional value-at-risk is a loss for the agent. Whilst we expect multicollinearity to increase the risk in general, the particular relationship observed here can be explained by our assumption of a well-specified Gaussian framework, wherein we can re-write Eq. (9) as \(\sigma^{2}(\phi_{i})=2/(|\mathcal{D}_{t}|\xi)\big{(}2\mathbb{E}[w_{i}]^{2}+1/(|\mathcal{D}_{t}|\xi)\big{)}\). If we further assume standardized features, for moderate levels of multicollinearity we get that \(\sigma^{2}(\phi_{i})\leq(4|\mathcal{D}_{t}|\xi+2)/(|\mathcal{D}_{t}|\xi)^{2}\), therefore given our large sample size, we expect the variance of the returns to be relatively low. However, as \(\rho\to 1\) and the determinant of the covariance matrix of the data approaches singularity, this inequality no longer holds, as the posterior mean is no longer bounded and the variance, and hence the risk, is inflated considerably.
### Real Data
We now turn our attention to a real-world case study, considering both in-sample and out-of-sample stages (i.e., both inference and genuine forecasting), wherein we assume a _malicious agent_ replicates their data in an attempt to increase their revenue. To facilitate reproduction of our work, we use an open-source dataset, namely the _Wind Integration National Dataset (WIND) Toolkit_, as detailed in Draxl et al. (2015).
#### 4.2.1 Data Description
This dataset comprises wind power measurements simulated for a set of 9 wind farms in South Carolina (USA), all located within 150 km of each other -- see Table 1 for a characteristic overview. Although this data is not exactly _real_, it effectively captures the spatio-temporal aspects of wind power production, with the added benefit of remaining free from any spurious measurements, as can often be the case with real-world datasets. Measurements are available for a period of 7 years, from 2007 to 2013, with an hourly granularity, which we normalize to take values in the range of \([0,1]\).
\begin{table}
\begin{tabular}{l r r r} \hline \hline Agent & Id. & \(C_{\text{f}}\) (\%) & \(P\) (MW) \\ \hline \(a_{1}\) & 4456 & 34.11 & 1.75 \\ \(a_{2}\) & 4754 & 35.75 & 2.96 \\ \(a_{3}\) & 4934 & 36.21 & 3.38 \\ \(a_{4}\) & 4090 & 26.60 & 16.11 \\ \(a_{5}\) & 4341 & 28.47 & 37.98 \\ \(a_{6}\) & 4715 & 27.37 & 30.06 \\ \(a_{7}\) & 5730 & 34.23 & 2.53 \\ \(a_{8}\) & 5733 & 34.41 & 2.60 \\ \(a_{9}\) & 5947 & 34.67 & 1.24 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Agents and corresponding site characteristics considered in South Carolina (USA). \(C_{\text{f}}\) denotes the capacity factor and \(P\) the nominal capacity. The identification number is that from the WIND Toolkit database.
Figure 5: Expected shortfall exhibited by agent \(b\) as a function of the correlation \(\rho\) between the agent’s feature and that of agent \(a\), for both the model-centric (blue) and data-centric (pink) markets.
Each wind farm is considered a market agent. For simplicity, we let \(a_{1}\) be the central agent; however, in practice each could assume this role in parallel. We assume that only 1 lag is submitted to the market by all agents -- for wind power forecasting, the lag not only captures the temporal correlations of the production at a specific site, but also indirectly encompasses the dependencies amongst neighboring sites due to the natural progression of wind patterns. To illustrate this, we plot the location of, and the autocorrelation at, each site in Figure 6 and Figure 7, respectively. We see that the measurements at sites directly neighbouring \(a_{1}\) have the largest dependency as expected, which then decreases for the sites further away. The autocorrelation is relatively similar across all sites, with notable peaks at the same times on previous days.
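The lag-1 autocorrelation underpinning this feature choice is the standard sample estimator; a minimal sketch on a toy signal (illustrative values, not the WIND data):

```python
def autocorr(series, lag=1):
    """Sample autocorrelation of a series at a given lag."""
    n = len(series)
    mean = sum(series) / n
    var = sum((x - mean) ** 2 for x in series)
    cov = sum((series[t] - mean) * (series[t - lag] - mean) for t in range(lag, n))
    return cov / var

# a slowly varying toy signal exhibits high lag-1 autocorrelation
signal = [0.0, 0.1, 0.25, 0.4, 0.6, 0.75, 0.9, 1.0, 0.9, 0.7, 0.5, 0.3]
print(autocorr(signal, lag=1))  # well above 0: strong persistence
```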
#### 4.2.2 Model Assumptions
We assume the regression framework as described in Section 3, with the expectation of the likelihood for the grand coalition given by Eq. (1). We perform a pre-screening, such that given the redundancy between the lagged measurements of \(a_{2}\) and \(a_{3}\) with that of \(a_{1}\), we remove them from the market in line with our assumptions. We split the data such that the first half is used to clear the in-sample regression market and fit the regression model, whilst the latter is used for the out-of-sample market. We clear both markets considering each agent is honest, that is, they each provide a single report of their true data. Next, we re-clear the markets, but this time assume agent \(a_{4}\) is malicious, replicating their data, albeit obfuscated with some additive noise, and thereby submitting multiple separate features to the market in an effort to increase their revenue. Such replications are depicted in Figure 8 -- for illustration we have ignored any (in-)direct effects of the other features, and assume the feature of agent \(a_{4}\) has a direct effect on the target, hence each replicate will have an indirect effect.
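The replication strategy amounts to submitting noisy copies of an existing feature; a minimal sketch of how such replicates might be generated (the noise level is an assumption for illustration) and why each induces a strong dependency on, and hence an indirect effect through, the original:

```python
import math, random

random.seed(4)

def replicate(feature, noise_std=0.1):
    """A noisy copy of a feature: no direct effect of its own, but a strong
    dependency on (i.e., an indirect effect through) the original."""
    return [x + random.gauss(0, noise_std) for x in feature]

def corr(u, v):
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    cov = sum((x - mu) * (y - mv) for x, y in zip(u, v))
    return cov / math.sqrt(sum((x - mu) ** 2 for x in u) *
                           sum((y - mv) ** 2 for y in v))

original = [random.gauss(0, 1) for _ in range(2000)]
copies = [replicate(original) for _ in range(4)]   # four obfuscated replicates
m = min(corr(original, c) for c in copies)
print(m)  # each replicate is nearly collinear with the original feature
```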
#### 4.2.3 Results
We set the number of replicates to four, and let \(\lambda=0.5\) USD per time step and per unit improvement in \(\ell\), for both in-sample and out-of-sample market stages. However, we are primarily interested in the revenue allocation rather than the magnitude -- see Pinson et al. (2022) for a complete analysis of the monetary incentive to each agent participating in the market. Overall, the in-sample and out-of-sample objectives improved by 10.6% and 13.3% respectively with the help of the support agents. In Figure 9, we plot the resultant allocation for each agent with and without the malicious behavior of agent \(a_{4}\), for both the data-centric and model-centric markets. When this agent is honest, we see that the data-centric market spreads credit relatively evenly amongst most of the features, suggesting that many of them exhibit similar indirect effects on the target. The model-centric market favours agent \(a_{8}\), which, as expected, owns the feature with the greatest spatial correlation with the target. In this market, most of the additional revenue of agent \(a_{8}\) appears to come at the expense of agent \(a_{9}\) compared with the data-centric market, suggesting that whilst these features are correlated, it is agent \(a_{8}\) that has the greatest direct effect.
Figure 6: Geographic location of each wind farm. The point sizes indicate the relative correlation between the measurements at each site and that of the central agent, \(a_{1}\).

When agent \(a_{4}\) replicates their data, in the data-centric market we see agents \(a_{5}\) to \(a_{8}\) earn less, whilst agent \(a_{4}\) earns considerably more. This demonstrates that the observational conditional expectation indeed spreads revenue proportionally amongst indirect effects, of which there are now four more due to the replicates, and consequently the malicious agent out-earns the others. Such proportional spreading of credit amongst indirect effects would keep occurring as the number of replications increases, such that the proportion given to the remaining agents decreases asymptotically to zero (Agarwal et al., 2019). In contrast, since the model-centric market only remunerates direct effects, each replicate is allocated zero revenue, hence the malicious agent is no better off than before. Lastly, we observe that in both cases, the market outcomes were relatively consistent between the in-sample and out-of-sample stages, likely due to the large batch size considered, combined with limited nonstationarities within the data.
## 5 Conclusions
The energy transition is well under way, with digitization playing a key role in the development of new business models to facilitate carbon-free power systems. For this to continue, providing system operators with reliable forecasts of stochastic power generation, such as wind, will be imperative to preserve the economic viability of balancing supply and demand. That being said, we cannot rely on the optimistic belief that altruistic data sharing will occur amongst market competitors, thereby requiring market mechanisms to incentivize information exchange. Whilst there have been several recent proposals of such mechanisms (i.e., regression markets), most ignore the intricacies of the Shapley value-based attribution policies on which they rely -- in this paper, we have shown that the way in which the attribution policy is formulated has interesting consequences for the market outcomes, in particular with respect to causality.
Specifically, we have shown that this choice of formulation boils down to whether the designer intends the market to be model-centric or data-centric, respectively. We do not argue that either is _better_ but rather that both have trade-offs depending on the context, and if anything, the designer should be mindful to not take this choice for granted, else counter-intuitive market outcomes can be expected. We also note that neither design captures the complete causal picture, and with access to it one could even attribute all credit for indirect effects to the root cause if deemed appropriate. However, causal graphs are often unknown or require substantial domain expertise to compute, hence future work could be to incorporate data-driven methods for causal discovery in the market clearing procedure. Lastly, although the model-centric market is robust to replication by design, and yields allocations that better represent the reliance of the model on each feature, the support agents are indeed exposed to considerable financial risks. Future work could examine the extent to which the mentioned remedies help this problem, as well as their impact on the market outcomes.

Figure 8: Causal graph indicating direct effects (solid lines) and indirect effects (dashed lines) induced by agent \(a_{4}\) replicating their feature, obfuscated with some noise. The prime superscript denotes a replicated feature.

Figure 7: Autocorrelation of the measurements at each site, with each lag indicating a 1-hour time shift.
|
2308.13121 | Devouring The Centaurus A Satellites: Modeling Dwarf Galaxies with
Galacticus | For the first time, systematic studies of dwarf galaxies are being conducted
throughout the Local Volume, including Centaurus A (NGC 5128), which is the
nearest elliptical galaxy. Given Centaurus A's mass (roughly ten times that of
the Milky Way), AGN activity, and recent major mergers, investigating these
dwarfs and their star formation physics is imperative. However, simulating the
faintest dwarfs in a massive galaxy like Centaurus A with sufficient resolution
in a hydrodynamic simulation is computationally expensive and currently
unfeasible. In this study, we seek to reproduce Centaurus A dwarfs using the
same star formation physics as the Milky Way. We employ the semi-analytic model
Galacticus to model dwarfs within a 600 kpc region. Utilizing astrophysical
prescriptions and parameters matching the Milky Way satellites, we explore
predictions for various properties and star formation histories (SFHs) to
investigate environmental effects. We also reproduce cumulative luminosity and
luminosity metallicity relations consistent with observations for the overall
Centaurus A satellite population, while predicting half-light radii, velocity
dispersion, and SFHs for the dwarf galaxies in Centaurus A. The agreement
between our predicted SFHs for Centaurus A dwarfs and those of the Milky Way
implies the presence of universal processes governing star formation in these
galaxies. Overall, our findings shed light on the star formation physics of
dwarf galaxies in the Centaurus A system, revealing insights into their
properties and dependence on the host environment. | Sachi Weerasooriya, Mia Sauda Bovill, Matthew A. Taylor, Andrew J. Benson, Cameron Leahy | 2023-08-25T00:11:02Z | http://arxiv.org/abs/2308.13121v1 | # Devouring The Centaurus A Satellites: Modeling Dwarf Galaxies with Galacticus
###### Abstract
For the first time, systematic studies of dwarf galaxies are being conducted throughout the Local Volume, including Centaurus A (NGC 5128), which is the nearest elliptical galaxy. Given Centaurus A's mass (roughly ten times that of the Milky Way), AGN activity, and recent major mergers, investigating these dwarfs and their star formation physics is imperative. However, simulating the faintest dwarfs in a massive galaxy like Centaurus A with sufficient resolution in a hydrodynamic simulation is computationally expensive and currently unfeasible. In this study, we seek to reproduce Centaurus A dwarfs using the same star formation physics as the Milky Way. We employ the semi-analytic model Galacticus to model dwarfs within a 600 kpc region. Utilizing astrophysical prescriptions and parameters matching the Milky Way satellites, we explore predictions for various properties and star formation histories (SFHs) to investigate environmental effects. We also reproduce cumulative luminosity and luminosity-metallicity relations consistent with observations for the overall Centaurus A satellite population, while predicting half-light radii, velocity dispersion, and SFHs for the dwarf galaxies in Centaurus A. The agreement between our predicted SFHs for Centaurus A dwarfs and those of the Milky Way implies the presence of universal processes governing star formation in these galaxies. Overall, our findings shed light on the star formation physics of dwarf galaxies in the Centaurus A system, revealing insights into their properties and dependence on the host environment.
Dwarf galaxies (416) -- Galaxy evolution (594) -- Galaxy formation (595) -- Theoretical models (2107)
## 1 Introduction
Dwarf galaxies are the fundamental building blocks of larger structures. They are the most abundant type of galaxy in the universe at all redshifts (_e.g._, Binggeli et al., 1988; Ferguson & Binggeli, 1994; Marzke & da Costa, 1997). Their shallow gravitational potentials make them extremely sensitive to environmental feedback (_e.g._, Dekel & Silk, 1986; Thoul & Weinberg, 1996; Benson et al., 2002; Okamoto et al., 2010), which makes them excellent objects for studying the effects of the surrounding environment and its role in dwarf galaxy physics. The dwarf galaxies of the Local Group are well studied both observationally (Albareti et al., 2017; Aguado et al., 2019; Martin et al., 2016; Abbott et al., 2018; McConnachie, 2012; York et al., 2012) and theoretically (Hopkins et al., 2018; Applebaum et al., 2021; Pandya et al., 2020; Shipp et al., 2022; Weerasooriya et al., 2023).
While Milky Way dwarfs have been simulated hydrodynamically, simulations of massive systems outside our Galactic neighborhood run to \(z\,=\,0\) do not reach the resolution required to resolve the star formation physics of fainter dwarfs. For example, hydrodynamic simulations like TNG50 (Nelson et al., 2019) can only resolve dwarf galaxies down to \(M_{*}=10^{8}\,\mathrm{M}_{\odot}\). Studies of dwarfs in the Local Group alone cannot give a clear picture of the role environment plays in the physics of their evolution. With the unprecedented amount of observations soon to be available from facilities like the Nancy Grace Roman telescope and Rubin Observatory, it is essential that we explore dwarfs beyond the Local Group with theoretical models. One method of investigation is through the luminosity-metallicity relation, which captures the stellar feedback physics of dwarf galaxies. Studies show that the mass-metallicity relationships of both the Milky Way and M31 follow similar trends (Kirby et al., 2020). Thus, it is imperative that we look beyond the Local Group (LG) to understand whether or not properties in dwarf galaxies are independent of the environment.
Observational and theoretical studies have focused on dwarf galaxies that are hosted by Milky Way- and Andromeda-like environments or cluster-scale environments such as Virgo (\(\sim 10^{15}\,\mathrm{M}_{\odot}\)) and Fornax (\(\sim 10^{14}\,\mathrm{M}_{\odot}\)) (McConnachie, 2012; Richardson et al., 2011; Ferrarese et al., 2012; Eigenthaler et al., 2018). Centaurus A provides the most accessible opportunity to study dwarf galaxies within a higher-mass host and in an environment which sits between group and cluster scales, but few have explored the dwarfs of Centaurus A (Muller et al., 2015, 2017, 2019). Thus, the question of how dwarf galaxy physics is affected by different-mass hosts needs further exploration in these intermediate-mass host environments. Centaurus A is the closest easily observable giant elliptical galaxy, located 3.8 Mpc from the Milky Way (Harris et al., 2010), with a poorly constrained mass of \(4.7\times 10^{12}\,\mathrm{M}_{\odot}\)-\(1.8\times 10^{13}\,\mathrm{M}_{\odot}\) (Pearson et al., 2022; van den Bergh, 2000).
In recent years the halo of Centaurus A has been targeted by several surveys including the Survey of Centaurus A's Baryonic Structures (SCABS; Taylor et al., 2016, 2017, 2018) and the Panoramic Imaging Survey of Cen & Sculptor (PISCeS; Sand et al., 2014; Crnojevic et al., 2014; Crnojevic et al., 2016). These are the first systematic surveys of the dwarf satellites of Centaurus A. In addition, studies by Muller et al. (2015, 2017, 2019, 2021, 2022) have also covered both M83 and Centaurus A with DECam. As a result of these surveys the number of known or suspected dwarf galaxies has almost doubled over the past 5-10 years.
The mass of Centaurus A is currently not well constrained. Several studies have predicted the virial mass of Centaurus A using a variety of methods. For example, Woodley et al. (2007) estimated the mass of Centaurus A (pressure-supported and rotation-supported mass) using a globular cluster population within 50 kpc (\(1.3\times 10^{12}\,\mathrm{M}_{\odot}\)). van den Bergh (2000) calculated the mass of Centaurus A using the virial theorem (\(1.4\times 10^{13}\,\mathrm{M}_{\odot}\)) and the projected mass method out to 640 kpc (\(1.8\times 10^{13}\,\mathrm{M}_{\odot}\)). Muller et al. (2022) estimates a dynamical mass of \(1.2\times 10^{13}\,\mathrm{M}_{\odot}\) within 800 kpc. Pearson et al. (2022) has constrained the lower limit (\(4.7\times 10^{12}\,\mathrm{M}_{\odot}\)) on the mass of Centaurus A using stellar stream models. The upper range of virial masses measured for Centaurus A is \(\sim 10^{13}\mathrm{M}_{\odot}\) (van den Bergh, 2000; Peng et al., 2004; Woodley et al., 2007; Lokas, 2008; Harris et al., 2015), which falls between the masses of the Milky Way and large clusters such as Virgo and Fornax.
To date, few theoretical studies of the Centaurus A system have been carried out. Bovill et al. (2016) used a high-resolution N-body simulation without baryons to study the Centaurus A globular clusters, and Muller et al. (2019) compared the luminosity function within 200 kpc to Centaurus A analogs in the TNG100 run of IllustrisTNG. TNG100 has a resolution of \(m_{DM}=7.5\times 10^{6}\,\mathrm{M}_{\odot}\) and \(m_{\mathrm{baryon}}=1.4\times 10^{6}\,\mathrm{M}_{\odot}\), resolving galaxies with \(M_{*}>10^{8}\,\mathrm{M}_{\odot}\) (Pillepich et al., 2018). However, this limit is only slightly below an SMC-like halo mass and is inadequate to resolve fainter dwarfs. Therefore, we need more computationally efficient techniques, such as semi-analytic models (SAMs), which provide an efficient way to explore the star formation physics of dwarf galaxies.
In this work, we run the SAM Galacticus (Benson, 2012) on merger trees generated with the Extended Press-Schechter (EPS) formalism and from \(N\)-body simulations of Centaurus A analogs. We use the same star formation physics that reproduces the properties of the observed dwarfs of the Milky Way down to ultra-faint dwarfs (Weerasooriya et al., 2023). The goal of this work is to test whether the astrophysical prescriptions and parameters calibrated on the Milky Way can reproduce the observed cumulative luminosity function and luminosity-metallicity relation of the Centaurus A satellites and, if so, to make predictions for their properties and SFHs, in an effort to investigate the effects of host environment on these dwarfs. In Section 2, we describe the details of the simulation and modeling. In Section 3, we describe the sample of observational data taken from the literature. In Section 4, we compare our models to the observed properties of the known Centaurus A dwarfs, and in Section 5 we explore their potential star formation histories. Lastly, we present our discussion and conclusions in Section 6.
## 2 Simulation
We use a high-resolution cosmological N-body simulation of an isolated Centaurus A halo from Bovill et al. (2016). This simulation is run from \(z\,=\,150\) to \(z\,=\,0\) with WMAP9 cosmology (\(\sigma_{8}\sim 0.821\), \(H_{0}\sim 70.0\,\mathrm{km\,s^{-1}\,Mpc^{-1}}\), \(\Omega_{b}\sim 0.0463\), \(\Omega_{\Lambda}\sim 0.721\)). Initial conditions were generated with MUSIC (Hahn & Abel, 2011), the simulation was run with Gadget2 (Springel, 2005), and the outputs were analyzed with AMIGA (Knollmann & Knebe, 2009) and CONSISTENT_TREES (Behroozi et al., 2013). The Centaurus A analog is selected to be a \(\sim 10^{13}\,\mathrm{M}_{\odot}\) halo with no halos of \(M\geq 10^{12}\,\mathrm{M}_{\odot}\) within \(3\,\mathrm{Mpc}\,h^{-1}\) at \(z\,=\,0\).
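The analog selection above amounts to a mass cut plus an isolation cut on a halo catalog. A minimal sketch of that filter follows; the function name and array layout are our own, not from the simulation pipeline:

```python
import numpy as np

def select_cena_analogs(mass, pos, m_range=(5e12, 2e13),
                        m_neighbor=1e12, r_iso=3.0):
    """Return indices of Centaurus A analog halos: mass inside m_range
    (M_sun) and no other halo above m_neighbor within r_iso (Mpc/h)."""
    candidates = np.where((mass > m_range[0]) & (mass < m_range[1]))[0]
    massive = np.where(mass >= m_neighbor)[0]
    analogs = []
    for i in candidates:
        d = np.linalg.norm(pos[massive] - pos[i], axis=1)
        # d == 0 is the candidate itself; every other massive halo
        # must lie farther away than the isolation radius
        if np.all(d[d > 0] > r_iso):
            analogs.append(int(i))
    return analogs
```

The same pattern extends to any additional cuts (e.g. requiring the candidate to be the most massive halo in its own isolation sphere).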
### Semi-Analytic Model (SAM)
Despite the influx of observational data, high-resolution hydrodynamic simulations of Centaurus A analogs have not yet been run to \(z=0\); they are on the edge of current computational capabilities. Note that studies such as van den Bergh (2000); Peng et al. (2004); Woodley et al. (2007); Lokas (2008); Harris et al. (2015); Pearson et al. (2022) use a variety of methods to determine the mass of Centaurus A, while in this work we define the virial mass of a halo in terms of the spherical overdensity given by Bryan & Norman (1998); their estimates are therefore not necessarily consistent with the definitions used in our models. Due to the significant uncertainties in these mass estimates of Centaurus A, we investigate a wide range of possible halo masses within the virial radius for Centaurus A using 30 EPS trees with different merger histories.
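For concreteness, the Bryan & Norman (1998) spherical-overdensity definition can be evaluated with their fitting formula for flat \(\Lambda\)CDM, \(\Delta_c = 18\pi^2 + 82x - 39x^2\) with \(x = \Omega_m(z) - 1\). A sketch, assuming \(\Omega_m = 1 - \Omega_\Lambda \approx 0.279\) from the WMAP9 parameters in Section 2:

```python
import math

def virial_overdensity(z, om0=0.279, ol0=0.721):
    """Bryan & Norman (1998) virial overdensity Delta_c (relative to the
    critical density) for a flat LCDM cosmology."""
    e2 = om0 * (1.0 + z) ** 3 + ol0       # E(z)^2 = H(z)^2 / H0^2
    x = om0 * (1.0 + z) ** 3 / e2 - 1.0   # Omega_m(z) - 1
    return 18.0 * math.pi ** 2 + 82.0 * x - 39.0 * x ** 2
```

At \(z = 0\) this gives \(\Delta_c \approx 98\), approaching \(18\pi^2 \approx 178\) at high redshift; \(M_{vir}\) is then the mass inside the sphere whose mean density is \(\Delta_c\) times the critical density.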
We model the baryonic properties of the Centaurus A system using the Semi-Analytic Model (SAM) Galacticus (Benson, 2012). We apply the same astrophysical prescriptions and parameters that reproduced the Milky Way satellites in Weerasooriya et al. (2023). These include a reionization redshift of 9, a filtering velocity (the scale below which halos are unable to accrete gas efficiently from the IGM after reionization) of 25 km s\({}^{-1}\), and a cooling velocity (the scale below which cooling becomes inefficient in the circumgalactic medium) of 19 km s\({}^{-1}\); please refer to Weerasooriya et al. (2023) for further details. Of these processes, quenching from ram pressure and tidal stripping primarily determine how dwarf satellites are affected by the host environment. For example, ram pressure can strip gas out of galaxies, cutting off the gas supply and eventually quenching star formation. We remind the reader, however, that the effect of the ram pressure stripping efficiency was negligible for the Milky Way satellites (Weerasooriya et al., 2023). Regardless, we implement ram pressure stripping with the same high efficiency as in Weerasooriya et al. (2023).
We apply the astrophysical prescriptions and parameters which reproduce the star formation physics and star formation histories of the Milky Way satellites (Weerasooriya et al., 2023) to the N-body merger trees and Extended Press Schechter (EPS) merger trees of the Centaurus A analog. We make this assumption because there are very few studies exploring the star formation histories of Centaurus A dwarfs (Cote et al., 2009; Crnojevic et al., 2011). The models run with merger trees from the N-body simulation probe only one possible mass of Centaurus A. Therefore, we run several EPS trees with different mass Centaurus A analogs spanning the full range of possible Centaurus A halo masses. While the EPS trees allow us to efficiently probe a range of \(M_{vir}\) for the Centaurus A halo, they do not provide positional information on the satellites. As such, EPS trees do not allow us to look at the dependence of satellite populations on their distance from the host.
## 3 Observational Sample
Observations of the Centaurus A dwarfs are inhomogeneous; in this section we therefore describe the sample of Centaurus A dwarfs taken from the literature that we compare to our models. We use observational data for Centaurus A satellites from a variety of sources, including Crnojevic et al. (2010, 2014); Crnojevic et al. (2016); Crnojevic et al. (2019); Karachentsev et al. (2013); Muller et al. (2015, 2017, 2019); Taylor et al. (2016, 2018). In Table 1, we present a compilation of the observations used in this study along with their references. In addition to the dwarfs in the above sample, we also include 38 new dwarf galaxy candidates from Taylor et al. (2023, in prep.) in our overall analysis, but do not list their properties in Table 1. Table 1 only includes galaxies with distance or velocity measurements that verify them as members of Centaurus A, with distances \(\leq 5.8\) Mpc.
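The membership cut at distances \(\leq 5.8\) Mpc is typically applied after converting a measured distance modulus to a physical distance, \(d = 10^{(m-M-25)/5}\) Mpc. A small helper illustrating the cut (the threshold is the one quoted above; function names are our own):

```python
def distance_mpc(distance_modulus):
    """Convert a distance modulus m - M to a distance in Mpc."""
    return 10.0 ** ((distance_modulus - 25.0) / 5.0)

def is_member(distance_modulus, d_max=5.8):
    """Membership cut used above: distance no larger than d_max Mpc."""
    return distance_mpc(distance_modulus) <= d_max
```

For example, Centaurus A itself at 3.8 Mpc corresponds to a distance modulus of about 27.9.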
The current sample of observations of Centaurus A is incomplete for three major reasons:

1. Lack of systematic sky coverage, which leads to spatial incompleteness. For example, the PISCeS survey is spatially incomplete and biased toward coverage of the northeastern region of Centaurus A's halo. These non-uniformities in design or analysis make completeness calculations more complex.
2. The detectability limits of the different surveys. For example, SCABS is limited to \(M_{V}<-7.2\) within 150 kpc, covering an area of 50 square degrees with a surface brightness limit of 27.8 mag arcsec\({}^{-2}\) in the g band (Leahy et al., private communication), while PISCeS can detect dwarfs down to \(M_{V}<-8\) within 150 kpc with a surface brightness limit of 26.5 mag arcsec\({}^{-2}\) in the g band (Crnojevic et al., 2014, 2019).
3. The lack of distance measurements in the outskirts, which hinders the determination of membership (Muller et al., 2017; Taylor et al., 2018). Distances are essential to determine the membership of satellites, their shapes, and their brightnesses, so larger distance uncertainties propagate into larger errors in the measured sizes and magnitudes.

Note that while the virial radius of Centaurus A is \(\sim 409\) kpc, most of these satellites lie beyond that limit, yet within Centaurus A's 'splashback radius' of 1.1 Mpc. The splashback radius is a physically motivated halo boundary that eliminates spurious evolution of radius and mass caused by standard definitions of the virial radius (Diemer & Kravtsov, 2014).
The observational properties of Centaurus A satellites are still largely unknown, so our knowledge of these galaxies is nowhere near as complete as our knowledge of the Milky Way system. Quantitative estimates of the incompleteness of the Centaurus A dwarf sample have not been made by previous studies, and such an exploration is beyond the scope of this work. A preliminary quantitative exploration of the completeness limits of the SCABS Centaurus A satellites is underway by Leahy et al. (private communication). Their exploration of completeness using 5000 Monte Carlo dwarf galaxy realizations reveals a completeness of 96% for dwarf galaxies brighter than \(\lesssim 18\) mag in the g band. They report 50% completeness at a g-band magnitude of 20.01 and at a surface brightness of \(\sim 27.8\) mag arcsec\({}^{-2}\). In Figure 1, we compare our models within 150 kpc to the total number of galaxies expected in the region based on these preliminary artificial galaxy experiments. Given the unconfirmed nature of the Taylor et al. 2023 dwarf candidates, this estimate should be considered an upper limit and the results interpreted in that context. Based on this comparison, our galaxy models agree with the observed luminosity function, although the observed luminosity function is slightly steeper.
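An injection-recovery completeness test of the kind described above can be sketched as follows. The logistic detection curve, its width, and the sample size are illustrative assumptions; only the 50% point at \(g = 20.01\) is taken from the text:

```python
import numpy as np

rng = np.random.default_rng(0)

def detection_probability(g, g50=20.01, width=0.5):
    """Hypothetical logistic detection curve with 50% completeness at g50."""
    return 1.0 / (1.0 + np.exp((g - g50) / width))

# Inject artificial dwarfs over a magnitude range, "detect" each with the
# probability above, and measure the recovered fraction per magnitude bin.
g_inj = rng.uniform(14.0, 24.0, 5000)
detected = rng.random(5000) < detection_probability(g_inj)
bins = np.arange(14.0, 24.5, 0.5)
n_det, _ = np.histogram(g_inj[detected], bins=bins)
n_inj, _ = np.histogram(g_inj, bins=bins)
completeness = n_det / np.maximum(n_inj, 1)
```

A real test would inject model dwarfs with realistic profiles into the survey images and re-run the detection pipeline; the bookkeeping, however, has this shape.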
## 4 Properties of Centaurus A Dwarfs
In this section, we compare different properties of the modeled Centaurus A dwarfs with observational data described in Section 3. We start by exploring the properties of the Centaurus A dwarfs including luminosities, half-light radii, and velocity dispersions.
### Luminosity Function
In Figure 2, we plot the cumulative luminosity function of Centaurus A satellites for observations within \(r_{vir}=600\) kpc of the projected radius and for dwarfs modeled with Galacticus out to 600 kpc in the y-z plane. For the purpose of comparison of Centaurus A dwarfs, we analyze satellites viewed in projection, as they would be observed. We remind the reader that this selection can be made only when using N-body trees since positional information is available. Since we do not know the viewing angle of our modeled Centaurus A, we rotate the line of sight relative to the \(\hat{x}\) direction of the simulation in 5-degree steps (note that the step size used here is arbitrary). This accounts for uncertainty in the luminosity functions. The median luminosity function over all potential viewing angles is shown in green, while the maximum and minimum is shown by the shaded envelope.
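The viewing-angle procedure described above (rotate the line of sight in 5-degree steps, select satellites by projected radius, and take the median and envelope of the cumulative luminosity functions) can be sketched as follows; the array layout and magnitude grid are our own choices:

```python
import numpy as np

def projected_lf_envelope(pos, mv, r_proj=600.0, step_deg=5.0):
    """Cumulative luminosity functions of satellites within a projected
    radius r_proj (kpc), for lines of sight rotated in step_deg steps.
    pos: (N, 3) positions relative to the host in kpc; mv: (N,) magnitudes."""
    grid = np.arange(-20.0, -3.9, 0.5)          # M_V grid, bright to faint
    curves = []
    for ang in np.deg2rad(np.arange(0.0, 360.0, step_deg)):
        los = np.array([np.cos(ang), np.sin(ang), 0.0])  # line of sight
        # perpendicular distance of each satellite from the sight line
        d_proj = np.linalg.norm(pos - np.outer(pos @ los, los), axis=1)
        sel = d_proj < r_proj
        curves.append([(mv[sel] <= m).sum() for m in grid])
    curves = np.array(curves)
    return grid, np.median(curves, axis=0), curves.min(axis=0), curves.max(axis=0)
```

The median curve corresponds to the solid green line in Figure 2 and the min/max curves to the shaded envelope.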
We also run models using merger trees generated by Extended Press Schechter (EPS) to explain the mass
Figure 1: The number of galaxies per apparent magnitude in the \(g_{DES}\) filter. The sea green curve shows the luminosity function of our modeled dwarfs within 150 kpc, the olive green curve shows the number of satellite galaxies expected to be observed based on completeness tests run by Leahy et al. (private communication) for SCABS, and the purple curve shows the observed luminosity function.
Figure 2: Luminosity functions for the Centaurus A satellites within \(r_{vir}=600\) kpc for models and observations. Purple line shows the observational sample within 600 kpc projected radius. The predicted luminosity functions for Centaurus A satellites within 600 kpc and \(M_{V}\leq-4\) in the y-z plane are shown in green (N-body). Lines shown in shades of blue represent EPS trees with different Centaurus A masses. Each blue line shows the median per \(M_{V}\) bin for 30 different EPS trees, and a shaded region of minimum and maximum.
range of Centaurus A's halo. We build the EPS merger trees using the method of Cole et al. (2000) with an accretion limit of 0.1, a merge probability of 0.1, a mass resolution of \(1.41\times 10^{7}\,\mathrm{M}_{\odot}\), and recalibrated merger rates from Parkinson et al. (2008). We generate the root halo masses for the EPS trees from the halo mass function of Tinker et al. (2008), as implemented in Galacticus; these halo mass distributions are then used to simulate galaxies. EPS models for different mass merger trees are shown in shades of blue and inherently carry no positional information. We run EPS models for \(5-9\times 10^{12}\,\mathrm{M}_{\odot}\) and \(1\times 10^{13}\,\mathrm{M}_{\odot}\) halos, with 30 merger trees for each halo mass. The solid curves show the median per \(M_{V}\) bin, with shaded areas indicating the minimum and maximum over the 30 EPS trees. All of these models follow the general shape of the luminosity function for observations within 600 kpc, although there is some variation in the shape of the luminosity functions between N-body and EPS merger trees.
| Name | RA (h) | RA (min) | RA (s) | Dec (deg) | Dec (arcmin) | Dec (arcsec) | \(M_{B}\) (mag) | \(M_{V}\) (mag) | Distance (Mpc) | References |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| NGC4945 | 13 | 05 | 26.1 | -49 | 28 | 16.0 | -20.34 | -20.6 | 3.47 | 1,7 |
| NGC5102 | 13 | 21 | 57.8 | -36 | 37 | 47.0 | -18.24 | -20.37 | 3.66 | 1,7 |
| E274-01 | 15 | 14 | 13.5 | -46 | 48 | 45.0 | -17.35 | -19.2 | 3.09 | 2,7 |
| NGC5253 | 13 | 37 | 5.0 | -31 | 23 | 30.0 | -17.33 | -17.7 | 3.9 | 1,7 |
| E383-087 | 13 | 49 | 17.5 | -36 | 03 | 48.4 | -16.83 | -17.17 | 3.19 | 1,7 |
| NGC5206 | 13 | 30 | 41.0 | -47 | 53 | 42.0 | -16.43 | -16.97 | 3.6 | 1,7 |
| NGC5408 | 14 | 00 | 18.0 | -41 | 08 | 11.0 | -15.91 | -17.3 | 4.81 | 2,7 |
| E324-24 | 13 | 27 | 37.4 | -41 | 28 | 50.0 | -15.49 | -15.6 | 3.78 | 2,7 |
| E26958 | 13 | 07 | 38.0 | -46 | 43 | 30.0 | -14.99 | -16.8 | 4.78 | 6,7 |
| NGC5237 | 13 | 37 | 38.9 | -42 | 50 | 51.0 | -14.82 | -15.08 | 3.33 | 3,5,7 |

Table 1: **References:** (1) Lauberts & Valentijn (1989), (2) de Vaucouleurs et al. (1991), (3) Karachentsev et al. (2003), (4) James et al. (2004), (5) Doyle et al. (2005), (6) Sharina et al. (2008), (7) Karachentsev et al. (2013), (8) Müller et al. (2015), (9) Müller et al. (2017), (10) Crnojevic et al. (2014); Crnojevic et al. (2016); Crnojevic et al. (2019), (11) Taylor et al. (2018). Table 1 is published in machine-readable format; only a portion is shown here to demonstrate its form and content.
Figure 3: Left panel: Number of luminous satellites around Centaurus A brighter than \(M_{V}\leq-6\) predicted within 150 kpc for different lines of sight as the model is rotated in the y-z plane (shown in green). The purple line shows the total number of satellites in the observed sample. Right panel: luminosity functions for the Centaurus A satellites within 150 kpc in the y-z plane, shown in green. We plot the median of the cumulative luminosity function over all potential viewing angles in green and shade the region between the minimum and maximum luminosity functions.
While we cannot constrain the mass of Centaurus A from the models due to poorly understood completeness in the observations for fainter dwarfs, comparison with models at the bright end of the luminosity function suggests that Centaurus A is likely to have a higher mass (\(10^{13}\,\mathrm{M_{\odot}}\)). We find our models slightly overproduce the number of dwarfs at all luminosities, but the observed luminosity function within 600 kpc is still likely to be incomplete. However, our modeled dwarfs reproduce the overall shape of the observed cumulative luminosity function for N-body merger trees and EPS trees traced at all modeled host masses. Note that any such inference on the mass of Centaurus A halo from this approach is subject to the caveat that Galacticus predictions could be inaccurate.
Poorly understood completeness limits of the observations within 600 kpc of Centaurus A make exploration of these satellites and their properties limited. However, the observational sample is relatively complete within 200 kpc (Muller et al., 2019) down to \(M_{V}=-10\). SCABS and PISCeS surveys cover a spatial region within a projected radius of 150 kpc down to dwarfs as faint as \(M_{V}<-7.2\)(Crnojevic et al., 2014; Crnojevic et al., 2016; Taylor et al., 2016) and we can assume completeness within 200 kpc down to \(M_{V}\sim-10.0\) (see section 3 for a detailed completeness discussion). We plot the luminosity function within 150 kpc and match the number of luminous satellites within 150 kpc in Figure 3. The left panel shows the number of luminous satellites in our model in green and observed number of satellites in purple, and the right panel shows the luminosity function within 150 kpc for both our \(N\)-body model (green) and observations (purple). Our current model predicts a median that is lower than the observed number of dwarf galaxies located within 150 kpc of Centaurus A, but the observed number is well within the distribution predicted by our model (refer to the left panel in Figure 3). The observed luminosity function demonstrates a slightly steeper slope within the 150 kpc radius. Nevertheless, our dwarf galaxy models are consistent with the lower boundary of the luminosity function envelope (see Figure 3). This is consistent with other studies of the inner 200 kpc of the Centaurus A halo (Muller et al., 2019).
### Half Light Radii
In Figure 4, we show the luminosity and half-light radii for the observed Centaurus A dwarfs and for dwarfs modeled with 30 EPS merger trees with \(\sim 1\times 10^{13}\,\mathrm{M_{\odot}}\) halos.
Figure 4: We show the modeled half light radii vs. the absolute \(g_{SDSS}\) band magnitude of the Centaurus A satellites in blue, observed half-light radii of Centaurus A dwarfs from Taylor et al. in prep (g’ band), and Crnojevic et al. (2016); Crnojevic et al. (2019) (V band). Note that not all values here are in V band. Peak wavelengths in \(g_{SDSS}\) and V bands are \(\sim 555\,\mathrm{nm}\) and \(\sim 551\,\mathrm{nm}\) respectively. Given the small differences in peak wavelengths in g’ and V bands, we plot the data available in respective bands. The observations of McConnachie (2012) for the Milky Way (V band), and M31 (V band) are also shown in blue stars and grey circles. Iso-surface brightness lines are shown in grey dashed lines. These results are consistent with models we predicted for the Milky Way satellites (Weerasooriya et al., 2023).
Figure 5: Luminosity–metallicity relation for the Centaurus A satellites. The observed values are shown by squares (Crnojević et al., 2010, 2019; Müller et al., 2019, 2021). Metallicities of the Milky Way satellites are shown as grey triangles (McConnachie, 2012). Predicted abundances for satellites for 30 EPS merger trees of \(10^{13}\,\mathrm{M_{\odot}}\) are shown in blue. Each hexagonal bin may contain multiple satellites.
We compare our results with available Centaurus A dwarfs (Crnojevic et al., 2016; Crnojevic et al., 2019), Milky Way dwarfs, and M31 dwarfs (McConnachie, 2012). Note, the majority of observations beyond the Local Group do not go below 0.01 \(L_{\odot}\,pc^{-2}\). Galaxy sizes are computed by finding the radius at which the galaxy is rotationally supported against the combined gravitational potential of itself and the dark matter halo, given the computed angular momentum content of the galaxy. The half-light radii of the modeled Centaurus A satellite galaxies are computed in the \(g_{SDSS}\) band. Galacticus determines the half-light radii using the dark matter profile of halos. Dark matter profile is determined from a NFW profile. The scale radii for trees based on simulations are set from the N-body simulations while for trees based on EPS trees, the concentrations are calculated using the model by (Gao et al., 2008). We use SDSS filters since SCABS data are calibrated to the SDSS photometric system (Taylor et al., 2016).
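The iso-surface-brightness lines in Figure 4 follow from the standard relation between absolute magnitude, half-light radius, and mean surface brightness within the half-light radius. A sketch; the constant folds in the 206265 arcsec/rad conversion and the factor of two in flux:

```python
import math

def mean_surface_brightness(m_abs, r_half_pc):
    """Mean surface brightness (mag arcsec^-2) inside the half-light radius
    for absolute magnitude m_abs and half-light radius in parsecs:
    <mu> = M + 5 log10(r_half) + 2.5 log10(2*pi) + 21.572."""
    return (m_abs + 5.0 * math.log10(r_half_pc)
            + 2.5 * math.log10(2.0 * math.pi) + 21.572)
```

For example, \(M_{V}=-10\) with \(r_{h}=100\) pc gives \(\langle\mu\rangle\approx 23.6\) mag arcsec\({}^{-2}\); lines of constant \(\langle\mu\rangle\) are straight in the magnitude versus \(\log r_{h}\) plane of Figure 4.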
At high luminosities, the modeled sizes of the dwarfs agree with observations of the Centaurus A dwarfs. However, the modeled sizes of the fainter dwarfs are larger than the observed values. This is consistent with the systematically larger sizes of the fainter modeled dwarfs compared to observations of the Milky Way satellites (Weerasooriya et al., 2023). Galacticus tends to over-predict the half-light radii for merger trees generated from both EPS and N-body simulations. EPS trees are unreliable for halos with \(M<10^{10}\,\mathrm{M}_{\odot}\) due to dynamic range limitations of EPS (Somerville and Kolatt, 1999; Zhang et al., 2008). This may lead to inaccuracies in halo masses and/or formation times, which may subsequently affect the sizes of the galaxies they contain. N-body trees overestimate half-light radii if a halo has \(N<1000\) particles (Weerasooriya et al., 2023). A study of how resolution affects the sizes of the modeled dwarfs will be the subject of future work.
### Metallicity
The currently available metallicities of the Centaurus A dwarfs are limited. In Figure 5 we show the observations of [Fe/H] for Centaurus A satellites from Crnojevic et al. (2010, 2014, 2019) and Muller et al. (2019, 2021). Notice that Muller et al. (2019) find a metallicity floor of [Fe/H] \(\approx-2.25\) dex between \(M_{V}\sim-8\) and \(-10\); however, their measurement errors are \(\sim 0.5\) dex, consistent with our modeled values. While the spectroscopic metallicity measurements of Muller et al. (2021) agree well with Milky Way dwarfs, their photometric measurements are reported to have a larger scatter, which the authors attribute to the age-metallicity degeneracy and to incorrect assumptions of uniformly old (\(\sim 10\) Gyr) stellar populations. Using the same astrophysical prescriptions and parameters as those that reproduced the luminosity-metallicity relation of the Milky Way dwarfs, we present the modeled metallicities of the Centaurus A dwarfs in 30 EPS trees of \(1\times 10^{13}\,\mathrm{M}_{\odot}\). The metallicities of the modeled dwarfs agree well with the currently available observations of Centaurus A satellites (Crnojevic et al., 2010, 2019; Muller et al., 2019). This could mean that the Centaurus A satellites have an enrichment history similar to that of the Milky Way's satellites and/or that dwarf metallicities are independent of their local environment. In Weerasooriya et al. (2023), we showed that ram pressure does not significantly affect the luminosity-metallicity relation of the Milky Way satellites; therefore, we do not expect a significant change in the luminosity-metallicity relation as a function of host halo mass.
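As a point of comparison for Figure 5, the Milky Way points roughly follow a linear luminosity-metallicity relation. A sketch using the Kirby et al. (2013) Local Group coefficients, which we quote here for illustration only (they are not fit in this work):

```python
import math

def feh_from_mv(m_v, a=-1.68, b=0.29):
    """Mean [Fe/H] from a linear luminosity-metallicity relation,
    <[Fe/H]> = a + b * log10(L_V / 1e6 L_sun); coefficients follow the
    Kirby et al. (2013) Local Group fit and are illustrative here."""
    log_l = 0.4 * (4.83 - m_v)    # log10 L_V / L_sun, with M_V(Sun) = 4.83
    return a + b * (log_l - 6.0)
```

Evaluating this relation on the model \(M_{V}\) values gives the reference line against which the \(\sim 0.5\) dex observational scatter can be judged.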
### Velocity Dispersion
The velocity dispersion of satellite galaxies probes the dark matter mass of the host (Wake et al., 2012; Bogdan and Goulding, 2015; Schechter, 2015), and consequently its gravitational potential. However, velocity dispersions of Centaurus A satellites are unknown with the exception of KK197. As such, additional data is required to comment on the dwarfs population as a whole. In Figure
Figure 6: Predicted \(\sigma_{sat}\) of Centaurus A satellites (blue hexagons). These \(\sigma_{sat}\) are calculated at radii enclosing half of the stellar mass of each satellite galaxy. Grey triangles show the observed velocity dispersion of the Milky Way satellites (McConnachie, 2012). Note that the predicted values of Centaurus A satellites are consistent with the velocity dispersion of the observed Milky Way satellites.
6, we show the velocity dispersion of KK197 (Muller et al., 2021), and those of the Milky Way satellites (McConnachie, 2012). We calculate the velocity dispersion of the modeled dwarfs (\(\sigma_{sat}\)) at stellar half-mass radii for the modeled dwarfs using the N-body tree. While no theoretical studies have looked at the velocity dispersions of the Centaurus A dwarfs, most of our modeled sample falls within the observed velocity dispersions of the Milky Way satellites (McConnachie, 2012). Given that velocity dispersion probes dark matter halo mass and the evolution of these galaxies, this suggests that Centaurus A dwarfs occupy similar halos, at a given stellar mass, as their Milky Way counterparts.
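Since \(\sigma_{sat}\) is evaluated at the radius enclosing half the stellar mass, it maps onto a dynamical mass through the Wolf et al. (2010) estimator \(M_{1/2}=3\sigma^{2}r_{1/2}/G\). The sketch below inverts that relation; this estimator is quoted for illustration, not as the calculation performed by Galacticus:

```python
import math

G = 4.301e-3  # gravitational constant in pc (km/s)^2 / M_sun

def sigma_from_mhalf(m_half, r_half_pc):
    """Line-of-sight velocity dispersion (km/s) implied by the Wolf et al.
    (2010) estimator M_1/2 = 3 sigma^2 r_1/2 / G, inverted for sigma.
    r_half_pc is the 3D (deprojected) half-light radius in parsecs."""
    return math.sqrt(G * m_half / (3.0 * r_half_pc))
```

For \(M_{1/2}=3\times 10^{7}\,\mathrm{M}_{\odot}\) inside \(r_{1/2}=300\) pc this gives \(\sigma\approx 12\) km s\({}^{-1}\), typical of the dwarfs in Figure 6, which is why similar dispersions at fixed stellar mass imply similar host halos.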
## 5 Star Formation Histories
Next we explore the modeled star formation histories (SFHs) of dwarfs. We remind the reader that due to the dearth of SFHs for the known Centaurus A satellites, we have used the star formation physics from Weerasooriya et al. (2023). Given the lack of observations of \(z=0\) properties, SFHs of the Centaurus A dwarfs are even harder to come by. Thus, we make the assumption that astrophysical prescriptions and parameters of the Milky Way satellites could be applied to Centaurus A as well. Given our success reproducing the star formation history of the Milky Way satellites in Weerasooriya et al. (2023), overall shape of the luminosity function, and properties at \(z=0\) of Centaurus A satellites, we consider it worthwhile to examine predicted SFHs for the Centaurus A dwarfs.
Unlike for the Milky Way satellites, extensive observational studies of the star formation histories of Centaurus A satellites are not available, with the exception of KK197, ESO-269066, and ESO-381018 (Makarova et al., 2007) and five dwarf irregulars: KK182 (Cen6), ESO269-58, KK196 (AM1318-444), HIPASS J1348-37, and ESO384-16 (Crnojevic et al., 2012). KK197 and ESO-269066 are dwarf spheroidals, while ESO-381018 is a dwarf irregular. Dwarf spheroidals typically have old stellar populations whose light is dominated by their red giant branches, while dwarf irregulars are metal-poor and have varying levels of current star formation (Crnojevic et al., 2012). KK197 and ESO-269066 show unusual RGB color scatter, which indicates active star formation with high metallicity, while ESO-381018 is a typical dwarf irregular (Makarova et al., 2007). Two of the dwarf irregulars studied (KK196 and ESO269-58) are within 600 kpc (see Figure 6 of Crnojevic et al. (2012)). Positioned in the middle of Centaurus A's southern radio lobe, KK196 has a star formation rate of \(0.0046\pm 0.0004\,{\rm M}_{\odot}\,yr^{-1}\) and formed more than \(60\%_{-30\%}^{+20\%}\) of its stars more than 5 Gyrs ago (Crnojevic et al., 2012). Meanwhile, ESO269-58, located \(300\pm 50\) kpc from Centaurus A, has a few blue-loop and red supergiant stars, a very broad red giant branch, and a dense asymptotic giant branch zone. This dwarf has a higher star formation rate than KK196, \(0.07\pm 0.04\,{\rm M}_{\odot}\,yr^{-1}\), and formed \(50\%_{-15\%}^{+15\%}\) of its stars more than 5 Gyrs ago. While its star formation activity was enhanced between 3 and 5 Gyrs ago, Crnojevic et al. (2012) also find that its star formation rate has decreased in the last 1 Gyr.
In Figure 7, we show the modeled cumulative star formation histories of the Centaurus A satellites as a function of look-back time colored by their absolute V band magnitude at \(z=0\) for our N-body model. As expected, our Centaurus A modeled SFHs are similar to that of the Milky Way satellites in Weerasooriya et al. (2023). Their corresponding distributions for time taken to gain 90% of the stellar mass observed at z=0 (\(\tau_{90}\)) are given in Figure 8. In the upper left panel, we show dwarfs with \(-8<M_{V}\leq-4\). These are the faintest galaxies. Most ultra-faint dwarfs quenched 8-12 Gyrs ago as expected since the faintest observed Milky Way satellites are the fossils of the first galaxies (Bovill and Ricotti, 2011; Brown et al., 2012). The upper right panel shows the satellites in the range \(-10<M_{V}\leq-8\); the majority of these satellites reached 90% of their present stellar mass 12 Gyrs ago. Most dwarfs in the upper right (\(-10<M_{V}\leq-8\)) and lower left panel (\(-14<M_{V}\leq-10\)) reached 90% of their present-day stellar mass \(\sim 11.2\) Gyrs ago. Known SFHs from observations of Centaurus A dwarfs discussed earlier would fall in the lower left panel of Figure 7. However, our model overpredicts the quenching times in comparison to the two known quenching times. Most of the brightest galaxies in the lower right panel (\(-18<M_{V}\leq-14\)) acquired 90% of their stellar mass 8.8 Gyrs ago.
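The quantity \(\tau_{90}\) used in Figure 8 can be computed from a cumulative SFH by interpolating for the lookback time at which 90% of the \(z=0\) stellar mass is in place. A sketch; the array conventions are our own:

```python
import numpy as np

def tau_90(t_lookback, cum_sfh):
    """Lookback time (Gyr) at which the cumulative SFH first reaches 90%
    of the z = 0 stellar mass.  t_lookback decreases toward the present,
    cum_sfh rises monotonically, and the SFH must start below the 90% level."""
    frac = cum_sfh / cum_sfh[-1]
    i = np.searchsorted(frac, 0.9)          # first bin with frac >= 0.9
    # linear interpolation between the bracketing time steps
    w = (0.9 - frac[i - 1]) / (frac[i] - frac[i - 1])
    return t_lookback[i - 1] + w * (t_lookback[i] - t_lookback[i - 1])
```

Applying this to each curve in Figure 7 yields the \(\tau_{90}\) distributions of Figure 8.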
## 6 Summary & Conclusions
We have presented models of dwarf galaxies around Centaurus A analogs using the SAM Galacticus with astrophysical prescriptions and parameters calibrated to match the observed properties of Milky Way dwarf galaxies. In this study, we apply the best-fit parameter space chosen to reproduce the properties of the Milky Way satellites (Weerasooriya et al., 2023) to several N-body and EPS realizations of Centaurus A analogs in order to study the predicted satellite population. Our models are able to reproduce the overall properties of the dwarf population reasonably well.
We have explored properties such as the cumulative luminosity function, luminosity-metallicity, and half-light radii with respect to observations of the Centaurus A system. We also predict their velocity dispersions,
star formation histories, etc. We summarize our results as follows,
* The same astrophysical prescriptions and parameters that allow Galacticus to reproduce the Milky Way satellites also reproduce the global properties of the Centaurus A satellites. While being cognizant of the lack of observations in the Centaurus A system, we conclude that the overall properties and trends of the Centaurus A satellite population are
Figure 7: Cumulative star formation histories of the Centaurus A satellites (within 600 kpc) as a function of look back time colored by their luminosity in the DES \(g\)-band. The star formation histories are divided into four panels \(-8<M_{V}\leq-4\), \(-10<M_{V}\leq-8\), \(-14<M_{V}\leq-10\), and \(-18<M_{V}\leq-14\). Each line is colored by the galaxy’s luminosity in DES g band magnitude at \(z=0\) (faintest in blue and brightest in yellow). The majority of the faintest dwarfs quench very early on. Most luminous galaxies are quenched later (6 Gyrs ago or later). Note that the galaxies on the upper left panel have not yet been observed. However, we predict their SFHs, given observations will reach those magnitudes in the era of the Roman.
similar to the dwarf galaxies around the Milky Way.
* Our N-body models within 600 kpc and EPS trees of Centaurus A reproduce luminosity functions that follow the overall shape of observations.
* Centaurus A satellites follow a similar metal enrichment/stellar feedback to the Milky Way satellites. That is we obtain similar luminosity-metallicity relations for both the Milky Way and Centaurus A as expected. This suggests that the physics that governs the metal enrichment of dwarf galaxies is largely independent of the environment.
* Assuming identical star formation physics of the Milky Way satellites for the Centaurus A satellites, SFHs and quenching times of the Centaurus A satellites also follow a similar trend to the Milky Way satellites.
_Software:_ Galacticus (Benson, 2011), ROCKSTAR(Behroozi et al., 2013a), AMIGA(Knollmann & Knebe, 2009), CONSISTENT_TREES(Behroozi et al., 2013b), JUPYTER(Kluyver et al., 2016), NUMPY(Harris et al., 2020), SCIPY(Virtanen et al., 2020), and MATPLOTLIB(Hunter, 2007).
|
2303.07226 | Scaling Vision-Language Models with Sparse Mixture of Experts | Sheng Shen, Zhewei Yao, Chunyuan Li, Trevor Darrell, Kurt Keutzer, Yuxiong He | 2023-03-13T16:00:31Z | http://arxiv.org/abs/2303.07226v1 | # Scaling Vision-Language Models with Sparse Mixture of Experts
###### Abstract
The field of natural language processing (NLP) has made significant strides in recent years, particularly in the development of large-scale vision-language models (VLMs). These models aim to bridge the gap between text and visual information, enabling a more comprehensive understanding of multimedia data. However, as these models become larger and more complex, they also become more challenging to train and deploy. One approach to addressing this challenge is the use of sparsely-gated mixture-of-experts (MoE) techniques, which divide the model into smaller, specialized sub-models that can jointly solve a task. In this paper, we explore the effectiveness of MoE in scaling vision-language models, demonstrating its potential to achieve state-of-the-art performance on a range of benchmarks over dense models of equivalent computational cost. Our research offers valuable insights into stabilizing the training of MoE models, understanding the impact of MoE on model interpretability, and balancing the trade-offs between compute performance when scaling VLMs. We hope our work will inspire further research into the use of MoE for scaling large-scale vision-language models and other multimodal machine learning applications.
Footnote †: \(\ast\) equal contribution; § work initiated during an internship at Microsoft.
## 1 Introduction
The ability to understand and generate natural language from visual information is a critical component of many real-world applications, including visual question answering (VQA), visual reasoning, and multimodal information retrieval. In recent years, the success of deep learning in natural language processing (NLP) has led to the development of large-scale vision-language models (VLMs) [63, 7, 39, 16, 25, 1, 67, 59, 37, 58, 23, 36, 69] that leverage powerful neural network architectures to encode and decode multimodal information. However, state-of-the-art vision-language models like Flamingo-80B [1], BEiT-3-1.9B [66], and PaLI-17B [6] can be computationally expensive and difficult to train, which has motivated researchers to explore ways of improving their efficiency and effectiveness.
Recently, sparsely activated _Mixture of Experts (MoE)_ models have been successfully employed to scale both vision [53, 44, 47] and text models [57, 33, 74, 13]. These models are motivated by the need to increase model parameters while controlling compute costs. In addition, these models provide other advantages, including sparsity that can mitigate catastrophic forgetting in continual learning [9, 29], and an inductive bias that can enhance performance in multitask learning [46, 32, 26]. Overall, the use of MoEs has proven to be a promising strategy for scaling deep learning models across various domains.
Building on the success of MoEs in individual domains and applying the intuition that sparse models may better handle different tasks than dense counterparts, we investigate the potential of MoEs for vision-language modeling. To this end, we take the first step in this direction and explore models that can process both images and text for vision-language tasks. One similar effort has been studied in LIMoE [47], where the authors proposed a modal-agnostic CLIP-style [51] multimodal MoE architecture, but their focus is mainly on the contrastive pre-training objective and vision-only downstream tasks. There are two limitations in this setting: (1) The increasing model capacity of MoEs under the simple contrastive objective can easily lead to over-fitting issues. (2) The vision-only benchmarking does not reveal the full power of scaling up multimodal models. Alternatively, our goal is to demonstrate the effectiveness of MoEs under generative modeling for vision-language tasks and provide a more comprehensive foundation for future research in this area.
Specifically, we propose a novel VLM architecture that employs MoE to scale both the text-based and vision-based feed-forward networks (T-FFN and V-FFN, respectively) in a unified framework. Our approach divides the model into multiple sub-models, each of which is responsible for processing a modal-specific subset of the input data. The text and vision input representations are then aligned via three masked data modeling objectives [66].
We train a range of VL-MoE models and evaluate them on vision-language classification, vision-language retrieval, vision-only, and language-only tasks. Our experiments demonstrate that MoE can significantly improve the efficiency and effectiveness of VLMs, enabling them to handle large-scale, real-world multimedia data. We scale the baseline model up to a 2B-parameter VL-MoE\({}_{\textsc{BASE/32E}}\), which applies only 180M parameters per token and achieves competitive performance with dense models that use similar or more pre-training image-text pair data and apply 3-4\(\times\) more parameters per token.
In summary, our contributions are as follows:
* We propose VL-MoE, the first large-scale generative MoE multimodal model for vision/language-only as well as vision-and-language tasks.
* We explore various scaling strategies, including increasing dense model size, increasing expert numbers, and scaling either T-FFN or V-FFN alone, to investigate the trade-offs between model complexity and performance on various downstream tasks.
* We present ablations to understand the VL-MoE model's behavior, interpretability, and our design choices.
## 2 Related Work
**Vision-Language Modeling.** Vision-language pretraining [63, 45, 61, 71, 51, 40, 25, 37, 67, 3, 65, 1, 69, 66, 36, 6, 51, 23, 59, 58, 70, 60, 43] involves developing model architectures and pretraining objectives to learn effective multimodal representations from large-scale image-text pairs.
For model architecture, there are two main designs. The first design, utilized by models such as [51, 23, 70], separately encodes each modality with different encoders. While this approach performs well for image-text retrieval tasks, it struggles with complex vision-language tasks like visual reasoning. The second design, employed by models like [63, 37, 45, 38, 25, 6, 1], uses a complex fusion module with cross-modal attention to combine modalities and learn powerful multimodal representations. However, this design sacrifices efficiency for improved performance. Recently, a new design has emerged with the MoME Transformer used in both VLMo and BEiT-3. This design unifies the dual-encoder and fusion-encoder models by introducing a mixture-of-modality-experts technique. With MoME, various modalities are encoded within a shared Transformer block, allowing for improved scalability and achieving state-of-the-art performance on vision-language tasks. There is increasing interest in growing VL model capacity within an affordable compute budget, including MoE [47] and the injection of new trainable modules into pre-trained models [1, 58, 43, 41, 35, 28]; the former remains less studied.
For pretraining objectives, multiple cross-modal pretraining objectives have been studied. They can be categorized into two classes: (1) _Discriminative modeling_, including image-text contrastive learning [51, 23], image-text matching [63, 25, 37, 3] and word-patch/region alignment [7, 25]; (2) _Generative modeling_, including masked language modeling [63, 61, 25] or prefix language modeling [67], masked region modeling [63], multimodal prefix language modeling [67]. Recently, BEiT-3 shows strong scaling results by unifying the generative multimodal pretraining objective with masked data modeling, which comprises masked image modeling and masked language modeling on the monomodal encoders and masked multimodal modeling on the multi
Figure 1: The encoding process of VL-MoE for various modality inputs, for which gray and colored blocks indicate non-activated and activated modules, respectively. (a) For image input only, the encoding process switches to V-MoE or V-FFN. (b) For text input only, the encoding process switches to T-MoE or T-FFN. (c) For image-text pair input, the encoding process switches to V-MoE & T-MoE and VL-FFN. (d) For the early layers, we scale the V-FFN and T-FFN with sparse Mixture-of-Experts as V-MoE and T-MoE, respectively. VL-MoE utilizes conditional computation to allocate tokens in a modality-specific fashion. V/T-MoE converts multiple V/T-FFNs into experts, where the image/text input is conditionally routed by the V/T-Router network.
modal encoder.
In this paper, we perform an MoE study by adopting the MoME Transformer as the backbone dense network and generative (masked data) modeling as the pretraining objective, given its simplicity and scaling ability.
**Sparse Mixture of Experts models.** We build upon the concept of deep sparse MoEs, which have been studied independently in both Computer Vision [53, 44, 47] and Natural Language Processing [53, 44, 47, 57, 33, 15, 13, 74, 8, 72, 29, 32] in the context of conditional computation. The goal of conditional computation is to increase the number of model parameters without a proportional increase in computational cost, which is achieved by selectively activating only relevant parts of the model based on input-dependent factors [4, 5, 10]. MoE models use a learned gating mechanism that activates only a subset of \(k\) experts out of \(E\gg k\) for a given input, allowing an input to select either all experts [14] or only a sparse mixture thereof, as in recent massive language models [15, 13]. While many works aim to improve the gating mechanism itself [18, 34, 54, 72], MoE models have also been studied for multitask learning [18, 32] with per-task routers [46], although a shared pool of experts is typically used.
MoE models have been explored for multimodal learning as well, with LIMoE [47] being most relevant to our work. However, their MoE scaling considers the CLIP-style contrastive objective for pre-training and vision/retrieval tasks for downstream evaluation. To the best of our knowledge, the proposed VL-MoE is the first MoE scaling study to consider a generative modeling objective in VL pre-training, and we evaluate its scaling performance in a more comprehensive manner, including vision/language-only as well as vision-and-language tasks.
## 3 Method
We first describe the masked data modeling pretraining objectives. We next discuss MoEs and sparse MoEs, and present how we apply the sparse MoE methodology to vision-language models, before explaining our design choices for the routing algorithm and the implementation of VL-MoE.
### Vision-Language Masked Data Modeling
We utilize a unified masked data modeling objective [66] to pretrain VL-MoE on monomodal (i.e., images and texts) and multimodal data (i.e., image-text pairs). This approach has been demonstrated to be scaling-friendly with small batch sizes. Our pretraining process involves masked image modeling on monomodal image data, masked language modeling on monomodal text data, and masked vision-language modeling on multimodal image-text pairs.
**Masked Language Modeling.** We use masked language modeling (MLM) to learn language representations from large-scale text-only data. For MLM, 15% of tokens in monomodal text data are randomly masked, and the model is trained to recover the masked tokens from the corrupted input text. Masked tokens are replaced by a [MASK] token 80% of the time, by a random token 10% of the time, and kept unchanged 10% of the time, following BERT [11].
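The 15% selection with 80/10/10 corruption can be sketched as follows (a hypothetical helper for illustration, not the authors' code; the `[MASK]` string stands in for the tokenizer's mask id):

```python
import random

def mask_tokens(tokens, vocab, mask_token="[MASK]", mask_ratio=0.15, rng=None):
    """BERT-style masking: select mask_ratio of positions; of those,
    80% -> [MASK], 10% -> a random token, 10% -> kept unchanged.
    Returns (corrupted tokens, labels with None at unmasked positions)."""
    rng = rng or random.Random(0)
    corrupted, labels = list(tokens), [None] * len(tokens)
    n_mask = max(1, int(round(mask_ratio * len(tokens))))
    for i in rng.sample(range(len(tokens)), n_mask):
        labels[i] = tokens[i]                 # the model must recover this token
        r = rng.random()
        if r < 0.8:
            corrupted[i] = mask_token         # 80%: replace with [MASK]
        elif r < 0.9:
            corrupted[i] = rng.choice(vocab)  # 10%: replace with a random token
        # remaining 10%: keep the original token
    return corrupted, labels
```

The loss is then computed only at positions where `labels` is not `None`.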
**Masked Image Modeling.** In addition to masked language modeling, VL-MoE uses masked image modeling (MIM) to learn vision representations from large-scale image data. For MIM, block-wise masking is applied to 40% of image patches, and the pretraining objective is to reconstruct the discrete visual tokens of masked patches, following BEiT [2]. The image tokenizer of BEiTv2 [49] is used to obtain the discrete tokens as the reconstruction targets.
**Masked Vision-Language Modeling.** To learn aligned vision-language representations, we use masked vision-language modeling (VLM), which extends masked language modeling and masked image modeling to multimodal data. The task aims at recovering masked image patches and text tokens based on visual and linguistic clues. In VLM, text tokens (with a 50% mask ratio) are randomly masked as in MLM, and the model is trained to recover the masked text tokens based on the joint image-text representations. Image patches are also masked with the same ratio as in MIM, and the corresponding visual tokens are predicted based on the image-text pair. The VLM task further encourages the model to learn alignments between image and text pairs.
### VL-MoE Architecture
**Input Representation.** To obtain text representations, the input text is tokenized and projected onto word embeddings (\(\{\mathbf{w}_{i}\}_{i=1}^{M}\)), where \(M\) is the length of the tokenized text sequence. Two special tokens, a start-of-sequence token ([T_CLS]) and a special boundary token ([T_SEP]), are added to the sequence. Text representations are obtained by summing the word embeddings and text position embeddings, resulting in \(\mathbf{H}^{w}=[\mathbf{w}_{\texttt{[T_CLS]}},\mathbf{w}_{1},\dots,\mathbf{w}_{M},\mathbf{w}_{\texttt{[T_SEP]}}]+\mathbf{T}_{pos}\).
For image representations, the input 2D image \(\mathbf{v}\in\mathbb{R}^{H\times W\times C}\) is split and reshaped into \(N=HW/P^{2}\) patches \(\mathbf{v}^{p}\in\mathbb{R}^{N\times(P^{2}C)}\), where \(C\) is the number of channels, \((H,W)\) is the height and width of the input image, and \(P\) is the patch size. These patches are then flattened into vectors and linearly projected to obtain patch embeddings following vision Transformers [12, 64, 2]. We prepend a learnable special token [I_CLS] to the sequence. The resulting image input representations are given by \(\mathbf{H}^{v}=[\mathbf{v}_{\texttt{[I_CLS]}},\mathbf{v}_{1},\dots,\mathbf{v}_{N}]+\mathbf{V}_{pos}\), where \(\mathbf{H}^{v}\in\mathbb{R}^{(N+1)\times D}\), \(\mathbf{V}\in\mathbb{R}^{(P^{2}C)\times D}\) is a linear projection, and \(\mathbf{V}_{pos}\in\mathbb{R}^{(N+1)\times D}\) are learnable 1D position embeddings.
To form image-text input representations, we concatenate image and text input vectors, resulting in \(\mathbf{H}_{0}^{cl}=[\mathbf{H}_{0}^{w};\mathbf{H}_{0}^{v}]\).
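As a shape check, the patchification and concatenation above can be sketched with NumPy (dimensions taken from the pretraining setup: \(224\times 224\) images, patch size 16, hidden size \(D=768\); the random weights and a text length of 12 are placeholders for illustration):

```python
import numpy as np

# Assumed dimensions from the paper's base setup.
H = W = 224; C = 3; P = 16; D = 768
N = (H * W) // (P * P)                         # 196 patches

rng = np.random.default_rng(0)
image = rng.normal(size=(H, W, C))
# Split the image into N patches of dimension P*P*C, then project with V.
patches = (image.reshape(H // P, P, W // P, P, C)
                .transpose(0, 2, 1, 3, 4)
                .reshape(N, P * P * C))
V = rng.normal(size=(P * P * C, D)) * 0.02     # linear patch projection
v_cls = np.zeros((1, D))                       # [I_CLS] token (learnable in practice)
V_pos = np.zeros((N + 1, D))                   # 1D position embeddings (learnable)
H_v = np.concatenate([v_cls, patches @ V]) + V_pos   # (N+1, D)

M = 12                                         # placeholder text length
H_w = rng.normal(size=(M + 2, D))              # [T_CLS], M word embeddings, [T_SEP]
H_vl = np.concatenate([H_w, H_v])              # image-text pair input H_0^{vl}
```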
**Backbone Network.** The dense backbone network of VL-MoE is a shared multimodal Transformer, illustrated in Figure 1. To encode different modalities, we utilize a mixture-of-modality-experts (MoME) Transformer [3, 66], which takes image and text representations of monomodal data, as well as representations of image-text pairs, as input. The MoME Transformer comprises multiple layers of blocks, each consisting of a multi-head self-attention layer and a feed-forward expert layer. While the self-attention module is shared across modalities, each feed-forward expert layer contains a pool of modality-specific experts (V-FFN, T-FFN, or VL-FFN) that act as a substitute for the feed-forward network in standard Transformers. This allows for hard routing over the pool of feed-forward networks based on the modality of the input tokens.
**Conditional Computation with MoEs.** The concept of conditional computation involves selectively activating different parts of a neural network based on the input [4]. One specific approach is to use a mixture-of-experts (MoE) model, where different "experts" handle different regions of the input space [22]. In this paper, we adopt the MoE layer proposed in [57], which consists of \(E\) experts and is defined as \(\texttt{MoE}(\mathbf{x})=\sum_{i=1}^{E}g(\mathbf{x})_{i}\ e_{i}(\mathbf{x})\). Here, \(\mathbf{x}\) is the input to the layer, \(e_{i}:\mathbb{R}^{D}\mapsto\mathbb{R}^{D}\) is the function computed by expert \(i\), and \(g:\mathbb{R}^{D}\mapsto\mathbb{R}^{E}\) is the "routing" function that determines the input-dependent weights for the experts. Both \(e_{i}\) and \(g\) are implemented as neural networks. Although this formulation still involves a dense network, it can be made sparse by restricting \(g\) to assign only \(k\ll E\) non-zero weights, thereby eliminating the computation of unused experts. This approach allows for super-linear scaling of the number of model parameters in both training and inference.
**VL-MoE.** We apply sparse MoE to vision-language models in the context of the MoME Transformer. As illustrated in Figure 1, inputs from different modalities are routed to V-FFN and T-FFN in the first (\(L-F\)) layers, and to V-FFN, T-FFN, or VL-FFN in the last \(F\) layers. To avoid instability due to modality input imbalance when applying MoEs to modal-agnostic VL-modules in V-MoE [53], we only use MoE for V-FFN and T-FFN in the first (\(L-F\)) layers. V-FFN and T-FFN have two layers and a GeLU [19] non-linearity: \(\texttt{V/T-FFN}(\mathbf{x})=\mathbf{W}_{2}\ \sigma_{\text{gelu}}(\mathbf{W}_{1}\mathbf{x})\). For VL-MoE, we replace a subset of V-FFN and T-FFN with V-MoE and T-MoE layers, where each expert is an FFN with the same architecture \(e_{i}(\mathbf{x})=\texttt{FFN}_{\theta_{i}}(\mathbf{x})\) but different weights \(\theta_{i}=(\mathbf{W}_{1}^{i},\mathbf{W}_{2}^{i})\). This design pattern is similar to that of the GShard [33] and V-MoE [53] models. In V-MoE and T-MoE layers, each token \(\mathbf{x}\in\mathbb{R}^{D}\) is processed sparsely by \(k\) out of \(E\) available experts. To select which ones, a lightweight V/T-Router predicts gating weights _per token_: \(g(\mathbf{x})=\texttt{softmax}(\mathbf{W}_{g}\mathbf{x})\in\mathbb{R}^{E}\), where \(\mathbf{W}_{g}\in\mathbb{R}^{D\times E}\) is learned. The \(k\) activated experts' outputs are combined linearly according to the gating weights: \(\texttt{MoE}(\mathbf{x})=\sum_{e=1}^{k}g(\mathbf{x})_{e}\cdot\texttt{FFN}_{e}(\mathbf{x})\).
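The routing and mixing described above can be sketched in NumPy (a toy top-k MoE layer for illustration only; ReLU stands in for GeLU, and the random weights are placeholders, not a trained model):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

class SparseMoE:
    """Top-k mixture-of-experts FFN: each token is routed to k of E experts,
    and the selected experts' outputs are mixed by the gating weights."""
    def __init__(self, d_model, d_hidden, n_experts, k=1, seed=0):
        rng = np.random.default_rng(seed)
        self.k = k
        self.Wg = rng.normal(0, 0.02, (d_model, n_experts))          # router
        self.W1 = rng.normal(0, 0.02, (n_experts, d_model, d_hidden))
        self.W2 = rng.normal(0, 0.02, (n_experts, d_hidden, d_model))

    def __call__(self, x):                       # x: (tokens, d_model)
        gates = softmax(x @ self.Wg)             # g(x): (tokens, E)
        out = np.zeros_like(x)
        topk = np.argsort(-gates, axis=-1)[:, :self.k]
        for t, experts in enumerate(topk):
            for e in experts:
                h = np.maximum(x[t] @ self.W1[e], 0.0)   # expert FFN body
                out[t] += gates[t, e] * (h @ self.W2[e])
        return out
```

With `k=1` this matches the Switch-style routing used in VL-MoE: each token pays the cost of a single expert FFN regardless of `n_experts`.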
For computational efficiency and to satisfy implementation constraints, each expert in VL-MoE has a fixed buffer capacity, which determines the maximum number of tokens it can process. The assumption is that tokens are approximately balanced across experts. In case the capacity is exceeded, some tokens are not processed by the expert and are dropped, leading to a decrease in the success rate. This rate is a vital indicator of balanced routing and training stability. To mitigate this problem, we employ Batch Priority Routing (BPR) [53, 47], which selectively skips tokens based on their routing weights. BPR prioritizes tokens with larger routing weights, as they are deemed more informative. Our results show that BPR is crucial for stable training of VL-MoE. We further analyze token routing decisions in Section 5 and dropped tokens in the Appendix.
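A minimal sketch of capacity-constrained dispatch with BPR-style prioritization (a hypothetical helper for illustration; real implementations batch this across devices rather than looping over tokens):

```python
import numpy as np

def bpr_dispatch(gate_weights, capacity):
    """Batch Priority Routing sketch: tokens are processed in order of their
    top routing weight, and each expert accepts at most `capacity` tokens.
    Returns the expert assignment per token (-1 = dropped)."""
    top_expert = gate_weights.argmax(axis=-1)   # each token's preferred expert
    top_weight = gate_weights.max(axis=-1)
    assign = np.full(len(gate_weights), -1)
    load = {}
    for t in np.argsort(-top_weight):           # highest-weight tokens first
        e = top_expert[t]
        if load.get(e, 0) < capacity:
            assign[t] = e
            load[e] = load.get(e, 0) + 1
    return assign
```

The fraction of tokens with `assign >= 0` is the success rate discussed above: with vanilla left-to-right dispatch, an early low-weight token could fill an expert and evict a later high-weight one, which BPR prevents.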
Figure 2: Effect of VL-MoE scaling on the three pre-training tasks (masked language modeling (MLM), masked image modeling (MIM), and masked vision-language modeling (VLM)) across training flops.
## 4 Experiment
### Pretraining Setup
**Pretraining Data.** Our pretraining process uses both monomodal and multimodal data. The monomodal data comprises ImageNet-22K for images and English Wikipedia and BookCorpus [73] for text. The multimodal data combines four datasets of image-text pairs: Conceptual Captions [56], SBU Captions [48], COCO [42], and Visual Genome [30], containing a total of \(4\) million images and \(10\) million image-text pairs.
**Pretraining Setting.** For the base-size model, we employ a \(12\)-layer Transformer network with \(768\) hidden size and \(12\) attention heads, following ViT [12], BEiT [2], and VLMo [3]. The use of VL-FFN starts at the \(10\)th layer. The small-size model is an \(8\)-layer Transformer network with \(384\) hidden size and \(6\) attention heads, where VL-FFN is used in the \(8\)th layer. We randomly initialize the model parameters using the method described in BEiT [2]. The image resolution is set to \(224\times 224\), and the patch size is \(16\times 16\). The maximum sequence length for text is \(128\). We use a batch size of \(6,144\) and train the model from scratch for \(200\)k steps, which is equivalent to \(40\) epochs of the image-text pairs. Each batch contains \(2,048\) images, \(2,048\) texts, and \(2,048\) image-text pairs. We perform image augmentation using random resized cropping, horizontal flipping, and color jittering, following the same method as BEiT [2]. The text data is tokenized using a SentencePiece [31] tokenizer with a vocabulary size of 64k. We use the Adam optimizer [27] with \(\beta_{1}=0.9\) and \(\beta_{2}=0.999\) to optimize the model. The peak learning rate is 2e-3, and we use linear warmup for the first \(10,000\) steps and cosine learning rate decay. The weight decay is \(0.05\), and we disable dropout and use stochastic depth [20] with a rate of \(0.1\). The three pretraining losses are equally weighted as in BEiT-3 [66].
**MoE Setting.** For the default setting of MoEs in VL-MoE\({}_{\text{BASE/32E}}\), we use \(E=32\) experts for T-FFN and V-FFN, respectively. All VL-MoEs activate \(k=1\) expert per token, similar to Switch Transformer [15] and LIMoE [47]. We replace every second dense T-FFN or V-FFN sublayer with an MoE sublayer, following GShard [33] and Switch Transformer [15]. We use BPR for stability, as in V-MoE [53]. For the auxiliary loss, we use the loading loss of [57] for T-FFN's MoE and the averaged loading and importance losses of V-MoE [53] for V-FFN's MoE. The combination ratio for the auxiliary loss is set to \(0.01\) in all our experiments. We use \(32\)-way expert parallelism and Tutel [21] for fast routing and computation. All the models are based on DeepSpeed [52]. Pre-training experiments are done on 32 Nvidia Tesla V100-32GB GPUs. Following ST-MoE [74], we _freeze_ all the MoE modules (router and expert network) during the finetuning process. The capacity factor \(C\) is set to 1.05 during training and 1 during inference, following [53].
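The exact auxiliary losses follow [57] and [53]; as an illustration of the balancing idea only, a simplified importance-style loss (not the paper's exact formulation) penalizes the spread of gate mass across experts:

```python
import numpy as np

def importance_loss(gates):
    """Simplified balance loss sketch: squared coefficient of variation of
    the total gate mass each expert receives over a batch of tokens.
    Zero when every expert receives the same mass."""
    importance = gates.sum(axis=0)                   # (E,) gate mass per expert
    cv = importance.std() / (importance.mean() + 1e-9)
    return float(cv ** 2)
```

Adding a small multiple of such a loss (the paper uses a combination ratio of 0.01) nudges the router toward balanced expert utilization, which keeps the fixed-capacity buffers from overflowing.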
**VL-MoE in Pretraining.** We present the validation performance of VL-MoE on the three pretraining tasks across different scales. The results show that the cost-performance tradeoff of VL-MoE in terms of pretraining flops dominates the dense models by a wide margin, indicating that VL-MoE offers significant improvements across all scales, from small/8E to base/32E. We also provide a wall-clock time versus validation performance figure in the Appendix, which shows a similar scaling trend of VL-MoE. Thanks to careful kernel optimization and expert parallelism in DeepSpeed [52], the maximum wall-clock overhead of VL-MoE\({}_{\text{BASE/32E}}\) compared to dense counterparts can be reduced to only \(13\)%.
### Vision-and-Language Downstream Tasks
In our study, we explore the performance of VL-MoE on vision-and-language downstream tasks through fine-tuning experiments on three standard tasks: visual question answering [17], natural language for visual reasoning [62], and image-text retrieval [50, 42]. Following BEiT-3, we use \(480\times 480\) image resolution for VQA fine-tuning and \(384\times 384\) for the other tasks.
Figure 3: Token routing decisions on COCO. Examples of vision token routing decisions and a breakdown of language token routing decisions at the V/T-MoE layer placed in the \(6\)-th encoder block (i.e., the middle of the network) for VL-MoE\({}_{\text{BASE/32E}}\).
**Visual Question Answering (VQA).** For VQA, the task is to generate/choose the correct answer given a natural image and a question. Following previous work [25, 3, 66], we utilize the VQA 2.0 dataset [17] and formulate it as a classification problem over the \(3,129\) most frequent answers. We finetune VL-MoE as a fusion network to encode both the image and the question. We use the final encoding vector of the [T_CLS] token as the representation of the image-question pair, and feed it into a classifier layer to predict the label.
**Natural Language for Visual Reasoning (NLVR2).** The visual reasoning task aims to predict whether a text description is true about a pair of images. We use the NLVR2 [62] dataset for evaluation. Following OSCAR [40], VinVL [71], and VLMo [3], we reformulate the triplet input into two image-text pairs, each containing the text description and one image. We use VL-MoE as a fusion network to jointly encode the image and text. The concatenated final vectors of the [T_CLS] tokens from the two pairs are then fed into a classification layer to predict the label.
**Image-Text Retrieval.** Image-text retrieval comprises both image-to-text retrieval and text-to-image retrieval, for different target modalities. We use the widely used COCO [42] and Flickr30K [50] datasets to evaluate the model, and adopt the Karpathy split [24] following common practice. Note that the architecture of VL-MoE and BEiT-3 [66] does not involve the image-text matching module present in CLIP [51]. To enable image-text matching, we further fine-tune VL-MoE jointly with image-text contrastive and image-text matching objectives with hard negative mining, as in VLMo [3] and BEiT-3. During inference, VL-MoE is used to encode images and text separately and compute the matching scores via the dot product of image and text vectors to obtain the top-\(k\) candidates.
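The dot-product scoring step at inference can be sketched as follows (a hypothetical helper; the input arrays stand in for the separately encoded image and text vectors):

```python
import numpy as np

def retrieve_topk(image_vecs, text_vecs, k=2):
    """Dual-encoder retrieval sketch: matching score = dot product of
    L2-normalized image and text vectors; returns the indices of the
    top-k texts for every image."""
    img = image_vecs / np.linalg.norm(image_vecs, axis=-1, keepdims=True)
    txt = text_vecs / np.linalg.norm(text_vecs, axis=-1, keepdims=True)
    scores = img @ txt.T                        # (n_images, n_texts)
    return np.argsort(-scores, axis=-1)[:, :k]  # best-scoring candidates first
```

Because images and texts are encoded independently, candidate scores reduce to one matrix multiply, which is what makes this setup practical for large retrieval pools.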
Table 1 presents the results of our vision-language model on classification and retrieval tasks, including VQA, NLVR2, COCO, and Flickr30K. To ensure a fair comparison, we provide details on the amount of pretraining image-text pair data, pretraining steps, and the number of parameters per input token. Following LIMoE [47], we define the number of parameters per input token as the number of parameters that the model applies to each image-text token pair. Notably, VL-MoE\({}_{\text{BASE/32E}}\) contains \(2\) billion parameters in total, but only applies \(180\) million parameters per token. Additionally, all routers combined account for less than \(0.5\) million parameters. Our model outperforms previous baseline models on VQA, NLVR2, COCO, and Flickr30K by a significant margin, particularly when compared to a reproduced BEiT-3 [66], which was pretrained using the same settings as VL-MoE. Moreover, to the best of our knowledge, VL-MoE is the first to demonstrate that a mixture-of-experts architecture can successfully scale with a comparably modest architecture size and training counts, while achieving generalization performance on a range of tasks in the con
\begin{table}
\begin{tabular}{l c c c c c c c c c c c} \hline \hline
**Model** & **\# Pretrained** & **\# Pretrained** & **\# Params** & \multicolumn{2}{c}{**VQA**} & \multicolumn{2}{c}{**NLVR2**} & \multicolumn{2}{c}{**COCO**} & \multicolumn{2}{c}{**Flickr30K**} \\ & **images** & **Steps** & **per token** & test-dev & test-std & dev & test-P & TR & IR & TR & IR \\ \hline \multicolumn{10}{c}{_Base-size models pretrained in the similar settings_} \\ UNITER\({}_{\text{BASE}}\)[7] & 4M & 200k & 86M & 72.70 & 72.91 & 77.18 & 77.85 & 64.4 & 50.3 & 85.9 & 72.5 \\ VILLA\({}_{\text{BASE}}\)[16] & 4M & 200k & 86M & 73.59 & 73.67 & 78.39 & 79.30 & - & - & 86.6 & 74.7 \\ UNIMO\({}_{\text{BASE}}\)[39] & 4M & 500K & 120M & 73.79 & 74.02 & - & - & - & - & 89.7 & 74.7 \\ ViLT [25] & 4M & 200k & 120M & 71.26 & - & 75.70 & 76.13 & 61.5 & 42.7 & 83.5 & 64.4 \\ ALBEF\({}_{\text{BASE}}\)[37] & 4M & 240k & 210M & 74.54 & 74.70 & 80.24 & 80.50 & 73.1 & 56.8 & 94.3 & 82.8 \\ VLMO\({}_{\text{BASE}}\)[3] & 4M & 200k & 180M & 76.64 & 76.89 & 82.77 & 83.34 & 74.8 & 57.2 & 92.3 & 79.3 \\ BEiT-3\({}_{\text{BASE}}\)\({}^{*}\) & 4M & 200k & 180M & 76.21 & 76.75 & 84.93 & 85.76 & 78.7 & 60.3 & 95.3 & 83.8 \\
**VL-MoE\({}_{\text{BASE/32E}}\)** & 4M & 200k & 180M & **78.23** & **78.65** & **85.54** & **86.77** & **79.4** & **61.2** & **96.1** & **84.9** \\ \hline \multicolumn{10}{c}{_Pretrained with more aggressive cost, including compute / data / model_} \\ \(\text{VLMo}_{\text{LARGE}}\)[3] & 4M & 200k & 560M & 79.94 & 79.98 & 85.64 & 86.86 & 78.2 & 60.6 & 95.3 & 84.5 \\ ALBEF\({}_{\text{BASE}}\)[37] & 14M & 800k & 210M & 75.84 & 76.04 & 82.55 & 83.14 & 77.6 & 60.7 & 95.9 & 85.6 \\ BLIP\({}_{\text{LARGE}}\) [36] & 129M & 1.26M & 427M & 78.24 & 78.17 & 82.48 & 83.08 & 81.9 & 64.3 & 97.3 & 87.3 \\ SimVLM\({}_{\text{BASE}}\)[67] & 1.8B & 1M & 230M & 77.87 & 78.14 & 81.72 & 81.77 & - & - & - & - \\ SimVLM\({}_{\text{HUGE}}\)[67] & 1.8B & 1M & 1.7B & 80.03 & 80.34 & 84.53 & 85.15 & - & - & - & - \\ BEiT-3 [66] & 21M & 1M & 1.9B & 84.19 & 84.03 & 91.51 & 92.58 & 84.8 & 67.2 & 98.0 & 90.3 \\ PaLI [6] & 1.6B & 1M & 17B & 84.30 & 84.30 & - & - & - & - & - & - \\ \hline \hline \end{tabular}
\end{table}
Table 1: Finetuning results of different models on vision-language classification tasks and image-text retrieval tasks. We report vqa-score on VQA test-dev and test-standard split, accuracy for NLVR2 development and public test set (test-P) and top-1 recall for image retrieval (IR) and text retrieval (TR). (\({}^{*}\) denotes the model that is reproduced by us and trained with the same setting as VL-MoE.)
text of vision-language tasks. Interestingly, Switch Transformer [15] struggles with generalization for language MoE, while V-MoE [53] and LIMoE [47] only evaluate on downstream vision tasks. Additionally, VL-MoE even outperforms VLMO\({}_{\text{LARGE}}\) and ALBEF, which are pretrained with more image-text pair data and initialized from pretrained models, on COCO and Flickr30K and achieves competitive performance on VQA and NLVR2. We assume that this may be due to the fact that the capacity of VL-FFN has not been scaled in VL-MoE, as reflected in the pretraining plot in Figure 2 (the difference of VLM loss between VL-MoE and dense BEiT-3 model is smaller compared to that of MLM and MIM loss). We leave the scale of the VL-FFN module for future work, considering the increasing instability in modal-agnostic MoE architectures demonstrated in LIMoE [47].
### Vision/Language-Only Downstream Tasks
**Image Classification.** We use the image classification task to evaluate the model on a vision-only downstream task, where the objective is to categorize an input image into its corresponding class. We employ the ILSVRC-2012 ImageNet dataset [55], which consists of \(1.3\)M images with \(1\)k classes. Following BEiT [2] and VLMo [3], we perform average pooling over the final vectors and feed the resulting vector into a linear classifier layer to predict the label.
**Natural Language Inference.** We use the natural language inference task to evaluate the model on a language-only downstream task, which involves determining the relationship between two pieces of text. In this task, a model is given a premise sentence and a hypothesis sentence, and it needs to determine whether the hypothesis is true, false, or undetermined based on the information provided in the premise. We use the Multi-Genre Natural Language Inference (MNLI) [68] dataset, which contains 433k sentence pairs annotated with textual entailment information. We evaluate on the matched (MNLI-m) setting only.
As shown in Table 2, we compare VL-MoE with two base-size vision Transformers and V-MoE-B/16-E32 on image classification. For BEiT, BEiT-3\({}_{\text{BASE}}\), and VL-MoE\({}_{\text{BASE/32E}}\), we perform intermediate finetuning on ImageNet-22k to compare with ViT pretrained on ImageNet-22k. The model performs competitively with previous state-of-the-art supervised and self-supervised models on ImageNet-1k. Besides its dense counterpart BEiT-3\({}_{\text{BASE}}\), VL-MoE also outperforms other strong vision-language models (SimVLM) pretrained with more data and more steps on MNLI-m.
## 5 Discussions
We conduct ablation studies to analyze the contributions of Mixture-of-Experts module used in VL-MoE from different perspectives. We evaluate the models on visual reasoning (NLVR2), image-text retrieval (Flickr30k), image classification (ImageNet-1k) and natural language inference (MNLI-m).
Scaling Strategy.In addition to scaling both T-FFN and V-FFN, we have also explored different scaling strategies by applying Mixture-of-Experts (MoE) modules to either T-FFN or V-FFN alone. The results of our experiments are presented in Table 3. Our findings indicate that scaling a single modality improves downstream performance on the corresponding modality as well as on overall vision-language tasks. However, scaling both the vision and language modalities yields the best-balanced model, with \(70.6\)% average performance. This may be attributed to the fact that we employ three different pretraining objectives for each modality, so scaling each modality contributes to better optimization of that modality's pretraining loss as well as the VLM loss. For further evidence, we include the pretraining loss in the Appendix.
\begin{table}
\begin{tabular}{l|c c|c c} \hline \multirow{2}{*}{**Models**} & \multicolumn{2}{c}{**Pretraining**} & \multicolumn{2}{c}{**Tasks**} \\ & \# Images & \# Steps & ImageNet & MNLI-m \\ \hline \multicolumn{5}{l}{_Vision Pretraining_} \\ ViT\({}_{\text{B/16}}\)[12] & 300M & 500k & 83.6 & - \\ BEiT\({}_{\text{B/16}}\)[2] & 1.2M & 500k & 85.2 & - \\ V-MoE\({}_{\text{B/16-32E}}\)[53] & 300M & 500k & **85.3** & - \\ \hline \multicolumn{5}{l}{_Vision-Language Pretraining_} \\ SimVLM\({}_{\text{BASE}}\) & 1.8B & 1M & 80.6 & 64.4 \\ BEiT-3\({}_{\text{BASE}}^{*}\) & 4M & 200k & 83.2 & 67.0 \\ VL-MoE\({}_{\text{BASE/32E}}\) & 4M & 200k & 84.5 & **68.1** \\ \hline \end{tabular}
\end{table}
Table 2: Results of base-size models on image classification (ImageNet-1K) and natural language inference (MNLI-m). We report top-\(1\) accuracy for ImageNet and MNLI-m.
Figure 4: Effect of auxiliary loss on training stability.
Number of Experts.The optimal number of experts in Mixture-of-Experts (MoEs) is still a topic of debate, as there is no agreement on the ideal number. Previous NLP research has experimented with a wide range of expert numbers, ranging from thousands in early studies [57, 15], to as low as 32 or 64 in more recent research [74, 13, 72], which has become the standard for vision models [53, 47]. In Figure 5, we investigate this further with VL-MoE, and our findings suggest that larger expert pools consistently yield performance improvements.
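To make the routing mechanics behind these expert counts concrete, here is a minimal, illustrative top-1 dispatch with a fixed expert buffer capacity. This is a sketch, not the paper's implementation: the router scores, capacity value, and function name are assumptions for illustration.

```python
def route_top1(scores, num_experts, capacity):
    """Assign each token to its highest-scoring expert; once an expert's
    buffer is full, further tokens routed to it are dropped (None)."""
    load = [0] * num_experts
    assignments, dropped = [], 0
    for tok_scores in scores:
        expert = max(range(num_experts), key=lambda e: tok_scores[e])
        if load[expert] < capacity:
            load[expert] += 1
            assignments.append(expert)
        else:
            assignments.append(None)  # buffer full: token is dropped
            dropped += 1
    return assignments, dropped

# 6 tokens that all prefer expert 0, with 2 experts of capacity 2:
# only two tokens are processed, the other four are dropped.
assignments, dropped = route_top1([[1.0, 0.0]] * 6, num_experts=2, capacity=2)
```

When routing collapses onto a few experts, most tokens overflow the buffers and are dropped, which is exactly the failure mode the auxiliary losses discussed next are designed to prevent.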
Effects of the Auxiliary Losses.As previously mentioned, experts in MoEs have a fixed buffer capacity, and without intervention, top-\(k\) MoEs tend to collapse, leading to poor performance as most tokens are dropped [57, 72]. To prevent this, prior research has employed auxiliary losses to promote balanced routing [53, 74, 72, 47]. However, as shown in LIMoE [47], new challenges emerge in multimodal settings, such as modality imbalance, where one data type may be more prevalent than the other. We design VL-MoE in a modality-specific fashion to prevent the instability caused by imbalanced multimodal data, and experiment with different auxiliary losses for V-MoE: the load-balancing loss [57], the averaged load-balancing and importance loss ("vloss") [53], and the z-loss [74].1 We present the results on VL-MoE\({}_{\textsc{small}}\)/E32 in Figure 4, which suggest that the z-loss hurts the vision-and-language pretraining of VL-MoE, and that using the load-balancing loss alone leads to unstable training and underperforming models. The "vloss" yields the most stable training, consistent with V-MoE [53] and LIMoE [47]. BPR also helps stabilize training.
Footnote 1: We find that T-MoE is quite stable under different auxiliary losses, and thus resort to the most common load-balancing loss of [57] for T-MoE. We detail the formula of each auxiliary loss in the Appendix.
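The load-balancing loss of [57] is commonly stated as \(E\cdot\sum_{e}f_{e}P_{e}\), with \(f_{e}\) the fraction of tokens dispatched to expert \(e\) and \(P_{e}\) the mean router probability for \(e\); it equals 1 under perfectly uniform routing and grows as routing collapses. A hedged sketch (the example probabilities are invented for illustration, and the function name is not from the paper):

```python
def load_balance_loss(gate_probs):
    """gate_probs: per-token softmax probability vectors over E experts.
    Returns E * sum_e f_e * P_e, where f_e is the fraction of tokens whose
    top-1 expert is e and P_e is the mean gate probability of expert e."""
    num_tokens = len(gate_probs)
    num_experts = len(gate_probs[0])
    frac_tokens = [0.0] * num_experts  # f_e: argmax dispatch fractions
    mean_probs = [0.0] * num_experts   # P_e: average gate probabilities
    for probs in gate_probs:
        top = max(range(num_experts), key=lambda e: probs[e])
        frac_tokens[top] += 1.0 / num_tokens
        for e in range(num_experts):
            mean_probs[e] += probs[e] / num_tokens
    return num_experts * sum(f * p for f, p in zip(frac_tokens, mean_probs))

balanced_loss = load_balance_loss([[0.9, 0.1], [0.1, 0.9]] * 2)  # uniform routing -> 1.0
collapsed_loss = load_balance_loss([[0.9, 0.1]] * 4)             # collapsed routing -> 1.8
```

Minimizing this quantity pushes both the dispatch counts and the gate probabilities toward the uniform distribution over experts.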
Token Routing Examples in VL-MoE.In Figure 3, we provide a qualitative analysis of token routing decisions on COCO. For vision tokens, their specialization is clear, as they are routed to specific experts such as food and vegetable experts, eyes experts, OCR experts, etc. On the other hand, language tokens show signs of syntax specialization, with some experts processing mostly padding tokens, while others focus on nouns and adjectives (and some padding), excluding prepositions, determiners, or verbs.
## 6 Conclusion
In this paper, we have explored the use of Mixture-of-Experts (MoE) for scaling vision-language models. Our experiments demonstrate that MoE can be a promising technique for improving the efficiency and effectiveness of vision-language models. Specifically, we have shown that dividing a large vision-language model into smaller, specialized sub-models through MoE can achieve state-of-the-art performance on several benchmarks while reducing computational costs. Our experiments have also shown that larger expert pools yield consistent performance improvements. Furthermore, we have explored the impact of MoE on model interpretability and found it can improve the interpretability of vision-language models by providing better insights into how the model processes different inputs.
In conclusion, our findings suggest that MoE is a valuable technique for scaling vision-language models, enabling them to handle large-scale, real-world multimedia data. Our work opens up new research directions for exploring the effectiveness of MoEs in other vision-language tasks, such as visual question answering, visual reasoning and image-text retrieval, and we hope our findings will inspire further investigations into this research area.
Figure 5: Effect of Experts Number on Downstream tasks.
\begin{table}
\begin{tabular}{c c c|c c c c c c} \hline \hline & \multicolumn{2}{c}{**Scaling Strategy**} & \multicolumn{2}{c}{**NLVR2**} & \multicolumn{2}{c}{**Flickr30k**} & \multicolumn{2}{c}{**ImageNet**} & \multicolumn{1}{c}{**MNLI-m**} & \multirow{2}{*}{**Avg.**} \\ & T-MoE & V-MoE & dev & test-P & TR R@1 & IR R@1 & Acc@1 & Acc \\ \hline
[1] & ✗ & ✗ & 67.42 & 68.21 & 80.4 & 61.7 & 67.2 & 54.3 & 66.5 \\
[2] & ✓ & ✗ & 72.42 & 72.73 & 83.2 & 64.7 & 67.8 & **58.3** & 69.9 \\
[3] & ✗ & ✓ & 71.19 & 72.23 & 82.9 & 64.5 & **69.2** & 55.2 & 69.2 \\
[4] & ✓ & ✓ & **72.98** & **73.34** & **84.7** & **65.3** & 69.0 & 58.1 & **70.6** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Ablation studies of scaling strategies (all the results are based on VL-MoE\({}_{\textsc{small}}\)/E32 models). All the *-MoE uses 32 experts (where T stands for applying MoE on the T-FFN and V stands for applying MoE on the V-FFN).
## Acknowledgements
We gratefully acknowledge Wenhui Wang, Li Dong, Furu Wei for the early insightful discussions on the implementation details of MoME, Mengchen Liu for the MoEs models for unified contrastive learning. SS, KK and TD are partly supported by Samsung SAIT, Intel corporation, Intel VLAB team, Intel One-API center of excellence, DARPA's LwLL, PTG, SemaFor, as well as funding through BDD and BAIR.
# "QGP Signatures" Revisited

John W. Harris, Berndt Müller

2023-08-10, arXiv: http://arxiv.org/abs/2308.05743
###### Abstract
We revisit the graphic table of QCD signatures in our 1996 _Annual Reviews_ article "The Search for the Quark-Gluon Plasma" and assess the progress that has been made since its publication towards providing quantitative evidence for the formation of a quark-gluon plasma in relativistic heavy-ion collisions and its characteristic properties.
## I Introduction
### Motivation
In our 1996 review article entitled "The Search for the Quark-Gluon Plasma" [1] we described the strategy [2] adopted by the scientific community to produce, identify, and characterize the quark-gluon plasma (QGP), the predicted state of nuclear matter at temperatures resembling those that were prevalent in the early universe during the first 10 \(\mu\)s. Figure 1 of this review shows a list of observables that promised to be tell-tale signs or signatures for the formation of a QGP in relativistic heavy-ion collisions.
At the time of our review, fixed-target experiments at the CERN-SPS and BNL-AGS were being conducted, which provided first evidence for the prospects of several of these observables. The qualitative sketches in Fig. 1 represented the aspirations of the community of nuclear scientists at the time, who were eager to begin the experimental search for the QGP at much higher energies at the Relativistic Heavy Ion Collider (RHIC) and, a decade later, at the Large Hadron Collider (LHC). These experiments began taking data in 2000 (RHIC) and 2010 (LHC), respectively, and have continued since then with regular upgrades of the accelerators and detectors, collecting a wealth of data over a large range of collision energies and for various collision systems. Early summaries of the experimental findings at RHIC were published in four collaboration "white papers" [3; 4; 5; 6]; a summary of results from the LHC was recently presented by ALICE [7]. It is thus worthwhile to assess the extent to which the expectations expressed in the 1996 review have been confirmed.
Even a casual look at the diagrams in Fig. 1 reveals several common features:
* The signature observables are shown to exhibit sudden drastic changes in magnitude or slope at a common threshold labeled as \(\varepsilon_{c}\).
* The abscissa axes are without quantitative numbers.
* Some of the vertical axes do not give quantitative information; in others detailed information is sparse.
To a certain extent, the absence of quantitative predictions was unavoidable because not enough was known at the time about the physical properties of the QGP, its threshold conditions, and the way in which its intrinsic properties would reveal themselves in experimental observables. It was therefore impossible to make reliable quantitative predictions, and even qualitative predictions required uncertain assumptions.
One of the assumptions that was commonly made at the time was that the transition from a hadron gas to a quark-gluon plasma is a discontinuous, possibly first-order, phase transition. This assumption was motivated by simplified models (c.f. [8], Fig. 8) and by lattice simulations of SU(3) gauge theory without dynamical quarks (c.f. [9], Figs. 2 and 10). If this were the case in real QCD,
Figure 1: Schematic representations of the possible telltale signs (“signatures”) for the formation of a QGP in relativistic heavy-ion collisions.
rather abrupt changes of certain observables with changing external conditions might be expected, although they would be somewhat smoothened by the transverse nuclear density profile. We now know that the hadron-QGP transition in nature is a smooth, albeit rapid, crossover [10]. Any characteristic changes in observables must therefore be much more gradual than originally anticipated, which is borne out by the data accumulated at SPS, RHIC, and LHC.
A common feature of all diagrams in Fig. 1 is that the abscissa axis is labeled by the transverse energy per unit pseudorapidity, \(dE_{t}/d\eta\), with a symbol \(\varepsilon_{c}\) that denotes the critical energy density at which hadronic matter transforms into a QGP.1 The precise value of \(\varepsilon_{c}\) was unknown at the time, but was anticipated to lie somewhat below 1 GeV/fm\({}^{3}\). Today it is known from lattice-QCD calculations [10] that \(\varepsilon_{c}\approx 0.3-0.4\) GeV/fm\({}^{3}\) depending on the precise definition of the pseudocritical temperature \(T_{c}\) where hadronic matter transitions into QGP. Because the transition is a continuous crossover, not a sharp discontinuity in the thermodynamic sense, an unambiguous and more precise definition of \(\varepsilon_{c}\) is impossible.
Footnote 1: The attentive reader will notice that the two quantities, \(dE_{t}/d\eta\) and \(\varepsilon_{c}\), have different units and thus should not be compared directly on the same axis. The resolution of this inconsistency is that what was meant to be shown as the axis label is \(dE_{t}/(A_{\perp}\tau_{\rm ini}d\eta)\), where \(A_{\perp}\) is the transverse collision cross section and \(\tau_{\rm ini}\) denotes the thermalization time.
In order to connect the energy density \(\varepsilon\) reached in a heavy-ion collision with the measured transverse energy per unit pseudorapidity, \(dE_{t}/d\eta\), one needs to make certain model assumptions. It is most common to invoke the Bjorken model of boost invariant longitudinal hydrodynamics [11] to make this connection. In the Bjorken model the energy density varies with proper time \(\tau\) as
\[\varepsilon(\tau)=\varepsilon_{\rm ini}(\tau_{\rm ini}/\tau)^{1+c_{s}^{2}}, \tag{1}\]
where \(\tau_{\rm ini}\) is the formation time of the QGP, and \(c_{s}^{2}=\partial P/\partial\varepsilon\) denotes the squared speed of sound in the QGP. For the ideal QGP, \(c_{s}^{2}=1/3\) (we denote all quantities in natural units \(\hbar=c=1\)). We will use this value here for the sake of simplicity. This implies that the product \(\tau\varepsilon(\tau)\) is not constant, but gradually drops as the plasma expands. This fall-off occurs because the plasma does mechanical work \(dW=-pdV\) in the expansion process, causing the decrease of its internal energy as it expands, primarily in the longitudinal direction. At late times and at lower collision energies the expansion in transverse directions also becomes important, leading to an even faster drop in \(\tau\varepsilon(\tau)\). In order not to complicate things too much, we ignore this effect here.
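As a numerical illustration of eq. (1) (a sketch with assumed values, not measured ones):

```python
def bjorken_energy_density(eps_ini, tau_ini, tau, cs2=1.0 / 3.0):
    """Eq. (1): eps(tau) = eps_ini * (tau_ini / tau)**(1 + cs2)."""
    return eps_ini * (tau_ini / tau) ** (1.0 + cs2)

# Assumed inputs: eps_ini = 12 GeV/fm^3 at tau_ini = 0.6 fm/c.
eps1 = bjorken_energy_density(12.0, 0.6, 0.6)  # at formation: 12 GeV/fm^3
eps2 = bjorken_energy_density(12.0, 0.6, 4.8)  # 8x later: 12 / 8**(4/3) = 0.75 GeV/fm^3
# The product tau*eps(tau) drops by a factor 8**(1/3) = 2 over this interval,
# reflecting the mechanical work dW = -p dV done during the expansion.
```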
The full evolution of the energy density during the nuclear collision can be realistically modeled with relativistic viscous hydrodynamics. Not only does \(\varepsilon\) vary with (proper) time \(\tau\), it also depends on the position within the fireball. A collision event thus cannot be characterized by a single value of the energy density. Furthermore, the energy density distribution varies from event to event, because both the nuclear density distribution at the moment of collision and the energy deposition are subject to quantum fluctuations.
Over the past two decades, we have learned much about how to model these processes effectively, and how to use detailed comparisons between model predictions and experimental data to constrain the initial conditions and other parameters that govern the dynamical evolution of the QGP. The application of these techniques, which apply Bayesian inference to extract the underlying physics from the data, is a main line of inquiry today. Here we will base our assessment on a more qualitative interpretation of the existing data, which is better suited for a "big picture" view that compares our current insight with the expectations in 1996.
This article is intended as an assessment of the progress that has been made since 1996 in the use of various observables to determine the physical properties of the QGP, ascertain its fleeting existence, and map the boundary between normal hadronic matter and the QGP. Our focus will be on the signatures shown in Fig. 1, however we also point out additional observables, such as elliptic flow, that have become recognized as significant to the field and future investigations. We recognize that a large fraction of research with relativistic heavy ions, especially at the highest energies, has increasingly shifted in the intervening two-and-a-half decades away from the study of equilibrium properties of the QGP to the quest for an understanding of the dynamical processes involved in its formation and evolution. We will only touch on this aspect, which has sometimes been described as a "paradigm shift", in the concluding section and refer readers interested in the current perspective of the questions to be addressed by future research in this field to the recent review article [12]. Tremendous progress has been made to understand the equilibrium properties of the QGP through theoretical modeling. Future progress will depend even more so on dynamical modeling and on continuing the close intellectual exchange between experiment and theory.
### Initial conditions
The single-particle entropy per unit of pseudorapidity at midrapidity can be related to the charged-particle multiplicity \(dN_{\rm ch}/d\eta\) as follows [13]:
\[\frac{dS}{dy}\approx 7\,J\,\frac{dN_{\rm ch}}{d\eta}. \tag{2}\]
where \(J\) is the Jacobian relating a central pseudorapidity interval \(d\eta\) to the corresponding rapidity interval \(dy\). For energies of interest here, \(1<J<1.35\)[14]. For example, for Pb+Pb at a collision energy where \(dN_{\rm ch}/d\eta\approx 1600\), this yields \(dS/dy\approx 12,500\). Alternatively, one can use
the volume obtained in the thermal hadron model fit [15]\(dV/dy=4175\pm 380\)\(\mathrm{fm}^{3}\) and the chemical freeze-out temperature \(T_{c}=156.6\) MeV to get an independent, consistent estimate:
\[\frac{dS}{dy}\;\approx\;5.5\,T_{c}^{3}\,\frac{dV}{dy}\;\approx\;11,500. \tag{3}\]
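This estimate can be verified by direct arithmetic (assuming \(\hbar c=0.1973\) GeV fm to convert the freeze-out temperature to fm\({}^{-1}\)):

```python
# Check of eq. (3): dS/dy ~ 5.5 * T_c^3 * dV/dy, with T_c = 156.6 MeV and
# dV/dy = 4175 fm^3 from the thermal fit [15].
HBARC = 0.1973                    # GeV*fm (assumed conversion constant)
T_c = 0.1566 / HBARC              # freeze-out temperature in fm^-1
dS_dy = 5.5 * T_c ** 3 * 4175.0   # ~11,500, consistent with eq. (2)
```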
Encouraged by this result we use the relation (2) to derive estimates for the entropy density \(s_{f}\) at freeze-out from the measured charged-particle multiplicities \(dN_{\mathrm{ch}}/d\eta\). Assuming approximate entropy conservation expressed by the relation \(\tau s(\tau)=\mathrm{constant}\) for a boost-invariant expansion, we can then estimate the entropy density and temperature at the time of initial thermalization.
The entropy density \(s_{f}\) at the freeze-out time \(\tau_{f}\) can be related to the final entropy per unit rapidity [16; 14]
\[s_{f}\approx\frac{dS/dy}{A_{\perp}\tau_{f}}, \tag{4}\]
where \(A_{\perp}\) is the transverse area of the QGP, and \(\tau_{f}\) is the freeze-out proper time. The transverse area \(A_{\perp}\) can be estimated within the Glauber model. For central collisions of identical nuclei \(A_{\perp}\approx\pi R^{2}\), where \(R\approx 7\) fm is the nuclear radius (for \({}^{197}\)Au or \({}^{208}\)Pb).
The choice of the proper time of initial thermalization \(\tau_{\mathrm{ini}}\) is somewhat more ambiguous. A common choice for the QGP formation time is \(\tau_{\mathrm{ini}}\approx 0.6\) fm/c [17]. This choice is appropriate at energies where the colliding Au or Pb nuclei are Lorentz contracted to less than \(0.6\) fm in the longitudinal direction, which is the case for collision energies \(\sqrt{s_{\mathrm{NN}}}\geq 45\) GeV. At lower energies, the colliding nuclei are less strongly contracted. We therefore choose the formation time to be at least the transit time of the two nuclei,
\[\tau_{\mathrm{ini}}=\mathrm{max}[0.6\;\mathrm{fm/c},2R/\gamma], \tag{5}\]
where \(\gamma\) is the Lorentz factor for a given collision energy in the center-of-mass frame.
We then use the thermal expression for the entropy density \(s=bT^{3}\) with \(b\) determined by lattice-QCD (see Table 5 in [10]) to be \(b_{c}\approx 5.5\) at \(T_{c}\) and \(b_{\mathrm{ini}}\approx 15.5\) at \(T_{\mathrm{ini}}\). Since total entropy can only increase, the entropy at \(\tau_{\mathrm{ini}}\) cannot be larger than that at chemical freeze-out. In fact, both values should be approximately equal since the QGP has a low specific viscosity, which implies that the expansion is approximately isentropic. Combining everything we obtain the initial temperature as
\[T_{\mathrm{ini}}^{3}=\frac{dS/dy}{A_{\perp}b_{\mathrm{ini}}\tau_{\mathrm{ini} }}\,. \tag{6}\]
Many heavy-ion experiments at SPS, RHIC, and LHC have reported measurements of the charged-particle multiplicity \(dN_{\mathrm{ch}}/d\eta\). Here, we only consider data for the heaviest collision systems, Au+Au at RHIC [18; 19; 20; 14] and Pb+Pb at SPS [21; 22] and LHC [23; 16]. We use these data, together with (4) and entropy conservation, to convert the measured values of charged-particle multiplicity per unit pseudorapidity into estimates for the average initial energy density
\[\varepsilon_{\mathrm{ini}}\approx(3/4)s_{\mathrm{ini}}T_{\mathrm{ini}}. \tag{7}\]
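The chain of estimates in eqs. (2) and (4)-(7) can be sketched numerically for a central Pb+Pb event at the LHC. The inputs below (multiplicity, Jacobian \(J=1.12\), \(R=7\) fm, and \(\hbar c=0.1973\) GeV fm) are assumed, representative values, so the output is an order-of-magnitude estimate only:

```python
import math

HBARC = 0.1973                # GeV*fm (assumed conversion constant)
dNch_deta, J = 1600.0, 1.12   # assumed central Pb+Pb multiplicity and Jacobian
dS_dy = 7.0 * J * dNch_deta   # eq. (2): entropy per unit rapidity, ~12,500
A_perp = math.pi * 7.0 ** 2   # transverse area for R = 7 fm, in fm^2
tau_ini, b_ini = 0.6, 15.5    # eq. (5) at high energy; s = b*T^3 coefficient
T_ini = (dS_dy / (A_perp * b_ini * tau_ini)) ** (1.0 / 3.0)  # eq. (6), fm^-1
T_ini_GeV = T_ini * HBARC     # roughly 0.4 GeV
s_ini = b_ini * T_ini ** 3    # initial entropy density, fm^-3
eps_ini = 0.75 * s_ini * T_ini * HBARC  # eq. (7), converted to GeV/fm^3
```

The resulting initial temperature of roughly 0.4 GeV sits well above the pseudocritical temperature, as expected for the highest collision energies.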
The resulting estimates of \(\varepsilon_{\mathrm{ini}}\) covering the range \(7.7\mathrm{~{}GeV}\leq\sqrt{s_{\mathrm{NN}}}\leq 2.76\mathrm{~{}TeV}\) are shown in Fig. 2. The initial energy density for the lowest RHIC collision energy in the collider mode, \(\sqrt{s_{\mathrm{NN}}}=7.7\mathrm{~{}GeV}\), approximately coincides with the threshold for production of a QGP. It is worth mentioning that even if a QGP is formed at this energy, its lifetime must be extremely short, and most of the evolution of the fireball will occur in the hadronic phase.
## II Strangeness
A large increase in the production of strange antibaryons was predicted early on as a signature of quark deconfinement in baryon-rich quark matter [24]. More generally, the chemical saturation of strangeness in QGP is understood as a consequence of the presence of abundant thermal gluons [25]. As a result, hadrons containing strange quarks are expected to be produced with chemical equilibrium yields during the hadronization of a sufficiently long-lived QGP [26].
Figure 2: Average initial energy density reached in the 5% most central Au+Au (Pb+Pb) collisions in the collision energy range \(7.7\mathrm{~{}GeV}\leq\sqrt{s_{\mathrm{NN}}}\leq 2.76\mathrm{~{}TeV}\). The data are from [14] for RHIC energies and [23] for the LHC energy. The steeper fall-off for \(\sqrt{s_{\mathrm{NN}}}<10\mathrm{~{}GeV}\) is caused by the incomplete Lorentz contraction of the colliding nuclei at lower energies.
There are two points of view concerning chemical flavor equilibration. The widely prevailing view was that, once achieved during the QGP phase, the equilibrium would be maintained through hadronization. This implies that the measured hadron yields reflect _hadronic_ equilibrium, not weakly interacting _partonic_ equilibrium. If hadronization were to proceed very fast, as a sudden disintegration process, vestiges of the earlier partonic equilibrium might survive in the measured hadron yields [27]. The general consensus today is that the first scenario is realized in heavy-ion collisions. This view is amply supported by the data. In this view, then, the attainment of flavor equilibrium is seen as a QGP signature, even though the observed hadron yield ratios reflect the thermodynamics of the hadron gas at \(T_{c}\).
The first confirmation of these expectations came from the WA97 experiment [28], which found a 20-fold enhancement of the production of \(\Omega\) and \(\overline{\Omega}\) hyperons in central fixed-target Pb+Pb collisions at \(\sqrt{s_{\rm NN}}=17.3\) GeV compared with extrapolations from p+Pb collisions. These results were subsequently confirmed by the NA57 experiment [29] (see Fig. 3). A similar pattern was observed at the higher RHIC energy of \(\sqrt{s_{\rm NN}}=200\) GeV by the STAR experiment [30], as shown in Fig. 4.
The overall saturation of the strangeness flavor in the abundances of emitted hadrons can be assessed by a thermal fit to all particle yields with temperature \(T_{c}\), chemical potentials \(\mu_{B}\) and \(\mu_{s}\) for baryon number and strangeness, and a strangeness fugacity \(\gamma_{s}\) as adjustable parameters. Values \(\gamma_{s}<1\) indicative of undersaturation are typically found in p+p collisions. The evolution of \(\gamma_{s}\) as a function of collision energy from AGS energies to LHC energies is shown in Fig. 5.
The dashed curve in Fig. 5 represents the analytic fit
\[\gamma_{s}(A,\sqrt{s_{\rm NN}})=1-\zeta\exp\left(-\xi\sqrt{A\sqrt{s_{\rm NN}}}\right) \tag{8}\]
provided in [31], where \(A\) is the mass number of the colliding nuclei and \(\zeta=0.606\) and \(\xi=0.0209\) are fit parameters. The data presented in Fig. 5 clearly show an increase of \(\gamma_{s}\) toward unity with increasing collision energy, implying full saturation of the strange quark density
Figure 4: Multistrange baryon enhancement measured by STAR in \(\sqrt{s_{\rm NN}}=200\) GeV Au+Au collisions as a function of the number of participant nucleons \(N_{\rm part}\)[30].
Figure 5: Evolution of the strangeness fugacity \(\gamma_{s}\) as a function of \(\sqrt{s_{\rm NN}}\) in central Au+Au or Pb+Pb collisions based on chemical fits using the grand canonical ensemble [31; 32; 33]. The dashed curve shows the analytic fit (8) to \(\gamma_{s}(\sqrt{s_{\rm NN}})\).
Figure 3: Multistrange baryon enhancement measured by NA57 in \(\sqrt{s_{\rm NN}}=17.3\) GeV Pb+Pb collisions as a function of the number of participant nucleons \(N_{\rm part}\) (taken from the number of wounded nucleons \(N_{w}\) in [29]).
at hadronization in the top RHIC and LHC energy range and thus confirming the expectation depicted schematically for strangeness in Fig. 1.
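Evaluating the analytic fit (8) with the quoted parameters illustrates this saturation trend (a sketch; \(A=208\) for Pb+Pb, and the chosen energies are representative):

```python
import math

def gamma_s(A, sqrt_s_NN, zeta=0.606, xi=0.0209):
    """Analytic fit (8) for the strangeness fugacity; A is the mass number,
    sqrt_s_NN the collision energy in GeV, zeta and xi the quoted fit values."""
    return 1.0 - zeta * math.exp(-xi * math.sqrt(A * sqrt_s_NN))

g_sps = gamma_s(208, 17.3)    # top SPS energy: clearly below full saturation
g_lhc = gamma_s(208, 2760.0)  # LHC energy: gamma_s is essentially 1
```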
As the collision energy increases and the net baryon density in the QGP falls, the chemical potential \(\mu_{s}\) associated with strangeness drops rapidly as anticipated in Fig. 1. The results from chemical fits to the RHIC data from central Au+Au collisions are shown in Fig. 6 and again confirm the original expectations.
One expects strangeness saturation to increase with the size and longevity of the QGP fireball. An apparent suppression for small volumes can be attributed to the conservation of net strangeness within the fireball volume, which requires that strange particles are produced in pairs, and is known as _canonical suppression_[34; 35]. One also expects the abundance of strange quarks to relax to the equilibrium value on a time-scale of order \(1-2\) fm/c [25]. As the size, as well as the life-time, of the QGP depends in similar ways on the size \(A\) of the colliding nuclei and on the collision energy \(\sqrt{s_{\rm NN}}\), both effects contribute inextricably to the analytical formula (8).
As it has become commonly accepted that hadron yields in Pb+Pb collisions at LHC energies are well described by setting \(\gamma_{s}=1\), the focus has more recently shifted to the system size dependence of strangeness saturation. Data on the dependence of strange baryon enhancement on system size has been reported by ALICE for p+p, p+Pb, and Pb+Pb collisions2[36] and is shown in Fig. 7. The data exhibit a systematic increase with system size of multi-strange baryon yields relative to the charged pion yield, which becomes more pronounced as the number of strange valence quarks in the baryon grows.
Footnote 2: The notation A+B, AB, and A–B for collision systems, where A, B denote the nuclei in the colliding beams, varies between experiments. Here we adopt the uniform notation A+B for consistency. We also omit the nuclear mass number in most instances, unless it is important to distinguish between different isotopic beams.
In order to explore whether the trend seen in Fig. 7 is consistent with the results obtained at lower energies, we compare the \(\Omega\) hyperon to charged pion ratio \((\Omega+\overline{\Omega})/(\pi^{+}+\pi^{-})\) measured by ALICE as a function of charged-particle pseudorapidity density \(dN_{\rm ch}/d\eta\) with the analytical fit (8). In order to make contact with the data, we replace the nuclear mass \(A\) in (8) with the scaled charged-particle density \(\frac{1}{8}dN_{\rm ch}/d\eta\). The scaling factor \(\frac{1}{8}\)
Figure 6: Evolution of the strangeness chemical potential \(\mu_{s}\) as function of \(\sqrt{s_{\rm NN}}\) in central Au+Au collisions based on chemical fits using the grand canonical ensemble [33].
Figure 7: Multi-strange baryon enhancement measured by ALICE in p+p, p+Pb, and Pb+Pb collisions versus charged particle pseudorapidity density \(dN_{\rm ch}/d\eta\). See [36] for details.
relates the charged-particle multiplicity \(dN_{\rm ch}/d\eta\) to the number of participant nucleons [37]. The comparison is shown in Fig. 8, where the dashed curve is given by \(0.0009\,\gamma_{s}^{3}\) accounting for the strangeness \(|S|=3\) of the \(\Omega\) hyperon. Given the vast extrapolation in energy and the heuristic substitution for \(A\) in the analytical formula, the system size dependence is remarkably well represented.
The overall conclusion is that the prediction of enhanced production of baryons containing multiple strange quarks in nuclear collisions at high energy, which results in chemical equilibrium yields for large collision systems, has been consistently confirmed by the data from SPS, RHIC, and LHC.
## III Quarkonium
### Conceptual overview
The suppression of J/\(\psi\) (\(c\bar{c}\)) production due to color screening has long been recognized as a promising signature of quark deconfinement in heavy-ion collisions [38]. Its excited states are predicted to dissociate more easily in a QGP and thus to be more strongly suppressed because of their lower binding energies and larger radii. The same applies for the \(\Upsilon\) states (\(b\bar{b}\)) with the proviso that \(b\bar{b}\) states have different binding energies and will therefore dissociate at different temperatures in the QGP than \(c\bar{c}\) bound states. The concept of _sequential melting_ of excited states of the J/\(\psi\) and \(\Upsilon\) states has been substantiated in lattice calculations [39].
The original expectation was that the suppression would be strongest for low quarkonium momenta, where the \(Q\bar{Q}\) pair is quasi-statically imbedded in the QGP and feels the full effect of color screening. At high \(p_{T}\), the suppression of J/\(\psi\) was expected to weaken and eventually disappear, because the \(c\bar{c}\) bound state is then formed outside the QGP due to relativistic time delay, and the small-sized color-singlet \(c\bar{c}\) precursor does not feel the effect of color screening.
Additional theoretical insights and experimental observations have led to a significant revision of this picture. It was recognized that there exist additional mechanisms for quarkonium melting than just the static \(Q\bar{Q}\) potential. Instead, the relevant quantity is the in-medium spectral function that includes non-static effects such as thermal ionization. The spectral function, which needs to be deduced from static lattice simulations by analytic continuation, has been widely studied (see [40] for charmonium and [41] for bottomonium). These studies confirmed that the principle of sequential melting transcends the simplified color screening picture.
Furthermore, feed-down from higher-lying, less strongly bound states will influence the degree of suppression of any lower-lying states. Other effects that can affect the degree of suppression have been labeled "cold nuclear matter" (CNM) effects. These include nuclear shadowing of the initial gluon distributions, momentum broadening of the initial-state partons, and final-state absorption by spectator nucleons. These effects are also present in hadron-nucleus interactions, where they may be studied to determine their contributions to the suppression observed in heavy-ion collisions.
The idea that high-\(p_{T}\) quarkonia should be less suppressed has also been revised on account of the insight that quarkonium production at high \(p_{T}\) proceeds mostly through the color-octet \(c\bar{c}\) channel via gluon fragmentation [42]. This still implies a growing formation time at high \(p_{T}\), but the color-octet nature of the precursor state means that it will suffer strong energy loss on its passage through the QGP. High-\(p_{T}\) charmonium is thus expected to be similarly suppressed as open-charm mesons, contrary to the original expectations.
Finally, at very high collision energies, the number of produced \(c\bar{c}\) pairs is large enough to engender substantial _regeneration_ of charmonium states at hadronization [43; 44]. At sufficiently high collision energy, charmonium yields are then expected to obey the same thermal equilibrium law as other hadron yields, except that their overall yield is governed by the production cross section for \(c\bar{c}\) pairs in the nuclear collision, providing further proof of deconfinement. This mechanism is most effective at low \(p_{T}\) where the density of \(c\bar{c}\) pairs is largest.
### Sequential suppression
Figure 8: \(\Omega\) hyperon-to-charged-pion ratio measured by ALICE in p+p, p+Pb, and Pb+Pb collisions at LHC [36] as a function of charged-particle pseudorapidity density in comparison with the analytical fit (8) (see text for details).

Processes involving the production of heavy quarks are characterized by an energy scale \(2m_{Q}c^{2}\gg\Lambda_{\rm QCD}\) far above the QCD scale, and thus should naively scale as the number of binary nucleon-nucleon collisions. One therefore characterizes their yield in heavy-ion collisions by a _nuclear modification factor_ \(R_{\rm AA}\), defined as the ratio of the inclusive yields per unit rapidity in \(A+A\) collisions and in proton-proton collisions scaled by the number of binary nucleon-nucleon interactions in a nuclear collision:
\[R_{\rm AA}(p_{T})=\frac{d^{2}N_{AA}/dp_{T}dy}{\langle T_{\rm AA}\rangle\cdot d^{2}\sigma_{pp}/dp_{T}dy}, \tag{9}\]
where \(\langle T_{\rm AA}\rangle\) is the longitudinally integrated nuclear density averaged over the experimentally selected events in a certain collision centrality window. A value \(R_{\rm AA}<1\) implies suppression in the nuclear collision relative to the extrapolation from independent proton-proton collisions.
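As a concrete illustration of Eq. (9), the ratio can be evaluated directly once the three inputs are known. The sketch below uses purely illustrative (not measured) numbers, with \(\langle T_{\rm AA}\rangle\) in mb\({}^{-1}\) and the p+p cross section in mb so that the units cancel:

```python
def r_aa(dN_AA, T_AA, dsigma_pp):
    """Nuclear modification factor, Eq. (9):
        R_AA = (d2N_AA/dpT dy) / (<T_AA> * d2sigma_pp/dpT dy)

    dN_AA     -- per-event yield in A+A in a given (pT, y) bin
    T_AA      -- average nuclear overlap function <T_AA>, in mb^-1
    dsigma_pp -- p+p cross section in the same (pT, y) bin, in mb
    """
    return dN_AA / (T_AA * dsigma_pp)

# Illustrative numbers: a yield half as large as the binary-collision-scaled
# p+p expectation gives R_AA = 0.5, i.e. suppression.
print(r_aa(dN_AA=5.0e-4, T_AA=20.0, dsigma_pp=5.0e-5))
```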
J/\(\psi\) suppression in heavy-ion collisions was initially studied and observed at the CERN SPS in experiments NA38 [45], NA50 [46; 47], and NA60 [48]. Di-muon spectra were measured for invariant masses above 2.9 GeV/c\({}^{2}\), encompassing J/\(\psi\), \(\psi^{\prime}\), Drell-Yan pairs, and open charm decays in Pb+Pb (In+In) fixed-target collisions at \(\sqrt{s_{\rm NN}}\,=17.3\) GeV at the CERN SPS.
The dependence of the nuclear modification factor \(R_{\rm AA}\) for J/\(\psi\) production as a function of centrality, expressed in terms of the number of participant nucleons \(N_{\rm part}\), is displayed in Fig. 9 for data from NA38 [45], NA50 [47], and NA60 [48] at collision energy \(\sqrt{s_{\rm NN}}\,=17.3\) GeV. A clear pattern of suppression of the J/\(\psi\) is seen above \(N_{\rm part}\)\(\approx\) 100, increasing steadily up to the most central collisions of \(N_{\rm part}\,>\) 350. This was the first experimental verification of melting of the J/\(\psi\) in the presence of nuclear matter at high densities in heavy-ion collisions, although many questions about competing effects remained. The cross-section ratios measured for \(N_{\rm part}\)\(<\) 100 are in good agreement with the pattern of normal nuclear absorption extrapolated from proton-nucleus collisions [47] (see next subsection).
With the advent of high-energy heavy-ion colliders, experimental studies of the quarkonium states and the degree of their melting have flourished. Figure 9 also shows the centrality dependence of \(R_{\rm AA}({\rm J}/\psi)\) at the collision energy \(\sqrt{s_{\rm NN}}\,=\) 200 GeV at RHIC from PHENIX [50], with almost identical suppression at RHIC as at the SPS. It is difficult to draw unambiguous conclusions from this observation, because the QGP conditions at the two energies are quite different (see Fig. 2).
For Au-Au collisions at \(\sqrt{s_{\rm NN}}\)= 200 GeV, J/\(\psi\) is found to be more suppressed at forward rapidity than at mid-rapidity as can be seen in Fig. 9, which shows PHENIX data for \(R_{\rm AA}({\rm J}/\psi)\) measured at midrapidity (purple squares) and measurements at forward rapidity [51] (orange dots). One reason for the enhanced suppression at forward rapidity may be stronger gluon shadowing in the nucleus that moves in the backward direction. Production of a \(c\bar{c}\) pair in the forward-rapidity window probes the nuclear gluon distribution in the backward-going nucleus in the range \(x\sim(1.5-5)\times 10^{-3}\), where the nuclear gluon distribution is strongly suppressed.
A new suppression pattern is observed in Pb+Pb collisions at the LHC. The \(R_{\rm AA}({\rm J}/\psi)\) and \(R_{\rm AA}(\psi^{\prime})\) measured at forward rapidity by ALICE at \(\sqrt{s_{\rm NN}}\)= 5.02 TeV [52] exhibits suppression and is rather flat for \(N_{\rm part}>100\) as seen in Fig. 10. The data clearly reveal a sequential suppression pattern showing stronger suppression of the excited charmonium state with \(R_{\rm AA}(\psi^{\prime})/R_{\rm AA}({\rm J}/\psi)\approx 0.5\). However, when comparing the \(R_{\rm AA}({\rm J}/\psi)\) to that measured at RHIC in Fig. 9, it is clear that the suppression of the J/\(\psi\) is less pronounced at the LHC energy than at RHIC. A detailed comparison of the \(p_{T}\)- and \(N_{\rm part}\)-dependence of J/\(\psi\) production at RHIC and LHC can be found in [53] (STAR data and discussion of their Figs. 4 and 5). The most striking difference is seen for J/\(\psi\) production at midrapidity integrated over all \(p_{T}\), which is dominated by low-\(p_{T}\) production and found to be much less suppressed at LHC [54] than at RHIC. On the other hand, ATLAS and CMS data on prompt J/\(\psi\) suppression at high \(p_{T}\) (up to 40 and 50 GeV/c, respectively) exhibit a strong increase of suppression with \(N_{\rm part}\) consistent with the path-length dependent energy loss of the precursor color-octet \(c\bar{c}\) state [55; 56].
The difference between the \(p_{T}\)-integrated J/\(\psi\) suppression at RHIC and LHC cannot be explained by gluon shadowing as the charm production in the forward-rapidity window selected by ALICE probes the nuclear gluon distribution at \(x<10^{-4}\), where shadowing should be even stronger than at RHIC. The widely accepted explanation for this effect is that it reveals a new production mechanism for J/\(\psi\) at the LHC collision energies. During hadronization of the QGP, _regeneration_ by coalescence of \(c\overline{c}\) pairs copiously produced by hard QCD processes in the initial phase of the collision increases the yield of J/\(\psi\) at higher energies [43; 44]. It is possible that this mechanism already contributes to the observation that \(R_{\rm AA}^{\rm mid}>R_{\rm AA}^{\rm forward}\) at the top RHIC energy (see Fig. 9).
Figure 9: The nuclear modification factor \(R_{\rm AA}({\rm J}/\psi)\) as a function of the number of participating nucleons \(N_{\rm part}\) at mid-rapidity from NA38, NA50, NA60 and PHENIX [49]. Collision system, rapidity window and centrality are given for each in the legend. See text for more details.
Since most charm quark pairs are produced at low transverse momenta, regeneration should be most effective at low \(p_{T}\) and cease to be a significant contribution at momenta above a few GeV/c. This expectation is confirmed by data for \(p_{T}\)-differential \(R_{\rm AA}\)(J/\(\psi\)) and \(R_{\rm AA}\)(\(\psi^{\prime}\)) at \(\sqrt{s_{\rm NN}}\) = 5.02 TeV from ALICE [57] and CMS [56] presented in Fig. 11. Both the J/\(\psi\) and \(\psi^{\prime}\) initially exhibit a steep drop of their \(R_{\rm AA}\) as \(p_{T}\) increases but then level off at their lowest values for \(p_{T}>6\) GeV/c. The hierarchy of suppression is again evident with the \(\psi^{\prime}\) being suppressed by an additional factor \(2-3\) relative to the J/\(\psi\) over the entire p\({}_{T}\) range.
A clear pattern of sequential suppression of the \(\Upsilon\) and its excited states is observed in Pb+Pb collisions at \(\sqrt{s_{\rm NN}}\) = 5.02 TeV. The results from CMS [58] for Pb+Pb collisions at \(\sqrt{s_{\rm NN}}\) = 5.02 TeV at LHC, presented as a function of \(N_{\rm part}\) in Fig. 12, indicate that \(R_{\rm AA}\)(\(\Upsilon\)(2s)) at LHC is lower by a factor of 2 or more than \(R_{\rm AA}\)(\(\Upsilon\)(1s)) at mid-rapidity. As Fig. 12 shows, the \(\Upsilon\)(1s) suppression factors at the top LHC energy (CMS data) and the top RHIC energy (STAR) data are identical within the experimental uncertainties. This is true for both the centrality differential data shown in the left segment of the figure and the integrated data shown in the right segment. For the \(\Upsilon\)(2s) the suppression at LHC appears to be stronger than at RHIC, although the large error bars of the STAR data do not permit a definite conclusion.
The observed pattern exhibited in Fig. 12 is consistent with a hierarchy of sequential melting of the \(\Upsilon\) states. The \(\Upsilon\)(3s) is more suppressed than the \(\Upsilon\)(2s) which, in turn, is more suppressed than the \(\Upsilon\)(1s). Data from the ATLAS experiment on \(\Upsilon\)(1s) and \(\Upsilon\)(2s) [59] agree with those from CMS as a function of \(N_{\rm part}\). Likewise, all experiments find the suppression to be independent of rapidity over the rapidity range \(0<y<4\). The dependence on \(p_{T}\) is found to be rather flat in minimum bias data with a slight rise observed by ATLAS in the range \(p_{T}=2-10\) GeV/c [59]. This is consistent with the expected absence of a contribution from regeneration for \(\Upsilon\) states.
### Cold nuclear matter effects
Figure 11: Inclusive J/\(\psi\) and \(\psi^{\prime}\) (denoted as \(\Psi\)(2s) here) suppression results at forward rapidity from ALICE [57] and mid-rapidity from CMS [56] in Pb+Pb collisions at \(\sqrt{s_{\rm NN}}=5.02\) TeV. It is interesting to note that the measurements at forward and mid-rapidity are consistent with each other in their range of overlap (\(p_{T}\approx 7-11\) GeV/c).

Figure 12: \(R_{\rm AA}\)(\(\Upsilon\)) results for \(\Upsilon\) states at mid-rapidity in Pb+Pb collisions at \(\sqrt{s_{\rm NN}}=5.02\) TeV for \(p_{T}(\Upsilon)<30\) GeV/c from CMS [58] and in Au+Au collisions at \(\sqrt{s_{\rm NN}}=200\) GeV for \(p_{T}(\Upsilon)<10\) GeV/c from STAR [60]. The left segment of the figure shows the centrality differential data for \(\Upsilon\) suppression; the integrated data are shown in the right segment. Both the CMS and the STAR data confirm the theoretical expectation of sequential suppression in the order of the binding energy and size of the bound state.

One way to investigate the extent to which cold nuclear matter (CNM) effects play a role in the measured \(R_{\rm AA}\) suppression patterns of quarkonia in A+A collisions is to compare those with p+A and other light-particle induced reactions. STAR \(R_{\rm pAu}\) and PHENIX \(R_{\rm dAu}\) data [61] for J/\(\psi\), together with central \(R_{\rm AuAu}\) data from STAR, are displayed in Fig. 13 as a function of \(p_{T}\) [62]. While the Au+Au data show nearly constant suppression at \(R_{\rm AuAu}\approx 0.4-0.5\) over the entire range of \(p_{T}<10\) GeV/c, the p+Au (d+Au) data are consistent with unity, \(R_{\rm pAu}\approx R_{\rm dAu}\approx 1\), for \(p_{T}>2\) GeV/c within the measurement uncertainties. However, a modest suppression with values \(R_{\rm pAu}\approx R_{\rm dAu}\approx 0.6-0.8\) is observed for \(p_{T}<2\) GeV/c. These results leave little room for CNM effects in the range \(p_{T}>2\) GeV/c and help establish that the strong suppression of J/\(\psi\) seen in Au+Au collisions is a final-state effect caused by J/\(\psi\) melting in the QGP. The most likely reason for the J/\(\psi\) suppression found in p+Au and d+Au collisions at low \(p_{T}\) is gluon shadowing in the Au nucleus for \(x\leq 0.03\) [63]. Higher values of \(p_{T}\) correspond to larger values of \(x\), where gluons are not shadowed in nuclei.
ALICE data on J/\(\psi\) and \(\Upsilon\)(1s) suppression for p+Pb and Pb+Pb, shown in Fig. 14, paint a similar picture regarding possible CNM effects at LHC energies [7]. The measured \(R_{\rm AA}\)(J/\(\psi\)) and \(R_{\rm AA}\)(\(\Upsilon\)(1s)) in Pb+Pb collisions at forward and backward rapidities exhibit strong suppression. By contrast, both the forward and backward \(R_{\rm pPb}\)(J/\(\psi\)) and \(R_{\rm pPb}\)(\(\Upsilon\)) are consistent with unity at high \(p_{T}\), but exhibit systematic suppression at low \(p_{T}\). The observed behavior is consistent with the expectation of nuclear shadowing, as indicated by the grey bands in the figure. ATLAS [70] and CMS [71; 72] have measured \(R_{\rm pPb}\) and \(R_{\rm PbPb}\) for J/\(\psi\) and \(\Upsilon\) in p+Pb and Pb+Pb collisions out to \(p_{T}=30\) GeV/c, and LHCb has measured \(R_{\rm pPb}\) up to \(p_{T}=15\) GeV/c [73; 74], with all observing similar trends.
Summarizing this subsection, the light-particle-induced reactions clearly exhibit suppression effects in the lower range of the measured \(p_{T}\), especially for the J/\(\psi\). As discussed, effects that could cause this suppression include nuclear shadowing of the nuclear gluon distributions. Momentum broadening of the initial-state partons in the light projectile and final-state absorption may also contribute, especially for the \(\psi^{\prime}\). The fact that these effects are small and generally understood reinforces the conclusion that quarkonium suppression in the Au+Au and Pb+Pb collisions at RHIC and the LHC, respectively, is a signature of sequential quarkonium melting and quark deconfinement in the QGP. The observation of charmonium regeneration in Pb+Pb collisions at the LHC further consolidates this conclusion.
Figure 14: \(R_{\rm AA}\) and \(R_{\rm pA}\) as a function of \(p_{T}\) for J/\(\psi\) (top) and \(\Upsilon\)(1s) (bottom) production in \(\sqrt{s_{\rm NN}}=5.02\) TeV Pb+Pb collisions integrated over centrality (0-90%) and in \(\sqrt{s_{\rm NN}}=8.16\) TeV p+Pb collisions [7]. For details, see [52] (Pb+Pb) and [65] (p+Pb) for data in the top panel, and [66] (Pb+Pb) and [67] (p+Pb) for data in the bottom panel. Model calculations [68] based on nuclear shadowing using EPS09-LO nuclear parton distributions are shown as the gray bands. The dashed band in the top panel represents a calculation based on the color glass condensate model and a non-relativistic QCD production mechanism for the J/\(\Psi\)[69].
## IV Temperature
A main goal of temperature measurements as a function of the deposited energy was to determine the equation of state of QCD matter. A change in the number of effective degrees of freedom changes the entropy density \(s(T)\) at a given temperature, which is closely related to the energy density \(\varepsilon\) by the relation \(s=(\varepsilon+P)/T\). The change of slope in the curve shown in the "temperature" panel of Fig. 1 around the critical energy density \(\varepsilon_{c}\) reflects the expectation at the time that QCD matter would undergo a sharp, perhaps first-order, phase transition from hadronic matter to a quark-gluon plasma with the associated liberation of color-nonsinglet degrees of freedom carried by deconfined quarks and gluons. We will return to the equation of state in Section XI; here we will focus on the status of temperature measurements.
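The sensitivity of \(s(T)\) to the number of effective degrees of freedom can be made concrete with an idealized Stefan-Boltzmann estimate. The sketch below is a massless-ideal-gas calculation, not the lattice equation of state; the degrees-of-freedom counts are the standard textbook values for a pion gas and a two-flavor QGP:

```python
import math

HBARC = 0.19733  # GeV*fm, converts GeV^4 to GeV/fm^3

def ideal_gas(T, g):
    """Stefan-Boltzmann limit for a massless ideal gas with g effective
    (bosonic-equivalent) degrees of freedom, illustrating s = (eps + P)/T.
    T in GeV; returns (P, eps, s) with P, eps in GeV/fm^3 and s in fm^-3."""
    P = g * math.pi**2 / 90 * T**4 / HBARC**3
    eps = 3 * P                  # conformal equation of state
    s = (eps + P) / T            # thermodynamic identity quoted in the text
    return P, eps, s

# Effective degrees of freedom: pion gas g = 3 vs. 2-flavor QGP g = 37
# (16 gluon states + (7/8)*24 quark states). At the same T the entropy
# density jumps by the ratio of the g's -- the liberation of color
# degrees of freedom underlying the "temperature" panel of Fig. 1.
_, _, s_had = ideal_gas(0.16, 3)
_, _, s_qgp = ideal_gas(0.16, 37)
print(round(s_qgp / s_had, 1))
```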
There are few model-independent ways to measure the temperature in a relativistic heavy-ion collision. Thermal slopes deduced from the transverse momentum spectra of emitted particles are "corrupted" by the blue-shift caused by the transverse expansion of the fireball. In order to avoid this influence of collective flow, one needs to deduce the temperature from the measurement of a Lorentz invariant quantity that is independent from the frame of reference. The two measurements that satisfy this constraint are yields of particles with different masses, \(dN_{i}/d\eta\propto e^{-m_{i}/T}\), and the invariant mass spectrum of lepton pairs. The former enable a frame-independent measurement of the temperature at which the hadrons are produced, commonly called the _chemical freeze-out temperature_, the latter provides for a measurement of the time-averaged temperature of the medium that emits the lepton pairs.
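As a minimal illustration of the first method, the simplified Boltzmann yield law quoted above implies that the temperature follows from a single yield ratio of two species. The sketch deliberately omits degeneracy factors, phase-space integrals, and feed-down corrections, all of which real statistical-model fits must include:

```python
import math

def chemical_T(m1, m2, ratio21):
    """Chemical freeze-out temperature from a yield ratio N2/N1 of two
    hadron species, using the simplified law dN_i/deta ~ exp(-m_i/T):
        N2/N1 = exp(-(m2 - m1)/T)  =>  T = (m2 - m1) / ln(N1/N2).
    Masses in MeV; returns T in MeV."""
    return (m2 - m1) / math.log(1.0 / ratio21)

# Illustrative: a species 300 MeV heavier, suppressed by a factor e^-2,
# implies T = 300/2 = 150 MeV.
print(chemical_T(m1=1000.0, m2=1300.0, ratio21=math.exp(-2.0)))
```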
Because the dilepton invariant mass spectrum is distorted by the decay of vector mesons, the most promising region for a temperature measurement is the intermediate mass region (IMR) of invariant masses between the \(\phi\)-meson and the J/\(\psi\): \(1.1\ \mathrm{GeV}/c^{2}<M_{\ell^{+}\ell^{-}}<3\ \mathrm{GeV}/c^{2}\). An experimental challenge is that dileptons in this mass range have a potentially large background contribution from semi-leptonic charm decays, especially at collision energies well above the charm threshold.
The first and still most accurate measurement of the slope of the di-muon invariant mass spectrum was made by NA60 in \(\sqrt{s_{\mathrm{NN}}}=17.3\ \mathrm{GeV}\) fixed-target In+In collisions [75]. The experiment reported an "excess" contribution with a spectral slope \(T_{\mathrm{IMR}}\approx 193\pm 16\ \mathrm{MeV}\), somewhat dependent on the chosen mass window and \(p_{T}\)-cut. The rather strong dependence of this slope parameter on the upper limit of the invariant mass window suggests contributions to lepton-pair production in the higher mass range from the very early (and therefore very hot) thermal or even pre-equilibrium stages.3
Footnote 3: R. Rapp, private communication. The argument is motivated by the observation that a fit of the form \(M^{3/2}\exp(-M/T)\) to the intermediate mass region \(1.2\ \mathrm{GeV}/c^{2}<M_{\mu\mu}<2.5\ \mathrm{GeV}/c^{2}\) yields \(T_{\mathrm{IMR}}^{17.3\ \mathrm{GeV}}=246\pm 15\ \mathrm{MeV}\), substantially larger than the apparent temperature reported in [75] for a narrower mass window.
STAR recently reported invariant-mass electron-pair spectra for Au+Au collisions at \(\sqrt{s_{\rm NN}}=27\) and \(54.4\ \mathrm{GeV}\)[76], with thermal fits of the form \(M^{3/2}\exp(-M/T)\) to the intermediate mass region (IMR) yielding \(T_{\mathrm{IMR}}^{27\ \mathrm{GeV}}=301\pm 60\ \mathrm{MeV}\) and \(T_{\mathrm{IMR}}^{54.4\ \mathrm{GeV}}=338\pm 59\ \mathrm{MeV}\). These results are shown in Fig. 15 together with the values of \((T_{c},\mu_{B,c})\) at chemical freeze-out (blue dots) and the initial thermalization conditions \((T_{\mathrm{ini}},\mu_{B,\mathrm{ini}})\), where \(T_{\mathrm{ini}}\) is given by (6). Note that the apparent temperatures deduced from the dilepton invariant mass spectra lie above the estimated initial temperatures at which the QGP thermalizes, again suggesting contributions from pre-equilibrium production in the measured invariant mass range. Thermal fits to the mass region around the \(\rho\)-meson, the low-mass region (LMR), on the other hand, yield temperatures consistent with those deduced from chemical freeze-out analyses [75; 76]. These are also shown in Fig. 15.
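A fit of the quoted form \(M^{3/2}\exp(-M/T)\) can be sketched as follows. This is a simplified linearized least-squares fit on synthetic data, not the experiments' actual analysis chain, which accounts for backgrounds, efficiencies, and correlated uncertainties:

```python
import math

def fit_imr_temperature(masses, yields):
    """Apparent temperature from an intermediate-mass-region dilepton
    spectrum, assuming the fit form dN/dM ~ M^{3/2} exp(-M/T) used in
    the text. Linearizing, ln(yield / M^{3/2}) = c - M/T, so an ordinary
    least-squares line fit gives slope -1/T. Masses in GeV/c^2; returns
    T in GeV."""
    y = [math.log(n / m**1.5) for m, n in zip(masses, yields)]
    n = len(masses)
    xbar = sum(masses) / n
    ybar = sum(y) / n
    slope = (sum((m - xbar) * (yi - ybar) for m, yi in zip(masses, y))
             / sum((m - xbar) ** 2 for m in masses))
    return -1.0 / slope

# Synthetic (not measured) spectrum generated with T = 0.30 GeV over the
# IMR window 1.2-2.5 GeV/c^2; the fit recovers the input temperature.
T_true = 0.30
M = [1.2 + 0.1 * i for i in range(14)]
dN = [m**1.5 * math.exp(-m / T_true) for m in M]
print(round(fit_imr_temperature(M, dN), 3))  # -> 0.3
```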
Figure 15: QCD phase diagram showing: chemical freeze-out points (blue dots), average initial temperatures and chemical potentials (red squares), and effective temperatures obtained by thermal fits to the intermediate and low mass regions in dilepton invariant mass spectra. The dotted lines indicate lines of constant \(T/\mu_{B}\), corresponding to approximately constant entropy per baryon in the QGP phase. (See text for literature references.)

A more model-dependent measurement of the temperature can be obtained from blast-wave fits to transverse momentum spectra of identified particles [77]. There are many blast-wave fits of the temperature and expansion velocity at kinetic freeze-out [78; 33; 79]. Most of these show kinetic freeze-out temperatures \(T_{f}\) that are too low to be associated with the QGP. Exceptions are [80], where the authors consider anisotropic momentum distributions at freeze-out, which allows them to describe the final spectra with \(T_{f}=165.6\) MeV, and [81], where the authors determine the freeze-out parameters of the blast-wave fit from the fully-decayed hadron spectra and yields rather than from the spectra of primary hadrons. This method yields a common freeze-out temperature \(T_{\rm fo}=(150\pm 2)\) MeV for Pb+Pb collisions at \(\sqrt{s_{\rm NN}}=2.76\) TeV over the entire centrality range with an average transverse expansion velocity that varies with centrality.
It would be interesting to perform similar fits at lower collision energies. If the concept is correct that hadron formation occurs always at the same temperature, and the temperature reached initially is reflected in the transverse expansion velocity, the dependence of \(\langle v_{T}/c\rangle\) on collision energy could reflect the amount of time the fireball spends in the QGP phase. The average transverse momentum \(\langle p_{T}\rangle\) reflects both, \(T_{\rm fo}\) and \(\langle v_{T}/c\rangle\), as well as the particle mass. A direct comparison with data again requires taking resonance decays into account.
## V Radiation from the plasma
In principle, direct photons carry information about the temperature of the emitting QGP. In practice, the analysis is complicated by the fact that the QGP temperature changes with time during the collision and the photon spectrum is blue-shifted owing to the transverse expansion velocity of the emitting matter. Finally, there can be contributions from photons radiated by the final-stage hadron gas. Any interpretation of measured photon spectra is therefore model dependent. The PHENIX collaboration has compiled data from RHIC and LHC on the collision energy and system size dependence of the direct photon yield (see Figs. 7, 8 in [82]) over a wide range.
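The blue-shift mentioned above can be estimated with the standard Doppler factor for a thermal source moving toward the observer. The numbers below are illustrative, not fitted values:

```python
import math

def blueshifted_slope(T, beta):
    """Apparent (effective) thermal slope of photons emitted from a source
    at true temperature T moving with radial velocity beta, using the
    standard Doppler factor T_eff = T * sqrt((1 + beta)/(1 - beta))."""
    return T * math.sqrt((1.0 + beta) / (1.0 - beta))

# Illustrative numbers: a T = 220 MeV source expanding with <beta> ~ 0.3
# already mimics an effective slope of ~300 MeV, comparable to the T_eff
# values extracted from the RHIC and LHC spectra discussed below.
print(round(blueshifted_slope(220.0, 0.3)))  # -> 300
```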
Here we present figures of low-energy direct photons for \(\sqrt{s_{\rm NN}}=200\) GeV Au+Au collisions from PHENIX [83] and for \(\sqrt{s_{\rm NN}}=2.76\) TeV Pb+Pb collisions from ALICE [84]. The PHENIX data shown in Fig. 16 are already background subtracted and show only the spectrum of photons attributed to thermal radiation from the hot medium. The subtraction uses a power-law fit to the spectrum measured in p+p collisions, scaled by the average binary collision number in the selected Au+Au centrality window, and thus does not rely on a theoretical prediction for the photon spectrum emitted in p+p collisions. As indicated in the figure, the resultant fits give \(T_{\rm eff}=(239\pm 25\pm 7)\) MeV for the most central 0-20% window and \(T_{\rm eff}=(261\pm 33\pm 8)\) MeV for the 20-40% centrality window.
Figure 17 shows the unsubtracted ALICE data for \(\sqrt{s_{\rm NN}}=2.76\) TeV Pb+Pb in three centrality windows. The figure also shows the scaled background of direct photons in p+p collisions, calculated at next-to-leading order in perturbative QCD and scaled with the average \(N_{\rm coll}\) for each centrality window. Exponential fits to the low-\(p_{T}\) spectrum for \(p_{T}<2.1\) GeV/c, after subtraction of the pQCD background, give thermal slopes of \(T_{\rm eff}=(297\pm 12\pm 41)\) MeV for the 0-20% centrality window and \(T_{\rm eff}=(410\pm 84\pm 140)\) MeV for the 20-40% window. It is not clear why the slope parameter is so much larger for the less central window; one reason may be that the data used in the fit start at a slightly larger value of \(p_{T}\).
The ALICE publication [84] also contains a comparison with model calculations of the fireball evolution using boost-invariant hydrodynamics and lists the initial temperatures for several of these models, which depend on the start time \(\tau_{\rm ini}\) and the way the temperature is determined (at the center or averaged over the transverse profile). We here list results for the most central window (0-20%). For the ideal hydrodynamics model of van Hees, He, and Rapp [85] using \(\tau_{\rm ini}=0.2\) fm/c the initial temperature at the center is \(T_{\rm ini}=682\) MeV; for the viscous hydrodynamics model of Paquet _et al._ [86] the initial volume-average temperature at \(\tau_{\rm ini}=0.4\) fm/c is \(T_{\rm ini}=385\) MeV. (A rough estimate based on boost-invariant ideal hydrodynamics scaling suggests that the two temperatures should be related by a factor \((4/3)(0.4/0.2)^{1/3}\approx 1.68\), which is close to the actual ratio \(682/385\approx 1.77\).)
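The parenthetical consistency check is simple enough to verify directly:

```python
# Verify the rough boost-invariant scaling estimate quoted in the text:
# a factor 4/3 between center and volume-average temperature, times the
# Bjorken cooling factor (tau_1/tau_0)^(1/3) between the two start times.
ratio_scaling = (4.0 / 3.0) * (0.4 / 0.2) ** (1.0 / 3.0)
ratio_models = 682.0 / 385.0
print(round(ratio_scaling, 2), round(ratio_models, 2))  # -> 1.68 1.77
```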
Recently Paquet and Bass [87] showed in the context of an analytical model how the measured photon spectrum and yield can be related to the initial temperature \(T_{\rm ini}\) of the QGP at the center of the fireball. The Bayesian fit most tightly constrains the combination \(\tau_{\rm ini}^{1/3}T_{\rm ini}\), which is found to have the value \(450^{+100}_{-70}\) fm\({}^{1/3}\)MeV for Pb+Pb collisions at \(\sqrt{s_{\rm NN}}=2.76\) TeV and \(350^{+130}_{-60}\) fm\({}^{1/3}\)MeV for Au+Au collisions at \(\sqrt{s_{\rm NN}}=200\) GeV.

Figure 16: Direct photon \(p_{T}\)-spectra for \(\sqrt{s_{\rm NN}}=200\) GeV Au+Au collisions after subtraction of the \(N_{\rm coll}\)-scaled p+p contribution in centrality bins 0–20% and 20–40%. Dashed lines are fits to an exponential function in the range \(0.6\) GeV/c \(<p_{T}<2.0\) GeV/c. [From [83]]
## VI Event-by-event fluctuations
There are three main sources of event-by-event fluctuations in relativistic heavy ion collisions: (1) Quantum mechanical density fluctuations in the colliding nuclei, (2) statistical fluctuations around thermal equilibrium, and (3) large fluctuations caused by instabilities during the dynamical evolution of the fireball. The first source is the origin of higher-order anisotropic flow, which will be discussed in Section X. The second source is always present in finite-size thermal systems and creates fluctuation observables that can probe the thermodynamic properties of the fireball. The third source requires dynamical evolution of the system far off equilibrium, which can occur, e. g., in a system that undergoes a first-order phase transition. In this section we focus on the second and third sources of fluctuations.
If, in the process of cooling through the critical temperature \(T_{c}\), the fireball makes a sudden transition from a supercooled phase without chiral symmetry breaking, implying a vanishing quark condensate, to a broken phase with a large quark condensate, extended domains with random orientation of the chiral quark condensate could be formed [88]. The formation and decay of domains of disoriented chiral condensate (DCC) would reveal themselves by non-Poissonian fluctuations of the neutral-to-charged pion ratio \(N(\pi^{0})/N(\pi^{\pm})\) [89; 90]. The precondition for such a scenario is that the fireball evolves far out of equilibrium during the chiral transition.
Many searches have been carried out for signals from such DCC domains, but none of the searches have shown any sign of this effect [91; 92]. The most likely explanation for the absence of a DCC signal is that the expanding fireball never deviates far from thermal equilibrium, which is consistent with the smooth cross-over between the phases with broken and unbroken chiral symmetry at \(T_{c}\) found in lattice QCD. Another possibility is that some off-equilibrium evolution occurs but that domains of disoriented chiral condensate produced in relativistic heavy-ion collisions are too small to be distinguishable from thermal fluctuations of the chiral order parameter.
Isospin fluctuations characteristic of DCC can also show up as anomalous charge fluctuations among kaons, \(N(K_{s}^{0})/N(K^{\pm})\) [93; 94]. Measurements of cumulants of the neutral and charged kaon yields in Pb+Pb collisions at \(\sqrt{s_{\rm NN}}=2.76\) TeV by ALICE [95] revealed that \(K_{s}^{0}-K^{\pm}\) correlations differ from charged and neutral kaon correlations. However, various kinematic aspects of the observed difference do not support the interpretation as a DCC signal.
As stated at the beginning of this section, event-by-event fluctuations can also reflect statistical fluctuations around thermal equilibrium. These fluctuations may have a chance to survive up to the final state, if they involve locally conserved quantum numbers, such as electric charge \(Q\), baryon number \(B\), and strangeness \(S\). Thermodynamics relates these fluctuations to the corresponding susceptibilities \(\chi_{2}^{(X)}\), where \(X\) stands for the considered quantum number and the index 2 denotes the order of the fluctuation. Higher-order susceptibilities are related to higher-order event-by-event fluctuations. The thermal fluctuations of these quantities differ quite characteristically between a QGP and a hadron gas [96; 97], as do correlations, such as those between strangeness and baryon number [98].
Transport theory predicts that locally conserved quantum number fluctuations adjust quickly to the changing thermodynamic conditions as the QGP cools down, but to change much more slowly after hadronization [99]. Thus, the experimentally measured event-by-event fluctuations and correlations of conserved quantum numbers are expected to reflect the conditions that are prevalent at the quark-hadron transition. This insight can be used for an independent experimental determination of the quark-hadron phase boundary [100; 101].
Figure 17: Direct photon \(p_{T}\)-spectra in Pb+Pb collisions at \(\sqrt{s_{\rm NN}}=2.76\) TeV for the 0–20% (scaled by a factor 100), the 20–40% (scaled by a factor 10) and 40–80% centrality windows, compared to next-to-leading-order pQCD predictions for the direct photon yield in p+p collisions at the same energy, scaled by the number of binary nucleon collisions for each centrality window (from [84]). See text and [84] for details.

Figure 18 compares the phase boundary between hot hadronic matter and QGP determined by lattice QCD simulations [102] with results obtained from experimentally measured net-electric charge and net-proton number fluctuations [103] (red triangles) and those obtained from hadron yields using the statistical hadronization model [104; 105; 33] (magenta dots). The black line and the grey shaded region show the pseudo-critical line \(T_{c}(\mu_{B})\); the blue shaded region represents the width of the transition derived from the width of the peak in the chiral susceptibility [102].
A special case of such fluctuations are critical fluctuations in the vicinity of a critical point \((\mu_{B,c},T_{c})\) in the QCD phase diagram [106]. Because the critical mode has a component associated with the net baryon density, the critical fluctuations are expected to be manifested in net baryon number fluctuations, especially in the existence of a region with negative kurtosis [107]. After intriguing hints of such fourth-order fluctuations were observed in an exploratory beam energy scan at RHIC [108], an extensive campaign of measurements (RHIC Beam Energy Scan II) was conducted [109]. We are currently awaiting a full analysis of these data. See also Section XI for discussion of the critical point in the context of the QCD equation of state.
Another important application of event-by-event fluctuations of conserved quantum numbers are balance functions. In a closed system, such as the fireball created in a nuclear collision, any local fluctuation of a conserved quantity ("charge") in a certain region of phase space must be compensated ("balanced") by an equal but opposite fluctuation in the complementary part of phase space. The distribution of this compensating charge is called the balance function. The balance function is usually projected onto relative rapidity, \(B(\Delta y)\), or relative azimuthal emission angle, \(B(\Delta\phi)\). A wide separation of observables in (pseudo-)rapidity implies that they are established early in the collision; the separation in emission angle is sensitive to the diffusivity of the quanta carrying the observed charge, which then gets imprinted with the radial flow profile of the QGP.
Figure 19 shows the rapidity-dependence (upper panel) and angle-dependence (lower panel) of the kaon charge balance function, \(B_{K|K}(\Delta y)\) and \(B_{K|K}(\Delta\phi)\). \(B_{K|K}(\Delta y)\) is shown for three different values of the space-time rapidity width \(\sigma_{0}\) of the balance function at the hydrodynamization moment (\(\tau_{\rm ini}=0.6\) fm/c); \(B_{K|K}(\Delta\phi)\) is shown for four different values of the charge diffusion constant \(D\)[110]. The theoretical predictions are compared with data from ALICE in the 5% most central Pb+Pb collisions at \(\sqrt{s_{\rm NN}}=2.76\) TeV [111]. The conclusion is (_i_) that the chemical composition of the QGP is equilibrated at the time of hydrodynamization and (_ii_) that the charge diffusion constant \(D\) agrees with values obtained on the lattice [112] within a factor of two.
## VII Chiral symmetry restoration
One of the defining characteristics of the QGP is the restoration of chiral symmetry. Lattice QCD calculations identify the crossover transition between the hadronic gas phase and the QGP phase by the location \(T_{c}\) of the inflection point in the temperature dependence of the renormalized chiral condensate \(\langle\bar{\psi}\psi\rangle_{\rm ren}\), or equivalently, by the location of the maximum of the chiral susceptibility. For \(T<T_{c}\) the chiral condensate approaches its vacuum value; for \(T>T_{c}\) the condensate rapidly tends to zero signalling restoration of the spontaneously broken chiral symmetry.
A direct consequence of chiral symmetry restoration above \(T_{c}\) is that excitation modes that differ only by parity must become degenerate. A prime example of this behavior is the pair of vector and axial vector modes. In the vacuum, the lowest hadronic modes in these channels belong to the \(\rho\)-meson and the \(a_{1}\)-meson, respectively, which are separated in mass by approximately 500 MeV. It is predicted that the two modes become degenerate above \(T_{c}\)[113]. The axial vector channel is difficult to access, but the vector channel can be probed by measuring the spectrum of emitted lepton pairs, either \(e^{+}e^{-}\) or \(\mu^{+}\mu^{-}\), which can be related to the photon spectral function. The restoration of chiral symmetry manifests itself in rather subtle changes in the continuum at masses above \(m_{\rho}\)[114]. The \(\rho\)-meson peak in the spectral function, which is already collision broadened in hot or dense hadronic matter, completely disappears in the QGP phase. This is a signature of quark deconfinement and the associated disappearance of well-defined hadron states above \(T_{c}\)[115; 116].

Figure 18: Phase boundary between hot hadronic matter and QGP. The black line shows \(T_{c}(\mu_{B})\) calculated by lattice QCD; the blue shaded region indicates the width of the transition region [102]. The results derived from hadron yields using the statistical hadronization model are shown as magenta dots [104; 105; 33], those deduced using the experimentally measured net-electric charge and net-proton number fluctuations are shown as red triangles [103].
The most precise measurement of the lepton pair spectrum was carried out by the NA60 experiment for In+In collisions at \(\sqrt{s_{\rm NN}}=17.3\) GeV at the CERN-SPS in the \(\mu^{+}\mu^{-}\) channel [117; 118; 119; 120; 75]. The di-muon mass spectrum shows a much reduced peak at the \(\rho\)-meson mass corresponding to final-state decays of \(\rho\)-mesons in a dilute hadronic medium, as shown in Fig. 20, superimposed on a broad background that is compatible with expectations from models of in-medium resonance broadening [121]. There is no evidence of a mass shift that is predicted by some models of chiral symmetry restoration in dense, baryon-rich hadron matter [122].
Further analysis of the \(\mu^{+}\mu^{-}\) spectrum revealed that the spectrum below \(M_{\mu\mu}=1\) GeV is azimuthally isotropic [120] and its \(p_{T}\)-distribution is compatible with thermal emission from a collectively flowing hot hadronic medium [118; 119]. The spectrum for \(M_{\mu\mu}>1\) GeV shows a different \(p_{T}\)-dependence without indication of transverse flow, which is consistent with an origin from an early deconfined partonic phase [75].
Low-mass electron pair production in \(\sqrt{s_{\rm NN}}=200\) GeV Au+Au collisions at RHIC energies has been measured by PHENIX [123] and STAR [124]. The data exhibit features similar to those measured at SPS energies in the In+In system, albeit with lower statistical significance. The invariant mass spectrum shown in Fig. 21 exhibits a broad excess over the "cocktail" from hadronic decays, especially in the region below the \(\rho\) peak, which is compatible with predictions from models of resonance broadening in a hot hadron gas. Data from STAR shown in Fig. 22, taken at lower collision energies, are consistent with a linear scaling of the di-electron excess with the charged multiplicity [125]. Dielectron data from Pb+Pb collisions at LHC are currently limited to peripheral and semi-peripheral collisions [126].
## VIII Femtoscopy and other correlations
Identical two-particle correlations are sensitive to the spatial extent and the life-time of the emitting source.
Figure 19: Charged kaon balance functions \(B_{K|K}\) for \(0-5\%\) central Pb+Pb collisions at \(\sqrt{s_{\rm NN}}\) = 2.76 TeV measured by ALICE [111] (blue dots) in comparison with theoretical simulations [110] (connected black dots). Upper panel: Rapidity-dependent balance function \(B_{K|K}(\Delta y)\) for three values of the width \(\sigma_{0}\) of the initial balance function at the moment of hydrodynamization (0.6 fm/c). Lower panel: Azimuth-dependent balance function \(B_{K|K}(\Delta\phi)\) for four different values of the charge diffusion constant \(D\) in the QGP. The red dots/lines account for diffusion in the QGP; green dots/lines account for hadron decays and rescattering; black dots/lines show the sum of both contributions.
Figure 20: Excess \(\mu^{+}\mu^{-}\) mass spectrum for the semicentral bin in 158 GeV/c In+In collisions in comparison with model predictions. The curves show: “Cocktail” \(\rho\) (thin solid), unmodified (“vacuum”) \(\rho\) (dashed), in-medium broadening \(\rho\) (thick solid), in-medium shifted \(\rho\) (dashed-dotted). The errors are purely statistical [from [117]].
This method of experimentally constraining the source geometry is called Hanbury-Brown-Twiss (HBT) interferometry, density interferometry, or femtoscopy (see [127] for a detailed exposition of the theoretical foundations and [128] for a pedagogical introduction). Experimental results for identical charged pions, kaons, and protons have been extensively published for a wide range of collision energies at AGS, SPS, RHIC, and LHC (see Fig. 20 in [129] and Fig. 18 in [7]).
Most analyses are based on a Gaussian source profile with radius parameters aligned along the collision axis (\(R_{\rm long}\)), the transverse momentum of the observed particle pair (\(R_{\rm out}\)), and the axis perpendicular to these two directions (\(R_{\rm side}\)). The value of \(R_{\rm out}\) is sensitive to the duration of the emission process and thus can serve as a probe of the late-stage expansion dynamics. A first-order phase transition involving the formation of a long-lived mixed phase is expected to increase the emission duration and to result in a value of \(R_{\rm out}\) (much) larger than \(R_{\rm side}\). A steep drop in the compressibility of the expanding matter during hadron emission, corresponding to a drop in the sound velocity, would have a similar, albeit less pronounced effect.
The data for Au+Au collisions over the energy range of the RHIC Beam Energy Scan from STAR exhibit a rise in \(R_{\rm out}/R_{\rm side}\) with increasing collisions energy up to \(\sqrt{s_{\rm NN}}\approx 20\) GeV followed by a smooth fall-off for higher energies as seen in Fig. 23. This behavior appears to be consistent with the interpretation of a minimum of the compressibility around \(T_{c}\) during hadron emission, but a firm conclusion will require a detailed theoretical analysis, which is not yet available.
The three radius parameters are sometimes combined to estimate the volume of a homogeneously flowing emission region at the moment of freeze-out. However, regions of the fireball that flow in different directions or are shielded from each other by opaque matter do not contribute to the HBT interference pattern. Therefore, the
Figure 23: Data for the ratio \(R_{\rm out}/R_{\rm side}\) over the energy range of the RHIC beam energy scan. The symbols refer to results from the different experiments as shown in the legend. For further details see [129].
Figure 21: Dielectron mass spectrum for several centrality bins in 200 GeV/c Au+Au collisions measured by PHENIX [123]. The solid line shows the hadronic “cocktail” contribution; the various other curves represent specific contributing decay channels. A statistically significant excess is observed in the mass regions below and above the \(\rho\) peak.
Figure 22: Dielectron excess over the hadronic “cocktail” contribution in \(0-80\%\) central Au+Au collisions over a wide range of collision energies measured by STAR [125]. The blue stars show the STAR data; open symbols represent various theoretical model calculations (for details see [125]).
product \(V_{\rm hom}=R_{\rm out}R_{\rm side}R_{\rm long}\), called the homogeneity volume, cannot be interpreted directly as the total volume of the fireball during the hadron emission process. The Gaussian life-time parameter \(\tau_{f}\) measures the average duration of the stage during which hadrons freeze out from the fireball, or their emission time. The \(\tau_{f}\) can be derived from the \(R_{\rm long}\) and the kinetic freeze-out temperature [130]. The life-time \(\tau_{f}\) increases smoothly with charged-particle multiplicity from around 4 to 10 fm/c, as seen in Fig. 24. This is also the case for the quantity \(V_{\rm hom}\).
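To make the life-time extraction concrete, the sketch below assumes the widely used boost-invariant (Makhlin–Sinyukov) estimate \(R_{\rm long}^{2}=\tau_{f}^{2}\,T_{\rm kin}/m_{T}\); the numerical inputs are representative values chosen for illustration, not results from a specific data set:

```python
import math

def tau_f_from_Rlong(R_long_fm, T_kin_GeV, m_T_GeV):
    """Emission time tau_f (fm/c) from the longitudinal HBT radius,
    assuming the boost-invariant estimate R_long^2 = tau_f^2 * T_kin / m_T
    (valid for m_T >> T_kin; the units of T_kin and m_T cancel)."""
    return R_long_fm * math.sqrt(m_T_GeV / T_kin_GeV)

def homogeneity_volume(R_out_fm, R_side_fm, R_long_fm):
    """V_hom = R_out * R_side * R_long in fm^3 -- the volume of the
    homogeneity region, a lower bound on the full fireball volume."""
    return R_out_fm * R_side_fm * R_long_fm

# Representative central-collision inputs: R_long = 7 fm, T_kin = 120 MeV,
# pion m_T = 0.3 GeV -> tau_f of order 10 fm/c, the upper range of Fig. 24.
tau_f = tau_f_from_Rlong(7.0, 0.120, 0.3)
```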
Momentum correlations of non-identical particles have been measured, providing information about interactions among hadrons that cannot easily be obtained in scattering experiments because the hadrons are unstable or beams are unavailable. For example, (p\(\Lambda\)) correlations have been measured in Au+Au collisions by STAR [131] and (K\({}^{-}\)p) correlations in p+p, p+Pb, and Pb+Pb collisions by ALICE [132; 133]. The latter are sensitive to the asymptotic form of the two-particle \(\overline{\rm K}\)N wave function at distances of several fm and can provide details of the coupling strength in various inelastic channels of exotic nuclear resonance states. When measured as a function of the source size, such correlations can help elucidate the internal structure of these exotic states.
Another example where heavy-ion collisions can help elucidate the structure of hadronic resonance states is the exotic \(\chi\)(3872) particle, which was first observed in p+p collisions [134]. The decay channel \(\chi\)(3872) \(\rightarrow\) J/\(\psi\,\pi^{+}\pi^{-}\) was recently measured in inclusive Pb+Pb collisions [135]. The prompt \(\chi\)(3872)/\(\psi\)(2s) yield ratio is observed to increase as a function of multiplicity in p+Pb and Pb+Pb collisions, but to decrease with underlying-event multiplicity in p+p reactions. This suggests very different production dynamics, such as quark coalescence, for the exotic \(\chi\)(3872) particle at high density compared to the \(\psi\)(2s). Future measurements will aim to determine whether the \(\chi\)(3872) is a \((q\bar{q})\) molecule, a tetraquark state, or some mixture of both.
The production of light anti-nuclei is enhanced in heavy-ion collisions [136] by the formation and rapid expansion of a QGP, as it allows anti-nuclei to escape more easily without annihilation. This is also true for production of light anti-hypernuclei [137]. The relative yields of light nuclei and their antiparticles can be used to test their production mechanisms, such as statistical hadronization and final-state coalescence by comparing production yields in p+p, p+A and A+A collisions.
Measurements of light anti-nuclei and anti-hypernuclei have potential impact in other realms of physics. Precision measurements [138] of the mass differences between light nuclei and their antiparticles allow for unique tests of CPT invariance. Experimental results for light anti-nuclei are also important for better modeling of the particle composition of cosmic rays as well as the propagation of light anti-nuclei in the interstellar medium [139], which is an important ingredient of certain dark matter searches. The significantly enhanced yield of \({}^{3}_{\Lambda}\)H measured at the lowest RHIC energies [140] favors low-energy heavy-ion collisions as a tool for the study of strange quark-doped nuclear matter, which is of relevance to the interior of neutron stars.
## IX Parton propagation
The last diagram in Fig. 1 labeled "parton propagation" was a placeholder for a multitude of possible observables, comprehensively called jet quenching or jet modification, that were not well understood at the time. The simplest observable sensitive to the propagation of hard-scattered partons in the QGP is the inclusive yield of high-\(p_{T}\) hadrons. An energy loss of partons in the QGP results in the suppression of the hadron yields. The combined energy loss of all partons in the jet shower manifests itself in the suppression of the overall jet yield. Both phenomena are usually expressed in terms of a suppression factor \(R_{\rm AA}\) (defined in Eq. (9)) with respect to the yields measured in appropriately scaled p+p collisions.
The initial measurements of the charged-particle \(R_{\rm AA}\) at RHIC [3; 4; 5; 6] revealed suppression in central collisions of heavy ions [141; 142]. Various approaches have since evolved to investigate the influence of the QGP on the propagation of partons through the medium, with experiments focusing on less inclusive observables that could be sensitive to the pathlength dependence of parton energy loss in the QGP.
Figure 24: Life-time parameter \(\tau_{f}\) as a function of the cube-root of the charged-particle multiplicity density. Data are from femtoscopy measurements of various experiments covering the center-of-mass energies labeled in the legend [7].

Correlations between two back-to-back high-\(p_{T}\) hadrons revealed the attenuation of hadrons on the opposite side ("away-side") of a trigger hadron in the most central collisions [143]. The interpretation is that the interactions of the away-side parton in the QGP degrade its momentum and thereby reduce the number of hadrons that escape on the away-side. In order to understand quantitatively the energy-loss mechanisms in the QGP, experiments have sought to determine the pathlength dependence of the energy loss of partons traversing the QGP by measuring various correlations. Studies of high-\(p_{T}\) hadron correlations [144; 145; 146; 147; 148] include short- and long-range correlations in azimuth and pseudo-rapidity. The results of these studies have led to tests of possible collectivity in high-multiplicity events in smaller collision systems [149]. Since jet measurements have become prevalent at the LHC and with upgrades at RHIC, correlations of hadrons with a trigger jet [150; 151], of jets with a trigger hadron [152; 153], and between two back-to-back jets (dijets) [154; 155] have been investigated. Such observables represent semi-inclusive measurements that are more complicated to interpret.
Most recently, there has been a focus on jet measurements and flavor dependence of various energy-loss observables. They include investigations of the dijet asymmetry (or imbalance) [156; 157; 158; 154] and acoplanarity [159; 160], which are considered to be sensitive to the parton rescattering in the medium. A larger di-jet imbalance between opposite jets of a dijet pair is observed in Pb+Pb compared to p+p collisions [157]. The \(p_{T}\) imbalance in the Pb+Pb dijets is compensated for by an enhanced multiplicity of low-\(p_{T}\) (0.5 - 2.0 GeV/c) particles on the side of the less energetic (subleading) jet, indicating a softening of the radiation responsible for the imbalance in \(p_{T}\). The dijet imbalance in Pb+Pb compared to p+p is greater for more central Pb+Pb collisions. Furthermore, the subleading jets are found to be more suppressed than leading jets, reaching up to 20% stronger suppression in central collisions [158]. These measurements can be used to constrain models of the path-length dependence of jet energy loss and its fluctuations.
The results of these investigations thus far have not yielded definite conclusions nor straight-forward interpretations regarding QGP medium properties beyond the jet quenching parameter \(\hat{q}\). However, there appears to be some consistency developing between the longtime prediction [161; 162] of a broadening of the acoplanarity distribution and what has recently been observed in hadron-recoil jet measurements at the LHC [159] and RHIC [160]. The acoplanarity measurements exhibit a broadening of the recoil jet distribution in Pb+Pb relative to p+p collisions at low recoil jet \(p_{T}\) indicating enhanced jet-medium interactions of low-\(p_{T}\) jets opposite the trigger, presumably due to its longer path through the QGP.
To study in detail the pathlength dependence of the interactions of partons traversing the QGP [163; 164], event-shape engineering has been implemented [165] in order to gain better control of the initial geometrical event shape for more precise pathlength determination. The overall goal of the various jet asymmetry measurements is to provide additional insight into the pathlength dependence of jet modification and to enable more rigorous tests of the energy-loss mechanisms in the QGP. Although several intriguing observations have been made, more theoretical work and incisive experimental results are needed to reach this goal.
More detailed information about the dynamics of parton propagation in the QGP can be gleaned from studies of the modification of the substructure of jets. The two simplest observables in this domain are fragmentation functions and jet shapes, which characterize the longitudinal and transverse momentum structure of jets, respectively. The interactions of showering partons with the QGP modify the gluon radiation pattern that imprints itself on the parton shower, which makes the momentum-space structure of the shower a promising probe of the elementary nature of the parton interactions with the QGP. Increasing experimental capabilities combined with improved jet shower simulations are pushing the forefront of jet quenching studies in the direction of more exclusive studies of jet substructure modifications, on the one hand, and the search for globally defined observables that allow for rigorous QCD-based calculations, on the other.
In the following we discuss some of these findings in detail, focusing on high-\(p_{T}\) inclusive hadron and jet suppression and modifications of the internal structure of jets by the QGP.
### High-Momentum Hadron Suppression
#### Light Hadrons
Jet quenching in relativistic heavy-ion collisions [166; 167] (see [168] for a review of the basic theory) probes the mechanisms for secondary scattering and energy loss of fast partons, i. e. quarks or gluons, in the medium created during the collision. The observable that most directly connects jet quenching to parton energy loss is the suppression of the yield of inclusive high-\(p_{T}\) hadrons [169], expressed as the ratio \(R_{\rm AA}(p_{T})\) of the inclusive single-hadron yield in A+A collisions and the single-hadron yield in proton-proton collisions, scaled by the number of binary nucleon-nucleon collisions \(N_{\rm coll}\), defined in (9).
Suppression of the charged-hadron spectra was initially observed in measurements of \(R_{\rm AA}\) at RHIC [141; 142; 170; 171]. Since then, a wealth of data has been accumulated on the \(R_{\rm AA}\) of inclusive charged hadrons from LHC [172; 173; 174; 175; 176] and RHIC, as well as the \(R_{\rm AA}\) of identified hadrons (discussed below). Inclusive charged hadron data at lower collision energies were taken in the RHIC beam energy scan [177; 178]. For some collision energies a p+p reference was not available; in those cases a binary collision-scaled hadron spectrum measured in peripheral A+A collisions was used. The resulting ratio \(R_{\rm CP}(p_{T})\) can serve as a proxy for \(R_{\rm AA}\).
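The two suppression measures reduce to simple ratios; the sketch below just encodes their definitions (the yields and \(\langle N_{\rm coll}\rangle\) values used in the test case are placeholders, not measured numbers):

```python
import numpy as np

def R_AA(dN_AA_dpT, dN_pp_dpT, N_coll):
    """Nuclear modification factor:
       R_AA(pT) = (dN_AA/dpT) / (<N_coll> * dN_pp/dpT).
    R_AA = 1 means A+A behaves as an incoherent superposition of N_coll
    p+p collisions; R_AA < 1 at high pT signals parton energy loss."""
    return np.asarray(dN_AA_dpT, float) / (N_coll * np.asarray(dN_pp_dpT, float))

def R_CP(dN_cent, dN_periph, N_coll_cent, N_coll_periph):
    """Central-to-peripheral ratio, each spectrum scaled by its own
    <N_coll>; serves as a proxy for R_AA when no p+p reference exists."""
    return ((np.asarray(dN_cent, float) / N_coll_cent)
            / (np.asarray(dN_periph, float) / N_coll_periph))
```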
The general shape of the curve \(R_{\rm AA}(p_{T})\) can be divided into a low-\(p_{T}\) region, roughly \(p_{T}\lesssim 5\) GeV/c, and a high-\(p_{T}\) region with \(p_{T}\gtrsim 5\) GeV/c, each encompassing different dominant dynamical processes. At low \(p_{T}\) there is a complex interplay between collective flow and quark recombination, while at high \(p_{T}\) the hadron spectrum reflects the fragmentation spectrum of the hard-scattered
partons, modified by their energy loss caused by passage through the QGP.
The sketch in Fig. 1 entitled "parton propagation" was based on the expectation that the amount of energy loss in a QGP would be quite different (either much larger or much smaller) than that in a hadron gas. It could be larger because the number of active scattering centers (gluons) is much larger in a QGP; but it could also be smaller because the strong confining force is screened in the plasma. In the absence of a theoretical framework it was not possible to make a definite prediction.
The most direct way of studying this question experimentally is to explore the dependence of \(R_{\rm AA}\) (or \(R_{\rm CP}\)) on the collision energy and centrality. The STAR data for \(R_{\rm CP}\) of charged hadrons shown in Fig. 25 cover the energy range \(\sqrt{s_{\rm NN}}=7.7-200\) GeV. They exhibit suppression at large \(p_{T}\) for collision energies greater than 27 GeV, the lowest collision energy for which \(R_{\rm CP}(p_{T})\) data for \(p_{T}\gtrsim 5\) GeV/c exist. For lower collision energies an enhancement (\(R_{\rm CP}>1\)) is observed in the few GeV/c momentum range, which grows as the collision energy is lowered. This enhancement has been attributed to contributions from several mechanisms. These include the Cronin Effect [179; 180], the cumulative effect in nuclear parton distributions that extend into the region \(x>1\)[181], and collective transverse flow augmented by parton recombination [182]. All these effects have in common that multiple nucleon-nucleon collisions contribute to the transverse energy of the produced hadrons. Comparison with p+A data will be needed to sort out the relative importance of these mechanisms.
In nuclear collisions, recombination is enhanced at larger \(p_{T}\) by the collective flow that blue-shifts the thermal parton spectrum. Fragmentation is depleted in the presence of a dense medium by the energy loss of the primary parton. The fragmentation mechanism generally dominates at sufficiently high \(p_{T}\), because the primary parton spectrum from hard QCD scatterings has a power law tail, while the thermal parton spectrum falls off exponentially. The recombination contribution only weakly depends on \(\sqrt{s_{\rm NN}}\,\) while the fragmentation contribution falls off steeply as \(\sqrt{s_{\rm NN}}\,\) decreases. Thus, the relative magnitude of the two contributions depends on the collision energy. This means that the threshold value of \(p_{T}\) beyond which jet quenching is visible shifts rapidly to higher \(p_{T}\) as the collision energy is reduced and eventually becomes unobservable because sufficiently hard parton scatterings become rare.
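This interplay can be made concrete with a toy spectrum: an exponential "thermal" component competing with a power-law "hard" tail. The sketch below locates the crossover \(p_{T}\) by bisection; all parameter values are purely illustrative, and lowering the hard-process normalization (mimicking a lower collision energy) pushes the crossover to higher \(p_{T}\):

```python
import math

def crossover_pT(A_thermal, T_eff, B_hard, n, lo=0.5, hi=50.0):
    """pT (GeV/c) where the exponential 'thermal' spectrum A*exp(-pT/T_eff)
    drops below the power-law 'hard' tail B*pT^(-n).  Found by bisection
    on the log of the thermal/hard ratio; all parameters are toy values."""
    f = lambda pT: (math.log(A_thermal) - pT / T_eff
                    - math.log(B_hard) + n * math.log(pT))
    assert f(lo) > 0 > f(hi)   # thermal dominates at lo, hard tail at hi
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)
```

Reducing `B_hard` (fewer hard scatterings, as at lower \(\sqrt{s_{\rm NN}}\)) raises the returned crossover momentum, illustrating why the onset of visible jet quenching shifts to higher \(p_{T}\) as the collision energy decreases.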
The \(R_{\rm AA}\) of identified protons and pions has been measured at midrapidity in d+Au collisions at \(\sqrt{s_{\rm NN}}\,=200\) GeV and exhibits an enhancement for \(2<p_{T}<7\) GeV/c in central collisions [183]. NLO pQCD calculations are able to describe the pion data at higher \(p_{T}\) in both p+p and d+Au collisions, indicating that effects beyond pQCD emerge only at lower \(p_{T}\). Furthermore, the larger enhancement of protons compared to pions observed at low \(p_{T}\) in the d+Au data reinforces the role of recombination and collective flow in the enhancement, and possibly of additional cold nuclear matter effects.
The \(p_{T}\) range covered by the data expands quickly with collision energy and reaches up to \(p_{T}=250\) GeV/c in Pb+Pb collisions at \(\sqrt{s_{NN}}=5.02\) TeV measured by ATLAS [185]. For collision energies in the LHC range, as shown in Fig. 26, one generally finds that \(R_{\rm AA}(p_{T})\) attains a minimum at \(p_{T}\approx 6-8\) GeV/c, followed by a steady rise that extends up to the highest \(p_{T}\) measured. This behavior indicates that the relative energy loss \(\Delta E/p_{T}\) shrinks with increasing momentum \(p_{T}\).
A comparison of the inclusive \(R_{\rm AA}\) for central Pb+Pb collisions at LHC with that for central Au+Au collisions
Figure 26: \(R_{\rm AA}\) of charged particles in 0-5% central Pb-Pb collisions at \(\sqrt{s_{NN}}=2.76\) and 5.02 TeV from CMS [174]. Also shown are \(\sqrt{s_{NN}}=2.76\) TeV results from ALICE [175] and ATLAS [176] as indicated in the legend. The boxes represent the systematic uncertainties of the 5.02 TeV CMS data.
Figure 25: \(R_{\rm CP}\) for inclusive charged hadrons measured by STAR in Au+Au collisions [178] over a wide range of collision energies as indicated in the legend.
at the top RHIC energy in Fig. 27 shows that the suppression exhibits a similar pattern and appears only slightly stronger at LHC than at RHIC. This is somewhat of an illusion, because the charged-hadron spectrum falls off more steeply at RHIC, which means that a smaller energy loss \(\Delta E\) is needed at RHIC to produce a comparably large suppression as that seen at LHC.
An estimate of the energy loss \(\Delta E\) can be obtained as follows. Expressing the nuclear suppression factor as a downward (in \(p_{T}\)) shift of the hadron spectrum:
\[R_{\rm AA}(p_{T})=\frac{P_{\rm AA}(p_{T})}{P_{\rm pp}(p_{T})}=\frac{P_{\rm pp} (p_{T}-\Delta E)}{P_{\rm pp}(p_{T})}. \tag{10}\]
Expanding to first order in \(\Delta E\) gives
\[\Delta E=-\frac{\ln R_{\rm AA}(p_{T})}{\frac{d}{dp_{T}}\ln P_{\rm pp}(p_{T})}. \tag{11}\]
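For a pure power-law reference spectrum \(P_{\rm pp}(p_{T})\propto p_{T}^{-n}\), Eq. (11) reduces to \(|\Delta E|=(p_{T}/n)\,|\ln R_{\rm AA}|\). A minimal sketch of this evaluation follows; the spectral index and \(R_{\rm AA}\) value in the example are typical magnitudes, not fitted values:

```python
import math

def delta_E_power_law(R_AA, pT, n):
    """|Delta E| from Eq. (11) assuming P_pp(pT) ~ pT^(-n):
       d/dpT ln P_pp = -n/pT  =>  Delta E = (pT/n) * ln(R_AA),
    which is negative for R_AA < 1; the magnitude is returned."""
    return abs(pT / n * math.log(R_AA))

# Example: R_AA = 0.2 at pT = 10 GeV/c with spectral index n = 7
# gives |Delta E| = (10/7) * ln 5, roughly 2.3 GeV.
```

The formula also makes explicit why a steeply falling spectrum (large \(n\), as at RHIC) yields a strong suppression from a comparatively small energy loss.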
Both PHENIX and STAR have published \(R_{\rm AA}\) or \(R_{\rm CP}\) data for pions at several collision energies from the RHIC beam energy scan [177; 184; 178]. ALICE has published \(R_{\rm AA}\) data for pions at the LHC collision energies of 2.76 and 5.02 TeV [186]. The energy loss deduced from the measured \(R_{\rm AA}\) for pions in \(0-10\%\) central Au+Au collisions at RHIC and for charged hadrons in \(0-5\%\) central Pb+Pb collisions at LHC is shown in Fig. 28 for collision energies \(\sqrt{s_{\rm NN}}\) ranging from 39 GeV to 5.02 TeV. The energy loss increases with both collision energy and the transverse momentum of the primary parton.
Figure 29 demonstrates that the nuclear suppression is a function of system size. Comparing \(R_{\rm AA}\) measured in central Pb+Pb collisions with the \(R_{\rm AA}\) measured in peripheral collisions and \(R_{\rm pPb}\) measured in p+Pb collisions one sees that the suppression is much weaker in peripheral collisions, where hard partons have much less matter to traverse, and essentially absent in non-single diffractive (NSD) p+Pb collisions, where very little or no hot matter is produced.
Figure 28: Energy loss \(|\Delta E|\) at fixed \(p_{T}\) for several different collision energies deduced from the nuclear suppression factor in central Au+Au collisions at RHIC [184; 177] and Pb+Pb collisions at LHC [187; 173] using the relation (11). The \(\Delta E\) for 5.02 TeV collisions has been extrapolated to \(p_{T}=6\) GeV/c for a visual comparison with the RHIC data.

Figure 27: \(R_{\rm AA}\) for inclusive charged hadrons measured by ALICE in central 2.76 TeV Pb+Pb collisions in comparison with \(R_{\rm AA}\) for inclusive charged hadrons measured by STAR and PHENIX in central Au+Au collisions at 200 GeV [173].

Figure 29: \(R_{\rm AA}\) for central (0-5% centrality) and peripheral (70-80% centrality) Pb-Pb collisions, and \(R_{\rm pPb}\) for non-single diffractive p–Pb collisions at \(\sqrt{s_{NN}}=5.02\) TeV [7].

One important question is whether the modification of the hadron spectrum is an initial-state effect, e.g. caused by nuclear modification of the parton distribution functions \(f_{i}^{(A)}(x)\), or a final-state effect. This question was answered with the initial results from RHIC by a comparison of the \(R_{\rm AA}\) for direct photons with that for \(\pi^{0}\) and \(\eta\)-mesons [188], which is shown in Fig. 30. \(\pi^{0}\) and \(\eta\)-mesons are almost identically suppressed, while direct photons, which do not suffer significant final-state interactions in the QGP, are not suppressed. Further investigations into direct boson production in AA collisions at the LHC have confirmed that not only direct photons [189; 84] but also W- and Z-bosons [190; 191] are consistent with pQCD calculations and exhibit no signs of suppression.
Complementary data from the LHC on identified hadrons and photons extend this conclusion to larger \(p_{T}\) as seen in Fig. 31. The strong suppression of identified hadrons, combined with the lack of suppression of direct photons, singles out a final-state effect (parton energy loss) as the cause of the observed suppression and rules out any initial-state mechanism as the cause. Figure 31 also demonstrates that particle-specific effects, such as collective flow and recombination from the QGP, strongly affect the \(R_{\rm AA}\) for various hadron species in the range \(p_{T}<10\) GeV/c. This is observed in heavier mass particles, e.g. protons, whose \(R_{\rm AA}(p_{T})\) peaks at successively larger \(p_{T}\). However, for \(p_{T}>10\) GeV/c one finds a universal behavior in \(R_{\rm AA}\) for all hadrons composed of light (\(u,d,s\)) quarks, indicating that these particles are all created by the same mechanism, fragmentation of a hard-scattered primary parton.
This universal behavior justifies using experimental data to extract a value for the radiative jet quenching parameter \(\hat{q}\) using Bayesian parameter estimation. Following early work by the JET Collaboration [192], the JETSCAPE Collaboration performed a systematic analysis to constrain the dependence of \(\hat{q}\) on the jet energy, virtuality, and medium temperature from experimental measurements of inclusive hadron suppression in Au+Au collisions at RHIC and Pb+Pb collisions at LHC [193]. The results, shown in Fig. 32, favor a model in which the ratio \(\hat{q}/T^{3}\) depends logarithmically on the primary parton virtuality and energy and scales quadratically with the color charge of the parton.
#### Heavy Flavor Hadrons
Heavy-flavor quarks are produced mainly in hard scattering of partons in the initial stage of a heavy-ion collision prior to formation of the QGP. Thus, they experience the entire history of the collision process, interact with the QGP, and probe the flavor and mass dependence of parton energy loss in the evolution of the QGP.
Initial investigations into the possible suppression of heavy-flavor hadrons were carried out at RHIC with measurements of non-photonic electron spectra from semileptonic decays of open-charm and open-beauty hadrons. The \(R_{\rm AA}({\rm e}^{\pm})\) was found to be strongly suppressed at mid-rapidity in central \(\sqrt{s_{\rm NN}}=200\) GeV Au+Au collisions, indicating significant energy loss of heavy quarks in the QGP [194; 195]. The suppression approaches that of the \(\pi^{0}\) for \(p_{T}>4\) GeV/c.
Later, a direct measurement of the \(R_{\rm AA}({\rm D}^{0})\) from \(D^{0}\to K^{-}+\pi^{+}\) for \(p_{T}>2\) GeV/c in semi-central Au+Au collisions (\(N_{\rm part}>170\)) confirmed that the open-charm hadrons are suppressed when traversing the QGP [196]. The D\({}^{0}\)-meson yield integrated over \(p_{T}<8\) GeV/c is suppressed by a factor \(R_{\rm AA}({\rm D}^{0})\approx 0.5\), while an enhancement by a factor \(R_{\rm AA}({\rm D}^{0})\approx 1.3\) is observed over the narrower momentum range \(0.7\) GeV/c \(<p_{T}<2.2\) GeV/c. The suppression is consistent with a charm quark energy loss similar to that of light quarks, while the enhancement at low \(p_{T}\) for these most central collisions is a reflection of the chemical oversaturation of charm quarks and may suggest a coalescence mechanism for low-\(p_{T}\) open-charm hadrons. Additional evidence for coalescence comes from the observed enhancement of the \(\Lambda_{c}/{\rm D}^{0}\) ratio [197].
Figure 31: \(R_{\rm AA}\) for charged particles, identified particles, and photons in central 5.02 TeV Pb+Pb collisions with particle species and references given in the legend. [7]
Figure 30: \(R_{\rm AA}\) for identified \(\pi^{0}\) and \(\eta\)-mesons measured by PHENIX in central 200 GeV Au+Au collisions in comparison with the \(R_{\rm AA}\) for direct photons [188].
Better statistics at the higher energies of the LHC in Run 2 and refinement of experimental techniques enabled a more thorough investigation of the particle and quark mass dependence of the suppression. The \(R_{\rm AA}\) of identified hadrons (\(\pi^{\pm}\), D\({}^{0}\), D\({}^{+}\), D\({}^{\rm*+}\), J/\(\psi\)) are displayed in Fig. 33 for \(\sqrt{s_{\rm NN}}=5.02\) TeV central Pb+Pb collisions at mid-rapidity [198]. The data show that \(R_{\rm AA}({\rm D})>R_{\rm AA}(\pi)\) for \(p_{T}\lesssim 10\) GeV/c, indicating that effects due to radial flow and hadronization affect D-meson and light- and heavy-hadron yields differently as a function of \(p_{T}\), which complicates the interpretation of their \(R_{\rm AA}\) values at low to intermediate \(p_{T}\). At \(p_{T}\gtrsim 10\) GeV/c, the D-meson \(R_{\rm AA}\) reaches values similar to that of pions. However, due to the harder \(p_{T}\) spectrum and different fragmentation function of charm quarks compared to light quarks and gluons, the interpretation of the differences in the pion and D-meson \(R_{\rm AA}\) requires detailed model calculations.
The \(R_{\rm AA}\) for prompt and non-prompt J/\(\psi\) from CMS [56] is also shown in Fig. 33. The \(R_{\rm AA}\) of prompt D-mesons is observed to be lower than that of non-prompt J/\(\psi\) mesons from beauty decays indicating a quark mass dependence of parton energy loss, whereby heavier \(b\)-quarks lose less energy than lighter \(c\)-quarks when traversing the QGP. Additional measurements of the \(R_{\rm AA}\) of light, open-charm, and open-beauty hadrons via non-photonic electrons at RHIC [199; 200] and LHC [201], and muons at LHC [202; 203] confirm the flavor and mass ordering of the suppression of charm and beauty quarks. The investigation into the flavor and mass dependence of hadron suppression in Pb+Pb collisions at LHC continues with new measurements of mixed-quark hadrons such as the D\({}_{s}^{+}\)[204], B\({}_{s}^{0}\)[205], and B\({}_{c}^{+}\)[206].
### Jets
#### iv.2.1 Jet Suppression
Understanding the parton energy loss processes in the QGP requires measurement of the resulting parton showers known as jets. Jets and their properties have been measured extensively in p+p collisions. In heavy-ion collisions the showering process becomes convoluted with the energy loss of the partons as they traverse the QGP. It is thus important to compare jet measurements in Pb+Pb collisions with those in p+p collisions to extract the jet energy and yield as a function of \(p_{T}\) with the aim to better understand the parton energy loss mechanism.
In addition to the total jet energy loss relative to the initial hard scatter it is important to distinguish as much as possible between the elastic interaction processes, i. e. two-body scattering off medium constituents, and various inelastic ones, such as collisionally induced gluon radiation. For example, the analysis of inclusive hadron suppression \(R_{\rm AA}\) in terms of the jet quenching parameter \(\hat{q}\)[192; 193] assumes that the entire energy loss of a hard-scattered parton is caused by collisionally induced gluon radiation. One goal of studying jet modification by the medium is to determine whether the picture underpinning such energy loss analyses is correct.
Figure 33: \(R_{\rm AA}\) for prompt D-mesons, charged pions, charged particles, and J/\(\psi\) from ALICE [198]. Also shown are \(R_{\rm AA}\) results for prompt and non-prompt J/\(\psi\) from CMS [56]. All measurements are for \(\sqrt{s_{\rm NN}}=5.02\) TeV central Pb+Pb collisions at mid-rapidity with ranges stated in the legend.
Figure 32: Bayesian parameter extraction of \(\hat{q}/T^{3}\) from experimental measurements of inclusive hadron suppression in Au+Au collisions at RHIC and Pb+Pb collisions at LHC [193]. The 90% confidence regions for the MATTER+LBT2 model encompass the top and bottom curves of each color as a function of medium temperature \(T\). The curves in the middle of the bands indicate their median values. The solid black circles with error bars represent the results obtained by the JET Collaboration [192]. The dotted boxes indicate the temperature ranges considered in that analysis. The insert shows the prior range of values for \(\hat{q}/T^{3}\) used in the Bayesian analysis with the darker (lighter) area depicting the 90% (99%) likelihood range. [From [193]]
The various parton-medium interaction processes will manifest themselves not only in longitudinal momentum loss but also in momentum broadening transverse to the jet axis. Thus, there is the need to determine differences between jets from heavy-ion collisions and parton showers in vacuum, represented in p+p collisions, and to identify the influence of the flavor and mass of partons on the jet structure. In turn, the medium responds differently to the elastic and inelastic interaction processes that contribute to the parton energy loss. By using jets and high-\(p_{T}\) partons, we seek to understand not only the parton energy-loss mechanisms, but also to probe the QGP at various resolution scales with the ultimate goal of gleaning information about its microscopic structure.
It is important to note that high-\(p_{T}\) hadrons are most likely to be produced downstream from the hardest splitting in the jet shower, which is calculable in pQCD, and are most sensitive to the energy loss in that branch of a parton shower. In contrast, jets are sensitive to the energy lost in the entire shower and the various energy loss processes down to the non-perturbative level, but the lost energy ends up outside the kinematic cuts that are used to define the jet.
Because jets are not unambiguously defined states in QCD, they must be characterised by the experimental procedure by which they are identified. This procedure includes the resolution parameter (also called the jet cone opening angle) \(R\leq 1\), the clustering algorithm, such as anti-\(k_{T}\)[207], and possibly a low-\(p_{T}\) cutoff. Only data with the same selections of the clustering algorithm and cone parameter \(R\) are comparable. The method for subtracting out the soft background underlying the jet in heavy-ion collisions is also important.
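The clustering step can be illustrated with a deliberately minimal sketch of the anti-\(k_T\) distance measure, \(d_{ij}=\min(p_{T,i}^{-2},p_{T,j}^{-2})\,\Delta R_{ij}^2/R^2\) and \(d_{iB}=p_{T,i}^{-2}\). The \(O(N^3)\) search and the \(p_T\)-weighted recombination are simplifications of ours, not the FastJet implementation used in practice:

```python
import math

def antikt_cluster(particles, R=0.4):
    """Minimal anti-kT clustering over (pt, y, phi) tuples.
    pT-weighted recombination is a simplification (real analyses
    add four-momenta, e.g. via FastJet, and handle phi wraparound)."""
    pseudojets = list(particles)
    jets = []
    while pseudojets:
        best = None                      # (distance, i, j); j = None means beam
        for i, (pti, yi, phii) in enumerate(pseudojets):
            diB = 1.0 / pti ** 2         # anti-kT beam distance
            if best is None or diB < best[0]:
                best = (diB, i, None)
            for j in range(i + 1, len(pseudojets)):
                ptj, yj, phij = pseudojets[j]
                dphi = abs(phii - phij)
                dphi = min(dphi, 2.0 * math.pi - dphi)
                dR2 = (yi - yj) ** 2 + dphi ** 2
                dij = min(diB, 1.0 / ptj ** 2) * dR2 / R ** 2
                if dij < best[0]:
                    best = (dij, i, j)
        _, i, j = best
        if j is None:                    # nearest to the beam: declare a jet
            jets.append(pseudojets.pop(i))
        else:                            # merge the closest pair
            pt1, y1, phi1 = pseudojets[i]
            pt2, y2, phi2 = pseudojets[j]
            pt = pt1 + pt2
            pseudojets[j] = (pt, (pt1 * y1 + pt2 * y2) / pt,
                             (pt1 * phi1 + pt2 * phi2) / pt)
            pseudojets.pop(i)
    return sorted(jets, key=lambda jet: -jet[0])

# A hard particle absorbs its soft neighbor; the far particle is its own jet.
jets = antikt_cluster([(100.0, 0.0, 0.0), (1.0, 0.1, 0.1), (50.0, 2.0, 3.0)])
assert len(jets) == 2 and abs(jets[0][0] - 101.0) < 1e-9
```

The \(\min(p_{T,i}^{-2},p_{T,j}^{-2})\) weight is what makes anti-\(k_T\) cluster outward from hard particles, yielding the regular, cone-like jets preferred for background subtraction in heavy-ion events.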
The jet suppression measured at RHIC and the LHC has been analyzed using transport models, which find the jet quenching parameter \(\hat{q}\) to lie in the range \(2-4\) GeV\({}^{2}\)/fm over the temperature range \(300<T<500\) MeV probed by the QGP at RHIC and LHC [208]. Note that the JETSCAPE analysis [193] is based on inclusive hadron production only; a more recent JETSCAPE analysis of jet substructure [209] does not attempt to extract values for \(\hat{q}\).
Figure 34 shows the jet \(R_{\rm AA}\) in central Pb+Pb collisions in comparison with the \(R_{\rm AA}\) of charged hadrons. The hadron and jet \(R_{\rm AA}\) are both found to be strongly suppressed, with the jet \(R_{\rm AA}\) exhibiting stronger suppression than that of the inclusive hadrons at the same \(p_{T}\). At higher \(p_{T}\), jets are more suppressed than hadrons with the same \(p_{T}\), since the inclusive hadrons at a given \(p_{T}\) originate from energetic partons that fragment at late times and thus lose less energy in the medium than the combined energy loss of the entire parton shower that constitutes an average jet (see Section IX.2.2 for a more detailed discussion of jet fragmentation). The measurement of hadrons does not extend as high in \(p_{T}\) as that of jets, since the jets encompass the entire shower from the parton rather than just one (leading) hadron.
A summary plot of current jet \(R_{\rm AA}\) measurements from RHIC and LHC is shown in Fig. 35 for central (0-10%) Au+Au at RHIC and Pb+Pb at the LHC [214]. The ATLAS and CMS results represent full (electromagnetic and hadronic) calorimetric measurements of jets, the ALICE jets combine electromagnetic energy and charged particles, while the STAR jets are measured solely with charged particles, all with the same jet resolution parameter \(R=0.4\). The uncertainties are larger for the STAR and ALICE jet measurements and increase as the jet-\(p_{T}\) decreases. Several effects contribute to the increased uncertainty at low jet-\(p_{T}\): the dependence of the experiments on charged-particle tracking rather than calorimetry, the increased influence of the soft background at lower jet-\(p_{T}\), and greater dependence on the low-\(p_{T}\) cutoff. Also noticeable is the gap in \(p_{T}\) between the RHIC and LHC data, which is partly due to the circumstance that only the energy carried by charged particles is detected in the STAR measurements. The entire region \(p_{T}<100\) GeV/c is important to theoretical comparisons in order to better understand jet energy loss mechanisms and the response of the medium. Therefore, it is a focus of new experimental background and jet-isolation techniques and continued higher statistics data-taking.
The dependence of jet quenching on the color charge of the primary parton can also be derived from a comparison of jets initiated by a hard-scattered quark (quark jets) with those initiated by a gluon (gluon jets). Experimentally, this can be achieved statistically by comparing ensembles of photon-tagged jets (jets opposite in azimuth from an isolated photon) with inclusive jets. Event generators predict that the fraction of quark jets in a photon-tagged sample of jets in a typical kinematic range at the LHC is \(0.7-0.8\) as compared to a quark fraction of \(0.3-0.5\) for inclusive jets in the same range [215]. If the jet energy loss is proportional to the square of the color charge of the primary parton (\(C_{q}/C_{g}=4/9\)) as predicted by theory, a smaller quark energy loss should be reflected in less suppression, i. e. a larger \(R_{\rm AA}\) for photon-tagged
Figure 34: Comparison of \(R_{\rm AA}\) for charged hadrons from ALICE [172] and CMS [187], and jets from ALICE [210] and ATLAS [211] in central Pb+Pb collisions at \(\sqrt{s_{\rm NN}}=5.02\) TeV. Compilation from [7].
jets than for inclusive jets.
The jets opposite isolated photons will consist predominantly of quark jets, enabling potential discrimination between the energy loss of a primary quark with the medium and that of a mixture of quarks and gluons that make up the inclusive jet sample. These events with jets opposite a photon (referred to as \(\gamma\)-jet) were investigated and compared to inclusive jet production [217] in p+p and Pb+Pb interactions at \(\sqrt{s_{\rm NN}}\) = 5.02 TeV. Figure 36 displays a comparison of the \(R_{\rm AA}\) for \(\gamma\)-jet measurements at three centralities with an inclusive jet measurement at 0-10% centrality. As already seen for inclusive jets, the \(\gamma\)-jet measurements also exhibit increased suppression for more central collisions. However, as Fig. 36 highlights, the most central \(\gamma\)-jet \(R_{\rm AA}\) results show significantly less suppression than inclusive jets reflecting the enhanced presence of gluons with their larger energy loss in the inclusive sample.
#### iv.2.2 Jet Fragmentation and Jet Shape
It is important to note that a general difference between the jet and hadron \(p_{T}\)-spectra is that the hadron spectra result from fragmentation of the primary parton into a jet that contains a leading parton carrying above average momentum. Therefore, the fragmentation function plays an integral role in the difference between the hadron and jet \(p_{T}\)-spectra, and the jet spectrum is harder than that of inclusive hadrons. In fact, a hadron and a jet at a given \(p_{T}\) do not originate from partons with the same \(p_{T}\). The primary parton momentum, which is represented for the most part by the jet, must be convoluted with the fragmentation function in order to obtain the \(p_{T}\) of an individual hadron. Clearly, this entails the need to measure the fragmentation function in p+p collisions and its modification in A+A collisions. Similarly, the desire to understand the transverse momentum broadening of the jet shower by its interaction with the medium requires a quantitative understanding of the transverse jet shapes in p+p collisions and their modification in A+A collisions.
The fragmentation functions \(D(z)\) for charged hadrons have been measured in p+p and Pb+Pb collisions [211] for a variety of centralities [218]. Figure 37 shows the measured ratios \(R_{D(z)}\) of jet fragmentation into charged hadrons in central Pb+Pb collisions relative to p+p collisions as a function of \(z=p_{T}/p_{\rm jet}\). A strong enhancement is observed for hadrons at low \(z\), while a suppression is seen for hadrons in the intermediate region \(0.03<z<0.1\). This is consistent with a scenario in which partons that would normally contribute in this intermediate region interact with the medium, lose energy, and form hadrons at lower \(z\) resulting in the observed low-\(z\) enhancement. The slight enhancement observed for hadrons with \(z>0.5\), a kinematic region typically dominated by leading hadrons, may reflect a selection bias in favor of narrow jets, which do not interact as strongly with the medium as wider jets.
Since \(D(z)\) only provides a measure of the longitudinal fragmentation of jets, it is important to also measure the transverse structure of the jets to gain additional insight into the medium modification of the fragmentation process and the role of parton-medium interactions. This is commonly achieved by measuring the angular distribution of hadrons with respect to the jet axis within the jet cone. Figure 38 displays the ratio of the jet radial momentum distributions as a function of the angular distance \(\Delta r\) from the jet axis in Pb+Pb for various centrality intervals relative to that measured in p+p collisions for leading jets with \(p_{T}>120\) GeV/c, \(R=0.4\) and 0.7 GeV/c \(<p_{T}^{\rm track}<300\) GeV/c [219]. The Pb+Pb radial momentum distributions are enhanced over the p+p distribution for charged particles far from the jet
Figure 35: A compilation [214] of jet \(R_{\rm AA}\) measurements at RHIC and LHC [210; 211; 212; 213]. Measurements are for full jets at LHC and charged-particle jets at RHIC. See text for more details.
Figure 36: Jet \(R_{\rm AA}\) compilation from ATLAS for \(\gamma\)-jet and inclusive jets in \(\sqrt{s_{\rm NN}}\) = 5.02 TeV Pb+Pb collisions. Details in the legend and text. [From [216]]
axis and the enhancement increases with centrality primarily outside the jet cone (\(\Delta r>0.4\)). This behavior indicates that there is significant out-of-cone radiation associated with the jet [220]. Thus, jets defined with a larger cone radius \(R\) should recover more of this large-angle radiation than jets defined with a narrower cone and therefore should be expected to incorporate more sources of potential energy loss. The magnitude of the out-of-cone radiation will depend on the parton-medium interactions and also differences in the energy-loss mechanisms between quark and gluon jets.
Another promising probe of the mechanisms of jet-medium interactions is provided by jets with a leading \(b\)-quark (\(b\)-jets). Overall, these jets are observed to be broader than inclusive jets [221], with a broadening of the angular distribution of charged hadrons beyond \(R=0.2\) that increases significantly in Pb+Pb collisions for more central events and extends beyond the cone radius that defines the \(b\)-jet. Thus, the energy in \(b\)-jets is redistributed to larger angles in Pb+Pb collisions compared with p+p collisions. This finding is consistent with measurements of the \(R_{\rm AA}\) for \(R=0.2\) \(b\)-jets compared to inclusive jets, where the \(R_{\rm AA}\) appears larger for \(b\)-jets than that for inclusive jets in central Pb+Pb collisions [222]. In general, the \(b\)-jet measurements are suggestive of mass and color-charge effects in the mechanisms of jet energy loss in heavy-ion collisions. Higher statistics data and new measurements will be required to disentangle the various sources of these effects.
#### iv.2.3 Jet Substructure
We now turn to the emerging field of jet substructure measurements. As compared to inclusive jet measurements, jet substructure measurements seek to elucidate the dynamical evolution of the internal structure of the jet as it propagates through the QGP medium and thus aim to provide information on the microscopic processes leading to parton energy loss in the QGP. There are two possible approaches to this goal. One, which can be called the microscopic approach, strives for the complete reconstruction of the underlying parton propagation and kinematics in the QGP in the hope that this will permit one to distinguish and understand the energy loss processes and the response of the QGP to the evolving jet. The other, which can be called the global approach, aims at the precision measurement of semi-inclusive observables that are sensitive to the substructure of jets and can be rigorously calculated in QCD without the need for somewhat arbitrary kinematic cuts. We first discuss the microscopic approach.
In order to reconstruct the evolution or shower history of a jet and determine its parton energy-loss mechanisms in the medium, the parton splittings and interactions must be derived from the final jet constituents. The splittings can be investigated using a technique that involves grooming of the jets (one popular approach is Soft Drop [223]) to reduce background and then re-clustering [224] to determine the angular ordering in the QCD evolution of the jet. The jet substructure splittings can be characterized by the momentum fraction (\(z_{g}\)) and opening angle (\(\theta_{g}\)) of the first splitting after grooming, as shown in Fig. 39. This algorithm is well suited to analyze jet fragmentation in the vacuum, i. e. in p+p collisions, where the branching tree obeys angular ordering. Within a medium the angular ordering can be destroyed by medium-induced interactions that change the color flow within the branching jet, and the usefulness of this method is less well established.
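The Soft Drop condition can be sketched as follows, assuming the Cambridge/Aachen declustering sequence of the jet's hardest branch has already been extracted; the list-of-splittings input and default parameters are simplifications of ours:

```python
def soft_drop(declusterings, z_cut=0.1, beta=0.0, R=0.4):
    """Soft Drop grooming sketch: walk the declustering sequence of the
    hardest branch, ordered from wide to narrow angle, and keep the first
    splitting with z > z_cut * (dR / R)**beta; softer, wider splittings
    before it are groomed away.
    declusterings: list of (z, dR) pairs.
    Returns (z_g, theta_g) with theta_g = dR / R, or None if none passes."""
    for z, dR in declusterings:
        if z > z_cut * (dR / R) ** beta:
            return z, dR / R
    return None

# Two soft wide-angle splittings are groomed away; the third is tagged.
assert soft_drop([(0.02, 0.35), (0.05, 0.2), (0.3, 0.1)]) == (0.3, 0.25)
```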
Two variables that describe the splittings after grooming, \(z_{g}\) (the momentum fraction of the first splitting) and \(R_{g}\) (the angular opening of the first splitting), can be derived in theory and extracted from experiment in jet analyses [226; 228; 229]. These variables are typically plotted in a diagram known as the Lund Plane [227] (see Fig. 40), where \(k_{T}=p_{T,\rm subleading}\sin(R_{g})\) and \(\theta_{g}=R_{g}/R\), with \(R\) being the jet cone angle.
The different regions in the Lund plane are populated by splittings ranging from the non-perturbative at low \(\ln(k_{T})\) to perturbative at high \(\ln(k_{T})\). Wider splittings and soft wide-angle radiation populate lower values of \(\ln(1/\Delta R)\), where \(\Delta R\) is the angle between the splitting and the jet axis. Splittings that are more collinear correspond to higher values of \(\ln(1/\Delta R)\). The Lund Plane also provides insight into regions where coherence may take place.
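The mapping of a single splitting onto the Lund Plane axes can be sketched as follows, assuming the common convention in which \(k_T\) is the subleading-prong \(p_T\) times the sine of the splitting angle \(\Delta R\) (function and variable names are ours):

```python
import math

def lund_coordinates(pt_subleading, dR):
    """Lund Plane coordinates of one splitting:
    k_T = pT,subleading * sin(dR); returns (ln k_T, ln(1/dR))."""
    kt = pt_subleading * math.sin(dR)
    return math.log(kt), math.log(1.0 / dR)

# A hard collinear splitting sits at large ln(kT) and large ln(1/dR);
# soft wide-angle radiation sits at small values of both coordinates.
hard = lund_coordinates(20.0, 0.05)
soft = lund_coordinates(0.5, 0.4)
assert hard[0] > soft[0] and hard[1] > soft[1]
```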
Fully corrected measurements of \(z_{g}\) distributions in Pb+Pb are found to be consistent with those measured in p+p collisions over the entire range of jets measured. However, the \(\theta_{g}\) (and R\({}_{g}\)) distributions are narrower for smaller-angle jet splittings in Pb+Pb collisions, and the wider-angle splittings are significantly more suppressed
Figure 37: Ratio \(R_{D(z)}\) of jet fragmentation functions in central Pb+Pb collisions relative to those in p+p collisions, plotted as a function of \(z\). Details of the jet selection are given in the legend. [218]
relative to those in p+p [226; 229]. In central collisions, the values of the jet suppression factor \(R_{\rm AA}\) range between 0.75 for narrow jets and \(\sim 0.3\) for the widest jets. We already speculated that this phenomenon is responsible for the rise of \(R_{D(z)}\) for \(z\to 1\) in Fig. 37.
Presumably, the wider jets reflect incoherent interactions or larger gluon fractions and thus suffer more energy loss than narrow jets. These results are qualitatively in line with a recent JETSCAPE study of jet substructure modifications caused by jet-medium interactions [209], which confirms that parton scattering with the QGP at high virtuality is highly suppressed by coherence effects. The reduced interaction of highly virtual partons with the medium then leads to the enhancement of narrow jets relative to wide jets. Further studies along these lines could allow for a determination of the scale dependence of elastic parton scattering in the medium that goes beyond the jet quenching parameter \(\hat{q}\) and thereby yield insight into the scale dependence of the microscopic structure of the QGP.
A more global approach to the study of jet substructure, which does not rely on the use of jet shower simulations, is the measurement of energy-energy correlators (EEC) [230; 231] and, more generally, correlators involving track functions [232]. Track functions are asymptotic expectation values of observables, such as energy flow or conserved currents, integrated along a given angular direction (the track) pointing away from the interaction vertex. Their usefulness derives from the fact that they can (a) be rigorously defined in quantum field theory [233] and (b) are the natural objects measured by calorimeters with or without particle identification.
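Schematically, the two-point EEC of a single jet weights each particle pair by \(E_iE_j/E_{\rm jet}^2\) and bins it in the pair opening angle \(R_L\); a toy sketch (binning and normalization conventions vary between analyses, and the input format is ours):

```python
import math

def eec(particles, bins):
    """Two-point energy-energy correlator sketch for one jet:
    each pair contributes E_i * E_j / E_jet^2 to the bin containing
    its opening angle R_L = sqrt(dy^2 + dphi^2).
    particles: list of (E, y, phi); bins: list of R_L bin edges."""
    e_jet = sum(p[0] for p in particles)
    hist = [0.0] * (len(bins) - 1)
    for i, (ei, yi, phii) in enumerate(particles):
        for j in range(i + 1, len(particles)):
            ej, yj, phij = particles[j]
            dphi = abs(phii - phij)
            dphi = min(dphi, 2.0 * math.pi - dphi)
            rl = math.hypot(yi - yj, dphi)
            for b in range(len(bins) - 1):
                if bins[b] <= rl < bins[b + 1]:
                    hist[b] += ei * ej / e_jet ** 2
                    break
    return hist
```

Because each pair is weighted by energy, soft contamination is power-suppressed, which is one reason EECs are attractive observables in the high-multiplicity heavy-ion environment.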
Recent progress in the calculation of the renormalization group flow for EECs [231] and moments of track functions [232; 234] together with the demonstration of a universal scaling behavior of EECs in p+p data from
Figure 38: The ratio of the jet radial momentum distributions as a function of the angular distance \(\Delta r\) from the jet axis in Pb+Pb for various centrality intervals relative to those measured in p+p collisions. The CMS data are for leading jets with \(R=0.4\) and \(p_{T}>120\) GeV/c, and for charged particles with 0.7 GeV/c \(<p_{T}^{\rm(track)}<\) 300 GeV/c [219].
Figure 40: The Lund Plane representation [227] of the kinematic regions available within a jet. The \(\Delta R\) and \(k_{T}\) are the angle and transverse momentum of a gluon emission with respect to its parent parton.
Figure 39: Diagram of angular-ordered re-clustering of the constituents of a jet and the Soft Drop grooming procedure [223; 224] to reduce background, followed by re-clustering [225]. The identified splitting is shown in black and the groomed-away splittings in light blue. From [226].
LHC [235; 236] have raised interest in using such global jet substructure observables for the study of jet quenching in A+A collisions. As an example of this behavior, Fig. 41 shows the EEC restricted to charged hadrons for p+p collisions at LHC using CMS open data [235]. The magenta shaded region labeled "Quarks/Gluons" is well described by next-to-next-to-leading QCD perturbation theory [236] indicating that it is governed by perturbative parton showers. Ongoing research focuses on the measurement of the modifications of EECs and track function moments in p+A and A+A collisions where characteristic changes due to jet-medium interactions are predicted, which are sensitive to the dynamics of color coherence in the parton shower [237].
## X Collective flow
Not all signatures of the QGP that are now understood to be relevant and important were recognized as such in our 1996 review and are thus absent from Fig. 1. This section will be devoted to a brief discussion of those signatures that have had great phenomenological impact but were not fully appreciated before the advent of data from heavy-ion colliders. The most important and ubiquitous of these are the collective flow anisotropies \(v_{n}\), chief among them the elliptic flow coefficient \(v_{2}\).
Many-body systems exhibit collective flow that can be described by viscous hydrodynamics if the mean-free path \(\lambda_{f}\) of their constituents is short compared to the system size \(L\), i.e. if the Knudsen number \(Kn=\lambda_{f}/L\ll 1\). Before the advent of collider data, this condition was not expected to be satisfied by the QGP, because the strong long-range color force is screened in it, and lowest-order perturbative calculations of \(\lambda_{f}\) yield rather large values. Although some theorists argued otherwise [238], the general consensus was that the specific shear viscosity \(\eta/s\), where \(s\) is the entropy density, of the QGP was of order unity or larger, prohibiting well developed collective flow for fireballs of nuclear size.
Features of collective flow were initially observed in fixed-target experiments at the BEVALAC in 400 MeV/u Ca+Ca and Nb+Nb collisions [239] and 800 MeV/u Ar+Pb collisions [240]. A detailed characterization of collective flow in terms of directed and elliptic flow was performed in 158 GeV/u fixed-target Pb+Pb collisions [241] at the SPS. Data from Au+Au collisions at RHIC and later in Pb+Pb collisions at LHC clearly showed that the initial geometrical features of the QGP fireball are translated into characteristic collective flow patterns. For early summaries of these results and their interpretation see [42; 43; 4; 5; 6].
The geometric features imprinted on the fireball during the initial collision can be expressed in terms of eccentricities \(\varepsilon_{n}\) that measure the azimuthal anisotropies of the deposited energy density with respect to the beam axis. Hydrodynamics translates these geometric anisotropies into azimuthal anisotropies of the spectra of emitted particles, which are parameterized by flow coefficients \(v_{n}\) in the form
\[E\frac{d^{3}N}{dp^{3}}=\frac{1}{2\pi}\frac{d^{2}N}{p_{t}dp_{t}dy}\times \\ \left[1+\sum_{n=1}^{\infty}2v_{n}(p_{T})\cos[n(\phi-\Psi_{n})] \right], \tag{12}\]
where \(\Psi_{n}\) denotes the \(n\)-th order event plane.
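As an illustration of Eq. (12), one can draw azimuthal angles from a toy distribution with a known \(v_2\) and recover it as the average of \(\cos n(\phi-\Psi_n)\); real analyses must in addition estimate \(\Psi_n\) from the data and correct for its finite resolution, which this sketch ignores:

```python
import math
import random

def estimate_vn(phis, n, psi_n=0.0):
    """Estimate v_n = <cos n(phi - Psi_n)> per Eq. (12),
    assuming the event-plane angle Psi_n is known exactly."""
    return sum(math.cos(n * (phi - psi_n)) for phi in phis) / len(phis)

# Sample dN/dphi ∝ 1 + 2 v2 cos(2 phi) by accept-reject, then recover v2.
random.seed(1)
v2_true, phis = 0.1, []
while len(phis) < 200_000:
    phi = random.uniform(0.0, 2.0 * math.pi)
    if random.uniform(0.0, 1.0 + 2.0 * v2_true) < 1.0 + 2.0 * v2_true * math.cos(2.0 * phi):
        phis.append(phi)

assert abs(estimate_vn(phis, 2) - v2_true) < 0.01   # v2 recovered
assert abs(estimate_vn(phis, 3)) < 0.01             # no v3 in this toy distribution
```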
The magnitude of the observed \(v_{n}(p_{T})\) depends on the initial eccentricities \(\varepsilon_{n}\) and the specific shear viscosity \(\eta/s\). Since the \(\varepsilon_{n}\) can be reliably modeled based on our knowledge of nuclear structure and elementary nucleon-nucleon collisions, the data for \(v_{n}(p_{T})\) can be used to deduce the value of \(\eta/s\) from the data by means of a Bayesian model-data comparison. Here we can only present a few examples of the many published comparisons of viscous hydrodynamics simulations with experimental data. Figure 42 shows the flow coefficients \(v_{n}(p_{T})\) measured by ALICE [244] and ATLAS [245] in \(\sqrt{s_{\rm NN}}=5.02\) TeV Pb+Pb collisions compared with the results of hybrid model calculations using second-order viscous hydrodynamics with \(\eta/s=0.12\) to describe the QGP phase [246].
The fact that all flow components \(v_{n}(p_{T})\) can be described by the same hydrodynamic equation without the need for fine-tuning of the initial eccentricities presents clear evidence for a rapid "hydrodynamization" of the QGP fireball. Theoretical studies of the approach to viscous hydrodynamic motion in the context of kinetic theory and holographic models have shown that the onset of hydrodynamics can occur when the system is still quite far from local thermal equilibrium because of the presence of large viscous effects (see e. g. [247; 248]). Therefore, viscous deviations from thermal equilibrium must
Figure 41: The two-point energy-energy correlator restricted to charged hadrons, evaluated from CMS Open Data for p+p collisions at LHC. The data, which are plotted as a function of the relative angle \(R_{L}\) between the tracks, exhibit distinct scaling regimes associated with asymptotically free partons (at large \(R_{L}\)) and free hadrons (at small \(R_{L}\)). [From [235]]
be taken into account in calculations of thermal quantities during the early collision stage even when the QGP is already expanding as a fluid.
Figure 43 indicates that the elliptic flow \(v_{2}(p_{T})\) of charged hadrons in Au+Au (Pb+Pb) collisions remains the same in a fixed centrality bin \((20-30\%)\) over a large range of collision energies \(\sqrt{s_{\rm NN}}\) from 39 GeV to 2.76 TeV. As Fig. 2 suggests, the initial conditions of the fireball lie deep in the QGP regime over this energy range, and the colliding nuclei are sufficiently Lorentz contracted for the Bjorken model of a boost-invariant hydrodynamic expansion to be applicable at midrapidity. The observation that the \(v_{2}(p_{T})\) data all follow the same curve indicates that the elliptic flow is driven by the scale-invariant hydrodynamic expansion of a fireball whose initial geometric shape is the nuclear overlap region in the associated impact parameter window.
While the strength of the observed elliptic flow of inclusive charged hadrons points to its early generation during the expansion phase, it does not directly indicate whether the flow is created at the (deconfined) quark level. This information comes from characteristic differences between the elliptic flow of mesons and baryons [182; 250]. If the flow is carried by the valence quarks of a hadron, the elliptic flow functions of different hadrons will satisfy the scaling law
\[v_{2}^{(i)}(p_{T})/n_{q}^{(i)}=v_{2}^{(\rm q)}(p_{T}/n_{q}^{(i)})\,, \tag{13}\]
where \(n_{q}^{(i)}=2,3\) is the number of valence quarks of hadron species \(i\), and \(v_{2}^{(\rm q)}(p_{T})\) is the elliptic flow function for quarks. Figure 44 shows the valence quark scaled elliptic flow coefficient \(v_{2}/n_{q}\) measured by STAR [251] in \(\sqrt{s_{\rm NN}}=54.4\) GeV Au+Au collisions for five different hadron species containing strange quarks: the mesons \(K_{s}^{0},\phi\) and the baryons \(\Lambda,\Xi^{-},\Omega^{-}\). The flow coefficient \(v_{2}\) is plotted as a function of the variable \((m_{T}-m_{0})/n_{q}\), where \(m_{T}=\sqrt{p_{T}^{2}+m_{0}^{2}}\) is the transverse mass. Similar results for \(v_{2},v_{3},v_{4}\) have been obtained by ALICE in Pb+Pb collisions at LHC [252].
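The scaling law of Eq. (13) implies that plotting \(v_2/n_q\) against the scaled momentum collapses mesons and baryons onto a common quark-level curve, as a toy check shows; the quark flow curve below is an illustrative shape of ours, not a fit to data:

```python
def v2_quark(pt):
    """Toy quark-level elliptic flow curve (illustrative shape only)."""
    return 0.08 * pt / (1.0 + pt)

def v2_hadron(pt, n_q):
    """Coalescence prediction of Eq. (13): v2^(i)(pT) = n_q * v2^q(pT / n_q)."""
    return n_q * v2_quark(pt / n_q)

# Plotting v2/n_q against pT/n_q collapses mesons (n_q = 2) and
# baryons (n_q = 3) onto the common quark curve:
for x in (0.5, 1.0, 2.0):
    assert abs(v2_hadron(2 * x, 2) / 2 - v2_quark(x)) < 1e-12
    assert abs(v2_hadron(3 * x, 3) / 3 - v2_quark(x)) < 1e-12
```

The STAR data in Fig. 44 use \((m_T-m_0)/n_q\) rather than \(p_T/n_q\) as the scaling variable, which improves the collapse at low momentum, but the logic of the test is the same.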
The \(p_{T}\)-integrated flow coefficients \(v_{n}\) for \(n\geq 2\) provide a good measure of the specific shear viscosity \(\eta/s\), because the coefficients are increasingly sensitive to flow dissipation for growing values of \(n\)[253]. These coefficients have been measured by several LHC experiments in Pb+Pb collisions at \(\sqrt{s_{\rm NN}}=5.02\) TeV [254; 255; 256]. The data are in good agreement with hybrid model calculations that use values \(\eta/s\sim 0.1-0.2\) in the QGP
Figure 44: Valence quark scaled elliptic flow coefficient \(v_{2}/n_{q}\) for five different hadron species in \(\sqrt{s_{\rm NN}}=54.4\) GeV Au+Au collisions as a function of the scaling variable \((m_{T}-m_{0})/n_{q}\). The solid red line indicates a fit to the \(K_{s}^{0}\) data. [From [251]]
Figure 42: \(v_{n}(p_{T})\) (\(n=2,3,4,5\)) measured in \(20-30\%\) central Pb+Pb collisions at \(\sqrt{s_{\rm NN}}=5.02\) TeV by ALICE [244] and ATLAS [245]. The data are compared with simulations in a hybrid collision model [246] based on viscous hydrodynamical evolution of the QGP phase. [From [246]]
Figure 43: The \(v_{2}(p_{T})\) measured in \(20-30\%\) central Au+Au (Pb+Pb) collisions over the collision energies \(\sqrt{s_{\rm NN}}\) from 39 GeV to 2.76 TeV. The fact that the data all follow the same curve is indicative of elliptic flow that is driven by hydrodynamic expansion of a fireball with the initial geometric shape of the nuclear overlap associated with the impact parameter window. [From [249]]
phase.
The collision energy dependence of \(v_{2}\) of charged hadrons has been measured from \(\sqrt{s_{\rm NN}}\simeq 2\) GeV to 5.02 TeV in Au+Au (Pb+Pb) collisions in experiments at GSI, AGS, SPS, RHIC, and LHC. The data collected in Fig. 45 show that the physical mechanism driving the elliptic flow changes for \(\sqrt{s_{\rm NN}}<10\) GeV. The slow increase of \(v_{2}\) for \(\sqrt{s_{\rm NN}}>10\) GeV can be reconciled with the invariant behavior of \(v_{2}(p_{T})\) visible in Fig. 43 by the observation that the \(p_{T}\)-spectrum of charged hadrons continues to flatten with growing \(\sqrt{s_{\rm NN}}\) and thus samples larger values of \(p_{T}\) for higher collision energies.
The numerical value for the QCD transport parameter \(\eta/s\) that can be extracted from the RHIC and LHC data has systematic uncertainties that derive from the need to simultaneously fix other parameters of the transport models, such as the initial energy density, the granularity of the density fluctuations, and the earliest time at which viscous hydrodynamics becomes a valid description. Comprehensive model-data analyses using Bayesian methodology that take many of these uncertainties into account have been conducted in recent years. A recent analysis [256] allowing for a temperature-dependent specific shear viscosity is reproduced in Fig. 46, where the red curve shows the most probable value and the orange area covers the 90% likely region.
The importance of this result derives from the insight that values of \(\eta/s\sim 0.1-0.2\) require the QGP to be a strongly coupled fluid [257; 258]. In fact, this value establishes an exceptional role of the QGP as a nearly "perfect" fluid (see Fig. 46 for a comparison with other "good" fluids) with a sound dissipation coefficient that is near the quantum bound \((4\eta/3+\zeta)/s=(3\pi)^{-1}\)[259].
The information with respect to the initial azimuthal shape of the fireball that is gleaned from the collective flow measurements can be used to study the pathlength dependence of parton energy loss by measuring properties of the jet as a function of its angle relative to the flow anisotropy. Jets that are emitted along the major axis of the initial elliptic shape, created from the geometrical overlap of the colliding nuclei, must traverse a longer distance through the QGP and lose more energy than those emitted along the minor axis. Radiative energy loss of partons is predicted to grow quadratically with the pathlength [260], whereas collisional energy loss would depend linearly on the pathlength [261]. Measurements of the azimuthal anisotropy of the jet yield relative to the event plane can thus provide information on the mechanism by which partons lose energy. Such studies have been implemented using event shape engineering methods [165] to have better control of the initial geometrical event shapes for more precise pathlength determination. Results to date are consistent with the assumption of a dominance of radiative energy loss for light partons.
Analogous to the azimuthal correlation measurements of soft particles in an event, the azimuthal anisotropy of jets \(v_{n}^{\rm jet}\) can be measured with respect to the second harmonic event plane, after separating jets from the underlying event background. Displayed in Fig. 47 is a compilation [262] of results on the jet \(v_{2}^{\rm jet}\) and particle \(v_{2}^{\rm part}\) for semi-central collisions. The ATLAS calorimetric jet \(v_{2}^{\rm calo\,jet}\) and ALICE charged jet \(v_{2}^{\rm ch\,jet}\) are consistent with each other and exhibit a significant \(v_{2}^{\rm jet}\) up to large \(p_{T}\).4 Also shown are the \(v_{2}^{\rm part}\) of charged particles for comparison. The ALICE and ATLAS \(v_{2}^{\rm jet}\) measurements indicate pathlength-dependent parton energy loss.
Footnote 4: Note that for any initial parton \(p_{T}\), the particle \(p_{T}\) will be less than that of a charged jet and it is likewise less than that for a calorimetric jet due to the missing initial parton energy in the particles and charged jets.
Figure 45: The \(p_{T}\)-integrated elliptic flow \(v_{2}\) for Au+Au (Pb+Pb) collisions over the entire collision energy range covered by Au+Au (Pb+Pb) collisions at the GSI, AGS, SPS, RHIC, and LHC. See text for details. [From [7]]

Figure 46: Comparison of the specific shear viscosity \(\eta/s\) of the QGP extracted from heavy-ion collision data with the values measured for helium and water. [From [256]]

An attribute of the QGP fluid that was not anticipated at the time of our 1996 review [1] is vorticity. Because of the very low specific shear viscosity of the QGP, any vorticity that is seeded into the fluid at early times can survive for an extended period of time, as Kelvin's theorem states that circulation is strictly conserved in an ideal fluid. The seeding of vorticity in non-central heavy-ion collisions was first recognized in [265], where global hyperon polarization with respect to the collision plane was also identified as an experimental signature. Global \(\Lambda\)-hyperon polarization in the percent range was subsequently observed in Au+Au collisions at \(\sqrt{s_{\rm NN}}=7-200\) GeV [266]. The magnitude of the polarization can be related to the average vorticity of the QGP at the moment of hadronization and gives an average value \(|\vec{\omega}|=(9\pm 1)\times 10^{21}\,\mathrm{s}^{-1}\) for Au+Au collisions within the energy range studied in [266]. The observed magnitude can be explained as the transfer of vorticity into the QGP from the initial orbital angular momentum of the colliding nuclei that results in a spin polarization of the QGP fluid [267; 268]. The detailed vorticity pattern of the QGP fluid and the microscopic mechanisms of spin transfer into the QGP and its equilibration are areas of active research. In addition to spin polarization of hyperons, STAR and ALICE have also reported a nonzero spin alignment of several vector mesons (\(K^{*}\), \(\phi\)) [269; 270], the origins of which are not yet well understood.
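The quoted vorticity magnitude can be roughly reproduced with a back-of-the-envelope estimate (an assumption for illustration: for spin-1/2 particles with \(\hbar\omega\ll T\), the polarization satisfies \(P\approx\hbar\omega/(2T)\), so \(\omega\approx 2PT/\hbar\)):

```python
# Order-of-magnitude check of the average vorticity inferred from global
# Lambda polarization, assuming the thermal relation P ~ hbar*omega / (2*T).
hbar_MeV_s = 6.582e-22      # hbar in MeV * s
P_Lambda = 0.02             # ~2% global polarization (percent range)
T_MeV = 155.0               # hadronization temperature in MeV

omega = 2 * P_Lambda * T_MeV / hbar_MeV_s   # in s^-1
print(f"omega ~ {omega:.1e} s^-1")          # of order 10^21 s^-1
```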
## XI Equation of state
Interest in the equation of state of nuclear matter was the primary motivation for our field of research and the inception of experiments utilizing collisions of energetic heavy ions [271]. After initial studies of baryon-rich nuclear matter in the GeV range [272], the interest became focused on understanding the equation of state of excited QCD matter, which was originally a centerpiece of the RHIC experimental program as exemplified by the panel in Fig. 1 entitled "Temperature." This interest faded somewhat once lattice gauge theory became able to calculate the equation of state with high precision for zero to moderate net baryon densities (see [273] for a recent comprehensive review). Instead of the equation of state, the experiments have since focused mainly on dynamical phenomena, such as the dynamics of thermalization and effects of viscosity on the collective flow.
Experimental interest in the equation of state of nuclear matter has now shifted back to much lower collision energies, in the few-GeV range as explored in the second RHIC beam energy scan, where the net baryon density of the matter created is above that covered by reliable lattice calculations. A main focus of this investigation is to determine whether the smooth crossover between hadronic matter and QGP at low net baryon density gives way to a first-order phase transition with a threshold critical point. The primary experimental probes for a first- or second-order phase transition are large-scale spinodal density fluctuations and critical net-baryon number fluctuations, respectively. Hints of such critical behavior were observed in net-proton number fluctuations in the first RHIC beam energy scan [108] but require confirmation with much higher statistics data [109].
The recent detection of gravitational waves from binary neutron-star mergers [274; 275] has sparked interest in connecting the equation of state governing the collapse of binary-neutron star systems to the equation of state of matter probed in heavy-ion collisions in the few-GeV energy range [276]. The shape of the gravitational wave signal is expected to be sensitive to the degrees of freedom in the core of neutron stars. Calculations are currently focused on exploring connections to the dynamical evolution of few-GeV heavy-ion collisions in terms of the pressure, temperature, entropy, and isospin [277; 278; 279; 280]. A first-order phase transition to quark matter is expected to look very different than a smooth crossover, and the next generation of gravitational wave observatories may be able to distinguish between the two. Furthermore, the lowest energy probes in the second RHIC beam energy scan and the future Compressed Baryonic Matter (CBM) experiment at FAIR [281] are expected to provide the data necessary for a quantitative comparison with neutron-star merger observations.
## XII Small systems
The motivation for colliding ultra-relativistic heavy ions at RHIC and the LHC was that at such high energies large nuclei would be most likely to create hot QCD matter in the thermodynamic limit. Notwithstanding this argument, there was also an old idea that even high-energy proton-proton collisions could produce a statistical system that might exhibit aspects of hydrodynamic behavior [282; 283; 284]. After the advent of QCD, the question
remained as to whether a statistical system composed of locally deconfined quarks and gluons could be produced in sufficiently energetic p+p collisions and behave as a hot fluid, i.e. a QGP. However, attempts to find evidence for QGP formation in high-multiplicity p+p collisions at the TEVATRON remained inconclusive [285].
The general consensus remains that minimum-bias p+p collisions do not involve the formation of a QGP, and such events are commonly used as a baseline against which nuclear modifications of hard probes are measured. This does not rule out that a QGP fireball can be produced in rare high-multiplicity p+p events. The first clear evidence for behavior that resembles a collective flow pattern was observed by CMS in p+p events at \(\sqrt{s_{\rm NN}}\) = 7 TeV with more than 90 charged tracks [286]. Angular correlation measurements at \(\sqrt{s_{\rm NN}}\) = 2.76 and 13 TeV [287; 288] confirmed this observation. Similar observations of collective flow patterns have been made for p+Pb collisions at LHC [289; 290; 291; 292] and in p+Au, d+Au, and \({}^{3}\)He+Au collisions at RHIC [293] (see [294] for a review). The similarity of the collective behavior seen in p+p, p+A, and A+A systems can be explained if a strongly coupled QGP is formed in all these systems [295].
Surprisingly, on the other hand, no evidence has been found for the formation of a QGP in p+Pb collisions at \(\sqrt{s_{\rm NN}}\) = 5.02 TeV in modifications of hard probes, such as jets [296]. It is presently unclear how the finding of apparent collectivity in soft particle emission can be reconciled with the absence of evidence for jet quenching. One possibility is that the soft collective behavior observed in p+p and p+A collisions is generated without hydrodynamic flow (see e. g. [297]). It is well known in other fields, e. g. plasma physics, that collective motion of particles can be created by non-hydrodynamical mechanisms, such as the action of coherent fields [298; 299]. If the origin of collective behavior in p+p and p+A collisions were found to have an alternative explanation, our current understanding of the origin of flow patterns in A+A collisions would have to be revisited.
## XIII Summary and Outlook
Nearly three decades of experimental and theoretical research have affirmed the scientific strategy aimed at the discovery and characterization of the quark-gluon plasma that was described in [1]. Extensive measurements have converted the qualitative expectations for the quark-gluon plasma signatures summarized in Fig. 1 into quantitative knowledge. As with any preconceived strategy, adjustments were made in reaction to new insights gathered along the way. Some signatures have been found to be less useful or more difficult to measure than originally thought. Others have proven to be immensely valuable including several that were unanticipated or some that were known in principle but underappreciated.
The average initial energy density reached in the most central heavy-ion collisions in Fig. 2 exceeds the threshold for QGP formation above \(\sqrt{s_{\rm NN}}\sim\) 10 GeV. In the high energy range, \(\sqrt{s_{\rm NN}}>\) 50 GeV, this can be deduced from the measured charged-particle multiplicity \(dN_{\rm ch}/dy\) and the short hydrodynamization time deduced from elliptic flow. At lower energies, it requires some assumptions about the dynamics of energy deposition, which is no longer quasi-instantaneous. The argument here is based in part on the continuity of the valence quark number scaling of elliptic flow that is observed down to \(\sqrt{s_{\rm NN}}\) = 11.5 GeV, although increasing deviations from the scaling show up for \(\sqrt{s_{\rm NN}}<\) 39 GeV indicating a growing contribution to flow from the hadronic phase [300; 301].
Identical particle (HBT) interferometry has revealed that a fireball of nuclear size and a lifetime of \(4-10\) fm/c acts as the common source of the hadrons that are emitted. As already mentioned above, the composition of the emitted hadrons and the fluctuations of conserved quantities have been used to map the chemical properties of the hadronizing fireball. Future experiments with extended pseudorapidity coverage will allow balance functions of conserved quantities to reach farther back into the history of the evolution of the fireball and track when chemical equilibrium is first established.
The intense investigation of the collective flow patterns in experiments has made it possible to quantitatively determine fluid properties of the QGP. The specific shear viscosity of the QGP has been found to lie in the range \(0.05<\eta/s<0.2\) depending on \(T/T_{c}\), establishing this novel QGP state of matter as the most "perfect" fluid known. Furthermore, the valence quark scaling of the flow pattern has provided strong evidence that the collective flow is generated at the quark level in a fluid in which quarks are not confined as hadrons. The spin polarization of hyperons adds a new dimension to the exploration of the flow pattern by its sensitivity to the vorticity and thermal shear of the fluid. In the future, more precise measurements of the interaction of heavy quarks with this fluid will further probe the strongly-coupled nature of the QGP by yielding quantitative determinations of its diffusion constants.
Among soft signatures, the enhancement of strange hadron production and, more generally, the complete chemical equilibration of all light hadron species at common thermodynamic conditions have provided strong evidence for the transition from hadronic matter to a deconfined state - the QGP - at a temperature \(T_{c}\approx\) 155 MeV, in excellent agreement with lattice-QCD simulations. As shown in Fig.18, the boundary between hadronic matter and the QGP has been mapped by two different methods over a range of baryon chemical potentials \(\mu_{B}\) up to at least 300 MeV and agrees well with expectations from lattice gauge theory.
The measured suppression pattern of heavy quarkonium states, especially the \(\Upsilon\) states, and their observed sequential melting provide further confirmation for the deconfinement of quarks and gluons in the QGP, but the mechanisms responsible for the suppression pattern are
more complex than originally thought. In particular, the reduced suppression, by regeneration at the phase boundary, of the J/\(\psi\) in A+A collisions at LHC compared to that at RHIC energies provides clear evidence that charm quarks are deconfined in the QGP.
Electromagnetically interacting and hard QCD signatures provide complementary information about the properties of the QGP. Measurements of the spectrum of direct photons and the invariant mass spectrum of dileptons have yielded lower bounds for the temperature at which the QGP initially thermalizes. These spectra exhibit thermal temperatures substantially above the transition temperature \(T_{c}\). The spectrum of dileptons in the mass region of the \(\rho\)-meson confirm the hadronization (chemical freeze-out) temperature deduced from the hadron yields.
An unambiguous detection of chiral symmetry restoration will require high-precision measurements of the lepton pair spectrum in the mass region 1 GeV \(<M_{\ell^{+}\ell^{-}}<\) 2 GeV. Theoretical predictions indicate a difference of approximately 15% between models that involve chiral symmetry restoration in the QGP phase and models that do not. Measurements of this level of precision require very precise knowledge of the background from semi-leptonic charm decays and are out of reach for the existing detectors. The proposed ALICE 3 [302] and NA60+ [303] experiments aim at reaching the required precision to be able to detect the enhancement of the dielectron spectrum at invariant masses above the \(\phi\)-meson peak characteristic of \(\rho-a_{1}\) mixing that is the signature of chiral symmetry restoration.
The most versatile, but also the most complex probes of the QGP are energetic quarks and gluons, created by hard scatterings during the first moments of the nuclear collision. Such hard-scattered partons materialize as jets, in which the initial momentum of the primary parton is shared among many hadrons. A number of different observables have been found that encode the energy loss of the primary parton on its path through the QGP, beginning with the suppression of the inclusive yield of high p\({}_{T}\) hadrons in A+A collisions observed from the mid-range of RHIC energies to those of the LHC and corroborated by the observation of a strong suppression of the high-\(p_{T}\) hadrons opposite in azimuth to a high-\(p_{T}\) trigger hadron.
These measurements involving individual hadrons were subsequently extended to jets and di-jets, where a similar quenching of jets attributable to parton energy loss was observed. More recently, differential measurements of jets and their substructure have emerged as tools to investigate the mechanism that causes parton energy loss and help determine the conditions under which energy loss is primarily radiative or when elastic processes dominate. In parallel, flavor tagging of jets has given evidence for a mass and color charge dependence of the parton energy loss in the QGP.
According to our current understanding, the energy loss of the primary parton and the redistribution of its momentum within the jet is controlled by just a few parameters characterizing the medium. In a dilute or thin medium, they are the density of scattering centers and the range of the color force in the medium. In a dense, thick medium, the jet quenching parameter \(\hat{q}\) encodes the transverse scattering power per unit length of the medium. The suppression factor \(R_{\rm AA}\) of inclusive hadrons provides a direct measurement of \(\hat{q}\) under the assumption that the energy loss of the primary parton is predominantly caused by gluon radiation induced by scattering in the medium. The dimensionless parameter \(\hat{q}/T^{3}\) is found to lie in the \((\pm 1\sigma)\) range \(3.4<\hat{q}/T^{3}<5.8\) at RHIC and \(2.4<\hat{q}/T^{3}<5.0\) at LHC [193], which is consistent with values for \(\hat{q}/T^{3}\) required to describe the inclusive jet suppression measured at RHIC and LHC.
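To connect the dimensionless ratio \(\hat{q}/T^{3}\) to physical units, one can restore the factors of \(T\) and \(\hbar c\); the temperature used below is an illustrative assumption, not a value taken from the text:

```python
# Convert the dimensionless transport coefficient qhat/T^3 into GeV^2/fm
# at an assumed plasma temperature (illustrative numbers only).
hbar_c = 0.1973          # GeV * fm
T = 0.37                 # GeV, rough initial temperature at RHIC (assumption)
qhat_over_T3 = 4.6       # midpoint of the RHIC range quoted in the text

qhat = qhat_over_T3 * T**3 / hbar_c   # GeV^2/fm
print(f"qhat ~ {qhat:.2f} GeV^2/fm")  # of order 1 GeV^2/fm
```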
The values of \(\eta/s\) and \(\hat{q}/T^{3}\) deduced from the heavy ion data by Bayesian model-data comparison are two examples where experimental data have helped bracket fundamental transport coefficients of the QGP that cannot (yet) be reliably calculated in QCD. A fundamental question that is still to be resolved, is to what extent it is possible to probe the dynamical evolution of the matter created in heavy-ion collisions from partons in the initial state to the thermal quarks and gluons of the QGP and, finally, into hadrons. This quest involves the investigation and understanding of the parton structure of the initial state, of the energy sharing mechanisms that produce a thermal plasma, and the response of the QGP to hard probes that are sensitive to a range of different scales.
Future measurements with better resolution and higher statistics will probe more deeply to reveal the various scales involved in the interactions of jets with the QGP. Investigation of coherence effects, both theoretically and through jet substructure measurements, will determine the extent to which the medium is able to resolve the interactions of the parton as it propagates through the QGP. By constraining the dependence on the color charge and mass of the parton they can further confirm the scattering dynamics underpinning parton energy loss. At the same time, these differential measurements become effective probes of the shower evolution inside a jet and contribute to our understanding of QCD.
Over the next few years, the new sPHENIX detector [304] at RHIC and the existing RHIC and LHC experiments with upgraded detectors will make precision measurements of jet modifications in heavy-ion collisions. In the future, a newly proposed ALICE 3 [302] experiment is expected to join in that endeavor at the LHC. Parallel advances in the theory of jet interactions with the QGP medium will be required to turn the wealth of expected data into firm insights into the structure and properties of the QGP and the internal dynamics of jet formation. The remarkable success achieved for soft QGP probes, where data-theory comparisons within well-defined frameworks have enabled quantitative measurements of QGP bulk properties, can serve as a guide for the scientific approach aimed at elucidating the microscopic structure of the QGP over the wider range of scales that is accessible
with hard QCD probes.
Another increasingly central direction of investigation is research into the parton structure of cold nuclear matter. A better understanding of the structure of the colliding nuclei is important as one attempts to understand the initial conditions of a high-energy collision of nuclei. An example of such investigations is the monitoring of sub-nucleonic proton shape fluctuations by studying J/\(\psi\) production in diffractive e+p collisions [305]. Alternative experimental approaches utilize J/\(\psi\) photo-production in ultra-peripheral d+Au collisions [306] and coherent J/\(\psi\) production in ultra-peripheral Pb+Pb collisions [307].
Understanding the interaction of cold nuclear matter with hard probes is also an essential aspect in the interpretation of the nuclear modification factor \(R_{\rm AA}\) as already discussed in conjunction with the physics of quarkonium suppression and jet quenching. Phenomena that will benefit from additional experimental investigations in p+A collisions include nuclear suppression or enhancement effects at relatively low \(p_{T}\) that are alternatively attributed to shadowing of nuclear parton distributions, momentum broadening of incident partons, or final-state absorption.
In the more distant future, precision studies of the parton structure of nucleons and complex nuclei will be the scientific focus of the electron-ion collider (EIC) [308]. Generalized parton distributions and transverse momentum dependent parton distributions will be used to map the transverse parton structure of the proton, while diffractive e+p and e+A collisions will provide precise quantitative constraints on the saturation of gluon distributions at small Bjorken-\(x\). Besides being valuable in their own right, these results will help reduce the model dependence of the initial state of relativistic heavy-ion collisions.
In conclusion, the strategy for the investigation of hot QCD matter outlined in [1] has been successful beyond expectations. As in any field of physics, experimental and theoretical progress have gone hand-in-hand, leading to changes in research emphasis and readjustments of the strategy. Many questions at the core of the initial RHIC research program have been answered and given way to new ones [12]. Among those most important are the following. How does the partonic microscopic structure of the QGP evolve into a "perfect" fluid at longer distance scales? How small can a QGP that behaves fluid-like be? What is the structure of the QCD phase diagram at high net baryon density? We can be optimistic that improved experimental techniques, supported by theoretical advances, and combined with creative and novel approaches will provide information over the next decade that will help answer these questions.
###### Acknowledgements.
We thank Roberta Arnaldi, Steffen Bass, Hannah Bossi, Helen Caines, Charles Gale, Marek Gazdzicki, Laura Havener, Joseph Kapusta, Raghav Kunnawalkam Elayavalli, Andras Laszlo, Yen-Jie Lee, Michael Lisa, Rongrong Ma, Ian Moult, Jean-Francois Paquet, Ralf Rapp, Lijuan Ruan, Mike Sas, Jurgen Schukraft, Enrico Scomparin, Alba Soto-Ontoso, and William Zajc for valuable input during the writing of this article. We especially thank Hannah Bossi for assistance in various aspects of the preparation of figures for this manuscript. We appreciate helpful comments on a draft version of the manuscript made by Yasuyuki Akiba, Frank Geurts, Peter Jacobs, Georgios Konstantinos Krintiras, Krishna Rajagopal, Lijuan Ruan, Bjorn Schenke, Jurgen Schukraft, Andre Stahl, Marco Van Leeuwen, and Urs Wiedemann. We are indebted to the ALICE, ATLAS, CMS, PHENIX and STAR collaborations for their extensive experimental results. We acknowledge support from the Office of Science of the U.S. Department of Energy, JH from grant DE-SC004168 and BM from grant DE-FG02-05ER41367. BM also acknowledges support by Yale University during Spring 2022 and Spring 2023.
2301.06741 | Faster Sinkhorn's Algorithm with Small Treewidth | Zhao Song, Tianyi Zhou | 2023-01-17T07:55:15Z | http://arxiv.org/abs/2301.06741v1

# Faster Sinkhorn's Algorithm with Small Treewidth
###### Abstract
Computing optimal transport (OT) distances such as the earth mover's distance is a fundamental problem in machine learning, statistics, and computer vision. In this paper, we study the problem of approximating the general OT distance between two discrete distributions of size \(n\). Given the cost matrix \(C=AA^{\top}\) where \(A\in\mathbb{R}^{n\times d}\), we propose a faster Sinkhorn's algorithm to approximate the OT distance when the matrix \(A\) has treewidth \(\tau\). Our algorithm improves the state-of-the-art result [4] from \(\widetilde{O}(\epsilon^{-2}n^{2})\) time to \(\widetilde{O}(\epsilon^{-2}n\tau)\) time.
###### Contents
* 1 Introduction
* 1.1 Our Result
* 1.2 Related Work
* 1.3 Technique Overview
* 2 Preliminary
* 2.1 Problem Formulation
* 2.2 Inequalities
* 2.3 Treewidth preliminaries
* 3 Sinkhorn's Algorithm Analysis
* 3.1 Definitions
* 3.2 Bounded \(\max-\min\)
* 3.3 Potential function \(\widetilde{\psi}\)
* 3.4 Upper bounding for potential function
* 3.5 Iteration complexity bound
* 3.6 Induction
* 4 Running Time with small treewidth setting
* 4.1 Implicit form of \(K\)
* 4.2 Running time of Sinkhorn with small treewidth
* 4.3 Correctness of rounding algorithm
* 4.4 Running time of rounding algorithm
* 4.5 Running time of OT Distance by Sinkhorn
* 5 Symmetric
## 1 Introduction
Optimal transport is a mathematical theory that deals with the problem of finding the most efficient way to transport goods or materials from one place to another. The goal is to minimize the cost of transportation, which is usually measured in terms of the distance traveled or the amount of resources used. Many problems in computational sciences require the use of optimal transport to compare probability measures or histograms, e.g., via the Wasserstein or earth mover's distance [22, 23, 24]. Optimal transport has a wide range of applications, such as bag-of-words for natural language processing [10], multi-label classification [13], unsupervised learning [1, 2], semi-supervised learning [20], statistics [14, 15], and other applications [16]. In particular, due to its applications in image processing, it has recently become crucial to have efficient ways of computing, or approximating, the optimal transport or the Wasserstein distances between two measures.
There is a long line of research on OT problem. [17] apply Sinkhorn's algorithm to the entropy-regularized OT optimization problem. As it was recently shown in [1], this approach allows to find an \(\epsilon\)-approximation for an OT distance in \(\widetilde{O}(\epsilon^{-3}n^{2})\) time. In terms of the dependence on \(n\), this result improves on the complexity \(\widetilde{O}(n^{3})\) achieved by the network simplex method or interior point methods [22], applied directly to the OT optimization problem, which is a linear program [1]. The cubic dependence on \(\epsilon\) prevents approximating OT distances with good accuracy. Then, in [11], they proposed an algorithm with the complexity bound \(\widetilde{O}(\epsilon^{-2}n^{2})\) based on the Sinkhorn's algorithm.
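For concreteness, a minimal NumPy sketch of the Sinkhorn iteration for the entropy-regularized OT problem is given below; the regularization value, iteration count, and problem size are illustrative assumptions:

```python
import numpy as np

def sinkhorn(C, r, c, gamma=0.1, n_iter=1000):
    """Alternately rescale rows and columns of K = exp(-C/gamma) so that
    the transport plan B(u, v) matches the marginals r and c."""
    K = np.exp(-C / gamma)                  # entropic kernel
    u, v = np.ones_like(r), np.ones_like(c)
    for _ in range(n_iter):
        u = r / (K @ v)                     # enforce row marginals
        v = c / (K.T @ u)                   # enforce column marginals
    return u[:, None] * K * v[None, :]      # transport plan B(u, v)

rng = np.random.default_rng(0)
n = 5
C = rng.random((n, n))                      # cost matrix
r = c = np.full(n, 1.0 / n)                 # uniform marginals
B = sinkhorn(C, r, c)
print(np.abs(B.sum(axis=1) - r).max())      # row-marginal violation, near 0
```

After the final column update the column marginals are matched exactly by construction, while the row marginals are matched up to the convergence tolerance; the analyses discussed in this section bound how many such iterations are needed for an \(\epsilon\)-approximation.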
The treewidth of a matrix is a measure of the complexity of its structure and plays a crucial role in the design and analysis of algorithms for manipulating and processing matrices. In particular, the treewidth of a matrix can be used to determine the efficiency of algorithms that rely on tree decompositions, such as dynamic programming and divide-and-conquer techniques. In the small treewidth setting, algorithms for matrix manipulation and processing can often achieve near-linear running time, making them highly efficient and scalable. This has important implications for a wide range of applications, including interior point methods [18, 19], computing the John ellipsoid [16], and streaming algorithms [20]. Treewidth is also important in graph structure theory, particularly in the study of graph minors by Robertson and Seymour [15]. Many results [1] have shown that NP-hard problems can be solved in polynomial time on classes of graphs with bounded treewidth.
The best previous work solves this problem in \(O(n^{2})\) time. It is natural to ask the following question:

_Is it possible to solve the problem in \(o(n^{2})\) time under some mild assumption, e.g., bounded treewidth?_

In this paper, we provide a positive answer to this question. The comparison between our results and previous work is shown in Table 1.
\begin{table}
\begin{tabular}{|l|l|l|} \hline
**References** & **Method** & **Time Complexity** \\ \hline
[22] & Network Simplex Method & \(n^{3}\) \\ \hline
[1] & Sinkhorn’s algorithm & \(\epsilon^{-3}n^{2}\) \\ \hline
[11] & Sinkhorn’s algorithm & \(\epsilon^{-2}n^{2}\) \\ \hline Theorem 4.5 & Sinkhorn’s algorithm & \(\epsilon^{-2}n\tau\) \\ \hline \end{tabular}
\end{table}
Table 1: Given the cost matrix \(C=AA^{\top}\in\mathbb{R}^{n\times n}\), let \(\tau\) denote the treewidth of matrix \(A\). Let \(\epsilon\) denote the accuracy parameter. Since \(\tau\leq n\), our algorithm (Theorem 4.5, Algorithm 3) is always better than [11].
### Our Result
We now formally state our main theorem.
**Theorem 1.1**.: _Given the cost matrix \(C=AA^{\top}\) where \(A\) has treewidth \(\tau\), we can find the transport plan for the \(\epsilon\)-approximation of the optimal transport distance in_
\[O(\epsilon^{-2}n\tau\|C\|_{\infty}^{2}\ln n)\]
_time._
Compared with [1], which solves the problem in \(O(\epsilon^{-2}n^{2}\|C\|_{\infty}^{2}\ln n)\) time, we propose an algorithm that constructs the kernel matrix in its implicit form. By leveraging the low-treewidth property, our running time has no dependence on \(n^{2}\).
### Related Work
**OT Problems** OT distances, also called Earth Mover's Distances [14], are progressively being adopted as an effective tool in a wide range of situations, from computer graphics [1] to supervised learning [13], unsupervised density fitting [1] and generative model learning ([12, 1, 1, 15, 16]). There is a long line of work on reducing the time complexity of solving OT. In [1], they proved that, for regularized OT, near-linear time complexity can be achieved by both the Sinkhorn and Greenkhorn algorithms. They demonstrated that both algorithms have a complexity of \(\widetilde{O}(\epsilon^{-3}n^{2})\), where \(n\) represents the number of atoms (or the dimension) of the probability measures being considered and \(\epsilon\) is the desired level of tolerance. In [11], the complexity of the Sinkhorn algorithm was improved to \(\widetilde{O}(\epsilon^{-2}n^{2})\). Additionally, an adaptive primal-dual accelerated gradient descent (APDAGD) algorithm was introduced, which was shown to have a complexity of \(\widetilde{O}(\min\{\epsilon^{-1}n^{9/4},\epsilon^{-2}n^{2}\})\). With a carefully designed Newton-type algorithm, [1, 1] solve the OT problem by making use of a connection to matrix-scaling problems. [1, 2] gave a complexity bound of \(\widetilde{O}(\epsilon^{-1}n^{2})\) for Newton-type algorithms.
**Treewidth Problems** Treewidth is a concept from structural graph theory that has been studied in relation to fixed-parameter tractable algorithms in various fields, including combinatorics, integer-linear programming, and numerical analysis. [12] shows that several problems can be reduced to matrix factorizations efficiently, including computing the determinant, computing the rank, and finding a maximum matching, which leads to \(O(\tau^{O(1)}\cdot n)\) time algorithms, where \(\tau\) is the width of the given tree decomposition of the graph. [1] shows that a number of NP-hard problems, such as Independent Set, Hamiltonian Circuit, Steiner Tree, and Travelling Salesman, can be solved with run-times that depend only linearly on the problem size and exponentially on treewidth, as a result of dynamic programming. By leveraging the small treewidth setting, [1] proposed an algorithm that solves the linear programming problem with run-time nearly matching the fastest run-time for solving the sub-problem \(Ax=b\). [13] proposed a space-efficient interior point method (IPM) in the streaming model: for linear programs with treewidth \(\tau\), they solve them in \(\widetilde{O}(n\tau)\) space, where \(n\) is the dimension of the feature space. [11] shows that, when the constraint matrix has treewidth \(\tau\), the John Ellipsoid problem can be solved in \(O(n\tau^{2})\) time. The small treewidth setting has also been applied to semidefinite programs: in [1], they give the first SDP solver that runs in time linear in the number of variables under this setting.
### Technique Overview
**Analysis.** We first provide bounds for the iterates \(u_{k},v_{k}\) and an optimal solution \((u^{*},v^{*})\) of Eq. (5). Then, we introduce the following convex function of \((\widehat{u},\widehat{v})\):
\[\langle\mathbf{1}_{n},B(\widehat{u},\widehat{v})\mathbf{1}_{n}\rangle-\langle \widehat{u},B(u_{k},v_{k})\mathbf{1}_{n}\rangle-\langle\widehat{v},B(u_{k},v_{ k})^{\top}\mathbf{1}_{n}\rangle.\]
The gradient of the above function vanishes at \((\widehat{u},\widehat{v})=(u_{k},v_{k})\), so the point \((u_{k},v_{k})\) is the minimizer of this function.
Therefore, we can show that
\[\widetilde{\psi}(u_{k},v_{k})\leq\langle u_{k}-u_{*},B_{k}\mathbf{1}_{n}-r \rangle+\langle v_{k}-v_{*},B_{k}^{\top}\mathbf{1}_{n}-c\rangle\]
Then, for each iteration of the algorithm, we upper bound the r.h.s. and get
\[\widetilde{\psi}(u_{k},v_{k})\leq R\cdot(\|B_{k}\mathbf{1}_{n}-r\|_{1}+\|B_{ k}^{\top}\mathbf{1}_{n}-c\|_{1}).\]
where the inequality follows from the bounds for the iterates \(u_{k},v_{k}\) and an optimal solution \((u^{*},v^{*})\).
Next, by using this upper bound for \(\widetilde{\psi}\) and Lemma 2.8 we have:
\[\widetilde{\psi}(u_{k},v_{k})-\widetilde{\psi}(u_{k+1},v_{k+1})\] \[\geq \max\{\frac{\widetilde{\psi}(u_{k},v_{k})^{2}}{2R^{2}},\frac{ \epsilon_{0}^{2}}{2}\},\]
By induction, we then prove that the potential function \(\widetilde{\psi}\) is upper bounded by \(\frac{2R^{2}}{k+\ell-1}\), where \(\ell=\frac{2R^{2}}{\widetilde{\psi}(u_{1},v_{1})}\). Finally, by using a switching strategy, we obtain the following upper bound on the total number of iterations \(k\) of Sinkhorn's algorithm:
\[k\leq 2+\frac{4R}{\epsilon_{0}}.\]
**Running time.** Given the cost matrix \(C=MM^{\top}\), where \(M\in\mathbb{R}^{n\times d}\) has treewidth \(\tau\), we leverage the fact that it admits a succinct Cholesky factorization and \(\operatorname{nnz}(C)=O(n\tau)\).

In each iteration of Sinkhorn's algorithm (Algorithm 1), we have to compute \(B(u,v)=\operatorname{diag}(e^{u})K\operatorname{diag}(e^{v})\), where \(K_{i,j}:=\exp(-C_{i,j}/\gamma)\). Writing down \(K\) explicitly requires \(O(n^{2})\) time. To bypass this issue, we write \(K\) in the implicit form \(K_{i,j}:=A_{i,j}+D_{i,j}\), where \(A_{i,j}=e^{-C_{i,j}/\gamma}-1\) and \(D_{i,j}=1\), so that matrix \(A\) is as sparse as matrix \(C\). Also, we represent the matrix \(D\) as \(ww^{\top}\), where \(w=\mathbf{1}_{n}\). Leveraging the facts that \(\operatorname{nnz}(A)=O(n\tau)\) and that \(D\) is a rank-1 matrix, we improve the per-iteration running time of the Sinkhorn algorithm from \(O(n^{2})\) to \(O(n\tau)\).
For the rounding algorithm (Algorithm 4) applied to the transport plan \(B\), we also write the transport plan in an implicit fashion and do the computation in \(O(n\tau)\) time. Note that we _never_ write down \(B,B_{0},B_{1}\) or the output \(G\) explicitly. When computing \(B\mathbf{1}_{n}\), we leverage the implicit form of \(B\) and carry out the computation as

\[\operatorname{diag}(e^{u_{k}})A\operatorname{diag}(e^{v_{k}})\mathbf{1}_{n}+\operatorname{diag}(e^{u_{k}})(ww^{\top})\operatorname{diag}(e^{v_{k}})\mathbf{1}_{n}.\]
As \(\operatorname{nnz}(A)=O(n\tau)\), computing \(A\operatorname{diag}(e^{v_{k}})\mathbf{1}_{n}\) takes \(O(n\tau)\) time. Similarly, when computing \(BX\), where \(X\) is a diagonal matrix, we leverage the implicit form of \(B\) and carry out the computation as
\[\operatorname{diag}(e^{u_{k}})AX\operatorname{diag}(e^{v_{k}})+\operatorname {diag}(e^{u_{k}})(ww^{\top})X\operatorname{diag}(e^{v_{k}}).\]
As \(\operatorname{nnz}(A)=O(n\tau)\), computing \(AX\) takes \(O(n\tau)\) time, and \(AX\) is also \(O(n\tau)\)-sparse.
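As a concrete illustration of these implicit matrix-vector products, the sketch below (our own illustration, not the paper's Algorithm 2; `implicit_matvec` is a hypothetical helper name) applies \(B(u,v)=\operatorname{diag}(e^{u})(A+ww^{\top})\operatorname{diag}(e^{v})\) to a vector using only the sparse entries of \(A\) and the rank-1 factor \(w\), and checks it against the dense kernel \(K\):

```python
import math

def implicit_matvec(u, v, A_rows, w, x):
    # y = diag(e^u) (A + w w^T) diag(e^v) x, using only the sparse rows of A.
    # A_rows[i] maps column j to A_ij; total cost is O(nnz(A) + n).
    n = len(u)
    z = [math.exp(v[j]) * x[j] for j in range(n)]   # diag(e^v) x
    s = sum(w[j] * z[j] for j in range(n))          # w^T diag(e^v) x
    return [math.exp(u[i]) * (sum(a * z[j] for j, a in A_rows[i].items()) + w[i] * s)
            for i in range(n)]

# Check against the dense kernel K_ij = exp(-C_ij / gamma) = A_ij + 1.
gamma = 0.5
C = [[0.0, 1.0, 0.0], [1.0, 0.0, 2.0], [0.0, 2.0, 0.0]]
n = len(C)
A_rows = [{j: math.exp(-C[i][j] / gamma) - 1.0 for j in range(n) if C[i][j] != 0.0}
          for i in range(n)]                         # A is zero wherever C is zero
w = [1.0] * n
u, v, x = [0.1, -0.2, 0.3], [0.0, 0.5, -0.1], [1.0, 2.0, 3.0]
dense = [sum(math.exp(u[i]) * math.exp(-C[i][j] / gamma) * math.exp(v[j]) * x[j]
             for j in range(n)) for i in range(n)]
fast = implicit_matvec(u, v, A_rows, w, x)
assert all(abs(a - b) < 1e-12 for a, b in zip(dense, fast))
```

The cost of the implicit product is proportional to \(\operatorname{nnz}(A)+n\), which is \(O(n\tau)\) in the small treewidth setting.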
Finally, we note that in \(\widetilde{O}(\epsilon^{-2}n\tau)\) time we can approximate the transport plan for the OT distance problem.
**Roadmap.**
We first introduce all required preliminaries in Section 2. Then, we provide the analysis of Sinkhorn's algorithm in Section 3. In Section 4, we present the faster Sinkhorn algorithm under the small treewidth setting and apply it to approximate the OT distance.
## 2 Preliminary
For a positive integer \(n\), we denote \([n]=\{1,2,\cdots,n\}\). We use \(\mathbf{1}_{n}\) to denote the length-\(n\) vector whose entries are all ones.

For a vector \(a\), we denote by \(e^{a}\) and \(\ln a\) its entry-wise exponential and natural logarithm, respectively. We write \(a_{k,i}\) for the \(i\)-th coordinate of the vector \(a\) at the \(k\)-th iteration.
For a matrix \(A\in\mathbb{R}^{n\times n}\), we define \(\|A\|_{\infty}:=\max_{i,j\in[n]}|A_{i,j}|\). We write \(A_{i,j}\) for the entry in the \(i\)-th row and \(j\)-th column of matrix \(A\). We use \(e^{A}\) and \(\ln A\) to denote its entry-wise exponential and natural logarithm, respectively. We denote by \(\operatorname{vec}(A)\) the vector in \(\mathbb{R}^{n^{2}}\) obtained from \(A\) by writing its columns one below another. For two matrices \(A,B\), we denote their inner product by \(\langle A,B\rangle\). We define the \(n\)-dimensional simplex as \(\triangle_{n}:=\{x\in\mathbb{R}^{n}_{+}:\sum_{i=1}^{n}x_{i}=1\}\).
For a vector \(x\in\mathbb{R}^{n}\), we define its \(\ell_{p}\) norm to be \(\|x\|_{p}:=(\sum_{i=1}^{n}|x_{i}|^{p})^{1/p}\). For two vectors \(x,y\), we define the inner product \(\langle x,y\rangle=\sum_{i=1}^{n}x_{i}y_{i}\).
The definition of entropy is given as the following:
**Definition 2.1** (Entropy).: _We define the entropy \(H(p)\) of vector \(p\) by_
\[H(p):=\sum_{i=1}^{n}p_{i}\log(\frac{1}{p_{i}}).\]
_Similarly, for a matrix \(P\in\mathbb{R}^{n\times n}_{+}\), we define the entropy \(H(P)\) entrywise as_

\[H(P):=\sum_{i=1}^{n}\sum_{j=1}^{n}P_{i,j}\log\frac{1}{P_{i,j}}.\]
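As a quick illustration of these entropy formulas (our own sketch; `vector_entropy` and `matrix_entropy` are hypothetical helper names), the following computes \(H(p)\) and the standard entrywise matrix entropy \(\sum_{i,j}P_{i,j}\log(1/P_{i,j})\), using the convention \(0\cdot\log(1/0)=0\):

```python
import math

def vector_entropy(p):
    # H(p) = sum_i p_i * log(1 / p_i), with the convention 0 * log(1/0) = 0.
    return sum(p_i * math.log(1.0 / p_i) for p_i in p if p_i > 0)

def matrix_entropy(P):
    # Entrywise entropy of a nonnegative matrix: sum_{i,j} P_ij * log(1 / P_ij).
    return sum(vector_entropy(row) for row in P)

# The uniform distribution on n atoms maximizes H(p), with value log(n).
n = 4
assert abs(vector_entropy([1.0 / n] * n) - math.log(n)) < 1e-12
```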
### Problem Formulation
We first introduce the definition of OT problem.
**Definition 2.2**.: _Given a matrix \(C\) with small tree width (e.g. \(C=AA^{\top}\) where \(A\in\mathbb{R}^{n\times d}\)), the optimal transport problem is defined as:_
\[\min_{X} \ \langle C,X\rangle\] s.t. \[\ X\in\mathbb{R}^{n\times n}_{+}\] \[X\mathbf{1}_{n}=r\] \[X^{\top}\mathbf{1}_{n}=c\]
_where \(\mathbf{1}_{n}\in\mathbb{R}^{n}\) denotes a vector where every entry is \(1\)._
Next, we give the definition of the regularized OT problem.
**Definition 2.3**.: _Given a strongly convex regularizer \(\mathcal{R}(X)\), e.g. negative entropy or squared Euclidean norm, the regularized optimal transport problem is defined as:_
\[\min_{X} \langle C,X\rangle+\gamma\mathcal{R}(X)\] (1) s.t. \[X\in\mathbb{R}_{+}^{n\times n}\] \[X\mathbf{1}_{n}=r\] \[X^{\top}\mathbf{1}_{n}=c\]
_where \(\gamma>0\) denotes the regularization parameter._
The goal for this paper is to find the approximation for the transportation plan \(\widehat{X}\) defined as follows:
**Definition 2.4** (\(\epsilon\)-approximation).: _The \(\epsilon\)-approximation for the OT distance is defined as_
\[\langle C,\widehat{X}\rangle\leq \min_{X}\langle C,X\rangle+\epsilon\] (2) s.t. \[X\in\mathbb{R}_{+}^{n\times n}\] \[X\mathbf{1}_{n}=r\] \[X^{\top}\mathbf{1}_{n}=c\]
_where \(\widehat{X}\) denotes the approximation for the transportation plan._
For simplicity, we introduce the definition of \(\mathcal{U}_{r,c}\subset\mathbb{R}_{+}^{n\times n}\).
**Definition 2.5**.: _Given the OT problem \(\arg\min_{X\in\mathcal{U}_{r,c}}\langle X,C\rangle\), we define_
\[\mathcal{U}_{r,c}:=\{X\in\mathbb{R}_{+}^{n\times n}:X\mathbf{1}_{n}=r,X^{\top }\mathbf{1}_{n}=c\}\]
_where \(\mathbf{1}_{n}\) is the all-ones vector in \(\mathbb{R}^{n}\), \(C\in\mathbb{R}_{+}^{n\times n}\) is a given cost matrix, and \(r\in\mathbb{R}^{n},c\in\mathbb{R}^{n}\) are given vectors with positive entries that sum to one._
Next, we provide a lemma about the transport plan \(X\).
**Lemma 2.6** ([11]).: _For any cost matrix \(C\in\mathbb{R}^{n\times n}\), \(\mathcal{U}_{r,c}\subset\mathbb{R}_{+}^{n\times n}\) and \(r,c\in\triangle_{n}\), the minimization program_
\[X_{\gamma}:=\arg\min_{X\in\mathcal{U}_{r,c}}\langle X,C\rangle+ \gamma\cdot\mathcal{R}(X),\]
_where \(\gamma>0\) is the regularization parameter and \(\mathcal{R}(X)\) is a strongly convex regularizer, has a unique minimum at \(X_{\gamma}\in\mathcal{U}_{r,c}\) of the form \(X_{\gamma}=MAN\), where \(A:=\exp(-\frac{1}{\gamma}C)\) and \(M,N\in\mathbb{R}_{+}^{n\times n}\) are both diagonal matrices. The matrices \((M,N)\) are unique up to a constant factor._
### Inequalities
We introduce Hölder's inequality:

**Lemma 2.7** (Hölder's inequality).: _If \(p>1\) and \(q>1\) are such that_
\[\frac{1}{p}+\frac{1}{q}=1\]
_then_
\[\|ab\|_{1}\leq\|a\|_{p}\|b\|_{q}.\]
We also provide Pinsker's inequality.

**Lemma 2.8** (Pinsker's inequality).: _Let \(P\) and \(Q\) be two distributions defined on the universe \(U\). Then,_

\[\operatorname{KL}(P\|Q)\geq\frac{1}{2}\cdot\|P-Q\|_{1}^{2},\]

_where \(\operatorname{KL}(P\|Q)\) is the \(\operatorname{KL}\)-divergence (taken with the natural logarithm) between \(P\) and \(Q\)._
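A quick numerical sanity check of Pinsker's inequality (our own sketch, with the KL divergence taken in nats and the constant \(1/2\) as used later in the proof of Theorem 3.8):

```python
import math
import random

def kl(p, q):
    # KL divergence in nats; assumes q_i > 0 wherever p_i > 0.
    return sum(p_i * math.log(p_i / q_i) for p_i, q_i in zip(p, q) if p_i > 0)

def l1_dist(p, q):
    return sum(abs(p_i - q_i) for p_i, q_i in zip(p, q))

random.seed(0)
for _ in range(100):
    p = [random.random() + 1e-3 for _ in range(6)]
    q = [random.random() + 1e-3 for _ in range(6)]
    sp, sq = sum(p), sum(q)
    p = [x / sp for x in p]   # normalize to the simplex
    q = [x / sq for x in q]
    # Pinsker: KL(p || q) >= (1/2) * ||p - q||_1^2.
    assert kl(p, q) >= 0.5 * l1_dist(p, q) ** 2 - 1e-12
```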
### Treewidth preliminaries
We begin by introducing the definition of treewidth for a given matrix.
**Definition 2.9** (Treewidth \(\tau\)).: _Given a matrix \(A\in\mathbb{R}^{n\times d}\), we construct its graph \(G=(V,E)\) as follows: the vertex set is the set of columns \([d]\); an edge \((i,j)\in E\) exists if and only if there is a \(k\in[n]\) such that \(A_{k,i}\neq 0\) and \(A_{k,j}\neq 0\). Then, the treewidth of the matrix \(A\) is the treewidth of the constructed graph. In particular, every column of \(A\) is \(\tau\)-sparse._
Next, we present the definition for Cholesky factorization.
**Definition 2.10** (Cholesky Factorization).: _Given a positive-definite matrix \(P\), there exists a unique Cholesky factorization \(P=LL^{\top}\in\mathbb{R}^{d\times d}\), where \(L\in\mathbb{R}^{d\times d}\) is a lower-triangular matrix with real and positive diagonal entries._
We also provide the running time of computing the Cholesky factorization.
**Lemma 2.11** ([14, 15]).: _Given a positive definite matrix \(M\in\mathbb{R}^{d\times d}\), we can decompose it by using Cholesky decomposition \(M=LL^{\top}\) in time_
\[\Theta(\sum_{j=1}^{d}|\mathcal{L}_{j}|^{2}),\]
_where \(|\mathcal{L}_{j}|\) is the number of nonzero entries in the \(j\)-th column of \(L\)._
Then, we introduce some results based on the Cholesky factorization of a given matrix with treewidth \(\tau\):
**Lemma 2.12** ([14, 15, 16]).: _For any matrix \(A\in\mathbb{R}^{n\times n}\) with treewidth \(\tau\), we can compute the Cholesky factorization \(AA^{\top}=LL^{\top}\in\mathbb{R}^{n\times n}\) in \(O(n\tau^{2})\) time, where \(L\in\mathbb{R}^{n\times n}\) is a lower-triangular matrix with real and positive entries. \(L\) satisfies the property that every column is \(\tau\)-sparse._
**Claim 2.13** ([15, 16, 17, 18]).: _Given \(L=M^{\top}M\), where \(M\in\mathbb{R}^{m\times n}\) has treewidth \(\tau\), we have \(\operatorname{nnz}(L)=O(n\tau)\)._

Proof.: We first show that \(\operatorname{nnz}(L)=O(m)\). Let \(M\in\mathbb{R}^{m\times n}\) denote the edge-vertex incidence matrix of a graph \(G=(V,E)\), where \(|E(G)|=m\) and \(|V(G)|=n\). The Laplacian matrix of graph \(G\) is \(L=M^{\top}M\), which also equals \(D-A\), where \(D\) is the degree matrix and \(A\) is the adjacency matrix of graph \(G\). As \(\operatorname{nnz}(A)=O(m)\), \(\operatorname{nnz}(D)=O(n)\) and \(m\geq n-1\) for a connected graph, we have
\[\operatorname{nnz}(L)=O(m)+O(n)=O(m). \tag{3}\]
Next, we show that the number of edges \(m\) of graph \(G\) is bounded by \(O(n\tau)\). The maximal graphs with treewidth \(\tau\) are the \(\tau\)-trees, which are constructed by starting with a \((\tau+1)\)-clique and iteratively adding vertices of degree \(\tau\) whose neighbours form a \(\tau\)-clique. By counting the edges in the \((\tau+1)\)-clique and the edges incident to the \(n-\tau-1\) vertices iteratively added to the \(\tau\)-tree, the total number of edges in a \(\tau\)-tree with \(n\) vertices is
\[\binom{\tau+1}{2}+\tau(n-\tau-1)=O(n\tau). \tag{4}\]
Since any graph \(G\) with treewidth \(\tau\) is a subgraph of a \(\tau\)-tree, \(O(n\tau)\) is an upper bound on \(|E(G)|=m\). By combining Eq. (3) and Eq. (4), we have \(\operatorname{nnz}(L)=O(n\tau)\).
Hence, we complete the proof.
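The edge count in Eq. (4) can be checked constructively (our own sketch; `tau_tree_edges` is a hypothetical helper that builds a \(\tau\)-tree by attaching each new vertex to the clique formed by its \(\tau\) predecessors):

```python
from itertools import combinations

def tau_tree_edges(n, tau):
    # Start from a (tau+1)-clique on vertices 0..tau, then attach each new
    # vertex v to its tau predecessors, which form a tau-clique.
    edges = set(combinations(range(tau + 1), 2))
    for v in range(tau + 1, n):
        for u in range(v - tau, v):
            edges.add((u, v))
    return edges

# |E| = C(tau+1, 2) + tau * (n - tau - 1) = O(n * tau), matching Eq. (4).
n, tau = 50, 3
assert len(tau_tree_edges(n, tau)) == (tau + 1) * tau // 2 + tau * (n - tau - 1)
```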
## 3 Sinkhorn's Algorithm Analysis
```
1:procedureSinkhornAlgorithm(\(c,r,\epsilon_{0}\))\(\triangleright\) Theorem 3.8
2:\(\triangleright\) Accuracy \(\epsilon_{0}\)
3:\(k\gets 0\)
4:\(u_{0}\gets 0\)
5:\(v_{0}\gets 0\)
6:while\(\|B(u_{k},v_{k})\mathbf{1}_{n}-r\|_{1}+\|B(u_{k},v_{k})^{\top}\mathbf{1}_{n}-c \|_{1}\geq\epsilon_{0}\)do
7:if\(k\mod 2=0\)then
8:\(u_{k+1}\gets u_{k}+\ln r-\ln(B(u_{k},v_{k})\mathbf{1}_{n})\)
9:\(v_{k+1}\gets v_{k}\)
10:else
11:\(v_{k+1}\gets v_{k}+\ln c-\ln(B(u_{k},v_{k})^{\top}\mathbf{1}_{n})\)
12:\(u_{k+1}\gets u_{k}\)
13:endif
14:\(k\gets k+1\)
15:endwhile
16:return\(B(u_{k},v_{k})\).
17:endprocedure
```
**Algorithm 1** Sinkhorn's Algorithm
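For concreteness, here is a minimal dense reference implementation of Algorithm 1 (our own sketch; it materializes \(K\) in \(O(n^{2})\) and does not use the treewidth speedups of Section 4):

```python
import math

def sinkhorn(C, r, c, gamma, eps0, max_iter=10000):
    """Dense sketch of Algorithm 1: alternately rescale the rows and columns of
    B(u, v) = diag(e^u) K diag(e^v), K_ij = exp(-C_ij / gamma), until the l1
    marginal violation drops below eps0."""
    n = len(r)
    K = [[math.exp(-C[i][j] / gamma) for j in range(n)] for i in range(n)]
    u = [0.0] * n
    v = [0.0] * n

    def B():
        return [[math.exp(u[i]) * K[i][j] * math.exp(v[j]) for j in range(n)]
                for i in range(n)]

    for k in range(max_iter):
        Bk = B()
        row = [sum(Bk[i]) for i in range(n)]
        col = [sum(Bk[i][j] for i in range(n)) for j in range(n)]
        err = sum(abs(row[i] - r[i]) for i in range(n)) \
            + sum(abs(col[j] - c[j]) for j in range(n))
        if err < eps0:
            break
        if k % 2 == 0:   # even step: fix the row marginal r
            u = [u[i] + math.log(r[i]) - math.log(row[i]) for i in range(n)]
        else:            # odd step: fix the column marginal c
            v = [v[j] + math.log(c[j]) - math.log(col[j]) for j in range(n)]
    return B()

# A 2x2 toy instance: the scaled plan matches both marginals up to eps0.
C = [[0.0, 1.0], [1.0, 0.0]]
r, c = [0.5, 0.5], [0.5, 0.5]
P = sinkhorn(C, r, c, gamma=0.1, eps0=1e-6)
assert abs(sum(P[0]) - 0.5) < 1e-6 and abs(P[0][0] + P[1][0] - 0.5) < 1e-6
```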
In Section 3.1, we provide some definitions used in the Sinkhorn algorithm. In Section 3.2, we provide bounds related to \(u\in\mathbb{R}^{n}\) and \(v\in\mathbb{R}^{n}\). In Section 3.3, we define the potential function \(\widetilde{\psi}\). In Section 3.4, we provide the upper bound on \(\widetilde{\psi}\). In Section 3.5, we show the iteration complexity bound of Sinkhorn's algorithm. In Section 3.6, we provide the induction proof for the upper bound on the potential function.
### Definitions
We first introduce some definitions to simplify the derivations.
**Definition 3.1**.: _We define matrix function \(B:\mathbb{R}^{n}\times\mathbb{R}^{n}\to\mathbb{R}^{n\times n}\) as follows: for any given vectors \(u,v\in\mathbb{R}^{n}\)_
\[B(u,v):=\operatorname{diag}(e^{u})K\operatorname{diag}(e^{v})\]
_where \(\mathrm{diag}(a)\in\mathbb{R}^{n\times n}\) is the diagonal matrix with the vector \(a\in\mathbb{R}^{n}\) on the diagonal and \(K\in\mathbb{R}^{n\times n}\) is a matrix which is defined as_
\[K_{i,j}:=\exp(-C_{i,j}/\gamma).\]
**Definition 3.2**.: _We define function \(\psi:\mathbb{R}^{n}\times\mathbb{R}^{n}\to\mathbb{R}\) as follows: for any given vectors \(u,v\in\mathbb{R}^{n}\)_
\[\psi(u,v):=\mathbf{1}_{n}^{\top}B(u,v)\mathbf{1}_{n}-\langle u,r \rangle-\langle v,c\rangle,\]
_where \(B\) is defined in Definition 3.1._
We consider the Sinkhorn-Knopp algorithm (Algorithm 1), which solves the following minimization problem introduced in Lemma 2 of [13]:
\[\min_{u,v\in\mathbb{R}^{n}}\psi(u,v), \tag{5}\]
where \(\psi\) is defined in Definition 3.2.
Problem Eq. (5) is the dual formulation of Eq. (1) when we choose \(\mathcal{R}(X)=-H(X)\).
Here, we give the high-level idea of the complexity proof for Sinkhorn's algorithm.

We first show how to obtain bounds on the iterates \(u_{k},v_{k}\) and an optimal solution \((u_{*},v_{*})\) of Eq. (5).

Next, we show that, at each iteration, \(\widetilde{\psi}(u_{k},v_{k})\) (Definition 3.5) is upper bounded by \(R\) times

\[\|B(u_{k},v_{k})\mathbf{1}_{n}-r\|_{1}+\|B(u_{k},v_{k})^{\top}\mathbf{1}_{n}-c\|_{1}.\]

Eventually, by using this bound on \(\widetilde{\psi}(u_{k},v_{k})\), we establish our complexity result for Sinkhorn's algorithm.
**Definition 3.3**.: _We define \(R\) as_
\[R:=-\ln(K_{\min}\cdot\min_{i,j\in[n]}\{r_{i},c_{j}\}),\]

_where_

\[K_{\min}:=\min_{i,j\in[n]}K_{i,j}=e^{-\|C\|_{\infty}/\gamma}.\]
### Bounding \(\max-\min\)
We first present a tool related to the bounds for \(u_{k}\in\mathbb{R}^{n},v_{k}\in\mathbb{R}^{n},u_{*}\in\mathbb{R}^{n}\) and \(v_{*}\in\mathbb{R}^{n}\).
**Lemma 3.4**.: _Let \(k\geq 0\) and \(u_{k}\in\mathbb{R}^{n},v_{k}\in\mathbb{R}^{n}\) be generated by Algorithm 1 and \((u_{*},v_{*})\in\mathbb{R}^{n}\times\mathbb{R}^{n}\) be a solution of Eq. (5). Then_
\[\max_{i\in[n]}u_{k,i}-\min_{i\in[n]}u_{k,i} \leq R,\ \max_{j\in[n]}v_{k,j}-\min_{j\in[n]}v_{k,j}\leq R, \tag{6}\] \[\max_{i\in[n]}u_{*,i}-\min_{i\in[n]}u_{*,i} \leq R,\ \max_{j\in[n]}v_{*,j}-\min_{j\in[n]}v_{*,j}\leq R,\]
_where \(R\) is defined in Definition 3.3._
Proof.: First, we prove the bound for \(u_{k}\in\mathbb{R}^{n}\). As \(u,v\) are initialized as \(\mathbf{0}_{n}\), the inequality holds for \(k=0\). Given that \(k-1\) is even, the variable \(u\) was updated at iteration \(k-1\), so \(B(u_{k},v_{k})\mathbf{1}_{n}=r\) by the algorithm's construction.
Hence, for each \(i\in[n]\), we have
\[e^{u_{k,i}}K_{\min}\langle\mathbf{1}_{n},e^{v_{k}}\rangle \leq \sum_{j=1}^{n}e^{u_{k,i}}K_{i,j}e^{v_{k,j}} \tag{7}\] \[= [B(u_{k},v_{k})\mathbf{1}_{n}]_{i}\] \[= r_{i}\] \[\leq 1,\]
where the first step follows from the definition of \(K_{\min}\), the second step follows from the definition of \(B\), the third step follows from \(B(u_{k},v_{k})\mathbf{1}_{n}=r\) and the last step follows from the definition of probability simplex \(r\).
Hence, by reorganizing Eq. (7) we have
\[\max_{i\in[n]}u_{k,i}\leq\ -\ln(K_{\min}\langle\mathbf{1}_{n},e^{v_{k}}\rangle). \tag{8}\]
On the other hand, since \(0\leq K_{i,j}\leq 1\), for each \(i\in[n]\) we have
\[e^{u_{k,i}}\langle\mathbf{1}_{n},e^{v_{k}}\rangle\] \[\geq \sum_{j=1}^{n}e^{u_{k,i}}K_{i,j}e^{v_{k,j}}\] \[= [B(u_{k},v_{k})\mathbf{1}_{n}]_{i}\] \[= r_{i}\]
where the first step follows from \(K_{i,j}\leq 1\), the second step follows from the definition of \(B\) and the last step follows from \(B(u_{k},v_{k})\mathbf{1}_{n}=r\).
We also have
\[\min_{i\in[n]}u_{k,i}\geq\ \min_{i\in[n]}\ln(\frac{r_{i}}{\langle\mathbf{1}_{n },e^{v_{k}}\rangle})=\ln(\frac{\min_{i\in[n]}r_{i}}{\langle\mathbf{1}_{n},e^{ v_{k}}\rangle}).\]
The latter equality and Eq. (8) give
\[\max_{i\in[n]}u_{k,i}-\min_{i\in[n]}u_{k,i}\leq-\ln(K_{\min}\min_{i\in[n]}r_{i})\leq R.\]

The bounds for \(v_{k}\) and for the optimal solution \((u_{*},v_{*})\) follow analogously.
### Potential function \(\widetilde{\psi}\)
To simplify derivations, we define \(\widetilde{\psi}\) as follows:
**Definition 3.5**.: _We define \(\widetilde{\psi}\) as_
\[\widetilde{\psi}(u,v):=\psi(u,v)-\psi(u_{*},v_{*})\]
_where \(\psi\) is defined in Definition 3.2 and \((u_{*},v_{*})\) is an optimal solution of Eq. (5)._
**Claim 3.6**.: _We have_
\[\widetilde{\psi}(u,v) =\langle\mathbf{1}_{n},B(u,v)\mathbf{1}_{n}\rangle-\langle \mathbf{1}_{n},B(u_{*},v_{*})\mathbf{1}_{n}\rangle+\langle u_{*}-u,r\rangle+ \langle v_{*}-v,c\rangle.\]
Proof.: We can get
\[\widetilde{\psi}(u,v) =\psi(u,v)-\psi(u_{*},v_{*})\] \[=\langle\mathbf{1}_{n},B(u,v)\mathbf{1}_{n}\rangle-\langle \mathbf{1}_{n},B(u_{*},v_{*})\mathbf{1}_{n}\rangle+\langle u_{*}-u,r\rangle+ \langle v_{*}-v,c\rangle.\]
where the first step follows from the definition of \(\widetilde{\psi}\), the second step follows from the definition of \(\psi\).
### Upper bounding for potential function
Here, we provide a lemma which will be used later to bound the iteration complexity.
**Lemma 3.7**.: _Let \(k\geq 1\) and \(u_{k},v_{k}\in\mathbb{R}^{n}\) be output of Algorithm 1. We denote \(B_{k}:=B(u_{k},v_{k})\). Then, we have_
\[\widetilde{\psi}(u_{k},v_{k})\leq R\cdot(\|B_{k}\mathbf{1}_{n}-r \|_{1}+\|B_{k}^{\top}\mathbf{1}_{n}-c\|_{1}).\]
Proof.: Fix \(k\geq 1\) and consider the following convex function of \((\widehat{u},\widehat{v})\):
\[\langle\mathbf{1}_{n},B(\widehat{u},\widehat{v})\mathbf{1}_{n} \rangle-\langle\widehat{u},B(u_{k},v_{k})\mathbf{1}_{n}\rangle-\langle \widehat{v},B(u_{k},v_{k})^{\top}\mathbf{1}_{n}\rangle.\]
The gradient of the convex function vanishes at \((\widehat{u},\widehat{v})=(u_{k},v_{k})\), so the point \((u_{k},v_{k})\) is its minimizer.
Hence,
\[\widetilde{\psi}(u_{k},v_{k}) =[\langle\mathbf{1}_{n},B_{k}\mathbf{1}_{n}\rangle-\langle u_{k},B_{k}\mathbf{1}_{n}\rangle-\langle v_{k},B_{k}^{\top}\mathbf{1}_{n}\rangle]\] \[\quad-[\langle\mathbf{1}_{n},B(u_{*},v_{*})\mathbf{1}_{n}\rangle- \langle u_{*},B_{k}\mathbf{1}_{n}\rangle-\langle v_{*},B_{k}^{\top}\mathbf{1 }_{n}\rangle]\] \[\quad+\langle u_{k}-u_{*},B_{k}\mathbf{1}_{n}-r\rangle+\langle v _{k}-v_{*},B_{k}^{\top}\mathbf{1}_{n}-c\rangle\] \[\leq\langle u_{k}-u_{*},B_{k}\mathbf{1}_{n}-r\rangle+\langle v_{ k}-v_{*},B_{k}^{\top}\mathbf{1}_{n}-c\rangle. \tag{9}\]
where the first step follows from the definition of \(\widetilde{\psi}\) and Claim 3.6, and the inequality follows because \((u_{k},v_{k})\) is the minimizer of the convex function above. Next, we bound the r.h.s. of the inequality. At each iteration, either \(B_{k}\mathbf{1}_{n}=r\) or \(B_{k}^{\top}\mathbf{1}_{n}=c\) holds, so we have \(\langle\mathbf{1}_{n},B_{k}\mathbf{1}_{n}\rangle=1\) and \(\langle\mathbf{1}_{n},B_{k}\mathbf{1}_{n}-r\rangle=0\).
Taking \(a=0.5\cdot(\max_{i\in[n]}u_{k,i}+\min_{i\in[n]}u_{k,i})\). Then, we have
\[\langle u_{k},B_{k}\mathbf{1}_{n}-r\rangle =\langle u_{k}-a\mathbf{1}_{n},B_{k}\mathbf{1}_{n}-r\rangle\] \[\leq\|u_{k}-a\mathbf{1}_{n}\|_{\infty}\|B_{k}\mathbf{1}_{n}-r\|_{1}\] \[=0.5\cdot(\max_{i\in[n]}u_{k,i}-\min_{i\in[n]}u_{k,i})\|B_{k} \mathbf{1}_{n}-r\|_{1}\] \[\leq\frac{R}{2}\|B_{k}\mathbf{1}_{n}-r\|_{1}.\]
where the first step follows from \(\langle\mathbf{1}_{n},B_{k}\mathbf{1}_{n}-r\rangle=0\), the second step follows from Holder's inequality, the third step follows from the definition of \(a\), and the last step follows from Lemma 3.4.
Similarly, we bound \(\langle-u_{*},B_{k}\mathbf{1}_{n}-r\rangle\), \(\langle v_{k},B_{k}^{\top}\mathbf{1}_{n}-c\rangle\) and \(\langle-v_{*},B_{k}^{\top}\mathbf{1}_{n}-c\rangle\) in Eq. (9) and complete the proof.
### Iteration complexity bound
In this section, we show the iteration complexity bound for the Algorithm 1.
**Theorem 3.8**.: _Given the cost matrix \(C\in\mathbb{R}^{n\times n}\) and two vectors \(r,c\in\triangle_{n}\), there is an algorithm (Algorithm 1) that outputs \(B(u_{k},v_{k})\) (Definition 3.1) satisfying_
\[\|B(u_{k},v_{k})\mathbf{1}_{n}-r\|_{1}+\|B(u_{k},v_{k})^{\top}\mathbf{1}_{n}-c \|_{1}\leq\epsilon_{0}\]
_in the number of iterations \(k\) satisfying_
\[k\leq 2+\frac{4R}{\epsilon_{0}}\]
Proof.: We first consider that \(k\geq 1\) is even and define \(B_{k}:=B(u_{k},v_{k})\). We have
\[\psi(u_{k},v_{k})-\psi(u_{k+1},v_{k+1}) \tag{10}\] \[= \langle\mathbf{1}_{n},B_{k}\mathbf{1}_{n}\rangle-\langle\mathbf{ 1}_{n},B_{k+1}\mathbf{1}_{n}\rangle+\langle u_{k+1}-u_{k},r\rangle+\langle v_ {k+1}-v_{k},c\rangle\] \[= \langle r,u_{k+1}-u_{k}\rangle\] \[= \langle r,\ln r-\ln(B_{k}\mathbf{1}_{n})\rangle\] \[= \mathrm{KL}(r\|B_{k}\mathbf{1}_{n})\]
Then, we obtain
\[\widetilde{\psi}(u_{k},v_{k})-\widetilde{\psi}(u_{k+1},v_{k+1}) \tag{11}\] \[= \psi(u_{k},v_{k})-\psi(u_{k+1},v_{k+1})\] \[= \mathrm{KL}(r\|B_{k}\mathbf{1}_{n})\] \[\geq \frac{1}{2}\|B_{k}\mathbf{1}_{n}-r\|_{1}^{2}\] \[\geq \max\{\frac{\widetilde{\psi}(u_{k},v_{k})^{2}}{2R^{2}},\frac{ \epsilon_{0}^{2}}{2}\},\]
where the first step follows from the definition of \(\widetilde{\psi}\), the second step follows from Eq. (10), the third step follows from Pinsker's inequality (Lemma 2.8), and the last step follows from Lemma 3.7 and \(B_{k}^{\top}\mathbf{1}_{n}=c\). For the last step, we also used that, as long as the stopping criterion is not yet fulfilled and \(B_{k}^{\top}\mathbf{1}_{n}=c\), we have \(\|B_{k}\mathbf{1}_{n}-r\|_{1}\geq\epsilon_{0}\). Similarly, when \(k\) is odd, we can prove the same inequality.
Given \(\ell=\frac{2R^{2}}{\widetilde{\psi}(u_{1},v_{1})}\), using Lemma 3.9, we have for any \(k\geq 1\)
\[\frac{\widetilde{\psi}(u_{k},v_{k})}{2R^{2}}\leq\frac{1}{k+\ell-1}\]
Thus,
\[k\leq 1+\frac{2R^{2}}{\widetilde{\psi}(u_{k},v_{k})}-\frac{2R^{2}}{ \widetilde{\psi}(u_{1},v_{1})} \tag{12}\]
On the other hand,
\[\widetilde{\psi}(u_{k+m},v_{k+m})\leq\widetilde{\psi}(u_{k},v_{k})-\frac{ \epsilon_{0}^{2}m}{2},\ \ k,m\geq 0 \tag{13}\]
Next, we use a switching strategy, parameterized by a number \(s\in(0,\widetilde{\psi}(u_{1},v_{1})]\), to combine Eq. (12) and Eq. (13).

First, by using Eq. (12), we estimate the number of iterations needed to decrease \(\widetilde{\psi}(u,v)\) from its initial value \(\widetilde{\psi}(u_{1},v_{1})\) to a certain value \(s\). Then, by applying Eq. (13) and using that \(\widetilde{\psi}(u,v)\geq 0\) by definition, we estimate the number of iterations required to further decrease \(\widetilde{\psi}(u,v)\) from \(s\) to zero. By minimizing the sum of these two estimates over \(s\in(0,\widetilde{\psi}(u_{1},v_{1})]\), the total number of iterations \(k\) satisfies the following
\[k \leq \min_{0<s\leq\widetilde{\psi}(u_{1},v_{1})}(2+\frac{2R^{2}}{s}- \frac{2R^{2}}{\widetilde{\psi}(u_{1},v_{1})}+\frac{2s}{\epsilon_{0}^{2}})\] \[= \begin{cases}2+\frac{4R}{\epsilon_{0}}-\frac{2R^{2}}{\widetilde{ \psi}(u_{1},v_{1})},&\widetilde{\psi}(u_{1},v_{1})\geq R\epsilon_{0},\\ 2+\frac{2\widetilde{\psi}(u_{1},v_{1})}{\epsilon_{0}^{2}},&\widetilde{\psi}( u_{1},v_{1})<R\epsilon_{0}.\end{cases}\]
where the first step comes from Eq. (12) and Eq. (13), the first case of the last step comes from \(a+b\geq 2\sqrt{ab}\) for \(a\geq 0,\ b\geq 0\), and the second case follows by taking \(s=\widetilde{\psi}(u_{1},v_{1})\). In both cases, we have \(k\leq 2+\frac{4R}{\epsilon_{0}}\).
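The bound \(k\leq 2+\frac{4R}{\epsilon_{0}}\) is easy to sanity-check numerically. The self-contained sketch below (our own illustration; `sinkhorn_iters` is a hypothetical helper) counts the iterations of Algorithm 1 on a small random instance and compares against \(R=\|C\|_{\infty}/\gamma-\ln\min_{i,j}\{r_{i},c_{j}\}\):

```python
import math
import random

def sinkhorn_iters(C, r, c, gamma, eps0):
    # Count Algorithm 1 iterations until the l1 marginal violation drops below eps0.
    n = len(r)
    K = [[math.exp(-C[i][j] / gamma) for j in range(n)] for i in range(n)]
    u, v, k = [0.0] * n, [0.0] * n, 0
    while True:
        B = [[math.exp(u[i]) * K[i][j] * math.exp(v[j]) for j in range(n)]
             for i in range(n)]
        row = [sum(B[i]) for i in range(n)]
        col = [sum(B[i][j] for i in range(n)) for j in range(n)]
        err = sum(abs(row[i] - r[i]) for i in range(n)) \
            + sum(abs(col[j] - c[j]) for j in range(n))
        if err < eps0:
            return k
        if k % 2 == 0:
            u = [u[i] + math.log(r[i] / row[i]) for i in range(n)]
        else:
            v = [v[j] + math.log(c[j] / col[j]) for j in range(n)]
        k += 1

random.seed(1)
n, gamma, eps0 = 5, 0.5, 1e-2
C = [[random.random() for _ in range(n)] for _ in range(n)]
r = [1.0 / n] * n
c = [1.0 / n] * n
R = max(max(row) for row in C) / gamma + math.log(n)   # -ln(K_min * min{r_i, c_j})
assert sinkhorn_iters(C, r, c, gamma, eps0) <= 2 + 4 * R / eps0
```

In practice the observed iteration count is far below the worst-case bound, which only certifies termination.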
### Induction
Here, we provide the induction proof for the upper bound of the potential function.
**Lemma 3.9**.: _For all \(k\geq 1\),_
\[\frac{\widetilde{\psi}(u_{k},v_{k})}{2R^{2}}\leq\frac{1}{k+\ell-1},\]
_where \(\ell:=\frac{2R^{2}}{\widetilde{\psi}(u_{1},v_{1})}\) and \(\widetilde{\psi}\) is defined in Definition 3.2._
Proof.: Our proof consists of two parts. First, we verify the inequality above for \(k=1\). Then, by induction over \(k>1\), the proof is completed.
**Base Case.** For \(k=1\).
\[\frac{\widetilde{\psi}(u_{1},v_{1})}{2R^{2}}\] \[= \frac{1}{\ell}\] \[= \frac{1}{k+\ell-1},\]
where the first step follows from the definition of \(\ell\) and the last step follows from \(k-1=0\). Hence, \(\frac{\widetilde{\psi}(u_{k},v_{k})}{2R^{2}}\leq\frac{1}{k+\ell-1}\) holds for \(k=1\).
**General case.** Suppose
\[\frac{\widetilde{\psi}(u_{k},v_{k})}{2R^{2}}\leq\frac{1}{k+\ell-1} \tag{14}\]
Then we can show
\[\frac{\widetilde{\psi}(u_{k+1},v_{k+1})}{2R^{2}}\] \[\leq \frac{\widetilde{\psi}(u_{k},v_{k})}{2R^{2}}-(\frac{\widetilde{ \psi}(u_{k},v_{k})}{2R^{2}})^{2}\]
\[\leq\frac{1}{k+\ell-1}-(\frac{1}{k+\ell-1})^{2}\] \[\leq\frac{1}{k+\ell}, \tag{15}\]
where the first step follows from Eq. (11), the second step follows from Eq. (14) and the property of the function \(f(x)=x-x^{2}\) (namely \(f(y)\leq f(z)\) if \(y\leq z\leq 1/2\)), and the last step follows from \(\frac{1}{A}-\frac{1}{A^{2}}\leq\frac{1}{A+1}\) for any integer \(A\geq 2\). By induction, the proof is completed.
## 4 Running Time with small treewidth setting
In Section 4.1, we introduce the implicit form of \(K\). In Section 4.2, we provide the faster Sinkhorn algorithm under the small treewidth setting. In Section 4.3, we show the correctness of our rounding algorithm. In Section 4.4, we show the running time needed for our rounding algorithm. In Section 4.5, we provide the running time for approximating the OT distance by using the faster Sinkhorn algorithm.
### Implicit form of \(K\)
Here we introduce the implicit form of \(K\) to make use of the small treewidth setting.
**Lemma 4.1**.: _We assume \(C=MM^{\top}\in\mathbb{R}^{n\times n}\), where \(M\in\mathbb{R}^{n\times d}\) has treewidth \(\tau\). Given \(A:=K-D\), where \(D_{i,j}:=1\) for \(i,j\in[n]\) and \(K\) is defined in Definition 3.1, the Cholesky factor \(L_{A}\) for \(A=L_{A}L_{A}^{\top}\) is \(\tau\)-sparse in columns._
Proof.: Given \(C=MM^{\top}\) where \(M\) has treewidth \(\tau\), the Cholesky factor \(L_{C}\) of \(C=MM^{\top}=L_{C}L_{C}^{\top}\) is \(\tau\)-sparse in columns by Lemma 2.12. As
\[A_{i,j}=e^{-C_{i,j}/\gamma}-1,\]
we have \(A_{i,j}=0\) whenever \(C_{i,j}=0\). Hence, matrix \(A\) is as sparse as matrix \(C\), and the Cholesky factor \(L_{A}\) of \(A=L_{A}L_{A}^{\top}\) is as sparse as \(L_{C}\). As \(L_{C}\) is \(\tau\)-sparse in columns, we complete the proof.
### Running time of Sinkhorn with small treewidth
This section is to prove the running time of Algorithm 2.
**Theorem 4.2** (Running time of Algorithm 2).: _Given the cost matrix \(C\in\mathbb{R}^{n\times n}\) with small treewidth \(\tau\) and two vectors \(r,c\in\mathbb{R}_{+}^{n}\), there is an algorithm (Algorithm 2) that takes \(O(n\tau)\) time per iteration and \(O(n\tau^{2})\) time for initialization to output_
* _a lower triangular matrix_ \(L_{A}\)__
* _vectors_ \(u,v,w\in\mathbb{R}^{n}\)__
_such that \(B(u_{k},v_{k})\in\mathbb{R}^{n\times n}\) can be constructed (implicitly) by_
\[B(u_{k},v_{k})=\operatorname{diag}(e^{u_{k}})(L_{A}L_{A}^{\top}) \operatorname{diag}(e^{v_{k}})+\operatorname{diag}(e^{u_{k}})(ww^{\top}) \operatorname{diag}(e^{v_{k}})\]
_satisfying_
\[\|B(u_{k},v_{k})\mathbf{1}_{n}-r\|_{1}+\|B(u_{k},v_{k})^{\top} \mathbf{1}_{n}-c\|_{1}\leq\epsilon_{0}.\]
Proof.: The running time for each step is shown as follows:
* Writing down the cost matrix \(C\in\mathbb{R}^{n\times n}\) takes \(O(n\tau)\) time, as \(\operatorname{nnz}(C)=O(n\tau)\) by Claim 2.13.
* Implicitly writing down the matrix \(D\in\mathbb{R}^{n\times n}\) takes \(O(n)\) time, since \(D\) is a rank-1 matrix.
* Initializing \(x_{0}\) and \(y_{0}\) takes \(O(n\tau)\) as \(\text{nnz}(C)=n\tau\).
* By Lemma 4.1, we know \(L_{A}\) is \(\tau\)-sparse in columns. Then, computing the Cholesky decomposition of \(A\) takes \(O(n\tau^{2})\) time by Lemma 2.11.
* Calculating \(\text{diag}(e^{u_{k}})(L_{A}L_{A}^{\top})\,\text{diag}(e^{v_{k}})\) takes \(O(n\tau)\) time, as \(L_{A}\) is \(\tau\)-sparse in columns.
* Calculating \(\text{diag}(e^{u_{k}})D\,\text{diag}(e^{v_{k}})\) takes \(O(n)\) time as matrix \(D\) is a rank-1 matrix.
* Updating \(u\in\mathbb{R}^{n},v\in\mathbb{R}^{n}\) takes \(O(n)\) time.
Hence, the initialization time of Algorithm 2 is \(O(n\tau^{2})\) and the per-iteration running time is \(O(n\tau)\).
```
1:procedureApproxOT(\(\epsilon\))\(\triangleright\) Theorem 4.5
2:\(\triangleright\) Accuracy \(\epsilon\)
3:\(\gamma\leftarrow\frac{\epsilon}{4\ln n}\)
4:\(\epsilon_{0}\leftarrow\frac{\epsilon}{8\|C\|_{\infty}}\)\(\triangleright\) Find \(\widetilde{r},\widetilde{c}\in\Delta^{n}\) s.t. \(\|\widetilde{r}-r\|_{1}\leq\epsilon_{0}/4,\|\widetilde{c}-c\|_{1}\leq\epsilon_ {0}/4\) and \(\min_{i\in[n]}\widetilde{r}_{i}\geq\epsilon_{0}/(8n),\min_{j\in[n]}\widetilde{c }_{j}\geq\epsilon_{0}/(8n)\).
5:\((\widetilde{r},\widetilde{c})\leftarrow(1-\frac{\epsilon_{0}}{8})((r,c)+\frac{ \epsilon_{0}}{n(8-\epsilon_{0})}(\mathbf{1}_{n},\mathbf{1}_{n}))\)
6:\((u,v,L,w)\leftarrow\textsc{SinkhornAlgorithm}(\widetilde{r},\widetilde{c},\epsilon_{0}/2)\)\(\triangleright\) Algorithm 2
7:\(\triangleright\) Note that \(u,v,L,w\) is an implicit representation of \(B\), i.e., \(\operatorname{diag}(e^{u_{k}})(L_{A}L_{A}^{\top})\operatorname{diag}(e^{v_{k} })+\operatorname{diag}(e^{u_{k}})(ww^{\top})\operatorname{diag}(e^{v_{k}})\)
8:\((p,\,q,\,X,Y,w,u,v)\leftarrow\textsc{Round}(u,v,L,w,r,c)\)\(\triangleright\) Algorithm 4
9:return\((p,\,q,\,X,Y,L,w,u,v)\)\(\triangleright\) We return \(\widehat{X}\) in an implicit way, i.e., \(\widehat{X}:=XBY+pq^{\top}/\|p\|_{1}\)
10:endprocedure
```
**Algorithm 3** Approximate OT by Sinkhorn
### Correctness of rounding algorithm
We first show the correctness of our rounding algorithm (Algorithm 4).
**Lemma 4.3** (An improved version of Lemma 7 in [1]).: _Given \(r,c\in\triangle_{n}\), \(B\in\mathbb{R}_{+}^{n\times n}\) and \(u,v\in\mathbb{R}^{n}\), there is an algorithm (Algorithm 4) that outputs_
* _a diagonal matrix_ \(X\in\mathbb{R}^{n\times n}\)__
* _a diagonal matrix_ \(Y\in\mathbb{R}^{n\times n}\)__
* _a lower triangular matrix_ \(L_{A}\)__
* _vectors_ \(u,v,w\in\mathbb{R}^{n}\)__
* _vectors_ \(p\in\mathbb{R}^{n}\)_,_ \(q\in\mathbb{R}^{n}\)__
_such that \(G\in\mathcal{U}_{r,c}\) can be constructed (implicitly) by_
\[G=X(\operatorname{diag}(e^{u})L_{A}L_{A}^{\top}\operatorname{diag}(e^{v})+\operatorname{diag}(e^{u})(ww^{\top})\operatorname{diag}(e^{v}))Y+pq^{\top}/\|p\|_{1}\]

_satisfying_
\[\|G-B\|_{1}\leq 2(\|B\mathbf{1}_{n}-r\|_{1}+\|B^{\top}\mathbf{1}_{n}-c\|_{1}).\]
Proof.: Let \(G\) be the output of Algorithm 4. As the matrix \(B_{1}\) is nonnegative and the output vectors \(p\) and \(q\) are both nonnegative, with \(\|p\|_{1}=\|q\|_{1}=1-\|B_{1}\|_{1}\), the matrix \(G\) is nonnegative and
\[r(G) =r(B_{1})+r(pq^{\top}/\|p\|_{1})\] \[=r(B_{1})+p\] \[=r, \tag{16}\]
where we denote \(r(A):=A\mathbf{1}_{n}\) and \(c(A):=A^{\top}\mathbf{1}_{n}\); the first step follows from the definition of \(G\), the second step uses \(\|p\|_{1}=\|q\|_{1}\), and the last step follows from \(p=r-B_{1}\mathbf{1}_{n}\). Similarly, we have \(c(G)=c\). Therefore, we have \(G\in\mathcal{U}_{r,c}\).
Next, we denote \(\Delta:=\|B\|_{1}-\|B_{1}\|_{1}\) and prove the \(\ell_{1}\) bound between the matrix \(B\) and matrix \(G\). We first remove mass from a row of \(B\) when \(r_{i}(B)\geq r_{i}\), and then, we remove mass from a column when \(c_{j}(B_{0})\geq c_{j}\). Now, we have
\[\Delta=\sum_{i=1}^{n}(r_{i}(B)-r_{i})_{+}+\sum_{j=1}^{n}(c_{j}(B_{0})-c_{j})_{+}. \tag{17}\]
Then, we show the analysis of Eq. (17). First, for the left sum of Eq. (17), we have
\[\sum_{i=1}^{n}(r_{i}(B)-r_{i})_{+}=\frac{1}{2}(\|r(B)-r\|_{1}+\|B\|_{1}-1).\]
For the second sum in Eq. (17), we have
\[\sum_{j=1}^{n}(c_{j}(B_{0})-c_{j})_{+}\leq\sum_{j=1}^{n}(c_{j}(B)-c_{j})_{+} \leq\|c(B)-c\|_{1}\]
where the first step comes from the fact that the vector \(c(B)\) is entrywise larger than \(c(B_{0})\) and the last step comes from the definition of \(c\).
Therefore we conclude
\[\|G-B\|_{1} \leq\Delta+\|pq^{\top}\|_{1}/\|p\|_{1}\] \[=\Delta+1-\|B_{1}\|_{1}\] \[=2\Delta+1-\|B\|_{1}\] \[\leq\|r(B)-r\|_{1}+2\|c(B)-c\|_{1} \tag{18}\] \[\leq 2(\|r(B)-r\|_{1}+\|c(B)-c\|_{1})\]
where the first step comes from the definition of \(\Delta\), the second step comes from the fact that \(\|p\|_{1}=\|q\|_{1}=1-\|B_{1}\|_{1}\), the third step comes from the definition of \(\Delta\), the fourth step comes from Eq. (17) and the last step comes from reorganization. Now we complete the proof.
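As a standalone numerical sketch of this rounding step (a dense illustration of ours — the paper's version applies the same scalings to the implicit factored representation), the argument above can be checked directly:

```python
import numpy as np

def round_to_feasible(B, r, c):
    """Dense sketch of the rounding step of Lemma 4.3 (function name is ours):
    scale rows down to at most r, columns down to at most c, then restore the
    removed mass with the rank-one correction p q^T / ||p||_1."""
    x = np.minimum(r / B.sum(axis=1), 1.0)
    B0 = x[:, None] * B                     # X B with X = diag(x)
    y = np.minimum(c / B0.sum(axis=0), 1.0)
    B1 = B0 * y[None, :]                    # X B Y with Y = diag(y)
    p = r - B1.sum(axis=1)                  # leftover row mass (nonnegative)
    q = c - B1.sum(axis=0)                  # leftover column mass (nonnegative)
    return B1 + np.outer(p, q) / p.sum()

rng = np.random.default_rng(0)
n = 5
B = rng.random((n, n))
r = rng.random(n); r /= r.sum()
c = rng.random(n); c /= c.sum()
G = round_to_feasible(B, r, c)
err_in = np.abs(B.sum(axis=1) - r).sum() + np.abs(B.sum(axis=0) - c).sum()
```

Both marginal constraints hold exactly for \(G\), and the \(\ell_{1}\) bound \(\|G-B\|_{1}\leq 2(\|B\mathbf{1}_{n}-r\|_{1}+\|B^{\top}\mathbf{1}_{n}-c\|_{1})\) can be observed on random instances.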
### Running time of rounding algorithm
Next, we show the running time needed for the rounding algorithm (Algorithm 4).
**Lemma 4.4** (An improved version of Lemma 7 in [1]).: _Given \(r,c\in\triangle_{n}\), \(B\in\mathbb{R}_{+}^{n\times n}\), and \(u,v\in\mathbb{R}^{n}\), there is an algorithm (Algorithm 4) that outputs_
* _a diagonal matrix_ \(X\in\mathbb{R}^{n\times n}\)__
* _a diagonal matrix_ \(Y\in\mathbb{R}^{n\times n}\)__
* _a lower triangular matrix_ \(L_{A}\)__
* _vectors_ \(u,v,w\in\mathbb{R}^{n}\)__
* _vectors_ \(p\in\mathbb{R}^{n}\)_,_ \(q\in\mathbb{R}^{n}\)__
_such that \(G\in\mathcal{U}_{r,c}\) can be constructed (implicitly) by_
\[G=X(\operatorname{diag}(e^{u})L_{A}L_{A}^{\top}\operatorname{diag}(e^{v})+\operatorname{diag}(e^{u})(ww^{\top})\operatorname{diag}(e^{v}))Y+pq^{\top}/\|p\|_{1}\]
_satisfying_
\[\|G-B\|_{1}\leq 2(\|B\mathbf{1}_{n}-r\|_{1}+\|B^{\top}\mathbf{1}_{n}-c\|_{1})\]
_in \(O(n\tau)\) time._
Proof.: The running time for each step is shown as follows:
* Calculating \(r(B)\) takes \(O(n\tau)\) time. Given \[r(B)=B\mathbf{1}_{n}=\operatorname{diag}(e^{u_{k}})(L_{A}L_{A}^{\top})\operatorname{diag}(e^{v_{k}})\mathbf{1}_{n}+\operatorname{diag}(e^{u_{k}})(ww^{\top})\operatorname{diag}(e^{v_{k}})\mathbf{1}_{n},\] calculating \(L_{A}(L_{A}^{\top}e^{v_{k}})\) takes \(O(n\tau)\) time, as \(\operatorname{nnz}(L_{A})=n\tau\). As \(w=\mathbf{1}_{n}\), calculating \((ww^{\top})e^{v_{k}}\) takes \(O(n)\) time.
* Calculating \(X=\operatorname{diag}(x)\) with \(x_{i}=\min\{\frac{r_{i}}{r_{i}(B)},1\}\) takes \(O(n)\) time.
* For \(B_{0}=XB\), we remark that \(B_{0}\) is not explicitly written down. It is implicitly represented by \(L_{A},w,u,v,X\).
* Similarly, we can calculate \(Y\) in \(O(n)\) time and implicitly write down \(B_{1}\).
* We have \[B_{1}\mathbf{1}_{n}=XBY\mathbf{1}_{n}=\operatorname{diag}(e^{u_{k}})X(L_{A}L_{A}^{\top})Y\operatorname{diag}(e^{v_{k}})\mathbf{1}_{n}+\operatorname{diag}(e^{u_{k}})X(ww^{\top})Y\operatorname{diag}(e^{v_{k}})\mathbf{1}_{n}.\] For any diagonal matrix \(M\), \(M\cdot L_{A}\) is as sparse as \(L_{A}\) and it takes \(O(n\tau)\) time to compute it. Therefore, computing \(P=\operatorname{diag}(e^{u_{k}})X(L_{A}L_{A}^{\top})Y\operatorname{diag}(e^{v_{k}})\) takes \(O(n\tau)\) time and \(P\) is \(n\tau\)-sparse. Then, we compute \(P\mathbf{1}_{n}\), which takes \(O(n\tau)\) time. Hence, updating \(p\) takes \(O(n\tau)\) time.
* Similarly, updating \(q\) takes \(O(n\tau)\) time.
* For matrix \(G\), it is returned in an implicit way. We use \(p\), \(q\), \(X,Y,w,u,v\) to represent it.
Therefore, the total running time is \(O(n\tau)\).
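The \(O(n\tau)\) bookkeeping above rests on never materializing \(B\). A small dense sketch (ours) of the implicit matrix-vector product illustrates the idea:

```python
import numpy as np

def implicit_matvec(u, L, w, v, z):
    """B @ z for B = diag(e^u) (L L^T + w w^T) diag(e^v), without forming B.
    With a sparse factor L this costs O(nnz(L) + n); here L is a dense
    stand-in for the sparse Cholesky factor L_A of the paper."""
    s = np.exp(v) * z                    # diag(e^v) z
    t = L @ (L.T @ s) + w * (w @ s)      # (L L^T + w w^T) s
    return np.exp(u) * t                 # diag(e^u) (...)

rng = np.random.default_rng(1)
n = 6
L = np.tril(rng.random((n, n)))
u, v, w = rng.random(n), rng.random(n), np.ones(n)
B = np.diag(np.exp(u)) @ (L @ L.T + np.outer(w, w)) @ np.diag(np.exp(v))
row_sums = implicit_matvec(u, L, w, v, np.ones(n))   # r(B) = B 1_n
```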
### Running time of OT Distance by Sinkhorn
The core of our OT algorithm is the entropic penalty
\[X_{\gamma}:=\arg\min_{X\in\mathcal{U}_{r,c}}\langle X,C\rangle+ \gamma\cdot\mathcal{R}(X). \tag{19}\]
The solution to Eq. (19) can be characterized explicitly by analyzing its first-order conditions for optimality.
Now we apply the result of the previous subsection to derive a complexity estimate for finding \(\widehat{X}\in\mathcal{U}(r,c)\) satisfying Eq. (2). The procedure for approximating the OT distance by Sinkhorn's algorithm is listed as Algorithm 3.
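A minimal dense sketch (ours) of the alternating-scaling structure behind Eq. (19): the optimum has the form \(\operatorname{diag}(e^{u})\,e^{-C/\gamma}\operatorname{diag}(e^{v})\), and Sinkhorn's algorithm alternately matches the row and column marginals. The paper's contribution is performing each such iteration in \(O(n\tau)\) time via the factored kernel; the sketch below uses the plain dense kernel instead:

```python
import numpy as np

def sinkhorn(C, r, c, gamma, iters=500):
    """Dense sketch of Sinkhorn iterations for the entropic penalty (19).
    Each update matches one marginal exactly; the iterates are
    diag(e^u) K diag(e^v) with K = exp(-C/gamma)."""
    K = np.exp(-C / gamma)
    u, v = np.zeros(len(r)), np.zeros(len(c))
    for _ in range(iters):
        u = np.log(r) - np.log(K @ np.exp(v))     # fix row marginals
        v = np.log(c) - np.log(K.T @ np.exp(u))   # fix column marginals
    return np.exp(u)[:, None] * K * np.exp(v)[None, :]

rng = np.random.default_rng(2)
n = 4
C = rng.random((n, n))
r = rng.random(n); r /= r.sum()
c = rng.random(n); c /= c.sum()
B = sinkhorn(C, r, c, gamma=0.5)
```

After the final \(v\)-update the column marginals hold exactly, while the row marginals converge geometrically.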
**Theorem 4.5**.: _There is an algorithm (Algorithm 3) that takes a cost matrix \(C\in\mathbb{R}^{n\times n}\) and two \(n\)-dimensional simplex vectors \(r,c\) as inputs and outputs_
* _a diagonal matrix_ \(X\in\mathbb{R}^{n\times n}\)__
* _a diagonal matrix_ \(Y\in\mathbb{R}^{n\times n}\)__
* _a lower triangular matrix_ \(L_{A}\)__
* _vectors_ \(u,v,w\in\mathbb{R}^{n}\)__
* _vectors_ \(p\in\mathbb{R}^{n}\)_,_ \(q\in\mathbb{R}^{n}\)__
_such that \(\widehat{X}\in\mathcal{U}(r,c)\) can be constructed (implicitly) by_
\[\widehat{X}=X(\operatorname{diag}(e^{u})L_{A}L_{A}^{\top} \operatorname{diag}(e^{v})+\operatorname{diag}(e^{u})(ww^{\top})\operatorname {diag}(e^{v}))Y+pq^{\top}/\|p\|_{1}\]
_satisfying Eq._ (2) _in_
\[O(n\tau^{2}+\epsilon^{-2}n\tau\|C\|_{\infty}^{2}\ln n)\]
_time._
**Remark 4.6**.: _If we do not require the output to be in lower-triangular form, the additive term \(n\tau^{2}\) can be removed._
Proof.: Let \(X_{*}\in\arg\min_{X\in\mathcal{U}_{r,c}}\langle X,C\rangle\) be an optimal solution to the original OT program.
We first show that \(\langle B,C\rangle\) is not much larger than \(\langle X_{*},C\rangle\).
Since \(B=MAN\in\mathbb{R}^{n\times n}\) for positive diagonal matrices \(M,N\in\mathbb{R}_{+}^{n\times n}\), Lemma 2.6 implies \(B\) is the optimal solution to
\[\arg\min_{X\in\mathcal{U}_{r,c}}\langle X,C\rangle+\gamma\mathcal{ R}(X). \tag{20}\]
By Lemma 4.4, there exists a matrix \(X_{0}\in\mathcal{U}_{B\mathbf{1}_{n},B^{\top}\mathbf{1}_{n}}\) (Definition 2.5) such that
\[\|X_{0}-X_{*}\|_{1}\leq 2(\|B\mathbf{1}_{n}-r\|_{1}+\|B^{ \top}\mathbf{1}_{n}-c\|_{1}). \tag{21}\]
Moreover, since \(B\in\mathbb{R}^{n\times n}\) is an optimal solution of Eq. (20), we have
\[\langle B,C\rangle+\gamma\mathcal{R}(B)\leq\langle X_{0},C\rangle +\gamma\mathcal{R}(X_{0}). \tag{22}\]
Thus, we have
\[\langle B,C\rangle-\langle X_{*},C\rangle =\langle B,C\rangle-\langle X_{0},C\rangle+\langle X_{0},C\rangle- \langle X_{*},C\rangle\] \[\leq\langle B,C\rangle-\langle X_{0},C\rangle+\|X_{0}-X_{*}\|_{1}\|C \|_{\infty}\] \[\leq\gamma(H(B)-H(X_{0}))+\|X_{0}-X_{*}\|_{1}\|C\|_{\infty}\] \[\leq\gamma(H(B)-H(X_{0}))+2(\|B\mathbf{1}_{n}-r\|_{1}+\|B^{\top} \mathbf{1}_{n}-c\|_{1})\|C\|_{\infty}\] \[\leq 2\gamma\ln n+2(\|B\mathbf{1}_{n}-r\|_{1}+\|B^{\top}\mathbf{1}_ {n}-c\|_{1})\|C\|_{\infty} \tag{23}\]
where the first step follows from reorganization, the second step follows from Hölder's inequality (Lemma 2.7), the third step follows from Eq. (22) and \(\mathcal{R}(X)=-H(X)\), the fourth step follows from Eq. (21) and the last step follows from the fact that \(0<H(B),H(X_{0})\leq 2\ln n\).
Lemma 4.4 implies that the output \(\widehat{X}\) of Algorithm 4 satisfies the inequality
\[\|B-\widehat{X}\|_{1}\leq 2(\|B\mathbf{1}_{n}-r\|_{1}+\|B^{\top}\mathbf{1}_{n} -c\|_{1}). \tag{24}\]
Recall that \(\widehat{X}\) is the output of Algorithm 3, \(X_{*}\) is a solution to the OT problem Eq. (2) and \(B\) is the matrix obtained in line 7 of Algorithm 3. We have
\[\langle\widehat{X},C\rangle =\langle\widehat{X}-B,C\rangle+\langle B,C\rangle\] \[\leq\|\widehat{X}-B\|_{1}\|C\|_{\infty}+\langle B,C\rangle\] \[\leq 2(\|B\mathbf{1}_{n}-r\|_{1}+\|B^{\top}\mathbf{1}_{n}-c\|_{1}) \|C\|_{\infty}+\langle B,C\rangle\] \[\leq\langle X_{*},C\rangle+2\gamma\ln n+4(\|B\mathbf{1}_{n}-r\|_ {1}+\|B^{\top}\mathbf{1}_{n}-c\|_{1})\|C\|_{\infty}. \tag{25}\]
where the first step follows from reorganization, the second step follows from Hölder's inequality, the third step follows from Eq. (24) and the last step follows from Eq. (23).
At the same time, we have
\[\|B\mathbf{1}_{n}-r\|_{1}+\|B^{\top}\mathbf{1}_{n}-c\|_{1}\] \[\leq\|B\mathbf{1}_{n}-\widetilde{r}\|_{1}+\|\widetilde{r}-r\|_{1 }+\|B^{\top}\mathbf{1}_{n}-\widetilde{c}\|_{1}+\|\widetilde{c}-c\|_{1}\] \[\leq\epsilon_{0}\]
where the first step follows from the triangle inequality and the last step follows from \(\|B\mathbf{1}_{n}-\widetilde{r}\|_{1}+\|B^{\top}\mathbf{1}_{n}-\widetilde{c}\|_{1}\leq\epsilon_{0}\) (the guarantee on the output of Algorithm 2) and the definitions of \(\widetilde{r}\) and \(\widetilde{c}\).
Setting \(\gamma=\frac{\epsilon}{4\ln n}\) and \(\epsilon_{0}=\frac{\epsilon}{8\|C\|_{\infty}}\), we obtain from the above inequality and Eq. (25) that \(\widehat{X}\) satisfies inequality Eq. (2).
Next, we show the complexity of Algorithm 3. When \(\epsilon_{0}\) is sufficiently small, the number of iterations of Sinkhorn's algorithm in line 7 of Algorithm 3 is \(O(R/\epsilon_{0})\), by Theorem 3.8. According to Definition 3.3, we have
\[R =\ -\ln(K_{\min}\min_{i,j\in[n]}\{\widetilde{r}_{i},\widetilde{c} _{j}\})\] \[=-\ln(e^{-\|C\|_{\infty}/\gamma}\min_{i,j\in[n]}\{\widetilde{r}_ {i},\widetilde{c}_{j}\})\] \[\leq\frac{\|C\|_{\infty}}{\gamma}-\ln(\frac{\epsilon_{0}}{8n}),\]
where the first step follows from the definition of \(R\), the second step follows from the definition of \(K_{\min}\), and the last step follows from the condition on \(\widetilde{r}_{i},\widetilde{c}_{j}\) in line 6 of Algorithm 3.
Since \(\gamma=\frac{\epsilon}{4\ln n}\) and \(\epsilon_{0}=\frac{\epsilon}{8\|C\|_{\infty}}\), we have that
\[R=O(\epsilon^{-1}\|C\|_{\infty}\ln n).\]
As the number of iterations for Algorithm 3 is \(O(R/\epsilon_{0})\), we conclude that the total number of Sinkhorn iterations is bounded by \(O(\epsilon^{-2}\|C\|_{\infty}^{2}\ln n)\).
Obviously, \(\widetilde{r}\in\mathbb{R}_{+}^{n}\) and \(\widetilde{c}\in\mathbb{R}_{+}^{n}\) in line 6 of Algorithm 3 can be found in \(O(n)\) time.
Since each iteration of Sinkhorn's algorithm requires \(O(n\tau)\) time and the initialization takes \(O(n\tau^{2})\) time, as shown in Theorem 4.2, the total complexity of Algorithm 3 is
\[O(n\tau^{2}+\epsilon^{-2}n\tau\|C\|_{\infty}^{2}\ln n).\]
## 5 The Symmetric Case
In this section, we provide an algorithm (Algorithm 6) that solves the OT problem in \(O(\epsilon^{-2}n\tau\|C\|_{\infty}^{2}\ln n)\) time when the two distributions are identical, i.e., \(c=r\).
**Definition 5.1**.: _Given the symmetric OT problem \(\arg\min_{X\in\mathcal{U}_{r}}\langle X,C\rangle\), we define_
\[\mathcal{U}_{r}=\{X\in\mathbb{R}_{+}^{n\times n}:X\mathbf{1}_{n}=r,X^{\top} \mathbf{1}_{n}=r\}\]
_where \(\mathbf{1}_{n}\) is the all-ones vector in \(\mathbb{R}^{n}\), \(C\in\mathbb{R}_{+}^{n\times n}\) is a given cost matrix, and \(r\in\mathbb{R}^{n}\) is a given vector with positive entries that sum to one._
We first provide the running time of Sinkhorn's algorithm (Algorithm 5) for the symmetric case.
**Theorem 5.2** (Running time of Algorithm 5).: _Given the cost matrix \(C\in\mathbb{R}^{n\times n}\) with small treewidth \(\tau\) and a simplex vector \(r\in\mathbb{R}_{+}^{n}\), there is an algorithm (Algorithm 5) that takes \(O(n\tau)\) time per iteration and \(O(n\tau^{2})\) time for initialization to output_
* _a lower triangular matrix_ \(L_{A}\)__
* _vectors_ \(u,w\in\mathbb{R}^{n}\)__
_such that \(B(u_{k})\in\mathbb{R}^{n\times n}\) can be constructed (implicitly) by_
\[B(u_{k})=\operatorname{diag}(e^{u_{k}})(L_{A}L_{A}^{\top}) \operatorname{diag}(e^{u_{k}})+\operatorname{diag}(e^{u_{k}})(ww^{\top}) \operatorname{diag}(e^{u_{k}})\]
_satisfying_
\[\|B(u_{k})\mathbf{1}_{n}-r\|_{1}\leq\epsilon_{0}.\]
Proof.: The proof is similar to that of Theorem 4.2; here the two distributions are identical, i.e., \(c=r\).
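In the symmetric case one scaling vector suffices. A dense sketch (ours) of the resulting fixed-point iteration — the damping by \(1/2\) is the standard device for symmetric diagonal scaling; the paper's version again keeps the kernel in factored \(O(n\tau)\) form:

```python
import numpy as np

def symmetric_sinkhorn(C, r, gamma, iters=2000):
    """Dense sketch for c = r: seek u with diag(e^u) K diag(e^u) 1_n = r,
    where K = exp(-C/gamma) is symmetric. The damped update below is the
    usual symmetric analogue of the Sinkhorn step."""
    K = np.exp(-C / gamma)
    u = np.zeros(len(r))
    for _ in range(iters):
        # Damping by 1/2 keeps the symmetric iteration from oscillating.
        u = 0.5 * (u + np.log(r) - np.log(K @ np.exp(u)))
    return np.exp(u)[:, None] * K * np.exp(u)[None, :]

rng = np.random.default_rng(3)
n = 4
M = rng.random((n, n))
C = M @ M.T                                  # symmetric cost, as in Theorem 5.4
r = rng.random(n); r /= r.sum()
B = symmetric_sinkhorn(C, r, gamma=2.0)
```

The output is symmetric by construction, and both marginals converge to \(r\).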
Next, we show the running time of the rounding algorithm (Algorithm 7) for symmetric case.
**Lemma 5.3** (An improved version of Lemma 7 in [1]).: _Given \(r\in\triangle_{n}\), \(B\in\mathbb{R}_{+}^{n\times n}\), and \(u\in\mathbb{R}^{n}\), there is an algorithm (Algorithm 7) that outputs_
* _a diagonal matrix_ \(X\in\mathbb{R}^{n\times n}\)__
* _a diagonal matrix_ \(Y\in\mathbb{R}^{n\times n}\)__
* _a lower triangular matrix_ \(L_{A}\)__
* _vectors_ \(u,w,p\in\mathbb{R}^{n}\)__
_such that \(G\in\mathcal{U}_{r}\) can be constructed (implicitly) by_
\[G=X(\operatorname{diag}(e^{u})L_{A}L_{A}^{\top}\operatorname{diag}(e^{u})+\operatorname{diag}(e^{u})(ww^{\top})\operatorname{diag}(e^{u}))Y+pp^{\top}/\|p\|_{1}\]
_satisfying_
\[\|G-B\|_{1}\leq 2\|B\mathbf{1}_{n}-r\|_{1}\]
_in \(O(n\tau)\) time._
Proof.: The proof is similar to those of Lemma 4.3 and Lemma 4.4; here the two distributions are identical, i.e., \(c=r\).
Overall, we provide the running time of the algorithm (Algorithm 6) that approximates the OT distance for the symmetric case.
**Theorem 5.4**.: _There is an algorithm (Algorithm 6) that takes a cost matrix \(C=MM^{\top}\in\mathbb{R}^{n\times n}\) and an \(n\)-dimensional simplex vector \(r\) as inputs and outputs_
* _a diagonal matrix_ \(X\in\mathbb{R}^{n\times n}\)__
* _a diagonal matrix_ \(Y\in\mathbb{R}^{n\times n}\)__
* _a lower triangular matrix_ \(L_{A}\)__
* _vectors_ \(u,w,p\in\mathbb{R}^{n}\)__
_such that \(\widehat{X}\in\mathcal{U}(r)\) can be constructed (implicitly) by_
\[\widehat{X}=X(\operatorname{diag}(e^{u})L_{A}L_{A}^{\top} \operatorname{diag}(e^{u})+\operatorname{diag}(e^{u})(ww^{\top})\operatorname {diag}(e^{u}))Y+pp^{\top}/\|p\|_{1}\]
_satisfying Eq._ (2) _in_
\[O(n\tau^{2}+\epsilon^{-2}n\tau\|C\|_{\infty}^{2}\ln n)\]
_time._
**Remark 5.5**.: _If we do not require the output to be in lower-triangular form, the additive term \(n\tau^{2}\) can be removed._
Proof.: By Theorem 5.2, the running time of Line 7 is \(O(n\tau\cdot T)\), where \(T\) is the total number of Sinkhorn iterations. By Lemma 5.3, the running time of Line 9 is \(O(n\tau)\). The rest of the proof is similar to that of Theorem 4.5.
2305.05967 | The Distribution of Argmaximum or a Winner Problem | We consider a limit theorem for the distribution of a r.v. $Y_n := \arg\max\{X_i,\ i=1,\ldots,n\}$, where the $X_i$'s are independent continuous non-negative random variables. The r.v.'s $X_i$, $i=1,\ldots,n$, may be interpreted as the gains of $n$ players in a game, and the r.v. $Y_n$ itself as the number of a "winner". In the case of i.i.d. r.v.'s, the distribution of $Y_n$ is, clearly, uniform on $\{1,\ldots,n\}$, while when the $X$'s are non-identically distributed, the problem requires some calculations. | Youri Davydov, Vladimir Rotar | 2023-05-10T08:20:07Z | http://arxiv.org/abs/2305.05967v2 |

# The Distribution of Argmaximum or a Winner Problem
Youri Davydov\({}^{1}\) and Vladimir Rotar \({}^{2}\)
\({}^{1}\) Laboratoire Paul Painleve - UMR 8524
Universite de Lille I - Bat. M2
59655 Villeneuve d'Ascq, France
Email: [email protected]
\({}^{2}\) Department of Mathematics
University of California at San Diego, USA and
the National University, USA
Email: [email protected]
**Abstract.** We consider a limit theorem for the distribution of a r.v. \(Y_{n}=\arg\max_{i=1,\ldots,n}\{X_{i}\}\), where the \(X_{i}\)'s are independent continuous non-negative random variables. The r.v.'s \(X_{i},i=1,\ldots,n\), may be interpreted as the gains of \(n\) players in a game, and the r.v. \(Y_{n}\) itself as the number of a "winner". In the case of i.i.d. r.v.'s, the distribution of \(Y_{n}\) is, clearly, uniform on \(\{1,\ldots,n\}\), while when the \(X\)'s are non-identically distributed, the problem requires some calculations.
AMS 1991 Subject Classification:
Primary 60F17, Secondary 60G15.
Keywords: limit theorem, maximum of random variables, argmaximum.
## 1 Introduction and a Basic Formula
Let \(X_{1},X_{2},...\) be positive and independent r.v.'s. We will deal with \(\max\{X_{1},...,X_{n}\}\). For the case of identically distributed r.v.'s, the theory of limiting distribution for
the maximum was developed in papers by Fisher&Tippett [2], von Mises [5] and Gnedenko [3]; see also systematic presentations in [1], [4], [6], [7]. The case of non-identically distributed r.v.'s was considered in [6].
This paper concerns the probability
\[p_{in}=P(X_{i}=\max\{X_{1},...,X_{n}\}),\;\;i=1,\ldots,n.\]
If the r.v.'s \(X_{i},i=1,\ldots,n\), are interpreted as the gains of \(n\) players in a game, then \(p_{in}\) is the probability that the \(i\)-th player is a "winner". Below, we assume the \(X\)'s to be continuous, and in this case, the winner is unique with probability one.
In the case of i.i.d. r.v.'s, the probability \(p_{in}\) is, clearly, equal to \(1/n\); if the \(X\)'s are non-identically distributed, the problem requires some calculations.
Let \(F_{i}(x)=P(X_{i}\leq x)\), with \(F_{i}(0)=0\) and \(F_{i}(x)>0\) for \(x>0\). For \(x>0\), set
\[\nu_{i}(x)=-\ln F_{i}(x),\]
and \(\nu_{i}(0)=\infty\).
So, for all \(i\),
\[F_{i}(x)=\exp\{-\nu_{i}(x)\}, \tag{1.1}\] \[\nu_{i}(x)\;\;\;\mbox{is non-increasing},\;\;\nu_{i}(0)=\infty,\; \nu_{i}(\infty)=0. \tag{1.2}\]
The asymptotic behavior of \(\nu_{i}(x)\) as \(x\to\infty\) is equivalent to that of \(1-F_{i}(x)\). Below, we assume all \(\nu_{i}(x)\)'s to be continuous for \(x>0\).
_Example (Weibull's distribution)_
\[F_{i}(x)=\exp\left\{-\frac{c_{i}}{x^{\alpha}}\right\},\ \mbox{where}\ c_{i},\;\alpha>0,\]
a well-known distribution stable with respect to maximization. \(\Box\)1
Footnote 1: the symbol \(\Box\) means the end of an example; the symbol \(\blacksquare\) below will mean the end of a proof.
We have
\[p_{in} = \int_{0}^{\infty}\prod_{j=1,\;j\neq i}^{n}F_{j}(x)dF_{i}(x)\] \[= -\int_{0}^{\infty}\exp\left\{-\sum_{j=1,\;j\neq i}^{n}\nu_{j}(x) \right\}\exp\{-\nu_{i}(x)\}d\nu_{i}(x)\] \[= -\int_{0}^{\infty}\exp\left\{-\sum_{j=1}^{n}\nu_{j}(x)\right\}d \nu_{i}(x).\]
Integrating by parts and taking into account (1.1)-(1.2), we have
\[p_{in}=-\int_{0}^{\infty}\nu_{i}(x)\exp\left\{-\sum_{j=1}^{n}\nu_{j}(x)\right\}d \left(\sum_{j=1}^{n}\nu_{j}(x)\right). \tag{1.3}\]
Consider substitution
\[\sum_{i=1}^{n}\nu_{i}(x)=y. \tag{1.4}\]
For any non-increasing function \(r(x)\), we define its inverse as
\[r^{-1}(y)=\sup\{x:r(x)\geq y\}.\]
Let \(x_{n}(y)\) be the inverse of the function \(\sum_{i=1}^{n}\nu_{i}(x)\); in other words, a solution (in the above sense) to equation (1.4). Then from (1.3)-(1.4), it follows that
\[p_{in}=\int_{0}^{\infty}\nu_{i}(x_{n}(y))e^{-y}dy. \tag{1.5}\]
This may serve as a basic formula.
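The change of variables behind (1.5) is easy to verify numerically. The sketch below (our illustration) takes three players with Weibull-type tails \(\nu_{i}(x)=c_{i}x^{-\alpha_{i}}\) (the constants are our choice) and compares (1.5), with \(x_{n}(y)\) found by bisection, against the direct integral \(p_{1n}=\int_{0}^{\infty}\prod_{j\neq 1}F_{j}(x)\,dF_{1}(x)\):

```python
import numpy as np

c = np.array([1.0, 2.0, 0.5])       # illustrative tail constants (our choice)
a = np.array([2.0, 3.0, 2.0])       # illustrative tail exponents

def nu_sum(x):
    return sum(ci * x**(-ai) for ci, ai in zip(c, a))

def x_n(y):
    """Invert sum_j nu_j(x) = y by bisection; the sum is decreasing in x."""
    lo, hi = np.full_like(y, 1e-8), np.full_like(y, 1e8)
    for _ in range(100):
        mid = np.sqrt(lo * hi)                    # bisect on a log scale
        too_small = nu_sum(mid) > y               # mid lies below the root
        lo = np.where(too_small, mid, lo)
        hi = np.where(too_small, hi, mid)
    return np.sqrt(lo * hi)

def trapezoid(f, x):
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))

# p_{1n} via the basic formula (1.5).
y = np.linspace(1e-6, 40.0, 4001)
p1_formula = trapezoid(c[0] * x_n(y)**(-a[0]) * np.exp(-y), y)

# p_{1n} directly from the product integral.
x = np.logspace(-3, 4, 40001)
dF1 = c[0] * a[0] * x**(-a[0] - 1) * np.exp(-c[0] * x**(-a[0]))
p1_direct = trapezoid(np.exp(-c[1] * x**(-a[1]) - c[2] * x**(-a[2])) * dF1, x)
```

The two quadratures agree to the discretization accuracy, confirming (1.5) for heterogeneous tails.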
**Remark.** Condition \(F_{i}(x)>0\) for all \(x>0\) is not necessary; we imposed it just to make the proof of (1.5) more explicit. As a matter of fact, it is easy (though a bit long) to prove that the same is true, for example, if for all \(n\) and some finite \(a\geq 0\)
\[a_{n}=:\max_{i=1,\ldots n}\sup\{x:F_{i}(x)=0\}\leq a.\]
## 2 A Basic Example
Suppose
\[\nu_{i}(x)=c_{i}r(x), \tag{2.1}\]
where \(r(x)\) is a non-negative, continuous, and non-increasing function; \(r(0)=\infty\), \(r(\infty)=0\), and \(c_{i}\)'s are non-negative. Then
\[\sum_{i=1}^{n}\nu_{i}(x)=r(x)\sum_{i=1}^{n}c_{i},\]
and a solution to equation (1.4) is
\[x_{n}(y)=r^{-1}\left(\frac{y}{\sum_{i=1}^{n}c_{i}}\right). \tag{2.2}\]
So,
\[\nu_{i}(x_{n}(y))=c_{i}r\left(r^{-1}\left(\frac{y}{\sum_{i=1}^{n}c_{i}}\right) \right)=\frac{c_{i}}{\sum_{i=1}^{n}c_{i}}y. \tag{2.3}\]
Thus, in this case,
\[p_{in}=\frac{c_{i}}{\sum_{i=1}^{n}c_{i}}\int_{0}^{\infty}ye^{-y}dy=\frac{c_{i} }{\sum_{i=1}^{n}c_{i}}.\]
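This closed form is convenient to check by simulation (our illustration). For \(F_{i}(x)=\exp\{-c_{i}/x^{\alpha}\}\) one can sample \(X_{i}=(c_{i}/E_{i})^{1/\alpha}\) with \(E_{i}\) standard exponential, so the winner is \(\arg\max_{i}c_{i}/E_{i}\) (the exponent \(\alpha\) drops out):

```python
import numpy as np

rng = np.random.default_rng(0)
c = np.array([1.0, 2.0, 3.0])       # so p_in should be (1/6, 1/3, 1/2)
N = 200_000
E = rng.exponential(size=(N, len(c)))
winners = np.argmax(c / E, axis=1)  # argmax of X_i = (c_i/E_i)**(1/alpha)
freq = np.bincount(winners, minlength=len(c)) / N
```

The empirical winner frequencies match \(c_{i}/\sum_{j}c_{j}\) to Monte Carlo accuracy.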
## 3 A General Scheme
When looking at (2.2), one may suppose that for large \(n\), the asymptotic behavior of \(p_{in}\) is based just on the asymptotics of \(r^{-1}(x)\) at zero, which is connected with that of \(r(x)\) at infinity (or tails \(1-F_{i}(x)\)).
Assume the following.
1. \[\nu_{i}(x)=c_{i}r(x)(1+\delta_{i}(x)),\] (3.1) where \(r(x)\) is defined as above, \(\delta_{i}(x)\) are continuous, uniformly in \(i\) \[\delta_{i}(x)\to 0\ \mbox{as}\ x\to\infty,\] (3.2) and for positive constants \(M<\infty\) and \(m<1\), and for all \(i\) and \(x\), \[-m\leq\delta_{i}(x)\leq M.\] (3.3)
2. \[b_{n}=:\sum_{i=1}^{n}c_{i}\to\infty\ \ \mbox{as}\ \ n\to\infty.\] (3.4)
**Proposition 1**: _Set_
\[\alpha_{in}=\frac{c_{i}}{b_{n}}.\]
_Then_
\[p_{in}\sim\alpha_{in}\ \mbox{as}\ n\to\infty,\ \mbox{uniformly in}\ i.{}^{2} \tag{3.5}\]
Footnote 2: The symbol \(\sim\) means that the ratio of the left- and right-hand sides converges to one.
**Proof**
Let \(x_{n}(y)\) be a solution to equation
\[\sum_{i=1}^{n}\nu_{i}(x)=y, \tag{3.6}\]
that is,
\[r(x)\sum_{i=1}^{n}c_{i}(1+\delta_{i}(x))=y. \tag{3.7}\]
So,
\[x_{n}(y)=r^{-1}\left(\frac{y}{\sum_{i=1}^{n}c_{i}(1+\delta_{i}(x_{n}(y)))} \right). \tag{3.8}\]
From (3.3), it follows that
\[x_{n}(y)\geq r^{-1}\left(\frac{y}{(1-m)\sum_{i=1}^{n}c_{i}}\right). \tag{3.9}\]
Hence, since \(r^{-1}(0)=\infty\), and in view of (3.4),
\[x_{n}(y)\rightarrow\infty \tag{3.10}\]
as \(n\rightarrow\infty\).
Furthermore, in view of (3.8),
\[\nu_{i}(x_{n}(y))=c_{i}\cdot\frac{y(1+\delta_{i}(x_{n}(y)))}{\sum_{j=1}^{n}c_ {j}(1+\delta_{j}(x_{n}(y)))}.\]
Thus,
\[p_{in} = \int_{0}^{\infty}\nu_{i}(x_{n}(y))e^{-y}dy\] \[= \frac{c_{i}}{\sum_{j=1}^{n}c_{j}}\int_{0}^{\infty}y\cdot\frac{(1 +\delta_{i}(x_{n}(y)))\sum_{j=1}^{n}c_{j}}{\sum_{j=1}^{n}c_{j}(1+\delta_{j}(x _{n}(y)))}\cdot e^{-y}dy.\]
For each \(y>0\), in view of (3.10) and (3.2),
\[\frac{(1+\delta_{i}(x_{n}(y)))\sum_{j=1}^{n}c_{j}}{\sum_{j=1}^{n}c_{j}(1+\delta_{j}(x_{n}(y)))}\to 1\,\,\,\mbox{as}\,\,n\rightarrow\infty,\]
uniformly in \(i\). On the other hand,
\[\frac{(1+\delta_{i}(x_{n}(y)))\sum_{j=1}^{n}c_{j}}{\sum_{j=1}^{n}c_{j}(1+\delta_{ j}(x_{n}(y)))}\leq\frac{1+M}{1-m}.\]
Hence,
\[\frac{p_{in}}{\alpha_{in}}\rightarrow\int_{0}^{\infty}ye^{-y}dy=1.\quad\blacksquare\]
## 4 The case of "regularly" varying \(c_{i}\)'s
In the case where the coefficients \(c_{i}\)'s are varying - in a sense - regularly, we can present the result in a nicer form.
Consider the segment \([0,1]\) and identify a point \(i/n,\;\;i=1\ldots,n\), with a r.v. \(X_{i}\); so to speak, with the \(i\)-th "player". Let us assign to this point probability \(\alpha_{in}\), and suppose that the measure so defined weakly converges to a probability measure \(\alpha\) on \([0,1]\). In other terms,
\[\alpha_{n}=:\sum_{i=1}^{n}\delta_{\{i/n\}}\alpha_{in}\Rightarrow\alpha, \tag{4.1}\]
where \(\delta_{\{x\}}\) is a measure concentrated at point \(x\).
**Proposition 2**: _Suppose that, in addition to the conditions of Proposition 1, (4.1) holds. Then the discrete measure_
\[\mu_{n}=:\sum_{i=1}^{n}\delta_{\{i/n\}}p_{in}\Rightarrow\alpha. \tag{4.2}\]
**Proof** is straightforward. Since the convergence in (3.5) is uniform in \(i\), for any continuous bounded function \(h\),
\[\int_{0}^{1}hd\mu_{n} = \sum_{1}^{n}h\left(\frac{i}{n}\right)p_{in}=\sum_{1}^{n}h\left( \frac{i}{n}\right)\alpha_{in}(1+o(1))\] \[= (1+o(1))\int_{0}^{1}hd\alpha_{n}\rightarrow\int_{0}^{1}hd\alpha.\quad\blacksquare\]
**Examples**
1. Set \(c_{i}=i^{s}\), for \(s\geq 0\), and let \(x\in(0,1]\). Let \(k=k_{n}\) be such that \(\frac{k}{n}\leq x<\frac{k+1}{n}\). Then, as is easy to verify, \[\frac{\sum_{i=1}^{k_{n}}c_{i}}{\sum_{i=1}^{n}c_{i}}\to x^{s+1}.\] (4.3) (For \(k_{n}=0\), we set \(\sum_{i=1}^{k_{n}}=0\).) In other words, if \(F(x)\) is the distribution function (d.f.) of \(\alpha\), then \(F(x)=x^{s+1}\). Say, if \(c_{i}=i\), then for large \(n\), the distribution of the winner numbers may be well presented by a distribution on \([0,1]\) with d.f. \(F(x)=x^{2}\).
2. Let \(c_{i}=2^{i}\), Then in the same notations, for any \(x<1\), \[\frac{\sum_{i=1}^{k_{n}}c_{i}}{\sum_{i=1}^{n}c_{i}}\to 0,\] (4.4) and measure \(\alpha\) is concentrated at point 1.
3. Let \(c_{i}=1/i\), Then, as is easy to verify, for any \(x\in(0,1]\), \[\frac{\sum_{i=1}^{k_{n}}c_{i}}{\sum_{i=1}^{n}c_{i}}\to 1,\] (4.5) and measure \(\alpha\) is concentrated at point 0. \(\Box\)
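The three limits above are easy to observe numerically (our illustration); here \(F_{n}(x)=b_{\lfloor xn\rfloor}/b_{n}\) is evaluated at \(x=1/2\):

```python
import numpy as np

def share(cs, x):
    """F_n(x) from (4.1): mass of the first floor(x*n) coefficients."""
    k = int(x * len(cs))
    return cs[:k].sum() / cs.sum()

i = np.arange(1, 200_001, dtype=float)
r_poly = share(i, 0.5)                        # c_i = i (s = 1): limit 0.5**2 = 0.25
r_geom = share(2.0 ** np.arange(1, 51), 0.5)  # c_i = 2**i: F(1/2) -> 0 (mass at 1)
r_harm = share(1.0 / i, 0.5)                  # c_i = 1/i: F(1/2) -> 1 (mass at 0),
                                              # though only at a logarithmic rate
```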
As a matter of fact, the class of possible limiting distributions \(\alpha\) is narrow because, as we will see, in (4.1) we deal with regularly varying functions (reg.v.f.'s).3
Footnote 3: A positive function \(H(x)\) on \([0,\infty)\) is regularly varying in the sense of Karamata of order \(\rho,\ -\infty<\rho<\infty\), iff for any \(x>0\)
\[\frac{H(tx)}{H(t)}\to x^{\rho}\ \text{as}\ t\to\infty.\]
A function \(L(x)\) is called slowly varying if it is regularly varying with \(\rho=0\). Any reg.v.f. can be written as \(H(x)=x^{\rho}L(x)\), where \(L(\cdot)\) is slowly varying. A detailed presentation of reg.v.f.'s is given, for example, in Feller, [1], Chapter VIII, Section 8. Some definitions and examples may also be found in [6], Ch. 15.
**Proposition 3**: **(A)**: _Suppose (4.1) holds, and_
\[\frac{b_{n+1}}{b_{n}}\to 1\,\,\,\mbox{as}\,\,\,n\to\infty. \tag{4.6}\]
_Then the d.f. of \(\alpha\) is_
\[F(x)=x^{\rho},\,\,\,\,\,x\in[0,1], \tag{4.7}\]
_where \(0\leq\rho\leq\infty\), and \(b_{n}=b(n)\), where \(b(t)\) is a non-decreasing reg.v.f. (In (4.7), if \(\rho=0\), then \(F(x)=1\) for all \(x\in[0,1]\); if \(\rho=\infty\), then \(F(x)=0\) for all \(x<1\).)_
**(B)**: _Vice versa, let_ \(b_{n}=b(n)\)_, where_ \(b(t)\) _is a non-decreasing positive reg.v.f. Then (_4.6_) holds automatically, and (_4.1_) is true with the d.f._ \(F(x)\) _of_ \(\alpha\) _defined in (_4.7_)._
**Proof**
**(A)** Let \(F_{n}(x)\) and \(F(x)\) be the d.f.'s of measures \(\alpha_{n}\) and \(\alpha\), respectively. Then
\[F_{n}(x)\to F(x) \tag{4.8}\]
as \(n\to\infty\) for all \(x\)'s that are continuity points of \(F(x)\).
Let \(b_{0}=0\), and for all \(t\geq 0\) define the function \(b(t)=b_{n}\) for \(t\in[n,n+1)\). We will prove that \(b(t)\) is a reg.v.f.
Let us fix a continuity point \(x\), and let an integer \(k=k_{n}\) be such that \(\frac{k}{n}\leq x<\frac{k+1}{n}\). Then from (4.8) it follows that
\[\frac{b_{k_{n}}}{b_{n}}\to F(x)\,\,\,\mbox{as}\,\,\,n\to\infty.\]
On the other hand, by definition, \(b_{k_{n}}=b(k_{n})=b(nx)\), and hence
\[\frac{b(nx)}{b(n)}\to F(x)\,\,\,\mbox{as}\,\,\,n\to\infty. \tag{4.9}\]
Together with (4.6), this implies that
\[\frac{b(tx)}{b(t)}\to F(x)\,\,\,\mbox{as}\,\,\,t\to\infty, \tag{4.10}\]
where \(t\)'s are arbitrary positive numbers. Indeed, let \(n=n_{t}\) be such that
\[\frac{b(nx)}{b(n+1)}\leq\frac{b(tx)}{b(t)}\leq\frac{b((n+1)x)}{b(n)}. \tag{4.11}\]
Furthermore, if \(t\to\infty\), then \(n=n_{t}\to\infty\), and
\[\frac{b(nx)}{b(n+1)}=\frac{b(n)}{b(n+1)}\cdot\frac{b(nx)}{b(n)}\to F(x)\]
in view of (4.6) and (4.9). Similarly, the same is true for the rightmost fraction in (4.11).
So, function \(b(t)\) is a regularly varying function, and the limit in (4.10) must be equal to a power function \(x^{\rho}\); see, for instance, Lemma 1 from Feller [1], VIII, 8.
**(B)** Let \(b_{n}=b(n)\) where \(b(t)\) is a reg.v.f. (that may be different from the piecewise constant function \(b(x)\) defined in part (A) of the proof). Let us fix an \(x\in(0,1]\), and let again an integer \(k=k_{n}\) be such that \(\frac{k}{n}\leq x<\frac{k+1}{n}\).
First, since \(b(x)\) is non-decreasing,
\[F_{n}(x)=\frac{b_{k_{n}}}{b_{n}}=\frac{b(k_{n})}{b(n)}\leq\frac{b(nx)}{b(n)} \to x^{\rho}, \tag{4.12}\]
where \(0\leq\rho<\infty\). On the other hand,
\[F_{n}(x)=\frac{b(k_{n})}{b(n)}\geq\frac{b(xn-1)}{b(n)}=\frac{b(xn-1)}{b(xn)} \frac{b(xn)}{b(n)}. \tag{4.13}\]
Let us note that for any non-decreasing reg.v.f. \(b(x)\)
\[\frac{b(x-1)}{b(x)}\to 1\,\,\,\mbox{as}\,\,\,x\to\infty. \tag{4.14}\]
Indeed, for \(s<1\) and sufficiently large \(x\)'s
\[1\geq\frac{b(x-1)}{b(x)}\geq\frac{b(sx)}{b(x)}\to s^{\rho},\]
and the right-hand side can be made arbitrarily close to 1. By virtue of (4.14), the first factor in (4.13) converges to 1, and the whole product converges to \(x^{\rho}\).
Relation (4.14) also implies (4.6). \(\quad\blacksquare\).
**Remarks and Examples**
1. When considering examples, it is more convenient to deal directly with the sequences \(b_{n}\) rather than the coefficients \(c_{i}\). In particular, if the \(b_{n}\) are asymptotically exponential, (4.6) is not true, but it is easy to show that the limiting measure \(\alpha\) exists and is concentrated at point \(1\) (see also Example 2 above). On the other hand, if, for instance, \(b_{n}\sim e^{c\sqrt{n}}\) for a positive \(c\), then (4.6) is true, though the limiting measure is again concentrated at \(1\).
2. To specify a particular \(\rho\), we may, for example, use the fact that, under conditions of Proposition 3, \[\frac{b_{n}}{b_{2n}}\to\left(\frac{1}{2}\right)^{\rho}.\] So, if we know \(\lim\frac{b_{n}}{b_{2n}}\), then we may find \(\rho\). In particular, if \(\frac{b_{n}}{b_{2n}}\to 0\), then \(\rho=\infty\), and the distribution \(\alpha\) is concentrated at \(1\), while if \(\frac{b_{n}}{b_{2n}}\to 1\), then \(\rho=0\), and the distribution \(\alpha\) is concentrated at \(0\).
3. We may deal with a triangular array, that is, set \(c_{i}=c_{in}\). Then a limiting distribution, if any, may be practically arbitrary. As an example, consider an integrable, non-negative function \(g(x)\) on \([0,1]\) and set the coefficient \(c_{in}=g(i/n)\). Then, the limiting distribution will be that with the density \[f(x)=\frac{g(x)}{\int_{0}^{1}g(x)dx}.\] A proof of (4.1) in this case may run similarly to what we did above. Note that when considering a counterpart of (2.2), we may take into account that in this case \[b_{n}=:\sum_{i=1}^{n}c_{in}\sim n\cdot\int_{0}^{1}g(x)dx.\]
4. Clearly, \(\arg\max\{X_{1},\ldots X_{n}\}\stackrel{{ d}}{{=}}\arg\max\{ \widetilde{X}_{1},\ldots\widetilde{X}_{n}\}\), where \(\widetilde{X}_{i}=f(X_{i})\), and \(f(x)\) is a continuous strictly increasing function. It is easy to verify that the corresponding function \(\widetilde{r}(x)=r(f^{-1}(x))\). This is a way to "improve" \(r(x)\).
5. In the case where the distributions of the \(X\)'s are not continuous, the above technique needs to be improved. Regarding the fact that in this case there
may be several "winners", one can conjecture that the situation may be fixed if we select from winners one at random (throw lots). On the other hand, in this case probability \(p_{in}\neq 1/n\) even if the \(X_{i}\)'s are identically distributed. Consider the simplest
**Example**. Let all \(X_{i}=1\) or \(0\) with probabilities \(p\) and \(q\), respectively. Then
\[p_{in}=p\cdot 1+q\cdot q^{n-1}=p+q^{n}.\]
However, in the case of selecting a winner at random, \(p_{in}=1/n\) just by symmetry, though the same may be also proved directly.
We thank Professor Vadim Ponomarenko (SDSU); a bygone conversation with him helped us to arrive at the statement of the problem of this paper. The problem we discussed with Professor Ponomarenko may, above all, serve as a good application example. It concerns a complex machine consisting of a large number of parts with random and non-identically distributed lifetimes. The question is which part will break down first. Certainly, we deal here with \(\arg\min\), but it can easily be reduced to \(\arg\max\).
|
2306.12793 | Curved spacetime as a dispersive multiferroic medium for an electromagnetic wave: polarization and magnetization vectors in the Schwarzschild spacetime | We study one of the interesting properties of electromagnetic wave propagation in the curved Schwarzschild background spacetime in the framework of general relativity (GR). The electromagnetic wave equation has been derived from vacuum general relativistic Maxwell's equations. It is shown that the solutions for the electromagnetic field can be expanded in the spherical harmonic functions and all components of the electromagnetic fields can be expressed in terms of two radial profile functions. These radial profile functions can be expressed in terms of the confluent Heun function. The calculated behaviour of the electric and magnetic susceptibilities near the event horizon appears to be similar to the susceptibilities of multiferroic materials near phase transition. The Curie temperature of this phase transition appears to coincide with the Hawking temperature. | Bobur Turimov, Igor Smolyaninov | 2023-06-22T10:49:07Z | http://arxiv.org/abs/2306.12793v1 |

Curved spacetime as a dispersive multiferroic medium for an electromagnetic wave: polarization and magnetization vectors in the Schwarzschild spacetime
###### Abstract
We study one of the interesting properties of the electromagnetic wave propagation in the curved Schwarzschild background spacetime in the framework of general relativity (GR). The electromagnetic wave equation has been derived from vacuum general relativistic Maxwell's equations. It is shown that the solutions for the electromagnetic field can be expanded in the spherical harmonic functions and all components of the electromagnetic fields can be expressed in terms of two radial profile functions. These radial profile functions can be expressed in terms of the confluent Heun function. The calculated behavior of the electric and magnetic susceptibilities near the event horizon appears to be similar to the susceptibilities of multiferroic materials near phase transition. The Curie temperature of this phase transition appears to coincide with the Hawking temperature.
## I Introduction
All gravitational compact objects in the Universe emit, absorb, reflect and transmit electromagnetic radiation, which allows an observer to obtain information about the composition, temperature, density, age, motion, distance, and other chemical and physical properties of such objects. From this point of view, studying the propagation of electromagnetic waves in curved space is becoming an extremely important and interesting topic of modern astrophysics and cosmology, in particular after the recent observation of the first images of the supermassive black hole (SMBH) candidates located in the centre of galaxy M87 [1; 2] and galaxy Sgr A\({}^{*}\) [3; 4] by the Event Horizon Telescope (EHT) collaboration. There have also been the hugely important detections of gravitational waves from binary systems, in particular a black hole-black hole binary [5] and a neutron star-neutron star binary [6], by the LIGO/Virgo scientific collaborations.
It was shown in Ref. [7] that the gravitational field of a rotating compact object can rotate the direction of the polarization vector of an electromagnetic wave passing through its field. The influence of gravitation on the propagation of electromagnetic waves was studied in [8]. The diffraction of electromagnetic waves [9], the scattering of electromagnetic waves [10; 11], and radiation in the Schwarzschild spacetime [12; 13; 14] have been studied. Quasi-normal modes (QNM) of the Schwarzschild black hole were studied in [15].
The vacuum solution for the electromagnetic fields outside a magnetized sphere in the Newtonian framework [16; 17] and its general-relativistic correction [18; 19; 20] have been investigated, while the effects of alternative theories of the electromagnetic fields around compact objects have been studied in Refs. [21; 22]. One of the simplest models of energy loss due to magneto-dipole radiation from a rotating relativistic star has been studied in [23; 24]. A realistic magnetosphere model of a magnetized star, motivated by the light curves of pulsars, has been investigated in [25]. The structure of the pulsar magnetosphere, related astrophysical processes and high-energy particle acceleration mechanisms have been widely studied, see e.g. [26; 27; 28]. An analytical estimation for the magneto-dipole radiation and oscillations of a highly magnetized relativistic star has also been presented [29]. The time dependence of the dipole magnetic field [30; 31] and the multipole magnetic field [32; 33] of a magnetized neutron star has been investigated. The magnetic field decay through Hall drift in stellar crusts has been studied in [34].
Controlling the evolution of light through the space curvature of a medium in the framework of general relativity has been studied in Ref. [35]; in particular, the authors studied the trajectory of light in a paraboloid structure inspired by the Schwarzschild metric describing the spacetime around a black hole. It has been shown that this method allows calculations of light-ray trajectories, as well as determination of the diffraction properties and the phase and group velocities of wavepackets propagating within the curved-space structure. The propagation of a light wave in curved thin elastic waveguides, where curvature is shown to be equivalent to a spatially modulated refractive index, has been investigated in [36]. By introducing topological phases in photonic lattices in curved space [37], it was shown that the curvature of spacetime can induce topological edge states and topological phase transitions in a waveguiding layer covering the surface of a three-dimensional body. In Ref. [38] the polarization vector of light in the spacetime of a rotating black hole has been studied, and it is noted that polarization occurs due to the pure gravitational field and the rotation of the spacetime. In particular, it was shown by several authors that the index of refraction of the curved space around a black hole is
\[n=\frac{1}{\sqrt{1-\frac{2M}{r}}}. \tag{1}\]
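This effective index is easy to evaluate; the snippet below is our own minimal sketch (not from the paper), in geometrized units \(G=c=1\), with the function name and sample radii chosen for illustration only.

```python
import math

def refractive_index(r, M=1.0):
    """Effective index of refraction of the Schwarzschild spacetime, Eq. (1),
    in geometrized units G = c = 1; defined only outside the horizon r > 2M."""
    if r <= 2.0 * M:
        raise ValueError("r must lie outside the event horizon r = 2M")
    return 1.0 / math.sqrt(1.0 - 2.0 * M / r)

# Far from the mass the index tends to 1 (flat space), while it grows
# without bound as r approaches the horizon r = 2M.
print(refractive_index(1.0e6))   # essentially 1
print(refractive_index(2.001))   # large
```

The divergence of \(n\) at \(r=2M\) is the same feature that reappears below as the divergence of the effective susceptibilities.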
In this paper, we investigate the polarization and magnetization vectors in the curved space, in particular in the Schwarzschild spacetime. We demonstrate that the calculated behavior of the electric and magnetic susceptibilities near the event horizon is similar to the susceptibilities of multiferroic materials near phase transition. The reinterpretation of this behavior as a critical phenomenon is novel and very interesting, especially in view of the fact that the Curie and the Hawking temperatures coincide in this novel physical picture.
The paper is organized as follows. In Sect. II, we discuss the derivation of the expressions for the polarization and magnetization vectors for a light ray propagating in the Schwarzschild spacetime. The analogy between the calculated behavior of the electric and magnetic susceptibilities near the event horizon and the susceptibilities of multiferroic materials near phase transition is described. In Sect. III, we present our results on light propagation in the Schwarzschild spacetime. Finally, we summarize the obtained results in Sect. IV. Throughout the paper, we use a space-like signature \((-,+,+,+)\) and a system of units in which \(G=c=1\). Greek indices are taken to run from \(0\) to \(3\), while Latin indices run from \(1\) to \(3\).
II Polarization and magnetization vectors and electric and magnetic susceptibilities: Schwarzschild spacetime as a dispersive multiferroic medium
It is well known that the polarization and the magnetization vectors of a medium are determined as
\[\mathbf{P}=\mathbf{D}-\mathbf{E}\,\qquad\mathbf{M}=\mathbf{B}-\mathbf{H}\, \tag{2}\]
and relations between the vectors \(\mathbf{D}\) and \(\mathbf{E}\), as well as vectors \(\mathbf{B}\) and \(\mathbf{H}\) in the macroscopic electrodynamics are defined as
\[\mathbf{D}=\epsilon\mathbf{E}\,\qquad\mathbf{B}=\mu\mathbf{H}. \tag{3}\]
where \(\epsilon\) and \(\mu\) are the effective electric permittivity and magnetic permeability of the spacetime defined as \(\epsilon=\mu=1/\sqrt{-g_{tt}}\). On the other hand, the effective polarization vector is proportional to the electric field, while the effective magnetization vector is proportional to the magnetic field, or
\[\mathbf{P}=\chi\mathbf{E}\,\qquad\mathbf{M}=\chi^{\prime}\mathbf{H}\, \tag{4}\]
where \(\chi\) is the electric susceptibility and \(\chi^{\prime}\) is the magnetic susceptibility. Taking into account all these facts, one can get
\[\epsilon=\mu=\frac{1}{\sqrt{1-\frac{2M}{r}}}\,\quad\rightarrow\quad\chi=\chi^{ \prime}=\frac{1}{\sqrt{1-\frac{2M}{r}}}-1. \tag{5}\]
Note that the effective permeability and permittivity in the Schwarzschild metric are isotropic. The respective tensors are diagonal, and all the diagonal elements are the same.
Using the explicit solution for the electromagnetic field (see, e.g. App. A), the components of the polarization vector in the Schwarzschild spacetime are
\[P_{\hat{r}} =\frac{e^{-i\omega t}}{r}\left(\frac{1}{\sqrt{f}}-1\right)V_{\ell }Y_{\ell m}\, \tag{6}\] \[P_{\hat{\theta}} =\frac{e^{-i\omega t}(1-\sqrt{f})}{\ell(\ell+1)}\left[D_{r}V_{ \ell}\partial_{\theta}Y_{\ell m}+\frac{i\omega}{f\sin\theta}U_{\ell}\partial _{\phi}Y_{\ell m}\right]\,\] (7) \[P_{\hat{\phi}} =\frac{e^{-i\omega t}(1-\sqrt{f})}{\ell(\ell+1)}\left[D_{r}V_{ \ell}\frac{1}{\sin\theta}\partial_{\phi}Y_{\ell m}-\frac{i\omega}{f}U_{\ell} \partial_{\theta}Y_{\ell m}\right]\, \tag{8}\]
and the components of magnetization vector are
\[M_{\hat{r}} =\frac{e^{-i\omega t}}{r}(1-\sqrt{f})U_{\ell}Y_{\ell m}\, \tag{9}\] \[M_{\hat{\theta}} =\frac{e^{-i\omega t}\sqrt{f}(1-\sqrt{f})}{\ell(\ell+1)}\left[D_{r }U_{\ell}\partial_{\theta}Y_{\ell m}-\frac{i\omega}{f\sin\theta}V_{\ell} \partial_{\phi}Y_{\ell m}\right]\,\] (10) \[M_{\hat{\phi}} =\frac{e^{-i\omega t}\sqrt{f}(1-\sqrt{f})}{\ell(\ell+1)}\left[D_{ r}U_{\ell}\frac{1}{\sin\theta}\partial_{\phi}Y_{\ell m}+\frac{i\omega}{f}V_{ \ell}\partial_{\theta}Y_{\ell m}\right]. \tag{11}\]
This representation is important as a particular example of a situation in which all components of the electromagnetic field and its vector potential may be found in analytical form in the Schwarzschild metric.
Here we will consider transverse magnetic waves (TM mode) characterized by the fact that the magnetic vector (\(\mathbf{B}\)) is always perpendicular to the direction of propagation. In this case the components of polarization and magnetization vectors are
\[P_{\hat{r}} =\frac{e^{-i\omega t}}{r}\left(\frac{1}{\sqrt{f}}-1\right)V_{\ell }Y_{\ell m}\, \tag{12}\] \[P_{\hat{\theta}} =\frac{e^{-i\omega t}(1-\sqrt{f})}{\ell(\ell+1)}D_{r}V_{\ell} \partial_{\theta}Y_{\ell m}\,\] (13) \[P_{\hat{\phi}} =\frac{e^{-i\omega t}(1-\sqrt{f})}{\ell(\ell+1)}D_{r}V_{\ell} \frac{1}{\sin\theta}\partial_{\phi}Y_{\ell m}\,\] (14) \[M_{\hat{r}} =0\,\] (15) \[M_{\hat{\theta}} =-\frac{e^{-i\omega t}}{\ell(\ell+1)}\left(\frac{1}{\sqrt{f}}-1 \right)\frac{i\omega}{\sin\theta}V_{\ell}\partial_{\phi}Y_{\ell m}\,\] (16) \[M_{\hat{\phi}} =\frac{e^{-i\omega t}}{\ell(\ell+1)}\left(\frac{1}{\sqrt{f}}-1 \right)i\omega V_{\ell}\partial_{\theta}Y_{\ell m}\, \tag{17}\]
while in the case of transverse electric waves (TE mode) which is characterized by the fact that the electric vector (\(\mathbf{E}\)) is always perpendicular to the direction of propagation, one can get
\[P_{\hat{r}} =0\, \tag{18}\] \[P_{\hat{\theta}} =\frac{e^{-i\omega t}(1-\sqrt{f})}{\ell(\ell+1)}\frac{i\omega}{f \sin\theta}U_{\ell}\partial_{\phi}Y_{\ell m}\,\] (19) \[P_{\hat{\phi}} =-\frac{e^{-i\omega t}(1-\sqrt{f})}{\ell(\ell+1)}\frac{i\omega}{f }U_{\ell}\partial_{\theta}Y_{\ell m}\,\] (20) \[M_{\hat{r}} =\frac{e^{-i\omega t}}{r}(1-\sqrt{f})U_{\ell}Y_{\ell m}\,\] (21) \[M_{\hat{\theta}} =\frac{e^{-i\omega t}(\sqrt{f}-f)}{\ell(\ell+1)}D_{r}U_{\ell} \partial_{\theta}Y_{\ell m}\,\] (22) \[M_{\hat{\phi}} =\frac{e^{-i\omega t}(\sqrt{f}-f)}{\ell(\ell+1)}D_{r}U_{\ell} \frac{1}{\sin\theta}\partial_{\phi}Y_{\ell m}. \tag{23}\]
We should note that the divergence of the electric and magnetic susceptibilities of vacuum near the event horizon looks very similar to the properties of a multiferroic material [39; 40; 41] near its phase transition. Indeed, due to the Unruh effect, a near-horizon observer must see the electromagnetic field excited at a local temperature
\[T_{\rm loc}=\frac{1}{4\pi\sqrt{2Mr(1-\frac{2M}{r})}} \tag{24}\]
Therefore, after simple algebraic transformations, the derived expressions for the electric and magnetic susceptibilities may be recast as
\[\chi\sim\frac{1}{(T-T_{H})^{\gamma}}\, \tag{25}\]
where \(T_{H}=\frac{1}{8\pi M}\) is the Hawking temperature,
\[T=\frac{1}{4\pi\sqrt{2Mr}} \tag{26}\]
is the local temperature experienced by an observer near the horizon at radius \(r\), redshifted to infinity, and \(\gamma=1/2\) is the critical exponent. It is evident that this expression looks similar to the Curie-Weiss law in multiferroics.
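This scaling can be checked numerically. The sketch below is our own illustration (units with \(M=1\); variable names are ours): it estimates the exponent \(\gamma\) from Eqs. (5) and (26) by a two-point logarithmic fit as \(r\to 2M\).

```python
import math

M = 1.0
T_H = 1.0 / (8.0 * math.pi * M)          # Hawking temperature

def chi(r):
    """Electric (and magnetic) susceptibility of the vacuum, Eq. (5)."""
    return 1.0 / math.sqrt(1.0 - 2.0 * M / r) - 1.0

def T_loc(r):
    """Local temperature redshifted to infinity, Eq. (26)."""
    return 1.0 / (4.0 * math.pi * math.sqrt(2.0 * M * r))

# Two radii approaching the horizon r = 2M; the exponent is the slope of
# log(chi) versus log|T - T_H|.
r1 = 2.0 * M * (1.0 + 1.0e-6)
r2 = 2.0 * M * (1.0 + 1.0e-7)
gamma = -(math.log(chi(r1)) - math.log(chi(r2))) / (
    math.log(T_H - T_loc(r1)) - math.log(T_H - T_loc(r2)))
print(gamma)  # close to the critical exponent 1/2
```

The fitted value approaches \(1/2\), consistent with the critical exponent quoted above.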
Figure 1: The radial dependence of the electric susceptibility and magnetic susceptibility.

Below some critical temperature \(T_{C}\), multiferroic materials simultaneously exhibit ferromagnetic and ferroelectric properties. In some cases \(T_{C}\) may be equal to zero, so that a quantum critical point is observed. Above \(T_{C}\), very near the critical temperature, these materials exhibit simultaneously divergent paramagnetism and paraelectricity. While they have zero net dipole moments, their electric and magnetic susceptibilities diverge:
\[\chi=\frac{C}{T-T_{C}}\, \tag{27}\]
where \(C\) is the material-specific Curie-Weiss constant (for typical multiferroic materials in three spatial dimensions the critical exponent equals \(\gamma=1\)). Below \(T_{C}\) these materials exhibit simultaneous ferromagnetism and ferroelectricity, and their susceptibilities also diverge near the critical temperature. We should also mention the possibility of reverse "re-entrant" behavior in semiconductors doped with magnetic impurities, in which the paramagnetic state exists at low temperatures, while heating the material above \(T_{C}\) leads to establishment of the ferromagnetic order [42]. Re-entrant ferroelectricity was also observed in multiferroic Fe-substituted MnWO4 [43].
It is important to note that the expected critical exponent in a 4D Schwarzschild spacetime does not need to equal 1. In general, different theories, such as mean-field theory in various dimensions, the two-dimensional Ising model, etc., produce different values of the critical exponents, and these exponents do not necessarily coincide with the experimentally measured values. Therefore, the obtained value of \(1/2\) for the critical exponent should not be considered problematic.
Typically, multiferroic materials also exhibit ferroelasticity [44], which implies that elastic deformations of these materials strongly affect their magnetic and electric properties [45]. For example, such materials as FeZrB(Cu) spin glasses exhibit very pronounced dependencies of their Curie temperature \(T_{C}\) on tensile stress [46]. In a similar fashion, we may interpret Fig. 1 as evidence of a phase transition phenomenon in which a deformation of Minkowski spacetime due to gravity leads to the onset of its multiferroic properties.
Since the divergent magnetic and electric susceptibilities of the Minkowski vacuum are identical, we should classify the physical vacuum as a type-II multiferroic material (based on the classification developed by Khomskii [47]). In some type-II multiferroics magnetic ordering breaks the inversion symmetry and directly causes the ferroelectricity, so that the ordering temperatures for the two phenomena are identical. A typical example of such materials is TbMnO3 [48], in which a non-centrosymmetric magnetic spiral structure accompanied by a ferroelectric polarization sets in at \(T_{C}=28\) K. An opposite situation was observed in some Mott insulating charge-transfer salts [49], where a charge-ordering transition to a polar ferroelectric state may drive magnetic ordering.
We should also note that the reinterpretation of quantum black holes in the language of modern condensed matter physics has recently become a very active research field [50; 51]. For example, Dvali et al. [50] proposed that a black hole may be understood as a graviton Bose-Einstein condensate at the critical point of a quantum phase transition, which is somewhat similar to phase transitions observed in cold atoms. In agreement with our multiferroic analogy, it may be argued that graviton condensation is also somewhat similar to the ferroelastic phase transition, in which spontaneous strain arises in a material in response to an applied stress. In another example, Stephens et al. suggested that the black hole system shares similarities with the defect-mediated Kosterlitz-Thouless transition [52] in condensed matter. Thus, our interpretation of Fig. 1 as a multiferroic phase transition makes perfect sense within the context of this contemporary line of thought.
We must also point out that regardless of the coordinate choice, the effective permittivity and permeability of the spacetime metric describing a spherically symmetric distribution of mass cannot be eliminated everywhere. In this sense, the polarization and the magnetization of spacetime should be treated as essential physical objects. In some way, these properties of the physical spacetime are very similar to the properties of any real electromagnetic medium. For example, it is well established in Transformation Optics that by transforming the macroscopic Maxwell equations into some curvilinear coordinate system, a local value of the effective permittivity and permeability may always be made equal to 1 in any given location inside the medium. However, this fact does not make such a medium any less real.
Based on the above discussion, and by combining Eqs. (5) and (27), we may conclude that light propagation in the Schwarzschild spacetime may be emulated by engineering a hot spot inside a multiferroic medium in which the temperature distribution near the horizon behaves as
\[T=T_{C}+C\sqrt{1-\frac{2M}{r}}\, \tag{28}\]
where \(2M\) now defines the effective Schwarzschild radius of an effective electromagnetic black hole inside a multiferroic medium. The scale of \(2M\) should be large enough to ensure the validity of the macroscopic electrodynamic description of the medium, which means that it can span a range from a few nanometers to macroscopic dimensions. Engineering such a temperature distribution using localized heat sources (or localized absorbers of external radiation) is a straightforward task for modern metamaterial engineering. Moreover, local lattice distortions under the influence of external radiation and heat (which affect the local magnitude of the Curie constant \(C\)) may also be incorporated into such a model.
In the next Section we will discuss some interesting aspects of light propagation around both astrophysical and emulated Schwarzschild spacetime. However, we should emphasize the difference in physical mechanisms involved in these two different situations. In the astrophysical case, relativistic solutions for the \(E\)- and \(B\)-field are the reason for curvilinear light ray trajectories. In the second one, the magnetization and polarization of matter lead to the observable optical effects.
## III Light propagation in the Schwarzschild spacetime
As one can see from the equations derived in Section 2, it is sufficient to find the profile functions \(U_{\ell}\) and \(V_{\ell}\) in order to produce all components of the electromagnetic field and its vector potential. Now inserting the expressions (11) and (14) into (19) and (10), the radial equation is expressed as
\[\left[f\partial_{r}\left(f\partial_{r}\right)+\omega^{2}-f\frac{\ell(\ell+1)} {r^{2}}\right]\left[rU_{\ell}(r),rV_{\ell}(r)\right]=0\, \tag{29}\]
which is also known as the Regge-Wheeler equation for the function \(rV_{\ell}(r)\). Notice that the same equation is obtained for the function \(U_{\ell}(r)\). It is well known that in flat space (i.e. \(f=1\)), the solution \(V_{\ell}(r)\) of the electromagnetic wave equation is expressed in terms of the spherical Bessel functions of the first and second kind, \(j_{\ell}(\omega r)\) and \(y_{\ell}(\omega r)\), which describe standing waves that are regular and singular at the origin, respectively; or the spherical Bessel functions of the third kind, also known as Hankel functions, \(h_{\ell}^{\pm}(\omega r)\), which describe travelling waves with a source at the origin, where they have a singularity, see e.g. [53]. Here the upper indices "\(\pm\)" denote outgoing and ingoing waves.
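As a concrete check of the flat-space limit, the sketch below (our own illustration, for \(\ell=1\)) verifies by finite differences that \(u(r)=r\,j_{\ell}(\omega r)\) satisfies Eq. (29) with \(f=1\), i.e. \(u''+(\omega^{2}-\ell(\ell+1)/r^{2})u=0\).

```python
import math

def j1(x):
    """Spherical Bessel function of the first kind with l = 1."""
    return math.sin(x) / x**2 - math.cos(x) / x

omega, ell = 2.0, 1
u = lambda r: r * j1(omega * r)          # candidate solution r V_l(r) for f = 1

# Central finite difference for u'' and the residual of
# u'' + (omega^2 - l(l+1)/r^2) u = 0 at a sample radius.
r, h = 5.0, 1e-3
u_pp = (u(r + h) - 2.0 * u(r) + u(r - h)) / h**2
residual = u_pp + (omega**2 - ell * (ell + 1) / r**2) * u(r)
print(abs(residual))  # small: limited only by the finite-difference error
```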
In general, finding the solution of equation (29) is not a simple task. However, in the particular case \(\ell=0\), an analytical solution of the wave equation may be found as \(\sim e^{\pm\Omega r}\) with \(\Omega=\omega[1+(1-f)\ln f]\), which corresponds to a monopole solution describing a time-dependent monopole point-like charge. However, from the physical point of view the magnetic monopole solution is meaningless. One has to emphasise that in this case the components of the vector potential are divergent. Therefore, monopole solutions can be found directly by solving equations (19) and (10), so that \(E_{\hat{r}},B_{\hat{r}}\sim r^{-2}e^{\pm\Omega r}\). These solutions are not regular at the origin; however, at the horizon they take a finite value. Figure 2 shows the radial dependence of the monopole electromagnetic wave in the Schwarzschild spacetime (solid blue line) and in flat space (dashed black line) for different values of the angular frequency. As one can see, there is no difference in the amplitude of the electromagnetic wave. On the other hand, there is a time delay between the propagation of waves in the flat and curved spaces. In the near zone, the field lines in curved space are quite dense in comparison with those in flat space.
Figure 2: The radial dependence of the monopole electromagnetic wave for different angular frequencies in the Schwarzschild spacetime (solid blue line) and in flat space (dashed black line).

It is worth noticing that for very small angular frequencies, i.e. \(M\omega\ll 1\), the second term of Eq. (29) may be neglected. In this case, the equation becomes quite simple and the stationary solution can be expressed in terms of special functions, for instance, the reduced Legendre functions of the first and second kind \(\sim P_{\ell}^{1}(1-r/M)\) and \(\sim Q_{\ell}^{1}(1-r/M)\) in Refs. [20; 53; 54], the hypergeometric function \(\sim{}_{2}F_{1}(\ell,\ell+2,2(\ell+1),2M/r)\) in [55; 56; 57] and the Jacobi polynomial \(\sim J_{\ell-1}^{(2,0)}\left(1-r/M\right)\) (details are to be found in Ref. [58]). Notice that these special functions are not solutions of the full wave equation. However, these solutions are valid when the \(\omega^{2}\) term is very small in comparison with the other terms of Eq. (29), i.e. in the stationary case.
The numerical solution of the wave equation in the background Schwarzschild spacetime has been presented by a number of authors, for instance in the WKB approximation [59; 60; 61; 62], and a semi-analytical approach has been suggested in [63; 64]. Introducing the tortoise coordinate \(r_{*}=r+2M\ln(r/2M-1)\), equation (29) reduces to the standard one-dimensional Schrödinger-like equation for a particle with energy \(\omega^{2}\) in the potential \(\ell(\ell+1)f/r^{2}\) (see, e.g. [65]). Very similar equations can easily be derived for the spin-zero and spin-two fields in the scalar and gravitational perturbations. Unlike the other perturbations, the stationary point of the potential for the electromagnetic perturbation is independent of the orbital number \(\ell\), namely \(r_{c}=3M\), which corresponds to the peak (maximum) of the potential. The critical value of the angular frequency at the turning point is \(\omega_{c}=\sqrt{\ell(\ell+1)}/(3\sqrt{3}M)\). If the angular frequency \(\omega\) is greater than the critical one (i.e. \(\omega>\omega_{c}\)) the wave vector is real, otherwise (\(\omega<\omega_{c}\)) it is imaginary [65]. At large distances, \(r\rightarrow\infty\), the solution of Eq. (29) behaves as \(rV_{\ell}\sim e^{\pm i\omega r_{*}}\), which is independent of the orbital number \(\ell\).
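The peak position \(r_{c}=3M\) and the critical frequency \(\omega_{c}\) can be reproduced by a direct scan of the potential; the following sketch is our own illustration, with \(M=1\) and \(\ell=1\) chosen for concreteness.

```python
import math

M, ell = 1.0, 1

def V(r):
    """Effective potential l(l+1) f / r^2 for electromagnetic perturbations."""
    return ell * (ell + 1) * (1.0 - 2.0 * M / r) / r**2

# Scan r in (2M, 10M] and locate the maximum of the potential.
rs = [2.0 * M + i * 1.0e-4 for i in range(1, 80001)]
r_peak = max(rs, key=V)
omega_c = math.sqrt(V(r_peak))           # critical frequency at the peak

print(r_peak)    # close to 3M
print(omega_c)   # close to sqrt(l(l+1)) / (3*sqrt(3)*M)
```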
The main purpose of this paper is to find the exact analytical solution of the radial wave equation (29). Hereafter, introducing the new radial coordinate \(z=1-r/2M\) and redefining the profile function \(V_{\ell}(z)=F(z)e^{\epsilon z/2}z^{(\gamma-1)/2}(z-1)^{(\delta-1)/2}\), the radial equation reduces to the well-known confluent Heun equation [66; 67; 68]:
\[F^{\prime\prime}+\left(\frac{\gamma}{z}+\frac{\delta}{z-1}+\epsilon\right)F^{ \prime}+\frac{\alpha z-q}{z(z-1)}F=0\, \tag{30}\]
which has three singular points at \(z=0\), \(z=1\) and \(z=\infty\). The solution to above equation is represented by the confluent Heun function:
\[F(z) =C_{1\ell}\text{HeunC}(q,\alpha,\gamma,\delta,\epsilon,z)\] \[+C_{2\ell}z^{1-\gamma}\text{HeunC}[q+(\gamma-1)(\delta-\epsilon),\alpha-\gamma\epsilon+\epsilon,2-\gamma,\delta,\epsilon,z]\, \tag{31}\]
where \(C_{1\ell}\) and \(C_{2\ell}\) are integration constants, and the explicit form of the parameters \(\epsilon\), \(\gamma\), \(\delta\), \(\alpha\) and \(q\) is listed in Table 1. In general, there are eight different cases for the coefficients. A careful analysis shows that the solutions \(F(z)\) for some parameter values are identical, and that there are only two distinct cases among all combinations of the parameters. These two solutions correspond to the incoming and outgoing waves, as illustrated in Fig. 3. Therefore, it is sufficient to use one of the solutions \(F(z)\) for a specific choice of the parameters. The Wronskian of the solutions is given by \(W=e^{-\epsilon z}z^{-\gamma}(z-1)^{-\delta}\). At the horizon of the black hole (i.e. \(r=2M\) or \(z=0\)) the function takes the value \(F(0)=C_{1\ell}\). For very small values of the argument, the confluent Heun function reduces to \(\text{HeunC}(q,\alpha,\gamma,\delta,\epsilon,z)\simeq 1-(q/\gamma)z+\mathcal{O}\left(z^{2}\right)\).
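The small-\(z\) behaviour quoted above can be verified by constructing the Frobenius series of Eq. (30) about \(z=0\). The sketch below is our own illustration: the recursion is derived directly from Eq. (30), and the generic real parameter values are ours (the physical parameters of Table 1 are complex).

```python
def heun_c_coeffs(q, alpha, gamma, delta, eps, n_terms=40):
    """Series coefficients of HeunC(q, alpha, gamma, delta, eps; z) about z = 0,
    normalised to F(0) = 1 (the series converges for |z| < 1)."""
    c = [1.0, -q / gamma]
    for n in range(1, n_terms - 1):
        c.append(((n * (n - 1) + (gamma + delta - eps) * n - q) * c[n]
                  + (eps * (n - 1) + alpha) * c[n - 1]) / ((n + 1) * (n + gamma)))
    return c

# Generic real parameters, chosen only for illustration.
q, alpha, gamma, delta, eps = 2.0, 0.5, 1.5, -1.0, 0.3
c = heun_c_coeffs(q, alpha, gamma, delta, eps)

# Residual of Eq. (30) at a point inside the radius of convergence.
z = 0.1
F = sum(cn * z**n for n, cn in enumerate(c))
Fp = sum(n * cn * z**(n - 1) for n, cn in enumerate(c) if n >= 1)
Fpp = sum(n * (n - 1) * cn * z**(n - 2) for n, cn in enumerate(c) if n >= 2)
residual = Fpp + (gamma / z + delta / (z - 1.0) + eps) * Fp \
    + (alpha * z - q) / (z * (z - 1.0)) * F
print(abs(residual))
```

The first coefficient reproduces \(-q/\gamma\), and the residual of Eq. (30) vanishes up to truncation and round-off error inside the unit disc.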
## IV Conclusions
In the present work, we have studied the propagation of the electromagnetic field in the Schwarzschild spacetime given by the metric function \(f=1-2M/r\). We first explicitly derived the electromagnetic wave equation for the two independent radial profile functions, which represent all the components of the electromagnetic field. We have also presented an analytical expression for the components of the vector potential in terms of the radial profiles, which allows one to produce electromagnetic field lines for a proper choice of the integration constants. The exact analytical solution of the wave equation for the monopole electromagnetic field has been obtained. We have discussed solutions for the electromagnetic field in several cases, in particular for the small-frequency range. Finally, we have explicitly shown that the confluent Heun function satisfies the radial wave equations for both profile functions \(U_{\ell}(r)\) and \(V_{\ell}(r)\) for a combination of specific parameters. It is shown that there are only two independent solutions for each profile function for the specific values of the
| \(\delta\) | \(\epsilon\) | \(\gamma\) | \(\alpha\) | \(q\) |
| --- | --- | --- | --- | --- |
| \(-1\) | \(\pm 4iM\omega\) | \(1\pm 4iM\omega\) | \(0\) | \(\ell(\ell+1)\) |
| \(-1\) | \(\pm 4iM\omega\) | \(1\pm 4iM\omega\) | \(-16M^{2}\omega^{2}\) | \(\ell(\ell+1)-4M\omega(4M\omega\mp i)\) |
| \(3\) | \(\pm 4iM\omega\) | \(1\mp 4iM\omega\) | \(\pm 8iM\omega\) | \((\ell-1)(\ell+2)\pm 8iM\omega\) |
| \(3\) | \(\pm 4iM\omega\) | \(1\pm 4iM\omega\) | \(-8iM\omega(2M\omega\mp i)\) | \((\ell-1)(\ell+2)-4M\omega(4M\omega\mp i)\) |

Table 1: The explicit form of the parameters of the confluent Heun function.
Figure 3: The confluent Heun function as a function of \(z\) and \(\omega\).
parameters, which represent incoming and outgoing waves. Finally, we have discussed two different modes of wave propagation, known as the transverse electric (TE) and transverse magnetic (TM) modes, which are important for astrophysical applications. We also concluded that the calculated behavior of the electric and magnetic susceptibilities near the event horizon appears to be similar to the susceptibilities of multiferroic materials near phase transition. The reinterpretation of this behavior as a critical phenomenon is novel and very interesting, especially in view of the fact that the Curie and the Hawking temperatures coincide in this physical picture.
###### Acknowledgements.
This research is supported by Grants F-FA-2021-510 and MRB-2021-527 of the Uzbekistan Ministry for Innovative Development.
## Appendix A Electromagnetic wave equations
The explicit form of the general relativistic Maxwell equations in the Schwarzschild spacetime (in geometrized units \(G=c=1\)) for the components of the electromagnetic field measured by a local observer reads [20]:
\[\frac{\sqrt{f}}{r}\partial_{r}\left(r^{2}E_{\hat{r}}\right)+\frac{ 1}{\sin\theta}\left[\partial_{\theta}\left(\sin\theta E_{\hat{\theta}}\right)+ \partial_{\phi}E_{\hat{\phi}}\right]=0\, \tag{10}\] \[\partial_{t}E_{\hat{r}}=\frac{\sqrt{f}}{r\sin\theta}\left[ \partial_{\theta}\left(\sin\theta B_{\hat{\phi}}\right)-\partial_{\phi}B_{ \hat{\theta}}\right]\,\] (11) \[\partial_{t}E_{\hat{\theta}}=\frac{\sqrt{f}}{r\sin\theta}\left[ \partial_{\theta}B_{\hat{r}}-\sin\theta\partial_{r}\left(r\sqrt{f}B_{\hat{ \phi}}\right)\right]\,\] (12) \[\partial_{t}E_{\hat{\phi}}=\frac{\sqrt{f}}{r}\left[\partial_{r} \left(r\sqrt{f}B_{\hat{\theta}}\right)-\partial_{\theta}B_{\hat{r}}\right]\, \tag{13}\]
and
\[\frac{\sqrt{f}}{r}\partial_{r}\left(r^{2}B_{\hat{r}}\right)+ \frac{1}{\sin\theta}\left[\partial_{\theta}\left(\sin\theta B_{\hat{\theta}} \right)+\partial_{\phi}B_{\hat{\phi}}\right]=0\, \tag{14}\] \[\partial_{t}B_{\hat{r}}=\frac{\sqrt{f}}{r\sin\theta}\left[ \partial_{\phi}E_{\hat{\theta}}-\partial_{\theta}\left(\sin\theta E_{\hat{ \phi}}\right)\right]\,\] (15) \[\partial_{t}B_{\hat{\theta}}=\frac{\sqrt{f}}{r\sin\theta}\left[ \sin\theta\partial_{r}\left(r\sqrt{f}E_{\hat{\phi}}\right)-\partial_{\phi}E_{ \hat{r}}\right]\,\] (16) \[\partial_{t}B_{\hat{\phi}}=\frac{\sqrt{f}}{r}\left[\partial_{ \theta}E_{\hat{r}}-\partial_{r}\left(r\sqrt{f}E_{\hat{\theta}}\right)\right]\, \tag{17}\]
where \(f=1-2M/r\) is the metric function in the Schwarzschild spacetime parameterized by the black hole mass \(M\). The components of the electromagnetic fields \(E_{\hat{i}}\) and \(B_{\hat{i}}\) are measurable quantities by a proper observer (\(i=r,\theta,\phi\)). Acting by the time derivative operator \(\partial_{t}\) on both sides of the radial equations (11), (15) and taking into account other Maxwell equations, the second order radial equations can be obtained as
\[\partial_{t}^{2}E_{\hat{r}} =\frac{f}{r^{2}}\partial_{r}\left[f\partial_{r}\left(r^{2}E_{ \hat{r}}\right)\right]+\frac{f}{r^{2}}\Delta_{\Omega}E_{\hat{r}}\, \tag{18}\] \[\partial_{t}^{2}B_{\hat{r}} =\frac{f}{r^{2}}\partial_{r}\left[f\partial_{r}\left(r^{2}B_{ \hat{r}}\right)\right]+\frac{f}{r^{2}}\Delta_{\Omega}B_{\hat{r}}\, \tag{19}\]
where \(\Delta_{\Omega}\) is the angular part of the Laplace operator which satisfies the following relation, \(\Delta_{\Omega}Y_{\ell m}=-\ell(\ell+1)Y_{\ell m}\), where \(Y_{\ell m}(\theta,\phi)\) are the spherical harmonics with the orbital number \(\ell=0,1,2,...\) and azimuthal number \(|m|\leq\ell\), (see e.g. [69]).
The general solutions of Maxwell's vacuum equations (10)-(17) for the electromagnetic wave can be expressed
as [20]
\[E_{\hat{r}} =\frac{e^{-i\omega t}}{r}V_{\ell}Y_{\ell m}\, \tag{11}\] \[E_{\hat{\theta}} =\frac{e^{-i\omega t}\sqrt{f}}{\ell(\ell+1)}\left[D_{r}V_{\ell} \partial_{\theta}Y_{\ell m}+\frac{i\omega}{f\sin\theta}U_{\ell}\partial_{\phi}Y _{\ell m}\right]\,\] (12) \[E_{\hat{\phi}} =\frac{e^{-i\omega t}\sqrt{f}}{\ell(\ell+1)}\left[D_{r}V_{\ell} \frac{1}{\sin\theta}\partial_{\phi}Y_{\ell m}-\frac{i\omega}{f}U_{\ell} \partial_{\theta}Y_{\ell m}\right]\, \tag{13}\]
and
\[B_{\hat{r}} =\frac{e^{-i\omega t}}{r}U_{\ell}Y_{\ell m}\, \tag{14}\] \[B_{\hat{\theta}} =\frac{e^{-i\omega t}\sqrt{f}}{\ell(\ell+1)}\left[D_{r}U_{\ell} \partial_{\theta}Y_{\ell m}-\frac{i\omega}{f\sin\theta}V_{\ell}\partial_{\phi }Y_{\ell m}\right]\,\] (15) \[B_{\hat{\phi}} =\frac{e^{-i\omega t}\sqrt{f}}{\ell(\ell+1)}\left[D_{r}U_{\ell} \frac{1}{\sin\theta}\partial_{\phi}Y_{\ell m}+\frac{i\omega}{f}V_{\ell} \partial_{\theta}Y_{\ell m}\right]\, \tag{16}\]
where \(\omega\) is the angular frequency of the electromagnetic wave, \(U_{\ell}(r)\), \(V_{\ell}(r)\) are, respectively, the profile functions of the electromagnetic wave, also known as the Debye potentials and the differential operator \(D_{r}\) is defined as \(D_{r}U_{\ell}=r^{-1}\partial_{r}(rU_{\ell})\). Thus, the explicit components of the vector potential of the electromagnetic wave are [20]
\[A_{t} =\frac{e^{-i\omega t}}{\ell(\ell+1)}f\partial_{r}\left(rV_{\ell} \right)Y_{\ell m}\, \tag{17}\] \[A_{r} =-\frac{e^{-i\omega t}}{\ell(\ell+1)}\frac{i\omega r}{f}V_{\ell} Y_{\ell m}\,\] (18) \[A_{\theta} =\frac{e^{-i\omega t}}{\ell(\ell+1)}rU_{\ell}\frac{1}{\sin\theta }\partial_{\phi}Y_{\ell m}\,\] (19) \[A_{\phi} =-\frac{e^{-i\omega t}}{\ell(\ell+1)}rU_{\ell}\sin\theta\partial _{\theta}Y_{\ell m}. \tag{20}\]
Notice that the above expressions are valid for the spherical wave, while for a plane wave one gets \(A_{\theta}=0\) and the spherical harmonics \(Y_{\ell m}(\theta,\phi)\) should be replaced by the associated Legendre polynomial \(P_{\ell}^{m}(\cos\theta)\), see e.g. [69].
|
2306.03846 | A realisation result for moduli spaces of group actions on the line | Given a finitely generated group $G$, the possible actions of $G$ on the real
line (without global fixed points), considered up to semi-conjugacy, can be
encoded by the space of orbits of a flow on a compact space $(Y, \Phi)$
naturally associated with $G$ and uniquely defined up to flow equivalence, that
we call the \emph{Deroin space} of $G$. We show a realisation result: every
expansive flow $(Y, \Phi)$ on a compact metrisable space of topological
dimension 1, satisfying some mild additional assumptions, arises as the Deroin
space of a finitely generated group. This is proven by identifying the Deroin
space of an explicit family of groups acting on suspension flows of subshifts,
which is a variant of a construction introduced by the second and fourth
authors. This result provides a source of examples of finitely generated groups
satisfying various new phenomena for actions on the line, related to their
rigidity/flexibility properties and to the structure of (path-)connected
components of the space of actions. | Joaquín Brum, Nicolás Matte Bon, Cristóbal Rivas, Michele Triestino | 2023-06-06T16:32:37Z | http://arxiv.org/abs/2306.03846v4 | # A realisation result for moduli spaces of group actions on the line
###### Abstract.
Given a finitely generated group \(G\), the possible actions of \(G\) on the real line (without global fixed points), considered up to semi-conjugacy, can be encoded by the space of orbits of a flow on a compact space \((Y,\Phi)\) naturally associated with \(G\) and uniquely defined up to flow equivalence, that we call the _Deroin space_ of \(G\). We show a realisation result: every expansive flow \((Y,\Phi)\) on a compact metrisable space of topological dimension \(1\), satisfying some mild additional assumptions, arises as the Deroin space of a finitely generated group. This is proven by identifying the Deroin space of an explicit family of groups acting on suspension flows of subshifts, which is a variant of a construction introduced by the second and fourth authors. This result provides a source of examples of finitely generated groups satisfying various new phenomena for actions on the line, related to their rigidity/flexibility properties and to the structure of (path-)connected components of the space of actions.
**MSC2020:** Primary 37C85, 57M60. Secondary 37E05, 37B05.
Key words and phrases: Group actions on the real line, semi-conjugacy of actions, Deroin space.

All the authors acknowledge the support of the project MATH AMSUD, DGT - Dynamical Group Theory (22-MATH-03). NMB and MT are partially supported by the project ANR Gromeov (ANR-19-CE40-0007). The work of NMB was supported by the LABEX MILYON (ANR-10-LABX-0070) of Université de Lyon, within the program "France 2030" (ANR-11-IDEX-0007) operated by the French National Research Agency (ANR). MT has been partially supported by the project ANER Agroupes (AAP 2019 Région Bourgogne-Franche-Comté) and his host department IMB receives support from the EIPHI Graduate School (ANR-17-EURE-0002).
## 1. Introduction

Let \(G\) be a finitely generated group. An action of \(G\) on the real line by orientation-preserving homeomorphisms without global fixed points is encoded by an irreducible representation \(\rho\colon G\to\mathsf{Homeo}_{0}(\mathbb{R})\), and the space \(\mathsf{Rep}_{\mathrm{irr}}(G;\mathbb{R})\) of all such representations carries the compact-open topology. The natural equivalence relation under which such actions are classified is _semi-conjugacy_: two representations are semi-conjugate if they are intertwined by a non-decreasing map \(h\colon\mathbb{R}\to\mathbb{R}\) (see SS2). Building on work of Deroin [6, 7], one can encode the semi-conjugacy classes in \(\mathsf{Rep}_{\mathrm{irr}}(G;\mathbb{R})\) by the orbits of a flow \(\Psi\) on a compact metrisable space \(\mathcal{D}\), naturally associated with \(G\) and uniquely defined up to flow equivalence, that we call the _Deroin space_ of \(G\) (see SS3 and [3, SS3]). The dynamical properties of the flow \((\mathcal{D},\Psi)\) reflect the rigidity and flexibility properties of the actions of \(G\) on the line. A fundamental example of a group whose Deroin space has a complicated structure
is given by the non-abelian free groups \(\mathbb{F}_{n}\). In this case the flow \((\mathcal{D},\Psi)\) has a non-smooth space of orbits, as one can show by exhibiting a family of representations of \(\mathbb{F}_{n}\) giving rise to a subflow with chaotic behaviour [3, Remark 3.23]. An explicit description of the Deroin space of \(\mathbb{F}_{n}\) is not known (note that it would encode actions on the line of every \(n\)-generated group).
Our main result shows that many flows can be realised as the Deroin space of a finitely generated group. This is proven by identifying the Deroin space of an explicit family of groups, which are relatives of the groups introduced by two of the authors in [21]. The class of flows that we consider are suspension flows of subshifts, intrinsically characterised (by Bowen and Walters [1]) as expansive flows on compact metrisable spaces of topological dimension \(1\) (all these notions are recalled in SS5.1). We say that a flow \((Y,\Phi)\) is _freely reversible_ if it admits a fixed-point-free order-\(2\) homeomorphism \(\sigma\colon Y\to Y\) which is a flow equivalence between \(\Phi\) and its time-reverse; the Deroin space of a finitely generated group always has this property (Proposition 3.9).
**Theorem 1.1**.: _Let \((Y,\Phi)\) be an expansive flow without fixed points on a compact metrisable space of topological dimension 1. Assume that \(\Phi\) is freely reversible and topologically free (namely, the set of non-periodic points is dense). Then, there exists a finitely generated group \(G\) acting faithfully on \(Y\) by homeomorphisms, preserving each \(\Phi\)-orbit, such that \((Y,\Phi)\) can be identified with the Deroin space of \(G\) (up to \(G\)-equivariant flow equivalence)._
_Remark 1.2_.: Recall that a group is _left-orderable_ if it admits a total order which is invariant under left multiplication. A countable group is left-orderable if and only if it is isomorphic to a subgroup of \(\mathsf{Homeo}_{0}(\mathbb{R})\). Since we assume in Theorem 1.1 that \(\Phi\) is topologically free, the group \(G\) acts faithfully on the union of non-periodic orbits, and thus it is left-orderable by a standard argument (see [21, Proposition 3.3]).
Theorem 1.1 provides the first explicit computation of Deroin spaces with chaotic behaviour (in particular whose space of orbits is non-smooth). In fact it produces examples such that the flow \((\mathcal{D},\Psi)\) has rather arbitrary dynamical properties.
One direction of application of Theorem 1.1 is related to the study of rigidity and flexibility of group actions on the line. In a broad sense, this topic aims at understanding, given a group \(G\), how its representations can be modified (up to semi-conjugacy) by slight perturbations, which boils down to understanding how semi-conjugacy classes accumulate onto each other. This leads to several tightly related notions of rigidity and flexibility for representations, appearing in the literature on group actions on one-manifolds, see e.g. [12, 16, 17, 19]. We will use here the following terminology, analogous to the one used by Mann and Wolff [19] for actions on the circle.
**Definition 1.3**.: A representation \(\rho\in\mathsf{Rep}_{\mathrm{irr}}(G;\mathbb{R})\) is _locally rigid_ if it is an interior point in its semi-conjugacy class. It is _rigid_ if its whole semi-conjugacy class is open. A representation which is not locally rigid is called _flexible_.
A representation \(\rho\in\mathsf{Rep}_{\mathrm{irr}}(G;\mathbb{R})\) is rigid if and only if its image \(r_{\mathcal{D}}(\rho)\) in the Deroin space \((\mathcal{D},\Psi)\) defines an isolated point in any local cross-section of the flow \(\Psi\), i.e. if its \(\Psi\)-orbit is an open subset of \(\mathcal{D}\). Local rigidity is not witnessed by the Deroin space in general, although it is equivalent to rigidity for minimal representations [3, Remark 3.5]. However for the groups that we construct to prove Theorem 1.1, we additionally show that local rigidity and rigidity are equivalent for all representations (see Corollary 6.3). Thus for this family of groups we have complete control on which representations are (locally) rigid (clearly a flow as in Theorem
1.1 may or may not admit open orbits). More generally, given \(\rho\in\mathsf{Rep}_{\mathrm{irr}}(G;\mathbb{R})\), we obtain a precise characterisation of all representations that accumulate on \(\rho\), up to semi-conjugacy, in terms of the flow \((Y,\Phi)\), see Theorem 6.2.
A relevant special case of Theorem 6.2 arises by choosing the flow \((Y,\Phi)\) to be _minimal_, i.e. with all orbits dense. In this case our construction produces examples of groups with the following extreme flexibility property.
**Corollary 1.4**.: _There exist finitely generated groups \(G\) for which any semi-conjugacy class is dense in \(\mathsf{Rep}_{\mathrm{irr}}(G;\mathbb{R})\) (and \(\mathsf{Rep}_{\mathrm{irr}}(G,\mathbb{R})\) contains uncountably many semi-conjugacy classes)._
In other words, a group \(G\) as in Corollary 1.4 has the property that every representation \(\rho\in\mathsf{Rep}_{\mathrm{irr}}(G;\mathbb{R})\) admits arbitrarily small perturbations which are semi-conjugate to any other representation in \(\mathsf{Rep}_{\mathrm{irr}}(G;\mathbb{R})\). We call such a representation \(\rho\)_universally flexible_ (see Definition 6.5). To our knowledge, there were previously no groups known to admit any representation with this property.
_Remark 1.5_.: Corollary 1.4 can be compared with the problem of whether there exists a finitely generated left-orderable group acting minimally on its space of left orders, see Navas [24, Question 6]. The conclusion in Corollary 1.4 is the analogous property at the level of actions on the line; however there is no formal connection: the groups constructed here do not act minimally on their space of left-orders, and conversely this property does not seem to imply _a priori_ the conclusion of Corollary 1.4 (because not all representations in \(\mathsf{Rep}_{\mathrm{irr}}(G;\mathbb{R})\) on the line are dynamical realisations of orders). However both properties imply that the group \(G\) acts minimally on its Deroin space, which can be identified with a quotient of the space of left-orders [3, SS3].
Beyond the minimal case, different choices for the flow \((Y,\Phi)\) yield groups with other prescribed rigidity/flexibility property, for example groups which admit both universally flexible representations together with a given number (finite or countably infinite) of rigid ones (see Corollary 6.7).
In a similar spirit, we consider a natural analogue of the Cantor-Bendixson rank (CB-rank) for the space of semi-conjugacy classes in \(\mathsf{Rep}_{\mathrm{irr}}(G;\mathbb{R})\), that we call the _semi-conjugacy CB-rank_ of \(\mathsf{Rep}_{\mathrm{irr}}(G;\mathbb{R})\), defined according to a transfinite process in which the open semi-conjugacy classes play a role analogous to the isolated points for the usual CB-rank (Definition 6.9). Then Theorem 1.1 produces examples of groups for which the semi-conjugacy CB-rank of \(\mathsf{Rep}_{\mathrm{irr}}(G;\mathbb{R})\) can be computed and can be an arbitrarily large countable ordinal, see Corollary 6.10. Again, this can be seen as a counterpart at the level of actions of a question asked by Mann and the third author [17] for spaces of left orders, namely whether there exists a finitely generated left-orderable group whose space of left orders has Cantor-Bendixson rank larger than \(1\).
As another application, we obtain examples of finitely generated groups that admit many non semi-conjugate actions on \(\mathbb{R}\), yet have a unique _generic_ action (up to possibly reversing the orientation) in the following strong sense.
**Corollary 1.6** (Groups with a generic representation).: _There exist finitely generated groups \(G\) such that \(\mathsf{Rep}_{\mathrm{irr}}(G;\mathbb{R})\) contains infinitely (countably or uncountably) many distinct semi-conjugacy classes, and there is \(\rho\in\mathsf{Rep}_{\mathrm{irr}}(G;\mathbb{R})\) such that the semi-conjugacy classes of \(\rho\) and of its conjugate by the reflection \(x\mapsto-x\) form a dense open set._
_Remark 1.7_.: Since we required semi-conjugacy to be orientation-preserving, it is not possible for a group \(G\) to have a representation \(\rho\in\mathsf{Rep}_{\mathrm{irr}}(G;\mathbb{R})\) with an open dense (or even co-meager) semi-conjugacy class, as its conjugate by the reflection should have the same property.
Another general theme in the study of representation spaces is the classification of their connected and path-connected components. See for instance the works of Mann and Wolff [15, 19] for the case of surface group actions on the circle (for which this problem remains open in general). For a finitely generated group \(G\), the semi-conjugacy classes in \(\operatorname{\mathsf{Rep}}_{\operatorname{irr}}(G;\mathbb{R})\) are always path connected, so the connected and path-connected components of \(\operatorname{\mathsf{Rep}}_{\operatorname{irr}}(G;\mathbb{R})\) are in correspondence with those of the Deroin space (Proposition 3.13). Again in analogy with the terminology of [19], one may call an action \(\rho\in\operatorname{\mathsf{Rep}}_{\operatorname{irr}}(G;\mathbb{R})\)_path-rigid_ if its path-component coincides with its semi-conjugacy class (that is, \(\rho\) cannot be deformed _along a continuous path_ into any non-semi-conjugate representation). Theorem 1.1 is a source of groups with a rather extreme behaviour with regard to this notion. In particular, it shows that path-continuous deformations of representations might be dramatically more restricted than arbitrary small perturbations in the compact-open topology.
**Corollary 1.8** (Path-continuous deformations vs general perturbations).: _For every group \(G\) satisfying the conclusion of Theorem 1.1, the path-components of \(\operatorname{\mathsf{Rep}}_{\operatorname{irr}}(G;\mathbb{R})\) coincide with the semi-conjugacy classes._
_Combining with Corollary 1.4, there are finitely generated groups \(G\) such that all representations in \(\operatorname{\mathsf{Rep}}_{\operatorname{irr}}(G;\mathbb{R})\) are simultaneously path-rigid and universally flexible, and the space \(\operatorname{\mathsf{Rep}}_{\operatorname{irr}}(G;\mathbb{R})\) is connected but not path-connected, and nowhere locally connected._
### On the group construction
The groups that we consider to prove Theorem 1.1 are based on a modification of the groups \(\mathsf{T}(\varphi)\) acting on the suspension space of subshifts from [21]. A subshift \((X,\varphi)\) is a closed shift-invariant subset \(X\) of \(A^{\mathbb{Z}}\), where \(A\) is a finite alphabet, and \(\varphi:X\to X\) denotes the restriction of the shift. Subshifts are characterised up to topological conjugacy as expansive homeomorphisms of compact, totally disconnected, metrisable spaces. The _suspension space_ of \((X,\varphi)\) is the space \(Y:=(X\times\mathbb{R})_{/(\varphi(x),t)\sim(x,t+1)}\). The space \(Y\) is endowed with the _suspension flow_\(\Phi\colon\mathbb{R}\times Y\to Y\), induced by the natural flow on \(X\times\mathbb{R}\) which translates the \(\mathbb{R}\)-coordinate. To take into account the restriction that the Deroin space is freely reversible, we need to consider reversible subshifts, by which we mean the data \((X,\varphi,\sigma)\) of a subshift additionally equipped with an involution \(\sigma\colon X\to X\) (an order-\(2\) homeomorphism) such that \(\sigma\varphi\sigma=\varphi^{-1}\). Such an involution naturally induces an involution \(\hat{\sigma}\colon Y\to Y\) on the suspension space, which reverses the suspension flow. To ensure that \(\hat{\sigma}\) has no fixed points, we need to further assume that \(\sigma\) does not preserve any \(\varphi\)-orbit (see Lemma 4.1). Every flow as in Theorem 1.1 can be realised as the suspension flow of a subshift \((X,\varphi)\) with these properties, relying on Bowen and Walters [1].
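The defining relation of a reversible subshift and the normal form for points of the suspension can be made concrete in a small executable sketch (a toy model for illustration only, not part of the construction in the paper): points of \(A^{\mathbb{Z}}\) are modelled as Python callables on \(\mathbb{Z}\), and \(\sigma\) is taken to be the mirror map \(x\mapsto(n\mapsto x(-n))\). Note that this mirror involution can fix points (unlike the involutions needed for free reversibility); it only illustrates the relation \(\sigma\varphi\sigma=\varphi^{-1}\).

```python
import math

def shift(x):
    """The shift map phi: (phi x)(n) = x(n + 1)."""
    return lambda n: x(n + 1)

def shift_inv(x):
    """phi^{-1}: (phi^{-1} x)(n) = x(n - 1)."""
    return lambda n: x(n - 1)

def mirror(x):
    """An involution sigma reading the sequence backwards: (sigma x)(n) = x(-n)."""
    return lambda n: x(-n)

def suspension_flow(s, point):
    """Flow Phi^s on (X x R) / ((phi(x), t) ~ (x, t + 1)), with the
    representative normalised so that the time coordinate lies in [0, 1)."""
    x, t = point
    u = t + s
    k = math.floor(u)
    for _ in range(abs(k)):
        x = shift(x) if k > 0 else shift_inv(x)
    return x, u - k

# The relation sigma phi sigma = phi^{-1} defining a reversible subshift:
x = lambda n: 1 if n % 3 == 0 else 0   # an arbitrary point of {0,1}^Z
lhs, rhs = mirror(shift(mirror(x))), shift_inv(x)
assert all(lhs(n) == rhs(n) for n in range(-20, 20))

# Flowing for time 1 realises the identification (x, 1) ~ (phi(x), 0):
y, t = suspension_flow(1.0, (x, 0.0))
assert t == 0.0 and all(y(n) == x(n + 1) for n in range(-20, 20))
```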
Given \((X,\varphi)\), the group \(\mathsf{T}(\varphi)\) is defined as a group of homeomorphisms of its suspension space \(Y\) analogous to Thompson's groups, whose action in the flow direction is by dyadic PL homeomorphisms, and such that the displacement along each flow-orbit is locally constant in the transverse direction. For a reversible subshift \((X,\varphi,\sigma)\) one can define an analogous group \(\mathsf{T}(\varphi,\sigma)\) as the centralizer of \(\hat{\sigma}\) in \(\mathsf{T}(\varphi)\) (this definition appears in the work by Le Boudec and the second author [13]); a special case of the groups \(\mathsf{T}(\varphi,\sigma)\) is given by the groups defined by Hyde and Lodha in [9]. The definition of the groups \(\mathsf{T}(\varphi)\) was largely inspired by the notion of Deroin space, and since their introduction the question naturally arose whether such constructions could be used to prove a realisation result as Theorem 1.1, by identifying their Deroin space completely (see Question 1 in [21]). With the original definition of the groups \(\mathsf{T}(\varphi)\) or \(\mathsf{T}(\varphi,\sigma)\), our attempts to do so have run into technical difficulties, related to the fact that certain subgroups playing an important role (namely the subgroups supported in dyadic charts) have huge abelianisation (free abelian of infinite rank). This complicates the analysis of actions on the line. This technical point is an artifact of the use of PL maps in the
definition of the groups \(\mathsf{T}(\varphi)\). A starting point of this paper is the idea of replacing PL maps by a larger pseudogroup of transformations; namely we consider dyadic PL transformations where we allow a non-discrete set of discontinuity points for the derivative, which accumulate onto some isolated "higher order" singularities with a controlled behaviour, required to locally commute with the doubling map. Certain groups acting by such countably singular PL maps were already considered in our previous work [3, SS12.2]. On the interval \([0,1]\), this definition yields a finitely generated group \(\mathscr{F}\) sharing many features with Thompson's group \(F\), except that it is _perfect_. Accordingly we define groups \(\mathscr{T}(\varphi)\) and \(\mathscr{T}(\varphi,\sigma)\) associated with any reversible subshift \((X,\varphi,\sigma)\). The main content of this work is the identification of the Deroin space of the groups \(\mathscr{T}(\varphi,\sigma)\) with the suspension flow of \((X,\varphi)\), from which Theorem 1.1 follows.
_Remark 1.9_.: Beyond finite generation, the other properties established for the groups \(\mathsf{T}(\varphi)\) in [21] remain valid for the groups \(\mathscr{T}(\varphi)\) and \(\mathscr{T}(\varphi,\sigma)\), with similar proofs. For instance, these groups are simple if and only if \(\varphi\) is minimal [21, Theorem B], they do not have Kazhdan's property \((T)\)[21, Theorem F], and they are not finitely presented if \(\varphi\) is not a subshift of finite type [21, Theorem G]. We avoid a more detailed discussion here, as these properties are not relevant for the main object of the paper.
It would be interesting to identify similar constructions of groups acting on flows over spaces of higher topological dimension, and use them to extend Theorem 1.1. Thus we conclude this introduction by proposing the following general question.
**Question 1.10**.: _Which compact spaces \((Y,\Phi)\) endowed with a flow can be realised as the Deroin space of a finitely generated group?_
### Acknowledgements
We thank Ville Salo for sharing useful comments on the CB-rank of subshifts.
## 2. General preliminaries on actions on the line
Recall that given a group \(G\), we call a representation \(\rho\colon G\to\mathsf{Homeo}_{0}(\mathbb{R})\) irreducible if it has no fixed points. The space of all irreducible representations is denoted by \(\mathsf{Rep}_{\mathrm{irr}}(G;\mathbb{R})\). It is endowed with the topology induced from the product topology on \(\mathsf{Homeo}_{0}(\mathbb{R})^{G}\), where \(\mathsf{Homeo}_{0}(\mathbb{R})\) has the compact-open topology. We say that \(\rho_{1},\rho_{2}\in\mathsf{Rep}_{\mathrm{irr}}(G;\mathbb{R})\) are _semi-conjugate_ if there exists a non-decreasing map \(h\colon\mathbb{R}\to\mathbb{R}\) which intertwines \(\rho_{1}\) and \(\rho_{2}\). Semi-conjugacy is an equivalence relation.
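A minimal concrete instance of the intertwining relation (a toy example with \(G=\mathbb{Z}\), not taken from the text): the translation \(x\mapsto x+1\) and its conjugate by the increasing homeomorphism \(h(x)=x^{3}\) are semi-conjugate (indeed conjugate), since \(h\circ\rho_{1}=\rho_{2}\circ h\).

```python
import math

def h(x):                     # an increasing homeomorphism of R
    return x ** 3

def h_inv(y):                 # its inverse: the real cube root
    return math.copysign(abs(y) ** (1.0 / 3.0), y)

def rho1(x):                  # generator of a Z-action: translation by 1
    return x + 1.0

def rho2(y):                  # the conjugated generator h o rho1 o h^{-1}
    return h(h_inv(y) + 1.0)

# h intertwines the two actions: h(rho1(x)) == rho2(h(x))
for x in [-2.0, -0.5, 0.0, 0.3, 1.7]:
    assert abs(h(rho1(x)) - rho2(h(x))) < 1e-9
```

Here \(h\) is a homeomorphism, so the two actions are in fact conjugate; a genuinely non-invertible non-decreasing \(h\) (with plateaus) would witness a semi-conjugacy that is not a conjugacy.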
Throughout the text, by a _minimal set_ for a representation \(\rho\in\mathsf{Rep}_{\mathrm{irr}}(G;\mathbb{R})\) we mean a closed, non-empty, minimal invariant subset \(\Lambda\subset\mathbb{R}\). Every such set satisfies one of the following possibilities:
1. either \(\Lambda\) is a closed discrete orbit, in which case the action of \(G\) on \(\Lambda\) factors through a cyclic quotient of \(G\), or
2. \(\Lambda\) is a perfect set and it is the unique minimal set of \(\rho\). Furthermore, in this case we have either \(\Lambda=\mathbb{R}\), or \(\Lambda\) has empty interior.
It is also well known that when \(G\) is _finitely generated_, every irreducible action \(\rho\in\mathsf{Rep}_{\mathrm{irr}}(G;\mathbb{R})\) admits a minimal set, see for instance Navas [23, Proposition 2.1.12]. (However such a set need not exist if the finite generation assumption is dropped.)
This discussion implies that if \(G\) is a finitely generated group, then every semi-conjugacy class in \(\mathsf{Rep}_{\mathrm{irr}}(G;\mathbb{R})\) contains a representative which is either minimal, or _cyclic_ (namely which factors through an epimorphism of \(G\) onto a cyclic group of translations). Such a
representative is unique up to conjugacy by an element of \(\mathsf{Homeo}_{0}(\mathbb{R})\). Moreover any map implementing a semi-conjugacy between minimal actions is automatically a homeomorphism, and hence a conjugacy (see for instance Kim, Koberda, and Mj [12, Lemma A.4]).
We also record the following.
**Proposition 2.1**.: _Let \(G\) be a finitely generated group. Then every semi-conjugacy class in \(\mathsf{Rep}_{\mathrm{irr}}(G;\mathbb{R})\) is path-connected._
Proof.: Let \(\mathcal{C}\subset\mathsf{Rep}_{\mathrm{irr}}(G;\mathbb{R})\) be a semi-conjugacy class, and choose a representative \(\rho\in\mathcal{C}\) which is minimal or cyclic. It is enough to see that there exists a continuous path from any \(\tilde{\rho}\in\mathcal{C}\) to \(\rho\) inside \(\mathcal{C}\). Assume first that \(\rho\) is minimal. Let \(h:\mathbb{R}\to\mathbb{R}\) be a semi-conjugacy between \(\tilde{\rho}\) and \(\rho\), and remark that as \(\rho\) is minimal, the map \(h\) is continuous (see [12, Lemma A.4]). For \(t\in[0,1]\), write \(h_{t}=(1-t)\mathsf{id}+th\), and observe that for \(t\in[0,1)\) this gives a homeomorphism because it is the convex sum of an increasing map with a non-decreasing map. For \(t\in(0,1)\), define \(\tilde{\rho}_{t}:=h_{t}\tilde{\rho}h_{t}^{-1}\). This gives the desired continuous path. Assume now that \(\rho\) is cyclic. This implies that \(\tilde{\rho}\) has a closed discrete orbit \(\Lambda\subset\mathbb{R}\). Let \(\tilde{\rho}^{\prime}\in\mathcal{C}\) be the action obtained by blowing up each point of \(\Lambda\) to an interval. Then there is a semi-conjugacy from \(\tilde{\rho}^{\prime}\) to \(\tilde{\rho}\) which is implemented by a continuous map (obtained by collapsing back the blown-up intervals). There also exists a semi-conjugacy from \(\tilde{\rho}^{\prime}\) to \(\rho\) through a continuous map (obtained by collapsing the complement of the closure of the blown-up intervals). Repeating the argument for the minimal case, we may find continuous paths connecting \(\tilde{\rho}\) first to \(\tilde{\rho}^{\prime}\) and then to \(\rho\).
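The convex interpolation used in the proof can be sanity-checked numerically; a sketch with an illustrative non-decreasing map \(h\) having a plateau (so \(h\) itself is not injective), confirming that \(h_{t}=(1-t)\mathsf{id}+th\) is strictly increasing for \(t<1\):

```python
def h(x):
    """Continuous, non-decreasing, constant on [0, 1] (hence not injective)."""
    if x < 0.0:
        return x
    if x <= 1.0:
        return 0.0
    return x - 1.0

def h_t(t, x):
    """Convex interpolation between the identity (t = 0) and h (t = 1)."""
    return (1.0 - t) * x + t * h(x)

grid = [i / 10.0 for i in range(-30, 31)]

# For every t < 1 the interpolation is strictly increasing (a homeomorphism):
for t in [0.0, 0.5, 0.9, 0.99]:
    values = [h_t(t, x) for x in grid]
    assert all(a < b for a, b in zip(values, values[1:]))

# whereas at t = 1 injectivity fails on the plateau of h:
assert h_t(1.0, 0.2) == h_t(1.0, 0.8)
```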
_Remark 2.2_.: When studying topological properties of \(\mathsf{Rep}_{\mathrm{irr}}(G,\mathbb{R})\), such as the space of path-components and the separation properties of semi-conjugacy classes, it is crucial that we restrict the attention to irreducible representations. For example it is not difficult to see that the space of _all_ representations of \(G\) into \(\mathsf{Homeo}_{0}(\mathbb{R})\) always deformation retracts onto the trivial representation (using a version of the classical "Alexander trick"); moreover every semi-conjugacy class accumulates on the trivial representation (see Mann and Wolff [19, Proposition 2.12]).
We finally recall the fundamental classification of minimal group actions on the line into three types according to their centralizer. Given an action \(\rho\colon G\to\mathsf{Homeo}_{0}(\mathbb{R})\), we denote by \(\mathcal{Z}(\rho)\) the centraliser of \(\rho(G)\) in \(\mathsf{Homeo}_{0}(\mathbb{R})\). We say that two points \(x,y\in\mathbb{R}\) are _proximal_ for \(\rho\) if there exists a sequence \((g_{n})\subset G\) such that \(\rho(g_{n})(x)\) and \(\rho(g_{n})(y)\) converge to the same point of \(\mathbb{R}\). The action \(\rho\) is _proximal_ if all pairs of points are proximal, and _locally proximal_ if every point is contained in some open interval whose endpoints are proximal. The following result is proven in Malyutin [14].
**Theorem 2.3**.: _Let \(G\) be any group and \(\rho\in\mathsf{Rep}_{\mathrm{irr}}(G;\mathbb{R})\) a minimal action. Then one of the following holds._
1. _Either_ \(\rho\) _is conjugate to an action by translation, and_ \(\mathcal{Z}(\rho)\) _is isomorphic to_ \((\mathbb{R},+)\)_; or_
2. \(\rho\) _is locally proximal, but not proximal,_ \(\mathcal{Z}(\rho)\) _is conjugate to an infinite cyclic group of translations, and the action of_ \(G\) _on the circle_ \(\mathbb{R}/\mathcal{Z}(\rho)\cong\mathbb{S}^{1}\) _is proximal; or_
3. \(\rho\) _is proximal, and_ \(\mathcal{Z}(\rho)\) _is trivial._
_In particular, in all cases, \(\mathcal{Z}(\rho)\) is abelian._
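As a concrete illustration of case (3) (a standard example, not discussed in the text), the group of orientation-preserving affine maps of \(\mathbb{R}\) acts proximally on the line: the contractions \(g^{n}(x)=x/2^{n}\) bring any two points together.

```python
def g(x):            # the affine contraction x -> x / 2
    return x / 2.0

def g_pow(n, x):     # the iterate g^n
    for _ in range(n):
        x = g(x)
    return x

# Any pair of points is proximal: their images converge to the same point (0).
x, y = -3.0, 7.0
assert abs(g_pow(40, x) - g_pow(40, y)) < 1e-9
```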
## 3. The Deroin space
### Definition of the Deroin space
Let \(G\) be a discrete group and consider the space \(\mathsf{Rep}_{\mathrm{irr}}(G;\mathbb{R})\). Given \(s\in\mathbb{R}\), denote by \(T_{s}\in\mathsf{Homeo}_{0}(\mathbb{R})\) the translation by \(s\). We define the _translation flow_\(\Psi:\mathbb{R}\times\mathsf{Rep}_{\mathrm{irr}}(G;\mathbb{R})\to\mathsf{Rep}_{ \mathrm{irr}}(G;\mathbb{R})\), given by \(\Psi^{s}(\rho)=T_{-s}\circ\rho\circ T_{s}\). We also have an involution \(\hat{\iota}\) on \(\mathsf{Rep}_{\mathrm{irr}}(G;\mathbb{R})\) defined by conjugation by the reflection \(x\mapsto-x\). Together with \(\Psi\), this gives an action of the group \(\mathsf{Isom}(\mathbb{R})\) of isometries of \(\mathbb{R}\) on \(\mathsf{Rep}_{\mathrm{irr}}(G;\mathbb{R})\), which is continuous with respect to the compact-open topology. We also define the _translation action_\(G\times\mathsf{Rep}_{\mathrm{irr}}(G;\mathbb{R})\to\mathsf{Rep}_{\mathrm{irr}} (G;\mathbb{R})\) by \(g.\rho=\Psi^{\rho(g)(0)}(\rho)\). A proof that this is a well-defined action can be found in Deroin [6, pp. 187-188]. It is clear that the translation action is continuous and preserves every \(\Psi\)-orbit.
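The two definitions can be checked in a toy executable model (an illustration under simplifying assumptions, not the setup of the paper): a representation of the free monoid on \(\{a,b\}\) is stored by its values on the generators, with \(a\), \(b\) two arbitrary increasing homeomorphisms chosen for this sketch; only positive words are used, so that no inverses need to be computed. The sketch verifies numerically the flow property \(\Psi^{s}\circ\Psi^{t}=\Psi^{s+t}\) and the action axiom \((gh).\rho=g.(h.\rho)\) of the translation action.

```python
import math

# A representation is stored by its values on the two generators; a positive
# word w = w1...wk acts by composition, rho(uv) = rho(u) o rho(v).
rho = {
    'a': lambda x: x + 1.0,
    'b': lambda x: x + 0.5 * math.tanh(x),  # another increasing homeomorphism
}

def apply_word(r, word, x):
    for letter in reversed(word):   # rightmost letter acts first
        x = r[letter](x)
    return x

def flow(s, r):
    """Translation flow: Psi^s(rho) = T_{-s} o rho o T_s."""
    return {g: (lambda f: (lambda x: f(x + s) - s))(f) for g, f in r.items()}

def dot(g, r):
    """Translation action: g.rho = Psi^{rho(g)(0)}(rho)."""
    return flow(apply_word(r, g, 0.0), r)

for x in [-1.5, 0.0, 2.0]:
    # Flow property: Psi^s o Psi^t = Psi^{s+t}
    assert abs(apply_word(flow(0.7, flow(0.6, rho)), 'ab', x)
               - apply_word(flow(1.3, rho), 'ab', x)) < 1e-9
    # Action axiom: (gh).rho = g.(h.rho), with gh the concatenated word
    assert abs(apply_word(dot('abba', rho), 'aab', x)
               - apply_word(dot('ab', dot('ba', rho)), 'aab', x)) < 1e-9
```

The action axiom reduces to the identity \(\rho(g)(\rho(h)(0))=\) the translation amount used for \(gh\), which is exactly how the well-definedness is proven in Deroin [6].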
In other words, one can think of an element \(\rho\in\mathsf{Rep}_{\mathrm{irr}}(G;\mathbb{R})\) as an action on the real line with a marker on the point \(0\). The translation flow corresponds to moving the marker by a translation, while the translation action of \(G\) moves the marker according to the underlying \(G\)-action. With this point of view in mind, it is useful to introduce the following terminology.
**Definition 3.1**.: Consider irreducible actions \(\rho_{1},\rho_{2}\in\mathsf{Rep}_{\mathrm{irr}}(G;\mathbb{R})\).
1. We say that \(\rho_{1}\) and \(\rho_{2}\) are _pointed conjugate_ if there exists an orientation-preserving conjugacy \(h\colon\mathbb{R}\to\mathbb{R}\) between \(\rho_{1}\) and \(\rho_{2}\) such that \(h(0)=0\).
2. We say that \(\rho_{1}\) and \(\rho_{2}\) are _pointed semi-conjugate_ if there exist \(\rho_{*}\in\mathsf{Rep}_{\mathrm{irr}}(G;\mathbb{R})\), either minimal or cyclic, and semi-conjugacies \(h_{i}\) between \(\rho_{i}\) and \(\rho_{*}\) for \(i\in\{1,2\}\), such that \(h_{1}(0)=h_{2}(0)\).
_Remark 3.2_.: Notice that pointed semi-conjugacy is an equivalence relation, which is (strictly) finer than semi-conjugacy. Also, notice that if \(\rho_{1}\) and \(\rho_{2}\) are minimal or cyclic, then they are pointed semi-conjugate if and only if they are pointed conjugate. When \(\rho_{1}\) and \(\rho_{2}\) are cyclic, this is clear, and when they are minimal it follows from the fact that a semi-conjugacy between minimal actions is automatically a conjugacy.
Deroin showed in [6] that if \(G\) is finitely generated, then every representation \(\rho\in\mathsf{Rep}_{\mathrm{irr}}(G;\mathbb{R})\) is conjugate to one which belongs to some compact \(\Psi\)-invariant subset of \(\mathsf{Rep}_{\mathrm{irr}}(G;\mathbb{R})\). A more universal version of this construction follows from the results in [7] and makes it possible to find a single such set which intersects all semi-conjugacy classes, see [3, §3]. The following definition captures the main properties of this construction which will be important for our purposes.
**Definition 3.3**.: Let \(G\) be a finitely generated group. We say that a subset \(\mathcal{D}\subset\mathsf{Rep}_{\mathrm{irr}}(G;\mathbb{R})\) is a _Deroin space_ for \(G\) if it satisfies:
1. \(\mathcal{D}\) is compact, and invariant under the translation flow \(\Psi\);
2. every \(\rho\in\mathcal{D}\) is either minimal or cyclic;
3. every \(\rho\in\mathsf{Rep}_{\mathrm{irr}}(G;\mathbb{R})\) is pointed semi-conjugate to a unique element of \(\mathcal{D}\).
_Remark 3.4_.: As a consequence of the definition, any Deroin space is \(G\)-invariant for the translation action of \(G\), since the latter preserves orbits of the translation flow. Moreover, the \(G\)-orbit of every \(\rho\in\mathcal{D}\) is dense in its \(\Psi\)-orbit (when \(\rho\) is cyclic this is trivial since the translation flow fixes \(\rho\); otherwise it follows from the minimality of \(\rho\) by the definition of the translation action).
Given a Deroin space \(\mathcal{D}\subset\mathsf{Rep}_{\mathrm{irr}}(G;\mathbb{R})\), we can define a map
\[r_{\mathcal{D}}:\mathsf{Rep}_{\mathrm{irr}}(G;\mathbb{R})\to\mathcal{D}\]
that associates to each irreducible representation its unique representative up to pointed semi-conjugacy in \(\mathcal{D}\).
**Proposition 3.5**.: _Let \(\mathcal{D}\) be a Deroin space for a finitely generated group \(G\). The map \(r_{\mathcal{D}}\) is a \(G\)-equivariant continuous retraction, which preserves pointed-semi-conjugacy classes. Two actions \(\rho_{1},\rho_{2}\in\mathsf{Rep}_{\mathrm{irr}}(G;\mathbb{R})\) are semi-conjugate if and only if \(r_{\mathcal{D}}(\rho_{1})\) and \(r_{\mathcal{D}}(\rho_{2})\) belong to the same \(\Psi\)-orbit in \(\mathcal{D}\)._
Proof.: The proof that the map \(r_{\mathcal{D}}\) is a continuous retraction is given in [3, Theorem 3.2]. (This result is stated there for a specific realisation of Deroin space, namely the space of normalised \(\mu\)-harmonic actions, but its proof only relies on the abstract properties in Definition 3.3). It remains to prove \(G\)-equivariance of \(r_{\mathcal{D}}\). For this, take \(\rho_{0}\in\mathsf{Rep}_{\mathrm{irr}}(G;\mathbb{R})\) and let \(\rho_{1}:=r_{\mathcal{D}}(\rho_{0})\) be its projection to \(\mathcal{D}\). Then, there exist a minimal or cyclic model \(\rho_{*}\in\mathsf{Rep}_{\mathrm{irr}}(G;\mathbb{R})\) and semi-conjugacies \(h_{i}\) between \(\rho_{i}\) and \(\rho_{*}\) for \(i\in\{0,1\}\), so that \(h_{0}(0)=h_{1}(0)\). Thus we have
\[h_{0}(\rho_{0}(g)(0))=\rho_{*}(g)(h_{0}(0))=\rho_{*}(g)(h_{1}(0))=h_{1}(\rho_ {1}(g)(0)),\]
which implies that \(g.\rho_{0}\) and \(g.\rho_{1}\) are pointed semi-conjugate. Since Deroin spaces intersect each pointed-semi-conjugacy class in a single point, we have that \(r_{\mathcal{D}}(g.\rho_{0})=g.\rho_{1}\). To show the last sentence, note that if \(r_{\mathcal{D}}(\rho_{1})\) and \(r_{\mathcal{D}}(\rho_{2})\) are actions in \(\mathcal{D}\) that belong to the same orbit of the translation flow, then they are evidently conjugate, so that \(\rho_{1}\) and \(\rho_{2}\) are semi-conjugate. Conversely if \(\rho_{1}\) and \(\rho_{2}\) are semi-conjugate, then so are \(r_{\mathcal{D}}(\rho_{1})\) and \(r_{\mathcal{D}}(\rho_{2})\), and since actions in \(\mathcal{D}\) are minimal or cyclic, this implies that they are actually conjugate. So some conjugate of \(r_{\mathcal{D}}(\rho_{1})\) by a translation is actually pointed conjugate to \(r_{\mathcal{D}}(\rho_{2})\), and by uniqueness this is possible only if it is equal, so \(r_{\mathcal{D}}(\rho_{1})\) and \(r_{\mathcal{D}}(\rho_{2})\) belong to the same \(\Psi\)-orbit.
**Definition 3.6**.: Let \((Y_{1},\Phi_{1}),(Y_{2},\Phi_{2})\) be spaces with a flow. A _flow equivalence_ is a homeomorphism \(\mathfrak{h}\colon Y_{1}\to Y_{2}\) such that for every \(y\in Y_{1}\), there exists an orientation-preserving homeomorphism \(\tau\colon\mathbb{R}\to\mathbb{R}\) fixing \(0\) such that \(\mathfrak{h}(\Phi_{1}^{s}(y))=\Phi_{2}^{\tau(s)}(\mathfrak{h}(y))\) for every \(s\in\mathbb{R}\).
**Theorem 3.7** (Existence and uniqueness [3, 7]).: _Every finitely generated group has a Deroin space._
_Moreover, such a space is unique in the following sense: if \(\mathcal{D}_{1},\mathcal{D}_{2}\subset\mathsf{Rep}_{\mathrm{irr}}(G;\mathbb{R})\) are Deroin spaces of \(G\), there exists a \(G\)-equivariant flow equivalence \(\mathfrak{h}:(\mathcal{D}_{1},\Psi|_{\mathcal{D}_{1}})\to(\mathcal{D}_{2}, \Psi|_{\mathcal{D}_{2}})\)._
Proof.: Existence follows from the results of Deroin, Kleptsyn, Navas, Parwani [7]; see [3, Proposition 2.19 and Theorem 2.22]. For uniqueness, consider the map \(\mathfrak{h}:\mathcal{D}_{1}\to\mathcal{D}_{2}\) defined as the restriction of \(r_{\mathcal{D}_{2}}\) to the Deroin space \(\mathcal{D}_{1}\). By Proposition 3.5 this map is \(G\)-equivariant, continuous, and preserves pointed-semi-conjugacy classes. Since each pointed-semi-conjugacy class has exactly one representative in each Deroin space, the map \(\mathfrak{h}\) is bijective. It remains to check that \(\mathfrak{h}\) is a flow equivalence. For this, consider \(\rho_{1}\in\mathcal{D}_{1}\) and \(\rho_{2}:=\mathfrak{h}(\rho_{1})\). Since \(\rho_{1}\) and \(\rho_{2}\) are pointed semi-conjugate, and they are minimal or cyclic, by Remark 3.2, there exists an orientation preserving homeomorphism \(\tau:\mathbb{R}\to\mathbb{R}\) which conjugates \(\rho_{1}\) and \(\rho_{2}\), and sends \(0\) to itself. Thus, since \(\mathfrak{h}\) preserves pointed-semi-conjugacy classes, it must hold that \(\mathfrak{h}(\Psi^{s}(\rho_{1}))=\Psi^{\tau(s)}(\rho_{2})\).
In view of the previous theorem, we can abuse terminology and speak of _the_ Deroin space of \(G\) as a compact space with a flow \((\mathcal{D},\Psi)\) and a \(G\)-action preserving \(\Psi\)-orbits, which is well defined up to \(G\)-equivariant flow equivalence. Note that the space \(\mathcal{D}\) can be more abstractly described, up to homeomorphism, as the quotient of \(\mathsf{Rep}_{\mathrm{irr}}(G;\mathbb{R})\) by the pointed-semi-conjugacy relation. It can also be identified with a quotient of the space of left-invariant preorders on \(G\), see [3, Theorem 3.20].
### First properties
We conclude with some basic properties of the Deroin space, which follow easily from the definition and Proposition 3.5.
**Definition 3.8**.: A flow \((Y,\Phi)\) is _freely reversible_ if there is a fixed-point-free involution (that is, a homeomorphism of order \(2\)) \(\iota\colon Y\to Y\) which establishes a flow equivalence between the flow \((Y,\Phi)\) and its time-reverse, denoted \((Y,\Phi^{-1})\).
If \(G\) is a finitely generated group, let \(\hat{\iota}\colon\mathsf{Rep}_{\mathrm{irr}}(G;\mathbb{R})\to\mathsf{Rep}_{\mathrm{irr}}(G;\mathbb{R})\) be the conjugation map by the reflection \(x\mapsto-x\). If \(\mathcal{D}\) is a Deroin space, then \(\hat{\iota}\) descends to a map \(\iota\colon\mathcal{D}\to\mathcal{D}\), given by \(\iota=r_{\mathcal{D}}\circ\hat{\iota}\).
**Proposition 3.9**.: _Let \(\mathcal{D}\subset\mathsf{Rep}_{\mathrm{irr}}(G;\mathbb{R})\) be a Deroin space for \(G\). Then the map \(\iota\colon\mathcal{D}\to\mathcal{D}\) is a fixed-point-free involution which establishes a \(G\)-equivariant flow-equivalence between \((\mathcal{D},\Psi)\) and \((\mathcal{D},\Psi^{-1})\). In particular, \((\mathcal{D},\Psi)\) is freely reversible._
Proof.: The fact that \(\iota\) is a \(G\)-equivariant flow equivalence between \((\mathcal{D},\Psi)\) and \((\mathcal{D},\Psi^{-1})\) follows from Proposition 3.5. To observe that it is fixed-point free, assume by contradiction that \(\iota(\rho)=\rho\) for some \(\rho\in\mathcal{D}\). This implies that every element \(\rho(g)\) commutes with an orientation-reversing homeomorphism \(h\colon\mathbb{R}\to\mathbb{R}\) fixing \(0\). But \(0\) is the unique fixed point of \(h\), so it must also be fixed by \(\rho(G)\), contradicting that \(\rho\) is irreducible.
The next proposition gives a precise dictionary with the statement of Theorem 2.3. It will not be used elsewhere in this paper, but we include it for completeness.
**Proposition 3.10** (Structure of orbits and \(G\)-invariant measures).: _Let \(\mathcal{D}\) be a Deroin space for the finitely generated group \(G\) with flow \(\Psi\). Fix \(\rho\in\mathcal{D}\), and write \(\ell\subset\mathcal{D}\) for its \(\Psi\)-orbit. Then we have the following alternative._
1. _The orbit_ \(\ell\) _is a point if and only if_ \(\rho\) _is semi-conjugate to an action by translation. In particular, the set of fixed points of_ \(\Psi\) _is homeomorphic to the sphere_ \(\mathbb{S}^{b_{1}(G)-1}\)_, where_ \(b_{1}(G)=\mathsf{rk}(H^{1}(G,\mathbb{Z}))\) _(with_ \(\mathbb{S}^{-1}=\varnothing\)_)._
2. _The orbit_ \(\ell\) _is a topological circle if and only if_ \(\rho\) _is locally proximal, but not proximal._
3. _The orbit_ \(\ell\) _is not closed if and only if_ \(\rho\) _is proximal._
_Moreover, every \(G\)-invariant probability measure on \(\mathcal{D}\) must be supported on the set of fixed points of \(\Psi\)._
Proof.: By definition, \(\rho\) is fixed by \(\Psi\) if and only if \(\rho(G)\) commutes with all translations, which happens if and only if \(\rho\) is an action by translations. Conversely, assume that \(\rho\in\mathcal{D}\) is semi-conjugate to an action by translations, and thus actually conjugate to it (since all actions in \(\mathcal{D}\) are minimal or cyclic). Then all \(\Psi\)-translates of \(\rho\) are pointed conjugate to it. Since \(\mathcal{D}\) contains a unique representative of each pointed-semi-conjugacy class, we have that \(\rho\) is actually fixed by \(\Psi\). This shows the first part of 1, and a similar argument gives 2, and thus 3, using Theorem 2.3. Now, the set of fixed points is homeomorphic to the space of non-trivial homomorphisms of \(G\) to \((\mathbb{R},+)\) up to a positive constant, which is either empty or homeomorphic to \(\mathbb{S}^{n}\), where \(n\) is the torsion-free rank of the abelianisation of \(G\) minus \(1\).
To show the last statement, let \(\mu\) be a \(G\)-invariant probability measure on \(\mathcal{D}\). Then its support \(\mathsf{Supp}\,\mu\) is a closed \(G\)-invariant subset, and since the \(G\)-orbit of every \(\rho\in\mathcal{D}\) is dense in its \(\Psi\)-orbit, \(\mathsf{Supp}\,\mu\) is also \(\Psi\)-invariant. Assume by contradiction that \(\rho\in\mathsf{Supp}\,\mu\) is not a fixed point; then it is a locally proximal action by Theorem 2.3. Hence we can find \(g\in G\), an interval \(I=(-a,a)\), and \(0<\varepsilon<a\) such that \(\rho(g)(\overline{I})\subset J:=(-a+\varepsilon,a-\varepsilon)\). Then \(U=\{\rho^{\prime}\colon\rho^{\prime}(g)(\overline{I})\subset J\}\) is an open neighbourhood of \(\rho\) in \(\mathcal{D}\). Let \(U_{I}=\bigcup_{t\in I}\Psi^{t}(U)\), and let \(U_{J}\) be defined similarly. Then by definition of the translation action we have \(g^{n}.U_{I}\subset U_{J}\) for every
\(n\geq 0\), and thus \(\mu(U_{I}\smallsetminus\overline{U_{J}})=0\). Hence \(U_{I}\smallsetminus\overline{U_{J}}\) is an open subset avoiding the support of \(\mu\). However, it contains points in the orbit of \(\rho\), namely \(\Psi^{t}(\rho)\) for every \(t\in I\smallsetminus\overline{J}\). This contradicts the \(\Psi\)-invariance of \(\mathsf{Supp}\,\mu\).
_Remark 3.11_.: The last statement in Proposition 3.10 implies the theorem of Witte Morris [22], which states that a finitely generated amenable group has a non-trivial representation to \(\mathsf{Homeo}_{0}(\mathbb{R})\) if and only if it has a non-trivial homomorphism to \((\mathbb{R},+)\). This proof is close to the one given by Deroin [6] using compact \(\Psi\)-invariant subsets of \(\mathsf{Rep}_{\mathrm{irr}}(G;\mathbb{R})\), but avoids the machinery of disintegration of measures along \(\Psi\)-orbits.
_Remark 3.12_.: From Proposition 3.10, one can also deduce that a finitely generated group has the property that its Deroin space contains no closed \(\Psi\)-orbit if and only if any action on the circle has a fixed point. The groups \(\mathsf{T}(\varphi)\) and \(\mathsf{T}(\varphi,\sigma)\) (for minimal subshifts \(\varphi\)) were the first known groups displaying the latter property [10, 21].
Finally we record the following straightforward consequence of Propositions 3.5 and 2.1, which further clarifies the relevance of Deroin space from the perspective described in the introduction.
**Proposition 3.13**.: _Let \((\mathcal{D},\Psi)\) be a Deroin space for a finitely generated group \(G\) and \(r_{\mathcal{D}}\colon\,\mathsf{Rep}_{\mathrm{irr}}(G;\mathbb{R})\to\mathcal{D}\) the associated retraction._
1. _The open semi-conjugacy-saturated subsets of_ \(\mathsf{Rep}_{\mathrm{irr}}(G;\mathbb{R})\) _are precisely the preimages under_ \(r_{\mathcal{D}}\) _of the open_ \(\Psi\)_-invariant subsets of_ \(\mathcal{D}\)_. In particular a representation_ \(\rho\in\mathsf{Rep}_{\mathrm{irr}}(G;\mathbb{R})\) _is rigid if and only if_ \(r_{\mathcal{D}}(\rho)\) _belongs to an open_ \(\Psi\)_-orbit._
2. _The connected and the path-connected components of_ \(\mathsf{Rep}_{\mathrm{irr}}(G;\mathbb{R})\) _coincide with their corresponding preimages of those of_ \(\mathcal{D}\)_._
## 4. Finitely generated groups of homeomorphisms of suspension spaces
### Groups of homeomorphisms of suspension spaces
Let \(X\) be a topological space, and \(\varphi:X\to X\) a homeomorphism. Recall that we denote by \(Y\) the suspension space of \((X,\varphi)\), namely \(Y=(X\times\mathbb{R})/\mathbb{Z}\), where \(\mathbb{Z}\) acts on \(X\times\mathbb{R}\) by \(n\cdot(x,t)=(\varphi^{n}(x),t-n)\). We let \(\pi_{Y}\colon X\times\mathbb{R}\to Y\) be the quotient projection, and denote by \(\Phi\) the suspension flow on \(Y\).
Assume that \((X,\varphi)\) is reversible, with associated involution \(\sigma:X\to X\) such that \(\sigma\varphi\sigma=\varphi^{-1}\). The pair \((\varphi,\sigma)\) then determines an action of the infinite dihedral group \(D_{\infty}\) on \(X\), given by \((n,j)\cdot x=\varphi^{n}\sigma^{j}(x)\). Here we identify \(D_{\infty}\) with the semi-direct product \(\mathbb{Z}\rtimes(\mathbb{Z}/2\mathbb{Z})\), and write accordingly its elements as pairs \((n,j)\) with \(n\in\mathbb{Z}\) and \(j\in\mathbb{Z}/2\mathbb{Z}\). We can then consider a smaller suspension space (a "mapping Klein bottle"). To see this, consider the standard isometric action of \(D_{\infty}\) on \(\mathbb{R}\) given by \((n,j)\cdot t=(-1)^{j}t-n\). Then the quotient \(Z:=(X\times\mathbb{R})/D_{\infty}\) with respect to the associated diagonal action is called the _dihedral suspension_ of \((\varphi,\sigma)\). We will denote by \(\pi_{Z}\colon X\times\mathbb{R}\to Z\) the quotient projection. The space \(Z\) is a \(2\)-to-\(1\) quotient of the suspension \(Y\) of \(\varphi\). More precisely, we have \(Z=Y/\langle\hat{\sigma}\rangle\), where \(\hat{\sigma}:Y\to Y\) is the involution given by
\[\hat{\sigma}:\,\pi_{Y}(x,t)\,\mapsto\,\pi_{Y}(\sigma(x),-t).\]
Writing \(p\colon Y\to Z\) for the quotient map, we have a commutative diagram:
\[\begin{array}{ccc}X\times\mathbb{R} & \stackrel{\pi_{Y}}{\longrightarrow} & Y\\ & \underset{\pi_{Z}}{\searrow} & \Big\downarrow{\scriptstyle p}\\ & & Z=Y/\langle\hat{\sigma}\rangle\end{array}\]
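As a quick consistency check, the formula \((n,j)\cdot t=(-1)^{j}t-n\) does define an action of \(D_{\infty}\cong\mathbb{Z}\rtimes(\mathbb{Z}/2\mathbb{Z})\) on \(\mathbb{R}\), with group law \((n_{1},j_{1})(n_{2},j_{2})=(n_{1}+(-1)^{j_{1}}n_{2},\,j_{1}+j_{2})\). A short Python verification on a finite sample (the chosen elements and points are arbitrary):

```python
def act(elem, t):
    # Standard isometric action of D_infinity on R: (n, j).t = (-1)^j t - n
    n, j = elem
    return (-1) ** j * t - n

def mult(e1, e2):
    # Group law in Z x| (Z/2Z): (n1, j1)(n2, j2) = (n1 + (-1)^j1 n2, j1 + j2)
    (n1, j1), (n2, j2) = e1, e2
    return (n1 + (-1) ** j1 * n2, (j1 + j2) % 2)

elems = [(n, j) for n in range(-2, 3) for j in (0, 1)]
ok = all(
    act(mult(e1, e2), t) == act(e1, act(e2, t))
    for e1 in elems for e2 in elems for t in (-1.5, 0.0, 2.25)
)
```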
Note that since the involution \(\sigma\) normalizes the cyclic group \(\langle\varphi\rangle\), it must necessarily send \(\varphi\)-orbits to \(\varphi\)-orbits. The next elementary result clarifies the assumption that we need to put on the involution \(\sigma\).
**Lemma 4.1**.: _Let \((X,\varphi,\sigma)\) be as above. Then, the following are equivalent:_
1. \(\sigma\) _does not preserve any_ \(\varphi\)_-orbit;_
2. \(\sigma(x)\notin\{x,\varphi(x)\}\) _for every_ \(x\in X\)_;_
3. _every element of order 2 in_ \(D_{\infty}\) _acts on_ \(X\) _without fixed points;_
4. _the map_ \(\hat{\sigma}\colon Y\to Y\) _has no fixed point;_
5. _the diagonal action of_ \(D_{\infty}\) _on_ \(X\times\mathbb{R}\) _is free._
Proof.: It is clear that (1)\(\Rightarrow\)(2). Every element of order 2 in \(D_{\infty}\) is conjugate to either \(\sigma\) or \(\sigma\varphi\), so (2)\(\Rightarrow\) (3). The point \(\pi_{Y}(x,t)\) is fixed by \(\hat{\sigma}\) if and only if there exists \(n\in\mathbb{Z}\) such that \(\varphi^{n}\sigma\) fixes \(x\) and \(t=-n/2\), so (3)\(\Leftrightarrow\)(4). Since elements of infinite order already act without fixed points on \(\mathbb{R}\), we have (3)\(\Rightarrow\)(5). Finally if (1) does not hold, then there exist \(x\in X\) and \(n\in\mathbb{Z}\) such that \(\varphi^{-n}\sigma(x)=x\), and thus the element \(\gamma=(-n,1)\in D_{\infty}\) fixes the point \((x,-n/2)\), so that (5)\(\Rightarrow\)(1).
From now on, we will work in the following setting.
**Assumption 4.2**.: We let \(X\) be a totally disconnected, metrisable, compact space, and \((\varphi,\sigma)\) homeomorphisms of \(X\) such that \(\sigma^{2}=\mathsf{id}\) and \(\sigma\varphi\sigma=\varphi^{-1}\). We further assume that:
* the action of \(\mathbb{Z}\) on \(X\) determined by \(\varphi\) is topologically free, that is, the subset of points which are not periodic for \(\varphi\) is dense in \(X\);
* the homeomorphism \(\sigma\) satisfies the equivalent conditions in Lemma 4.1.
As above, we always denote by \(Y\) the suspension of \(\varphi\), with suspension flow \(\Phi\), and by \(Z\) the dihedral suspension of \((\varphi,\sigma)\).
_Example 4.3_.: As a basic example, start with any homeomorphism \(\varphi_{0}\colon X_{0}\to X_{0}\) of a compact totally disconnected metrisable space. Let \(X\) be the disjoint union of two copies of \(X_{0}\), and define \(\varphi\) to be equal to \(\varphi_{0}\) on one copy and to \(\varphi_{0}^{-1}\) on the other. Finally let \(\sigma\) exchange the two copies of \(X_{0}\). This class of examples is somewhat degenerate because \(\varphi\) preserves a partition into two clopen sets (in particular it can never be topologically transitive).
_Example 4.4_.: Let \(\varphi\) be an irrational rotation on \(\mathbb{R}/\mathbb{Z}\), and \(\sigma\) the reflection \(x\mapsto-x\). Then the pair \((\varphi,\sigma)\) defines an action of \(D_{\infty}\) on \(\mathbb{R}/\mathbb{Z}\). The fixed points of all involutions in \(D_{\infty}\) belong to exactly 4 orbits (as there are 2 conjugacy classes of reflections and each has two fixed points). Blow up these orbits to obtain a minimal invariant Cantor set \(X\). Then \((X,\varphi,\sigma)\) satisfies Assumption 4.2.
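The relations \(\sigma^{2}=\mathsf{id}\) and \(\sigma\varphi\sigma=\varphi^{-1}\) for the rotation and the reflection can be checked directly modulo \(1\). A small numerical sketch in Python (the rotation number \(\sqrt{2}-1\) and the sample points are illustrative):

```python
import math

ALPHA = math.sqrt(2) - 1          # an irrational rotation number (illustrative)

def phi(x):       # rotation by ALPHA on R/Z
    return (x + ALPHA) % 1.0

def phi_inv(x):
    return (x - ALPHA) % 1.0

def sigma(x):     # reflection x -> -x on R/Z
    return (-x) % 1.0

def circle_dist(x, y):
    # Distance on R/Z, robust to floating-point wrap-around at 0 ~ 1.
    d = abs(x - y) % 1.0
    return min(d, 1.0 - d)

# sigma has order 2 and conjugates phi to its inverse, so (phi, sigma)
# generates an action of the infinite dihedral group on the circle.
pts = [k / 7 for k in range(7)]
involution_ok = all(circle_dist(sigma(sigma(x)), x) < 1e-9 for x in pts)
dihedral_ok = all(
    circle_dist(sigma(phi(sigma(x))), phi_inv(x)) < 1e-9 for x in pts
)
```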
_Example 4.5_.: The following source of examples is inspired by Hyde and Lodha [9], see [13, Remark 4.19]. Take a finite alphabet \(A\) of even cardinality \(|A|\geq 4\), and assume that \(A\) is endowed with a map \(a\mapsto a^{-1}\) which is an involution without fixed points. Let \(\varphi\) be the shift map on \(A^{\mathbb{Z}}\), and consider also the map \(\sigma((a_{n})_{n})=(a_{-(n+1)}^{-1})_{n}\) (namely the formal inverse of a bi-infinite word in \(A\)). It is easy to verify that \(\sigma^{2}=\mathsf{id}\) and \(\sigma\varphi\sigma=\varphi^{-1}\), so \((\varphi,\sigma)\) determines an action of \(D_{\infty}\) on \(A^{\mathbb{Z}}\). This action does not satisfy the conditions in Assumption 4.2, as \(\sigma\) has fixed points. To avoid this problem, we can consider the subshift of finite type \(X_{\mathrm{red}}=\{(a_{n})_{n}\in A^{\mathbb{Z}}:a_{n+1}\neq a_{n}^{-1}\}\) of _reduced_ words in \(A\), which is closed and \(D_{\infty}\)-invariant. Then, Assumption 4.2 is satisfied by \((X_{\mathrm{red}},\varphi,\sigma)\). One can then obtain many more examples as \(D_{\infty}\)-invariant closed subsets \(X\subset X_{\mathrm{red}}\).
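A finite-window check of the relations in this example, with bi-infinite words modelled as functions \(\mathbb{Z}\to A\) (the periodic word \(\dots abab\dots\) and the window size are illustrative choices):

```python
# Alphabet {a, A, b, B} with the fixed-point-free involution a <-> A, b <-> B.
INV = {"a": "A", "A": "a", "b": "B", "B": "b"}

def phi(w):        # shift: (phi w)_n = w_{n+1}
    return lambda n: w(n + 1)

def sigma(w):      # formal inverse: (sigma w)_n = (w_{-(n+1)})^{-1}
    return lambda n: INV[w(-(n + 1))]

# A periodic reduced bi-infinite word ...ababab..., as a function Z -> A
# (Python's % is non-negative, so negative indices are handled correctly).
w = lambda n: "ab"[n % 2]

window = range(-6, 7)
order_two = all(sigma(sigma(w))(n) == w(n) for n in window)          # sigma^2 = id
reverses = all(sigma(phi(sigma(w)))(n) == w(n - 1) for n in window)  # = phi^{-1}
not_fixed = any(sigma(w)(n) != w(n) for n in window)                 # sigma(w) != w
```

The last check reflects the general fact that \(\sigma\) cannot fix a reduced word: \(\sigma(w)=w\) at index \(0\) would force \(w_{0}=w_{-1}^{-1}\), violating reducedness.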
The set \(Z\) is naturally partitioned into subsets of the form \(\ell=\pi_{Z}(\{x\}\times\mathbb{R})\), that we call the _leaves_ of \(Z\). As \(X\) is totally disconnected, these coincide with the path components of \(Z\). By Lemma 4.1, the map \(p\colon Y\to Z\) is a local homeomorphism, and is injective in restriction to each \(\Phi\)-orbit. Take now a clopen subset \(C\subset X\), and an open interval \(I\subset\mathbb{R}\). If the restriction of \(\pi_{Y}\) to \(C\times I\) is injective, we denote by \(Y_{C,I}\) its image. Then the map
\[\pi_{Y}\colon C\times I\to Y_{C,I}\subset Y\]
is a homeomorphism, and is called a _chart_ of \(Y\). Similarly, if the restriction of \(\pi_{Z}\) to \(C\times I\) is injective, we denote its image by \(Z_{C,I}\) and call the map
\[\pi_{Z}\colon C\times I\to Z_{C,I}\subset Z\]
a chart of \(Z\). Most of the time, we will just refer to the sets \(Y_{C,I}\) and \(Z_{C,I}\) as charts, with an implicit identification with \(C\times I\). Since the actions of \(\mathbb{Z}\) and \(D_{\infty}\) on \(X\times\mathbb{R}\) are free and properly discontinuous, both maps \(\pi_{Y}\) and \(\pi_{Z}\) are local homeomorphisms, and thus the spaces \(Y\) and \(Z\) can be covered by charts.
We now consider the group \(\mathsf{H}_{0}(\varphi)\) (respectively, \(\mathsf{H}_{0}(\varphi,\sigma)\)) of all homeomorphisms of \(Y\) (respectively, \(Z\)) isotopic to the identity. Note that this condition forces all elements of \(\mathsf{H}_{0}(\varphi)\) (respectively, \(\mathsf{H}_{0}(\varphi,\sigma)\)) to preserve each \(\Phi\)-orbit in \(Y\) (respectively, each leaf of \(Z\)), since these are exactly the path components of the corresponding space. By [2, Theorem 3.1], the group \(\mathsf{H}_{0}(\varphi)\) is exactly the group of homeomorphisms \(h\colon Y\to Y\) for which there exists a continuous function \(\tau_{h}\colon Y\to\mathbb{R}\) such that
\[h(y)=\Phi^{\tau_{h}(y)}(y). \tag{4.1}\]
It follows from Assumption 4.2 that such a function \(\tau_{h}\) is necessarily unique. Indeed, the value \(\tau_{h}(y)\) is uniquely determined by \(h(y)\) provided \(y\) does not belong to a \(\Phi\)-periodic orbit. By our assumption such points are dense, and thus \(\tau_{h}\) is uniquely determined everywhere. The function \(\tau_{h}\) will be called the _translation cocycle_ of \(h\).
**Lemma 4.6**.: _Let \((X,\varphi,\sigma)\) be as in Assumption 4.2. Then, the action of \(\mathsf{H}_{0}(\varphi,\sigma)\) on \(Z\) lifts through the map \(p\colon Y\to Z\) to a unique action on \(Y\) which preserves every \(\Phi\)-orbit. This action identifies \(\mathsf{H}_{0}(\varphi,\sigma)\) with the centralizer of \(\hat{\sigma}\) in \(\mathsf{H}_{0}(\varphi)\), which consists of all elements \(h\) such that \(\tau_{h}\circ\hat{\sigma}=-\tau_{h}\)._
Proof.: By Lemma 4.1, the map \(p\) sends each \(\Phi\)-orbit bijectively onto a leaf \(\ell\) of \(Z\). Hence, at least at a set-theoretic level, the action of \(\mathsf{H}_{0}(\varphi,\sigma)\) on \(Z\) can be leafwise lifted to a unique action on \(Y\) which preserves every \(\Phi\)-orbit, and such that the map \(p\) is equivariant. To check that this action is continuous, fix a finite cover by charts \(Z=\bigcup_{i=1}^{r}Z_{C_{i},I_{i}}\); each chart \(Z_{C_{i},I_{i}}\) lifts to a pair of disjoint charts in \(Y\), mapped homeomorphically onto it by \(p\). Take \(\varepsilon>0\) such that every arc of length at most \(\varepsilon\) in a leaf of \(Z\) is contained in one of the charts \(Z_{C_{i},I_{i}}\). If \(g\in\mathsf{H}_{0}(\varphi,\sigma)\) is an element that displaces any point of \(Z\) by a distance less than \(\varepsilon\) on the corresponding leaf, then one can check on charts that the lift of \(g\) to \(Y\) defined above is continuous. But such elements generate \(\mathsf{H}_{0}(\varphi,\sigma)\), as all elements are isotopic to the identity. This implies that the lift of any element is continuous. By the same reasoning, every isotopy of an element \(g\in\mathsf{H}_{0}(\varphi,\sigma)\) to the identity in the group of homeomorphism of \(Z\) lifts to an isotopy in the group of homeomorphisms of \(Y\). Therefore \(\mathsf{H}_{0}(\varphi,\sigma)\) lifts to a subgroup of \(\mathsf{H}_{0}(\varphi)\). Moreover, its action clearly commutes with \(\hat{\sigma}\). Conversely, assume that \(h\in\mathsf{H}_{0}(\varphi)\) commutes with \(\hat{\sigma}\). It is easily checked that this is equivalent to \(\tau_{h}\circ\hat{\sigma}=-\tau_{h}\). Choose the isotopy \(h_{s}\) of \(h\) to the identity given by \(h_{s}(y)=\Phi^{(1-s)\tau_{h}(y)}(y)\). Then each element \(h_{s}\) commutes with \(\hat{\sigma}\) and thus descends to a homeomorphism of \(Z\). This shows that \(h\) defines indeed an element of \(\mathsf{H}_{0}(\varphi,\sigma)\).
### Perfect germ extensions of (solenoidal) Thompson's groups
Here we consider variations of the classical Thompson's groups. We will use some of the standard properties of these groups (such as finite generation) that the reader can find explained in standard references, such as the text by Cannon, Floyd, and Parry [5].
Let \(I,J\subset\mathbb{R}\) be open dyadic intervals. We say that a homeomorphism \(f:I\to J\) is _dyadic piecewise linear (PL)_ if there exists a finite subset \(\Sigma\subset I\) so that:
* \(\Sigma\subset\mathbb{Z}[\frac{1}{2}]\), and
* \(f\) is locally dyadic affine outside \(\Sigma\), that is, on each connected component of \(I\smallsetminus\Sigma\), the map is defined as \(x\mapsto 2^{n}x+b\) for some \(n\in\mathbb{Z}\) and \(b\in\mathbb{Z}[\frac{1}{2}]\).
We are interested in the family of maps which are locally dyadic PL outside a finite subset, and satisfy an extra local condition. To define this properly, given a dyadic rational \(x_{0}\), let \(h_{x_{0}}:\mathbb{R}\to\mathbb{R}\) be the map defined by \(h_{x_{0}}(x)=2(x-x_{0})+x_{0}\).
**Definition 4.7**.: Say that a map \(f:I\to J\) is _of type \(\mathfrak{D}\)_ if there exists a finite subset \(\mathsf{BP}^{2}(f)\subset I\) such that
* every \(x\in I\smallsetminus\mathsf{BP}^{2}(f)\) has an open neighborhood \(U\) so that \(f|_{U}\) is dyadic PL, and
* every \(x_{0}\in\mathsf{BP}^{2}(f)\) has a neighborhood \(U\) so that \(f\circ h_{x_{0}}=h_{f(x_{0})}\circ f\) for \(x\in U\).
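A concrete type-\(\mathfrak{D}\) element can be built by rescaling a dyadic PL map to \([1/2,1]\) and extending it self-similarly towards \(0\) via \(f(x)=f(2x)/2\), which forces \(f\circ h_{0}=h_{0}\circ f\) near \(0\). A sketch in Python with exact rational arithmetic (the seed map \(u\) is an arbitrary element of Thompson's group \(F\), chosen for illustration, not taken from the source):

```python
from fractions import Fraction as Fr

def u(t):
    # A standard element of Thompson's group F on [0, 1]:
    # slopes 2, 1, 1/2 with dyadic breakpoints 1/4 and 1/2.
    if t <= Fr(1, 4):
        return 2 * t
    if t <= Fr(1, 2):
        return t + Fr(1, 4)
    return t / 2 + Fr(1, 2)

def f(x):
    # Rescale u to [1/2, 1] and extend self-similarly towards 0 by
    # f(x) = f(2x)/2.  The result is dyadic PL away from 0 and satisfies
    # f o h_0 = h_0 o f near 0, so f is of type D with BP^2(f) = {0}.
    if x == 0:
        return Fr(0)
    if x >= Fr(1, 2):
        return Fr(1, 2) + u(2 * x - 1) / 2
    return f(2 * x) / 2

h0 = lambda x: 2 * x          # h_{x_0} with x_0 = 0

# Sample points in (0, 1/2) whose doubles stay in [0, 1].
samples = [Fr(1, 2 ** k) + Fr(1, 2 ** (k + 3)) for k in range(2, 12)]
commutes = all(f(h0(x)) == h0(f(x)) for x in samples)
```

The commutation holds by construction: the recursion \(f(x)=f(2x)/2\) is exactly the condition of Definition 4.7 at \(x_{0}=0\).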
Given an interval \(I\subset\mathbb{R}\), we denote by \(\mathscr{F}_{I}\) the group of homeomorphisms of \(\overline{I}\) of type \(\mathfrak{D}\), and \(\mathscr{F}:=\mathscr{F}_{(0,1)}\). These groups are all isomorphic when \(I\) is dyadic. Note that the subgroup of all elements \(f\in\mathscr{F}\) such that \(\mathsf{BP}^{2}(f)=\varnothing\) is the standard Thompson's group \(F\). Given a dyadic \(x\in[0,1]\), denote by \(\mathcal{D}_{x}^{+}\) (respectively, \(\mathcal{D}_{x}^{-}\)) the group of right (respectively, left) germs of \(\mathscr{F}\) at \(x\). Recall that these are defined as the quotient of the stabiliser of \(x\) in \(\mathscr{F}\) by the normal subgroup of elements acting trivially on a right (respectively, left) neighbourhood of \(x\). Write \(\tilde{T}\subseteq\mathsf{Homeo}_{0}(\mathbb{R})\) for the \(\mathbb{Z}\)-central extension of Thompson's group \(T\), obtained by lifting the action of \(T\) on the circle. Explicitly, \(\tilde{T}\) is the group of all (locally) dyadic PL homeomorphisms of \(\mathbb{R}\) commuting with the unit translation \(t_{1}\colon x\mapsto x+1\). The proof of the next lemma is very similar to that of [3, Lemma 12.22].
**Lemma 4.8**.: _The groups \(\mathcal{D}_{x}^{+}\) and \(\mathcal{D}_{x}^{-}\) are isomorphic to \(\tilde{T}\) for every \(x\in\mathbb{Z}[\frac{1}{2}]\cap[0,1]\)._
Proof.: Take \(x\in(0,1)\) dyadic and notice that the map \(h_{x}|_{(-\infty,x)}\) is conjugate to the translation \(t_{1}\) by a dyadic PL homeomorphism \(f\colon(-\infty,x)\to\mathbb{R}\). This establishes an isomorphism between \(\mathcal{D}_{x}^{-}\) and the group of germs of \(\tilde{T}\) at \(+\infty\), which is isomorphic to \(\tilde{T}\) itself.
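The dyadic PL conjugacy asserted in the proof can be written down explicitly for \(x=0\): the map \(c\) below sends each interval \([-2^{-k},-2^{-k-1}]\) affinely onto \([k,k+1]\), so \(c(2t)=c(t)-1\) and \(c\) conjugates \(h_{0}|_{(-\infty,0)}\) to a unit translation (composing with a reflection yields \(t_{1}\)). A sketch with exact rational arithmetic (the choice \(x=0\) and the sample points are illustrative):

```python
from fractions import Fraction as Fr

def c(t):
    # On each interval [-2^-k, -2^-(k+1)] (k in Z), c is affine with slope
    # 2^(k+1) and maps it onto [k, k+1]; all breakpoints are dyadic and all
    # slopes are powers of 2, so c: (-inf, 0) -> R is a dyadic PL increasing
    # homeomorphism.
    assert t < 0
    s = -t
    k = 0
    while s > Fr(2) ** (-k):          # locate k with 2^-(k+1) < s <= 2^-k
        k -= 1
    while s <= Fr(2) ** (-k - 1):
        k += 1
    return k + 2 + Fr(2) ** (k + 1) * t

# c(2t) = c(t) - 1, so c conjugates h_0(t) = 2t on (-inf, 0) to the unit
# translation t -> t - 1.
samples = [Fr(-3, 4), Fr(-1), Fr(-5, 8), Fr(-7, 2), Fr(-1, 64), Fr(-129, 2)]
conjugates = all(c(2 * t) == c(t) - 1 for t in samples)
```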
**Lemma 4.9**.: _The group \(\mathscr{F}\) is finitely generated and perfect._
Proof.: Fix a dyadic \(x\in(0,1)\). Notice that the group of germs at \(x\), that we denote by \(\mathcal{D}_{x}\), is isomorphic to the direct product \(\mathcal{D}_{x}^{-}\times\mathcal{D}_{x}^{+}\). Thus, by Lemma 4.8, we have that \(\mathcal{D}_{x}\) is isomorphic to \(\tilde{T}\times\tilde{T}\). In particular it is finitely generated, so we can choose a finite subset \(S\subset\mathscr{F}\) that generates \(\mathcal{D}_{x}\). Moreover, we can ask every \(f\in S\) to satisfy \(\mathsf{BP}^{2}(f)=\{x\}\). Thus, given \(g\in\mathscr{F}\), we can find \(g_{*}\) in the group \(\langle F^{\prime},S\rangle\) so that \(\mathsf{BP}^{2}(gg_{*})\cap(0,1)=\varnothing\). Analogously, we can also consider finite subsets \(S_{0}\) and \(S_{1}\) generating \(\mathcal{D}_{0}^{+}\) and \(\mathcal{D}_{1}^{-}\), respectively. Therefore we can write
\[\mathscr{F}=\langle F^{\prime},S,S_{0},S_{1}\rangle=\langle F,S,S_{0},S_{1}\rangle.\]
Since Thompson's group \(F\) is finitely generated, we deduce that so is \(\mathscr{F}\). We next prove that \(\mathscr{F}\) is perfect. For this, notice that, since \(\tilde{T}\) is perfect, we can find \(S\), \(S_{0}\) and \(S_{1}\) consisting of products of commutators. Finally, since \(F^{\prime}\) is also perfect it follows that \(\mathscr{F}\) is perfect.
We now define two countable subgroups of the groups \(\mathsf{H}_{0}(\varphi)\) and \(\mathsf{H}_{0}(\varphi,\sigma)\), respectively, which are type-\(\mathfrak{D}\) analogues of the group \(\mathsf{T}(\varphi)\) defined in [21].
**Definition 4.10**.: We let \(\mathscr{T}(\varphi)\) be the group of homeomorphisms \(g\) of \(Y\) such that for every \(y\in Y\), there exist charts \(Y_{C,I},Y_{C,J}\) containing \(y\) and \(g(y)\), respectively, and a homeomorphism \(f\colon I\to J\) of type \(\mathfrak{D}\) such that \(g(Y_{C,I})=Y_{C,J}\), and \(g|_{Y_{C,I}}\) is given by
\[g(\pi_{Y}(x,t))=\pi_{Y}(x,f(t))\quad\text{for any }(x,t)\in C\times I.\]
Similarly, we let \(\mathscr{T}(\varphi,\sigma)\) be the group of homeomorphisms \(g\) of \(Z\) such that for every \(y\in Y\), there exist charts \(Z_{C,I},Z_{C,J}\) containing \(y\) and \(g(y)\), respectively, and a homeomorphism \(f\colon I\to J\) of type \(\mathfrak{D}\) such that \(g(Z_{C,I})=Z_{C,J}\), and \(g|_{Z_{C,I}}\) is given by
\[g(\pi_{Z}(x,t))=\pi_{Z}(x,f(t))\quad\text{for any }(x,t)\in C\times I.\]
**Lemma 4.11**.: _We have \(\mathscr{T}(\varphi)\leq\mathsf{H}_{0}(\varphi)\) and \(\mathscr{T}(\varphi,\sigma)\leq\mathsf{H}_{0}(\varphi,\sigma)\). The group \(\mathscr{T}(\varphi,\sigma)\) coincides with the centralizer of \(\hat{\sigma}\) in \(\mathscr{T}(\varphi)\)._
Proof.: For \(g\in\mathscr{T}(\varphi)\), one can define a continuous function \(\tau_{g}\) as in (4.1) locally around any point \(y\in Y\), by choosing charts \(Y_{C,I},Y_{C,J}\), and \(f\colon I\to J\) as in the definition, and setting \(\tau_{g}(\pi_{Y}(x,t))=f(t)-t\). This definition must agree on overlapping charts, since \(\tau_{g}(y)\) satisfying (4.1) is uniquely determined for \(y\) in a dense set. Thus \(\mathscr{T}(\varphi)\leq\mathsf{H}_{0}(\varphi)\). The rest of the proof is similar to that of Lemma 4.6.
The following result shows that the group \(\mathscr{T}(\varphi,\sigma)\) is finitely generated when \((X,\varphi)\) is a subshift; this condition is satisfied exactly when \(\varphi\colon X\to X\) is conjugate to a subshift (or, equivalently, when it is expansive).
**Theorem 4.12**.: _Let \((X,\varphi,\sigma)\) be a reversible subshift satisfying Assumption 4.2. Then, \(\mathscr{T}(\varphi,\sigma)\) is finitely generated._
This is the analogue of [21, Theorem A]. The argument in the proof is essentially the same but needs to be slightly modified, as it makes use of the existence of elements in \(\mathsf{T}(\varphi)\) that move all points in the suspension space \(Y\) strictly in the direction of the flow. This is not possible here as there is no globally well-defined direction on the leaves of \(Z\). We provide details in Appendix A.
_Remark 4.13_.: For any \((X,\varphi)\), the group \(\mathscr{T}(\varphi)\) can be identified with \(\mathscr{T}(\varphi_{1},\sigma_{1})\), where \((X_{1},\varphi_{1},\sigma_{1})\) is obtained from \((X,\varphi)\) through the doubling construction in Example 4.3. Thus any result on the groups \(\mathscr{T}(\varphi,\sigma)\) can be translated to the groups \(\mathscr{T}(\varphi)\).
## 5. Actions on the line of \(\mathscr{T}(\varphi,\sigma)\)
### Identifying the Deroin space
In this section we analyse actions on the line of the group \(G=\mathscr{T}(\varphi,\sigma)\), where \((X,\varphi,\sigma)\) is as in Assumption 4.2. Our ultimate goal is to show that, when \(G\) is finitely generated, the Deroin space of \(G\) can be identified with \((Y,\Phi)\). While finite generation is necessary for the Deroin space to be well-defined, the reader will see that the main result of the section does not require that \(G\) be finitely generated: we will prove that any irreducible action of \(G\) on the line, admitting a minimal set, comes from \(Y\). Thus we do not assume here that \((X,\varphi)\) is conjugate to a subshift.
Recall that given \(g\in G\), we use the notation \(\tau_{g}\colon Y\to\mathbb{R}\) for the translation cocycle of \(g\). Given \(y\in Y\), let \(\rho_{y}\in\mathsf{Rep}_{\mathrm{irr}}(G;\mathbb{R})\) be the action given by
\[\rho_{y}(g)(t)=t+\tau_{g}(\Phi^{t}(y)). \tag{5.1}\]
When \(y\) is not periodic for \(\Phi\), this is simply the action of \(G\) on the \(\Phi\)-orbit of \(y\), if we identify the orbit with \(\mathbb{R}\) using the flow, in such a way that \(0\) corresponds to the point \(y\). When \(y\) is periodic, its orbit is homeomorphic to a circle, and the action \(\rho_{y}\) is a lift of the action of \(G\) under the covering map \(\mathbb{R}\to\mathbb{S}^{1}\) determined by the flow and which maps \(0\) to \(y\). Then we have the following.
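It may be worth spelling out why (5.1) defines a homomorphism. Writing \(\hat{g}\) for the lift to \(Y\) of an element \(g\in G\), the local form in Definition 4.10 together with (4.1) gives \(\hat{g}(y)=\Phi^{\tau_{g}(y)}(y)\) for every \(y\in Y\); the following computation is a sketch reconstructed from these formulas.

```latex
% From \hat{g}(y)=\Phi^{\tau_g(y)}(y) and \widehat{gh}=\hat{g}\circ\hat{h},
% one obtains the cocycle identity
\tau_{gh}(y)\;=\;\tau_{h}(y)+\tau_{g}\big(\Phi^{\tau_{h}(y)}(y)\big),
% and substituting \Phi^{t}(y) for y in (5.1) yields
\rho_{y}(gh)(t)\;=\;t+\tau_{h}(\Phi^{t}(y))
  +\tau_{g}\big(\Phi^{\,t+\tau_{h}(\Phi^{t}(y))}(y)\big)
\;=\;\rho_{y}(g)\big(\rho_{y}(h)(t)\big).
```

In particular \(\rho_{y}(gh)=\rho_{y}(g)\circ\rho_{y}(h)\), so \(\rho_{y}\) is indeed an action by homeomorphisms of \(\mathbb{R}\).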
**Theorem 5.1**.: _Let \((X,\varphi,\sigma)\) be as in Assumption 4.2, and write \(G:=\mathscr{T}(\varphi,\sigma)\). Let \(\rho\in\mathsf{Rep}_{\mathrm{irr}}(G;\mathbb{R})\) be a minimal action. Then, there exists \(y\in Y\) such that \(\rho\) and \(\rho_{y}\) are pointed conjugate._
Before proving Theorem 5.1, let us deduce Theorem 1.1, which is based on the following consequence.
**Corollary 5.2**.: _Let \((X,\varphi,\sigma)\) be as in Assumption 4.2. Consider the map \(q\colon Y\to\mathsf{Rep}_{\mathrm{irr}}(G;\mathbb{R})\) given by \(q\colon y\mapsto\rho_{y}\). Then \(q\) is a \(G\)-equivariant homeomorphism onto its image, which conjugates the flow \(\Phi\) to the translation flow \(\Psi\) on \(q(Y)\). In particular if \((X,\varphi)\) is a subshift, then \((Y,\Phi)\) identifies with a Deroin space for \(G\)._
Proof.: From the explicit formula defining the action \(\rho_{y}\), it is clear that \(q\) is continuous. Pick two distinct points \(y_{1},y_{2}\in Y\). If \(y_{1}\neq\hat{\sigma}(y_{2})\), then also their projections to \(Z\) are distinct. It follows that they have distinct stabilisers in \(G\). As a consequence, the actions \(\rho_{y_{1}}\) and \(\rho_{y_{2}}\) are not pointed conjugate, and since they are minimal, they are not pointed semi-conjugate either (see Remark 3.2). If \(y_{1}=\hat{\sigma}(y_{2})\), then every element \(g\in G\) such that \(\rho_{y_{1}}(g)(0)>0\), must satisfy \(\rho_{y_{2}}(g)(0)<0\), and thus also the actions \(\rho_{y_{i}}\) are not pointed semi-conjugate. In particular the map \(q\) is injective and thus a homeomorphism onto its image. It is straightforward that \(q\) conjugates \(\Phi\) to the translation flow and is \(G\)-equivariant.
For the last statement, note that after Theorem 4.12, the group \(G\) is finitely generated, so that any irreducible action admits a minimal set. Moreover, since the group \(G\) is perfect, it does not admit any cyclic action. It thus follows from Theorem 5.1 that \(q(Y)\) contains exactly one representative for each pointed-semi-conjugacy class.
Corollary 5.2 implies Theorem 1.1, using the results of Bowen and Walters [1]. We first recall all the necessary terminology.
**Definition 5.3**.: Let \((Y,\Phi)\) be a flow on a compact metrisable space, and \(d\) a metric on \(Y\). Then \(\Phi\) is _expansive_ if for every \(\varepsilon>0\) there exists \(\delta>0\) with the property that if \(x,y\in Y\) satisfy \(d(\Phi^{t}(x),\Phi^{s(t)}(y))<\delta\) for every \(t\in\mathbb{R}\) and for some continuous map \(s\colon\mathbb{R}\to\mathbb{R}\), then \(y=\Phi^{t_{0}}(x)\) for some \(t_{0}\) with \(|t_{0}|<\varepsilon\).
This property does not depend on the choice of the metric \(d\). Recall also that a space \(Y\) has topological dimension \(d\) if every open cover \(\mathcal{U}\) has a refinement \(\mathcal{U}^{\prime}\) such that every point of \(Y\) is contained in at most \(d+1\) elements of \(\mathcal{U}^{\prime}\), and \(d\) is the least integer with this property.
Proof of Theorem 1.1.: Let \((Y,\Phi)\) be as in Theorem 1.1. It is shown in [1] that \(\Phi\) admits a cross-section \(X\subset Y\) (namely a closed subset which intersects every \(\Phi\)-orbit in a discrete set of times) and that for every such cross-section, the first-return map \(\varphi\) to \(X\) is conjugate to a subshift, so that \((Y,\Phi)\) is flow-equivalent to the suspension flow of \((X,\varphi)\). The only minor point to address is that we want \((X,\varphi)\) to be reversible and to satisfy Assumption 4.2. Since \((Y,\Phi)\) is freely reversible, there exists a fixed-point-free involution \(\hat{\sigma}\colon Y\to Y\) sending any \(\Phi\)-orbit to a \(\Phi\)-orbit in an orientation-reversing way. Let \(X_{0}\) be any cross-section and set \(X=X_{0}\cup\hat{\sigma}(X_{0})\). Then \(X\) is still a cross-section. Denote by \(\varphi\) the first-return map and set \(\sigma=\hat{\sigma}|_{X}\); then \((X,\varphi,\sigma)\) is a reversible subshift. Moreover \(\hat{\sigma}\) cannot preserve any \(\Phi\)-orbit (or it
would have a fixed point), so neither does \(\sigma\) and by Lemma 4.1 we have that \((X,\varphi,\sigma)\) satisfies Assumption 4.2.
The rest of the section is devoted to the proof of Theorem 5.1.
### Finding minimal sets for non-finitely generated groups
The first step is to show Proposition 5.6 below. We will make repeated use of the following consequence of Theorem 2.3.
**Corollary 5.4**.: _Let \(G\) be a group and \(\rho\in\mathsf{Rep}_{\mathrm{irr}}(G;\mathbb{R})\) an irreducible action admitting a perfect minimal set \(\Lambda\). Then, every perfect subgroup of \(\mathcal{Z}(\rho)\) pointwise fixes \(\Lambda\)._
Proof.: Since \(\Lambda\) is the unique minimal set for \(\rho\), it must be preserved by \(\mathcal{Z}(\rho)\). Moreover, by collapsing connected components of the complement of \(\Lambda\), we can semi-conjugate \(\rho\) to a minimal action \(\hat{\rho}\colon G\to\mathsf{Homeo}_{0}(\mathbb{R})\). Since \(\mathcal{Z}(\rho)\) preserves \(\Lambda\), it projects to \(\mathcal{Z}(\hat{\rho})\). By Theorem 2.3, we get that any perfect subgroup of \(\mathcal{Z}(\rho)\) projects to the trivial group, implying that it pointwise fixes \(\Lambda\).
Given a group \(H\) and an integer \(n\geq 1\), write \(H^{n}=H_{1}\times\cdots\times H_{n}\) for the direct product of \(n\) copies of \(H\). Denote by \(\pi_{i}:H^{n}\to H_{i}\) the projection from \(H^{n}\) onto \(H_{i}\).
**Lemma 5.5**.: _Let \(H\) be a finitely generated perfect group. Consider also an action \(\rho:H^{n}\to\mathsf{Homeo}_{0}(\mathbb{R})\) and a subgroup \(K\subseteq H^{n}\) for which the projections \(\pi_{i}|_{K}\) are surjective for any \(i\in\{1,\ldots,n\}\). Then we have the following._
1. \(\mathsf{Fix}(\rho|_{K})=\mathsf{Fix}(\rho)\)_._
2. _If_ \(\rho\) _is irreducible, then_ \(\rho|_{K}\) _and_ \(\rho\) _have the same unique perfect minimal set._
Proof.: We first prove (2). Notice that there is \(m\in\{1,\ldots,n\}\) such that \(\rho|_{H_{m}}\) is irreducible, and without loss of generality we can assume \(m=n\). Indeed, if this were not the case, \(\rho\) would have a fixed point, contradicting its irreducibility. Moreover, since \(H\) is finitely generated, the action \(\rho|_{H_{n}}\) preserves a minimal set that we denote by \(\Lambda\). This set cannot be a single closed orbit, since \(H_{n}\) is perfect, and thus \(\Lambda\) is a perfect set. Thus, by Corollary 5.4, we have that the subgroup \(H_{1}\times\cdots\times H_{n-1}\) must fix \(\Lambda\) pointwise. This implies that the action of \(\rho\) on \(\Lambda\) (that we denote by \(\rho_{*}\colon H^{n}\to\mathsf{Homeo}(\Lambda)\)) factors through the projection \(\pi_{n}\), that is, \(\rho_{*}=\bar{\rho}_{*}\circ\pi_{n}\) for some homomorphism \(\bar{\rho}_{*}\colon H_{n}\to\mathsf{Homeo}(\Lambda)\). On the other hand, by hypothesis \(\pi_{n}|_{K}\) surjects onto \(H_{n}\), and therefore \(\rho_{*}|_{K}\colon K\to\mathsf{Homeo}(\Lambda)\) has the same image as \(\rho_{*}\), implying that it is also a minimal action. This implies that \(\Lambda\) is a minimal set of \(\rho|_{K}\), as desired.
We next prove (1). For this, take a connected component \(J\) of \(\mathbb{R}\smallsetminus\mathsf{Fix}(\rho)\). By part (2), the actions of \(H^{n}\) and \(K\) on \(J\) have the same minimal set, and in particular \(J\) is a connected component of the support of \(\rho|_{K}\). Since the component \(J\) was arbitrary, this implies that the support of \(\rho\) is contained in that of \(\rho|_{K}\), which, after taking complements, gives the reverse inclusion \(\mathsf{Fix}(\rho|_{K})\subseteq\mathsf{Fix}(\rho)\). Since the other inclusion is trivial, equality in (1) follows.
Given a space \(X\) and a group \(H\), denote by \(\mathcal{C}(X,H)\) the group of continuous functions from \(X\) to \(H\) (with \(H\) discrete, i.e. locally constant functions). Note that this group is not finitely generated in general, so its actions on the line may _a priori_ fail to admit a minimal set. However, every irreducible action does admit a minimal set when \(X\) is compact, metrisable and totally disconnected, and \(H\) is perfect and finitely generated:
**Proposition 5.6**.: _Let \(X\) be a metrisable, compact, totally disconnected space, and \(H\) a finitely generated perfect group. Then, every action \(\rho\in\mathsf{Rep}_{\mathrm{irr}}(\mathcal{C}(X,H);\mathbb{R})\) admits a perfect minimal set._
Before proving this result, let us introduce some further notation. Assume \(X\) is a compact, totally disconnected space. For a clopen subset \(C\subseteq X\), let \(H_{C}\leq\mathcal{C}(X,H)\) be the subgroup (isomorphic to \(H\)) of functions \(\sigma\colon X\to H\) which are constant on \(C\) and equal to the identity on \(X\smallsetminus C\). Also, if \(\mathcal{P}\) is a clopen partition of \(X\), write \(H^{(\mathcal{P})}:=\langle\bigcup_{C\in\mathcal{P}}H_{C}\rangle\), which is naturally isomorphic to the direct product \(H^{|\mathcal{P}|}\). Recall also that if \(\mathcal{P}_{1},\mathcal{P}_{2}\) are clopen partitions of \(X\), one says that \(\mathcal{P}_{2}\) is _finer_ than \(\mathcal{P}_{1}\) if for every \(C\in\mathcal{P}_{2}\), there exists \(D\in\mathcal{P}_{1}\) so that \(C\subseteq D\).
_Remark 5.7_.: When \(\mathcal{P}_{2}\) is finer than \(\mathcal{P}_{1}\), there exists a natural injective morphism \(\iota:H^{(\mathcal{P}_{1})}\to H^{(\mathcal{P}_{2})}\), and this morphism satisfies that \(\pi\circ\iota\) is surjective for every projection \(\pi:H^{(\mathcal{P}_{2})}\to H_{C}\) given by any clopen subset \(C\in\mathcal{P}_{2}\). This holds in particular when \(\mathcal{P}_{1}=\{X\}\) is the trivial partition.
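To make Remark 5.7 concrete, here is a minimal Python sketch (illustrative only). Cells of a clopen partition of the binary Cantor set \(\{0,1\}^{\mathbb{N}}\) are encoded as finite prefixes, locally constant functions as dictionaries from cells to group elements, and for simplicity \(H\) is taken to be \(\mathbb{Z}\) (the text of course requires \(H\) to be perfect; that assumption plays no role in the embedding \(\iota\) itself). The embedding simply rewrites a function constant on the coarse cells as a function constant on the finer cells.

```python
# Model of the embedding iota: H^(P1) -> H^(P2) from Remark 5.7, for clopen
# partitions of the binary Cantor set, with cells encoded as finite prefixes.
# Illustration only: here H = Z, while in the text H must be perfect.

def refine(f, fine_partition):
    """Rewrite a locally constant function f (dict: prefix -> H) as a
    function constant on each cell of a finer clopen partition."""
    g = {}
    for cell in fine_partition:
        # find the (unique) coarse cell of f containing this fine cell
        owners = [p for p in f if cell.startswith(p)]
        assert len(owners) == 1, "partition is not finer"
        g[cell] = f[owners[0]]
    return g

# P1 = {X} is the trivial partition (empty prefix); P2 splits on the first bit.
f = {"": 5}                  # the constant function 5, an element of H_X
g = refine(f, ["0", "1"])    # its image under iota
```

Projecting `g` to either cell recovers the original constant, which is the surjectivity of \(\pi\circ\iota\) stated in the remark.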
Proof of Proposition 5.6.: As \(X\) is metrisable, we can find a sequence of clopen partitions \(\mathcal{P}_{n}\) of \(X\), each finer than the previous one, that separates points. Then we can write \(\mathcal{C}(X,H)\) as an increasing union
\[\mathcal{C}(X,H)=\bigcup_{n\in\mathbb{N}}H^{(\mathcal{P}_{n})}. \tag{5.2}\]
By Remark 5.7, for every projection \(\pi:H^{(\mathcal{P}_{n})}\to H_{C}\) associated with every \(C\in\mathcal{P}_{n}\), the restriction \(\pi|_{H_{X}}:H_{X}\to H_{C}\) is surjective. Write \(\rho_{n}:=\rho|_{H^{(\mathcal{P}_{n})}}\). Thus, applying the first part of Lemma 5.5 (with \(\rho=\rho_{n}\) and \(K=H_{X}\)), we have the equality
\[\mathsf{Fix}(\rho_{n}|_{H_{X}})=\mathsf{Fix}(\rho_{n}). \tag{5.3}\]
Since \(\rho_{n}|_{H_{X}}=\rho|_{H_{X}}\), (5.3) gives \(\mathsf{Fix}(\rho|_{H_{X}})=\mathsf{Fix}(\rho_{n})\) for every \(n\in\mathbb{N}\), whereas (5.2) gives \(\mathsf{Fix}(\rho)=\bigcap_{n\in\mathbb{N}}\mathsf{Fix}(\rho_{n})\). As \(\rho\) is irreducible, we deduce that \(\mathsf{Fix}(\rho|_{H_{X}})=\varnothing\), and thus, by (5.3) again, also \(\rho_{n}\) is irreducible for any \(n\in\mathbb{N}\). Therefore, we can apply the second part of Lemma 5.5 (applied to \(\rho=\rho_{n}\) and \(K=H_{X}\) again), and obtain that \(\rho_{n}\) has a unique perfect minimal set \(\Lambda\), which does not depend on \(n\in\mathbb{N}\), so, by (5.2), \(\Lambda\) is a minimal set for \(\rho\).
_Remark 5.8_.: We will apply Proposition 5.6 to the group \(H=\mathscr{F}\), the perfect germ extension of Thompson's group \(F\) introduced in §4.2. We note that if instead we take \(H\) to be the usual Thompson's group \(F\), then the conclusion of Proposition 5.6 is far from true. Indeed the abelianisation of the group \(\mathcal{C}(X,F)\) is free abelian of infinite rank; one can use this to construct a wealth of faithful actions of \(\mathcal{C}(X,F)\) without a minimal set, by defining orders on \(\mathcal{C}(X,F)\) through lexicographic constructions similar to the one in [3, §5.3.1]. This is the reason why we consider a germ extension of PL homeomorphisms to define the groups \(\mathscr{T}(\varphi,\sigma)\).
### Building an equivariant map to a leaf
The key step in proving Theorem 5.1 is the following result, which says that any minimal action of \(G\) comes from an action on a leaf of \(Z\).
**Proposition 5.9**.: _With assumptions as in Theorem 5.1, there exists a map \(h\colon\mathbb{R}\to Z\) which is \(G\)-equivariant (with respect to the \(\rho\)-action on \(\mathbb{R}\) and the natural action on \(Z\)), locally injective, and takes values in a single leaf._
The main difference from the statement of Theorem 5.1 is that the map is to a leaf of \(Z\) (instead of \(Y\)), and moreover the leaf can be closed. In the latter case, we want to prove that the minimal action \(\rho\) is conjugate to the lift of the action on the leaf to its universal cover. For this, we introduce the group of circle homeomorphisms \(\mathscr{T}\), defined to be the perfect germ extension of Thompson's group \(T\) acting on the circle, namely the group of all homeomorphisms of \(\mathbb{S}^{1}\) of type \(\mathfrak{D}\). We further let \(\tilde{\mathscr{T}}\) denote its lift to \(\mathbb{R}\), namely the group of all homeomorphisms of \(\mathbb{R}\) which are locally of type \(\mathfrak{D}\) and commute with integer translations.
Thus, \(\tilde{\mathscr{T}}\) has an infinite cyclic center \(\mathcal{Z}(\tilde{\mathscr{T}})\) and \(\tilde{\mathscr{T}}/\mathcal{Z}(\tilde{\mathscr{T}})=\mathscr{T}\). We will need the following lemma.
**Lemma 5.10**.: _Let \(f\colon\tilde{\mathscr{T}}\to\tilde{\mathscr{T}}\) be a group homomorphism such that \(f(\mathcal{Z}(\tilde{\mathscr{T}}))\subset\mathcal{Z}(\tilde{\mathscr{T}})\) and such that \(f\) induces in the quotient the identity map \(\mathscr{T}\to\mathscr{T}\). Then \(f\) is the identity._
Proof.: The analogous statement for the usual Thompson's group \(T\) is proven at the end of the proof of [21, Theorem 8.7]. The same proof works for \(\mathscr{T}\).
We will also use the fact that the group \(G\) is perfect. For this, given an element \(g\in G\), we call the set of points of \(Z\) (respectively, \(Y\)) that are moved by \(g\) the _support_ of \(g\), and denote it by \(\mathsf{Supp}_{Z}(g)\) (respectively, \(\mathsf{Supp}_{Y}(g)\)). Note that the support is an open set. For an open subset \(U\subset Z\) we let \(G_{U}\) be the subgroup of elements with support contained in \(U\); if \(U=Z_{C,I}\) is a chart, we also denote \(G_{U}\) by \(G_{C,I}\), and use analogous notation for the action on \(Y\). In order to prove that \(G\) is perfect, we prove that it is generated by the perfect subgroups \(G_{C,I}\) (see the proof of Lemma 5.12 below), provided that we take charts \(Z_{C,I}\) covering \(Z\). More precisely, we have the following fundamental property, which is the analogue of [21, Lemma 4.8]. The proof is discussed in Appendix A.
**Proposition 5.11** (Fragmentation lemma).: _Let \((X,\varphi,\sigma)\) be as in Assumption 4.2, and set \(G=\mathscr{T}(\varphi,\sigma)\). For any element \(g\in G\) and open cover \(\mathcal{U}\) of \(\overline{\mathsf{Supp}_{Z}(g)}\), we have \(g\in\langle\bigcup_{U\in\mathcal{U}}G_{U}\rangle\)._
_In particular, the group \(G\) is perfect._
Let us see now how to deduce Theorem 5.1, assuming Proposition 5.9.
Proof of Theorem 5.1.: Let \(h\colon\mathbb{R}\to Z\) be the locally injective map provided by Proposition 5.9 and set \(z:=h(0)\). Since the quotient map \(Y\to Z\) identifies each \(\Phi\)-orbit to a leaf, and maps exactly two \(\Phi\)-orbits to each leaf, the map \(h\) admits two lifts \(\tilde{h}_{1},\tilde{h}_{2}\colon\mathbb{R}\to Y\) taking values in two different \(\Phi\)-orbits, with \(\hat{\sigma}\circ\tilde{h}_{i}=\tilde{h}_{3-i}\) for \(i\in\{1,2\}\). Moreover, since \(\hat{\sigma}\) reverses the time of the flow \(\Phi\), exactly one of these two lifts is orientation preserving with respect to the orientation of orbits given by the flow. Rename this lift \(\tilde{h}\) and set \(y=\tilde{h}(0)\). If the orbit of \(y\) is not periodic, the map \(\tilde{h}\) is a bijection of \(\mathbb{R}\) onto the orbit, and it follows that \(\rho\) is pointed conjugate to \(\rho_{y}\).
The case where \(y\) is a periodic point requires some further inspection. Assume that the period of \(y\) is \(m\in\mathbb{N}\). Let \(H\) be the subgroup of \(G\) of elements that act trivially on the leaf \(\ell\) of \(z\), and let \(H_{0}\leq H\) be the subgroup consisting of elements \(g\) such that \(\overline{\mathsf{Supp}_{Z}(g)}\cap\ell=\varnothing\). Note that both are normal subgroups of \(G\): the group \(H\) coincides with the group of elements \(g\in G\) such that \(\rho_{y}(g)\) is a translation by an integer multiple of \(m\), while \(H_{0}\) is the kernel of \(\rho_{y}\), using that non-periodic orbits of \(\Phi\) are dense. The action of \(G\) on \(\ell\) identifies \(G/H\) with the natural copy of \(\mathscr{T}\) acting on \(\ell=\mathbb{R}/m\mathbb{Z}\cong\mathbb{S}^{1}\). Accordingly, the action \(\rho_{y}\) induces an isomorphism from \(G/H_{0}\) onto the natural copy of \(\tilde{\mathscr{T}}\) conjugated by the dilation \(x\mapsto mx\). Now the map \(h\colon\mathbb{R}\to\ell\) is a covering map, equivariant with respect to the action \(\rho\) and the natural action on \(\ell\). Upon conjugating \(\rho\), we can assume that \(h\) coincides with the natural covering map from \(\mathbb{R}\) to \(\mathbb{R}/m\mathbb{Z}\cong\ell\), so that \(\rho\) also takes values in the \(m\)-dilated copy of \(\tilde{\mathscr{T}}\). Since \(H\) acts trivially on \(\ell\), we deduce that \(\rho(H)\) is central in \(\rho(G)\), and in particular it is abelian. However, the subgroup \(H_{0}\) is perfect, as a consequence of Proposition 5.11. Therefore \(\rho(H_{0})=\{\mathsf{id}\}\), and thus \(\rho\) descends to a map \(\bar{\rho}\colon G/H_{0}\cong\tilde{\mathscr{T}}\to\tilde{\mathscr{T}}\) that satisfies the assumptions of Lemma 5.10. We conclude that \(\bar{\rho}\) is the identity and that \(\rho=\rho_{y}\).
The rest of the section is devoted to the proof of Proposition 5.9. The proof requires some preliminary work. We keep assumptions as in the statement, and we introduce further notation. Let \(H\) be a subgroup of \(G\), and take \(\xi\in\mathbb{R}\). When \(\xi\in\mathsf{Supp}(\rho|_{H})\), we denote by
\(I^{\rho}(H,\xi)\) the component of the support of \(\rho(H)\) containing \(\xi\). When \(\xi\in\mathsf{Fix}(\rho|_{H})\), we simply define \(I^{\rho}(H,\xi)=\{\xi\}\). In the case that \(H=G_{C,I}\) for some chart \(Z_{C,I}\), we write \(I^{\rho}(C,I,\xi)\) instead of \(I^{\rho}(G_{C,I},\xi)\). When there is no risk of confusion, we simply write \(I(H,\xi)\) or \(I(C,I,\xi)\). We then have the following lemma, which is the main application of Proposition 5.6.
**Lemma 5.12**.: _With assumptions as in Theorem 5.1, given any dyadic chart \(Z_{C,I}\) and \(\xi\in\mathbb{R}\), we have that the \(\rho\)-action of \(G_{C,I}\) on \(I(C,I,\xi)\) admits a perfect minimal set._
Proof.: When \(\xi\in\mathsf{Fix}(\rho|_{G_{C,I}})\) there is nothing to prove. So we can assume that \(I(C,I,\xi)\) is a non-empty open interval. The group \(G_{C,I}\) is naturally identified with the group \(\mathcal{C}(C,\mathscr{F}_{I})\). Since \(\mathscr{F}_{I}\cong\mathscr{F}\) is finitely generated and perfect (Lemma 4.9), Proposition 5.6 gives the desired conclusion.
Given points \(z\in Z\), \(\xi\in\mathbb{R}\), and a decreasing sequence of charts \(\big{(}Z_{C_{n},I_{n}}\big{)}_{n\in\mathbb{N}}\) satisfying \(\bigcap_{n\in\mathbb{N}}Z_{C_{n},I_{n}}=\{z\}\) (by metrisability of \(X\), for any point \(z\in Z\) we can find such a sequence), we write \(I(z,\xi):=\bigcap_{n\in\mathbb{N}}I(C_{n},I_{n},\xi)\), which is the intersection of a nested sequence of intervals, and therefore an interval which contains \(\xi\) (possibly reduced to the singleton \(\{\xi\}\)). Notice that the interval \(I(z,\xi)\) does not depend on the decreasing sequence of charts considered.
**Lemma 5.13**.: _With assumptions as in Theorem 5.1, we have \(I(z,\xi)=\{\xi\}\) for every \(z\in Z\) and \(\xi\in\mathbb{R}\)._
Proof.: Consider the family of intervals \(\mathfrak{I}:=\{I(z,\xi):z\in Z,\xi\in\mathbb{R}\}\). We first show that inclusion defines a partial order relation on \(\mathfrak{I}\). For this, given intervals \(I,J\subset\mathbb{R}\) (possibly singletons), we say that \(I\) and \(J\) are _linked_ if their interiors are neither disjoint, nor nested.
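The notion of linked intervals used throughout this proof translates directly into a small predicate; the following is an illustrative sketch (hypothetical helper, not part of the argument), with intervals modelled as pairs of endpoints.

```python
def linked(i, j):
    """Return True if the open intervals i=(a,b), j=(c,d) are linked:
    their interiors are neither disjoint nor nested."""
    (a, b), (c, d) = i, j
    if b <= c or d <= a:                               # disjoint interiors
        return False
    if (c <= a and b <= d) or (a <= c and d <= b):     # nested
        return False
    return True

# (0,2) and (1,3) overlap without nesting: linked.
# (0,3) and (1,2) are nested, and (0,1), (2,3) are disjoint: not linked.
```

The key fact used at the end of the proof of Claim 1 is that connected components of the supports of two commuting subgroups of \(\mathsf{Homeo}_{0}(\mathbb{R})\) are pairwise unlinked in exactly this sense.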
**Claim 1**.: _The family \(\mathfrak{I}\) has no linked elements._
Proof of claim.: Take two elements \(I(z_{1},\xi_{1})\) and \(I(z_{2},\xi_{2})\) of \(\mathfrak{I}\). We distinguish two cases. Suppose first that \(z_{1}=z_{2}=z\). In order to construct the intervals \(I(z,\xi_{1})\) and \(I(z,\xi_{2})\), we can use the same decreasing sequence of charts \(Z_{C_{n},I_{n}}\) converging to \(z\). Then, there are two possibilities:
* either \(I(C_{n},I_{n},\xi_{1})=I(C_{n},I_{n},\xi_{2})\) for every \(n\in\mathbb{N}\), in which case \(I(z,\xi_{1})=I(z,\xi_{2})\), or
* for some \(n\in\mathbb{N}\) it holds that \(I(C_{n},I_{n},\xi_{1})\cap I(C_{n},I_{n},\xi_{2})=\varnothing\), in which case we get that \(I(z,\xi_{1})\) and \(I(z,\xi_{2})\) are disjoint.
In both cases we deduce that the corresponding intervals are unlinked. Assume now that \(z_{1}\neq z_{2}\), and consider nested sequences of charts \((Z_{C_{n},I_{n}})_{n\in\mathbb{N}}\) and \((Z_{D_{n},J_{n}})_{n\in\mathbb{N}}\) converging to \(z_{1}\) and \(z_{2}\), respectively. Suppose by contradiction that \(I(z_{1},\xi_{1})\) and \(I(z_{2},\xi_{2})\) are linked. Then, for sufficiently large \(k\in\mathbb{N}\), we have that \(I(C_{k},I_{k},\xi_{1})\) and \(I(D_{k},J_{k},\xi_{2})\) are linked, while \(Z_{C_{k},I_{k}}\) and \(Z_{D_{k},J_{k}}\) are disjoint. The second condition implies that the subgroups \(G_{C_{k},I_{k}}\) and \(G_{D_{k},J_{k}}\) commute. This gives a contradiction since connected components of the support of commuting subgroups are pairwise unlinked.
We set now
\[\mathfrak{I}_{0}:=\left\{\mathsf{Int}\left(\bigcup_{\alpha}J_{\alpha}\right): \text{$(J_{\alpha})$ is a maximal chain of $(\mathfrak{I},\subseteq)$}\right\}.\]
By Claim 1, elements of \(\mathfrak{I}_{0}\) are pairwise disjoint open intervals.
**Claim 2**.: _Every element of \(\mathfrak{I}_{0}\) is a bounded interval._
Proof of claim.: Let \(J_{n}=I(z_{n},\xi)\) be an increasing sequence of intervals in \(\mathfrak{I}\). Up to extracting a subsequence we can suppose that \((z_{n})\) converges to some point \(z\in Z\). Then for every chart \(Z_{C,I}\) containing \(z\), we eventually have \(I(z_{n},\xi)\subseteq I(C,I,\xi)\), so it is enough to prove that the latter is bounded for some chart \(Z_{C,I}\). Choose a chart small enough that we can find \(h\in G\) with \(h(Z_{C,I})\cap Z_{C,I}=\varnothing\). Assume by contradiction that \(I(C,I,\xi)\) is unbounded; without loss of generality, we assume that it accumulates at \(+\infty\). By Lemma 5.12, \(G_{C,I}\) admits a perfect minimal set \(\Lambda\subset I(C,I,\xi)\), which also accumulates at \(+\infty\). Since \(I(C,I,\xi)\) is unbounded, every element in the centralizer of \(G_{C,I}\) must preserve it. For such an \(h\), the subgroups \(G_{C,I}\) and \(hG_{C,I}h^{-1}\) commute. We deduce that \(hG_{C,I}h^{-1}\) preserves \(I(C,I,\xi)\), and, since it is perfect, we can apply Corollary 5.4 and deduce that it fixes \(\Lambda\) pointwise. This implies that \(hG_{C,I}h^{-1}\), and therefore also its conjugate \(G_{C,I}\), has fixed points accumulating at \(+\infty\). This gives the desired contradiction.
Finally, we observe that for every \(z\in Z\), \(\xi\in\mathbb{R}\) and \(g\in G\) we have \(\rho(g)(I(z,\xi))=I(g(z),\rho(g)(\xi))\), from which we deduce that the family \(\mathfrak{I}\) is \(\rho\)-invariant, and therefore so is \(\mathfrak{I}_{0}\). Putting all of this together: if some interval \(I(z,\xi)\) were not reduced to a singleton, the union of all the intervals in \(\mathfrak{I}_{0}\) would be a non-empty, open, \(\rho\)-invariant subset of \(\mathbb{R}\), which is proper by Claim 2. This contradicts minimality of \(\rho\), so \(I(z,\xi)=\{\xi\}\) for every \(z\in Z\) and \(\xi\in\mathbb{R}\), as desired.
**Lemma 5.14**.: _With assumptions as in Theorem 5.1, take disjoint charts \(Z_{C,I}\), \(Z_{D,J}\), and a point \(\xi\in\mathsf{Supp}(\rho|_{G_{C,I}})\). Then, \(G_{D,J}\) acts as the identity on \(I(C,I,\xi)\)._
Proof.: As charts are disjoint, we have that the subgroups \(G_{C,I}\) and \(G_{D,J}\) commute. Thus, by Corollary 5.4, it is enough to prove that the action of \(G_{C,I}\) on \(I(C,I,\xi)\) is minimal. To see this, we know by Lemma 5.12 that \(G_{C,I}\) admits a perfect minimal set \(\Lambda\subset I(C,I,\xi)\). Suppose by contradiction that \(\Lambda\neq I(C,I,\xi)\). Note that \(I(C,I,\xi)=I(C,I,\eta)\) for every \(\eta\in I(C,I,\xi)\), so it is not restrictive to assume that \(\xi\notin\Lambda\). Assuming so, take the connected component \(U\) of \(I(C,I,\xi)\smallsetminus\Lambda\) containing \(\xi\). By the choice of \(U\), every element in \(G_{C,I}\) either preserves \(U\) or maps it disjointly. Therefore, for every family of subgroups \(\{H_{\alpha}\}_{\alpha\in A}\) generating \(G_{C,I}\), there must exist some \(\alpha\in A\) such that \(U\subset I(H_{\alpha},\xi)\), otherwise we would have that \(U\) is preserved by \(G_{C,I}\), contradicting minimality of its action on \(\Lambda\). Consider now a family of charts \(\{Z_{D_{\alpha},J_{\alpha}}\}_{\alpha\in A}\) contained in \(Z_{C,I}\), covering \(Z_{C,I}\). By the fragmentation lemma for \(G\) (Proposition 5.11), we have that \(G_{C,I}\) is generated by the subgroups \(\{G_{D_{\alpha},J_{\alpha}}\}_{\alpha\in A}\). Therefore, we can find \(\alpha\in A\) such that \(U\subset I(D_{\alpha},J_{\alpha},\xi)\). Starting the argument again with the chart \(Z_{D_{\alpha},J_{\alpha}}\), and proceeding by induction, we can construct a nested sequence of charts \((Z_{C_{n},I_{n}})_{n\in\mathbb{N}}\) such that \(\bigcap_{n\in\mathbb{N}}Z_{C_{n},I_{n}}\) is a singleton \(\{z\}\), and such that \(U\subset I(C_{n},I_{n},\xi)\) for every \(n\in\mathbb{N}\). We conclude that \(I(z,\xi)\) contains \(U\), but this contradicts Lemma 5.13.
**Lemma 5.15**.: _With assumptions as in Theorem 5.1, we have that for every \(\xi\in\mathbb{R}\), there exists a unique \(z_{\xi}\in Z\) such that \(\xi\in\mathsf{Supp}(\rho|_{G_{C,I}})\) for every chart \(Z_{C,I}\) containing \(z_{\xi}\)._
Proof.: Suppose by contradiction that for every \(z\in Z\), there exists a chart \(Z_{C_{z},I_{z}}\) containing \(z\) such that \(\xi\in\mathsf{Fix}(\rho|_{G_{C_{z},I_{z}}})\). By compactness of \(Z\), we can then find finitely many points \(z_{1},\ldots,z_{n}\in Z\) whose associated charts cover \(Z\). On the other hand, by the fragmentation lemma (Proposition 5.11), we have that \(G\) is generated by the subgroups \(G_{C_{z_{1}},I_{z_{1}}},\ldots,G_{C_{z_{n}},I_{z_{n}}}\). This implies that \(\xi\in\mathsf{Fix}(\rho)\), which is a contradiction. Thus, there must be at least one \(z\in Z\) such that \(\xi\in\mathsf{Supp}(\rho|_{G_{C,I}})\) for every chart \(Z_{C,I}\) containing \(z\). In order to check uniqueness, suppose by contradiction that there exist \(z_{1}\neq z_{2}\) with this property. Take disjoint charts \(Z_{C_{1},I_{1}}\) and \(Z_{C_{2},I_{2}}\) containing \(z_{1}\) and \(z_{2}\), respectively. By our assumption on \(z_{1}\), Lemma 5.14 gives that \(G_{C_{2},I_{2}}\) acts as the identity on \(I(C_{1},I_{1},\xi)\). This contradicts our assumption on \(z_{2}\).
We are ready to prove Proposition 5.9.
Proof of Proposition 5.9.: Consider the map \(h:\mathbb{R}\to Z\) that associates to each \(\xi\in\mathbb{R}\) the element \(z_{\xi}\in Z\) given by Lemma 5.15. It follows directly from the definition that \(h\) is equivariant.
**Claim**.: _The map \(h\) is continuous._
Proof of claim.: Take \(\xi\in\mathbb{R}\) and a chart \(Z_{C,I}\) containing \(z_{\xi}\). Notice that, in order to show continuity of \(h\), it is enough to show that \(z_{\eta}\in Z_{C,I}\) for every \(\eta\in I(C,I,\xi)\). Suppose by contradiction that this is not the case, and take a point \(\eta\in I(C,I,\xi)\) and a chart \(Z_{D,J}\) containing \(z_{\eta}\) and disjoint from \(Z_{C,I}\). By definition of \(z_{\eta}\), we have \(\eta\in\mathsf{Supp}(\rho|_{G_{D,J}})\), but Lemma 5.14 implies that \(G_{D,J}\) acts trivially on \(I(C,I,\xi)\). This provides the desired contradiction.
We deduce from the claim that the image of \(\mathbb{R}\) is contained in a single leaf \(\ell=\pi_{Z}(\{x\}\times\mathbb{R})\subset Z\). It remains to prove that \(h\) is locally injective. Arguing by way of contradiction, fix a point \(\xi_{0}\) such that for any open interval \(U\ni\xi_{0}\), the restriction \(h|_{U}\) is not injective. By continuity of \(h\), we can choose \(U\) sufficiently small, so that \(h(U)\) is a proper subset of the leaf \(\ell\). Consider two points \(\xi_{1}<\xi_{2}\) in \(U\), with \(h(\xi_{1})=h(\xi_{2})=:z\). Notice that there must exist \(\xi\in(\xi_{1},\xi_{2})\) such that \(h(\xi)\neq z\), because otherwise equivariance of \(h\) would allow us to construct a proper, non-empty, open invariant subset of \(\mathbb{R}\), contradicting minimality of \(\rho\). Since \(h([\xi_{1},\xi_{2}])\) does not cover \(\ell\), we can take a chart \(Z_{C,I}\) with the following properties:
\[z\notin Z_{C,I},\quad\pi_{Z}(\{x\}\times I)\subset Z_{C,I},\quad h(\xi)\in\pi_ {Z}(\{x\}\times I).\]
With such a choice, we have that the orbit \(\rho(G_{C,I})(h(\xi))\) is dense in \(\pi_{Z}(\{x\}\times I)\), and therefore \(\rho(G_{C,I})(h(\xi))\not\subseteq h((\xi_{1},\xi_{2}))\). On the other hand, since \(z\notin Z_{C,I}\), we can repeat the argument in the proof of the claim, and get that \(\{\xi_{1},\xi_{2}\}\subset\mathsf{Fix}(\rho|_{G_{C,I}})\). By equivariance of \(h\), this implies that \(\rho(G_{C,I})(h(\xi))\subset h((\xi_{1},\xi_{2}))\), contradicting the choice of the chart \(Z_{C,I}\).
## 6. From the Deroin space to the space of irreducible representations
Throughout this section we let \((X,\varphi,\sigma)\) be a reversible subshift satisfying Assumption 4.2, and set again \(G=\mathscr{T}(\varphi,\sigma)\). In this section we further study the space \(\mathsf{Rep}_{\mathrm{irr}}(G;\mathbb{R})\) and prove the corollaries stated in the introduction.
### Lifting converging sequences
Recall that for every \(y\in Y\) we denote by \(\rho_{y}\in\mathsf{Rep}_{\mathrm{irr}}(G;\mathbb{R})\) the associated action given by (5.1). By Corollary 5.2, the map \(y\mapsto\rho_{y}\) identifies \((Y,\Phi)\) with a Deroin space for \(G\). We keep this identification as implicit, and denote by
\[r_{Y}\colon\,\mathsf{Rep}_{\mathrm{irr}}(G;\mathbb{R})\to Y\]
the retraction map as in Proposition 3.5, namely \(r_{Y}(\rho)=y\) if \(y\) is the unique point such that \(\rho\) is pointed semi-conjugate to \(\rho_{y}\).
In order to derive stronger conclusions on \(\mathsf{Rep}_{\mathrm{irr}}(G;\mathbb{R})\) than the general ones that follow from Propositions 3.5 and 3.13, we first need to take a closer look at the properties of the map \(r_{Y}\) when we specialize the discussion to the groups \(G=\mathscr{T}(\varphi,\sigma)\). In this case we have the following, which says that the map \(r_{Y}\) behaves like an open map in the direction transversal to the flow.
**Proposition 6.1**.: _Let \(\rho\in\mathsf{Rep}_{\mathrm{irr}}(G;\mathbb{R})\), and assume that \(y_{0}:=r_{Y}(\rho)\) can be written in coordinates as \(y_{0}=\pi_{Y}(x_{0},t_{0})\) for \((x_{0},t_{0})\in X\times\mathbb{R}\). Let \(\mathcal{U}\) be an open neighbourhood of \(\rho\). Then there exists an open neighbourhood \(V\subset X\) of \(x_{0}\) such that \(r_{Y}(\mathcal{U})\) contains \(\pi_{Y}(V\times\{t_{0}\})\)._
Proof.: Let \(S\subset G\) be a finite generating subset, and take a neighborhood \(\mathcal{U}\) of \(\rho\in\mathsf{Rep}_{\mathrm{irr}}(G;\mathbb{R})\). By definition of the compact-open topology, we can find \(\varepsilon>0\) and a compact interval \(\tilde{K}\) containing \(0\) such that if \(\rho^{\prime}\in\mathsf{Rep}_{\mathrm{irr}}(G;\mathbb{R})\) is such that
\[\sup_{\gamma\in S}|\rho(\gamma)(t)-\rho^{\prime}(\gamma)(t)|<\varepsilon\quad \forall t\in\tilde{K}, \tag{6.1}\]
then \(\rho^{\prime}\in\mathcal{U}\). One can simply ask that (6.1) is satisfied for a sufficiently dense, finite collection of points \(t\in\tilde{K}\). More precisely, we are going to use the following property.
**Claim**.: _There exists a finite subset \(\tilde{D}\subset\mathbb{R}\) containing \(0\) such that the following holds. Let \(\rho^{\prime}\in\mathsf{Rep}_{\mathrm{irr}}(G;\mathbb{R})\) be such that \(\rho^{\prime}(\gamma)(t)=\rho(\gamma)(t)\) for every \(\gamma\in S\) and \(t\in\tilde{D}\). Then, \(\rho^{\prime}\in\mathcal{U}\)._
Proof of claim.: Fix \(\delta>0\) such that for any \(\gamma\in S\) and \(s,t\in\tilde{K}\) satisfying \(|s-t|<\delta\), we have \(|\rho(\gamma)(s)-\rho(\gamma)(t)|<\varepsilon\), and choose a finite subset \(\tilde{D}\subset\mathbb{R}\) which is \(\delta/2\)-dense in \(\tilde{K}\). Given \(t\in\tilde{K}\), take \(t_{-},t_{+}\in\tilde{D}\) such that \(t_{-}<t<t_{+}\), and \(|t_{+}-t_{-}|<\delta\). As actions preserve the orientation, for any \(\gamma\in S\) the images \(\rho(\gamma)(t)\) and \(\rho^{\prime}(\gamma)(t)\) are contained in the interval \((\rho(\gamma)(t_{-}),\rho(\gamma)(t_{+}))=(\rho^{\prime}(\gamma)(t_{-}),\rho^ {\prime}(\gamma)(t_{+}))\), whose length does not exceed \(\varepsilon\). This implies that \(|\rho(\gamma)(t)-\rho^{\prime}(\gamma)(t)|<\varepsilon\) for every \(\gamma\in S\), as wanted.
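The squeeze in the claim (both maps send each grid cell into an interval of length less than \(\varepsilon\)) can be illustrated numerically. In the sketch below, the increasing map `f` stands in for a fixed \(\rho(\gamma)\), and the perturbed action is replaced by the piecewise-linear interpolation of `f` through a \(\delta/2\)-dense grid, which is an increasing map agreeing with `f` on the grid; the map and all constants are illustrative assumptions, not objects from the proof.

```python
import math

# An increasing homeomorphism standing in for rho(gamma); its derivative lies
# in [0.7, 1.3], so |f(s) - f(t)| < eps whenever |s - t| < eps / 1.3.
def f(t):
    return t + 0.3 * math.sin(t)

K = (0.0, 10.0)
eps = 0.1
delta = eps / 1.3        # modulus of continuity for f at scale eps
step = delta / 2         # spacing of a delta/2-dense grid in K
grid = [K[0] + i * step for i in range(int((K[1] - K[0]) / step) + 2)]

# A stand-in for the perturbed action: any increasing map agreeing with f on
# the grid.  Piecewise-linear interpolation through the grid values is one.
def f2(t):
    i = min(max(int((t - K[0]) / step), 0), len(grid) - 2)
    a, b = grid[i], grid[i + 1]
    return f(a) + (f(b) - f(a)) * (t - a) / (b - a)

# Both maps send (t_-, t_+) into (f(t_-), f(t_+)), an interval of length
# < eps, so agreement on the grid forces uniform eps-closeness on K.
samples = [K[0] + j * (K[1] - K[0]) / 5000 for j in range(5001)]
sup_diff = max(abs(f(t) - f2(t)) for t in samples)
assert sup_diff < eps
```

Agreement on the finite grid alone forces the uniform bound, exactly as in the claim.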
In what follows, for \(x\in X\), we will shorten notation by writing \(\rho_{x}\) instead of \(\rho_{\pi_{Y}(x,t_{0})}\) for the action given by (5.1). Let \(h:\mathbb{R}\to\mathbb{R}\) be the pointed semi-conjugacy between \(\rho\) and \(\rho_{x_{0}}\) (which is a continuous map), and write \(K=h(\tilde{K})\).
By construction, there exists a clopen neighborhood \(V\) of \(x_{0}\) such that the translation cocycle \(\tau_{\gamma}\) is constant on \(\pi_{Y}(V\times\{t_{0}+t\})\) for every \(t\in K\) and \(\gamma\in S\). Hence for any \(x\in V\), the \(\rho_{x}\)-action of every \(\gamma\in S\) on \(K\) is the same as its \(\rho_{x_{0}}\)-action on \(K\).
We set \(\tilde{\Omega}:=\rho(G)(\tilde{D})\) and \(\Omega_{x}:=\rho_{x}(G)(h(\tilde{D}))\) for every \(x\in V\), which we consider as ordered subsets of \((\mathbb{R},<)\). Note that both these sets contain \(0\), which we think of as a marked point. We let \(<_{x}\) denote the lexicographic ordering of \(\Omega_{x}\times\tilde{\Omega}\): given \(s_{1},s_{2}\in\Omega_{x}\) and \(t_{1},t_{2}\in\tilde{\Omega}\), one has
\[(s_{1},t_{1})<_{x}(s_{2},t_{2})\Leftrightarrow\left\{\begin{array}{l}s_{1} <s_{2},\;\mbox{or}\\ s_{1}=s_{2}\;\mbox{and}\;t_{1}<t_{2}.\end{array}\right.\]
Notice that the following equivalence holds for every \(\gamma_{1},\gamma_{2}\in G\) and every \(t\in\mathbb{R}\):
\[\rho(\gamma_{1})(t)<\rho(\gamma_{2})(t)\Leftrightarrow\left(h(\rho(\gamma_{1})(t)),\rho(\gamma_{1})(t)\right)<_{x_{0}}\left(h(\rho(\gamma_{2})(t)),\rho(\gamma_{2})(t)\right). \tag{6.2}\]
The set \(\Omega_{x}\times\tilde{\Omega}\) is also naturally marked at \((0,0)\). The order \(<_{x}\) is invariant under the action \(\psi_{x}\) of \(G\) defined by \(\psi_{x}(g):(s,t)\mapsto(\rho_{x}(g)(s),\tilde{\rho}(g)(t))\). We can then consider the dynamical realisation \(\rho^{\prime}_{x}:G\to\mathsf{Homeo}_{0}(\mathbb{R})\) of the action \(\psi_{x}\), which comes with an equivariant order-preserving embedding \(j_{x}:(\Omega_{x}\times\tilde{\Omega},<_{x})\to(\mathbb{R},<)\) (see [3, Lemma 2.40]), which we assume to send the marked point to \(0\). Because of the choice of \(V\), for any \(x\in V\), \(t\in\tilde{D}\), and \(\gamma\in S\), we have
\[h(\rho(\gamma)(t))=\rho_{x_{0}}(\gamma)(h(t))=\rho_{x}(\gamma)(h(t)). \tag{6.3}\]
Therefore putting together (6.2) and (6.3), we get that for any \(x\in V\), \(t_{1},t_{2}\in\tilde{D}\), and \(\gamma_{1},\gamma_{2}\in S\), the following inequalities are equivalent:
\[\rho(\gamma_{1})(t_{1})<\rho(\gamma_{2})(t_{2})\] \[\Leftrightarrow \,(\rho_{x}(\gamma_{1})(h(t_{1})),\rho(\gamma_{1})(t_{1}))<_{x}( \rho_{x}(\gamma_{2})(h(t_{2})),\rho(\gamma_{2})(t_{2}))\] \[\Leftrightarrow \,\psi_{x}(\gamma_{1})(h(t_{1}),t_{1})<_{x}\psi_{x}(\gamma_{2})(h( t_{2}),t_{2})\] \[\Leftrightarrow \,\rho^{\prime}_{x}(\gamma_{1})(j_{x}(h(t_{1}),t_{1}))<\rho^{ \prime}_{x}(\gamma_{2})(j_{x}(h(t_{2}),t_{2})).\]
Thus, for any given \(x\in V\), we can take an action \(\rho_{x}^{\prime\prime}\) pointed conjugate to \(\rho_{x}^{\prime}\) such that \(\rho(\gamma)(t)=\rho_{x}^{\prime\prime}(\gamma)(t)\) for every \(\gamma\in S\) and \(t\in\tilde{D}\). We deduce from the claim that \(\rho_{x}^{\prime\prime}\in\mathcal{U}\) for every \(x\in V\). We claim that \(r_{Y}(\rho_{x}^{\prime\prime})=\pi_{Y}(x,t_{0})\), i.e. that \(\rho_{x}\) and \(\rho_{x}^{\prime\prime}\) are pointed semi-conjugate. To see this, it is enough to see that \(\rho_{x}\) and \(\rho_{x}^{\prime}\) are. This comes from the fact that \(\rho_{x}\) is pointed conjugate to the dynamical realisation of the induced action on \(\Omega_{x}\) (this follows from [3, Lemma 2.40]), and the latter is pointed semi-conjugate to \(\rho_{x}^{\prime}\) (by considering the order-preserving projection \(\Omega_{x}\times\tilde{\Omega}\to\Omega_{x}\)).
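A small self-contained check of the order-theoretic fact used above, namely that the lexicographic order \(<_{x}\) is invariant under any map acting by a strictly increasing map in each coordinate. The two increasing maps below are arbitrary stand-ins for \(\rho_{x}(g)\) and \(\tilde{\rho}(g)\):

```python
import itertools

def lex_less(p, q):
    # the lexicographic order on pairs, as in the definition of <_x
    return p[0] < q[0] or (p[0] == q[0] and p[1] < q[1])

# strictly increasing stand-ins for rho_x(g) and rho~(g)
g1 = lambda s: s ** 3 + s
g2 = lambda t: 2 * t - 1

def psi(p):  # the product action psi_x(g) on pairs
    return (g1(p[0]), g2(p[1]))

# invariance: psi preserves the lexicographic order on a sample of pairs
pts = list(itertools.product([-1.0, 0.0, 0.5, 2.0], repeat=2))
assert all(lex_less(psi(p), psi(q)) == lex_less(p, q)
           for p in pts for q in pts)
```

The check works for any strictly increasing coordinate maps: a strict inequality in the first coordinate is preserved, and equality there reduces the comparison to the second coordinate.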
The following result allows us to completely describe the closure of semi-conjugacy classes in \(\mathsf{Rep}_{\mathrm{irr}}(G;\mathbb{R})\) (or in other words which actions can be reached by perturbations) in terms of the closure of \(\Phi\)-orbits in \((Y,\Phi)\).
**Theorem 6.2**.: _Let \(\mathcal{F}\subset\mathsf{Rep}_{\mathrm{irr}}(G;\mathbb{R})\) be a family of representations, and fix \(\rho\in\mathsf{Rep}_{\mathrm{irr}}(G;\mathbb{R})\). Then the following are equivalent._
1. \(\rho\) _is accumulated by representations which are semi-conjugate to elements of_ \(\mathcal{F}\)_;_
2. \(r_{Y}(\rho)\) _belongs to the closure of the union of_ \(\Phi\)_-orbits of points in_ \(r_{Y}(\mathcal{F})\)_._
Proof.: The implication (1) \(\Rightarrow\) (2) is a general consequence of Proposition 3.5. The converse follows from Proposition 6.1: indeed, if (2) holds and \(r_{Y}(\rho)=\pi_{Y}(x_{0},t_{0})\), then for every neighbourhood \(V\) of \(x_{0}\) the local transversal \(\pi_{Y}(V\times\{t_{0}\})\) intersects the \(\Phi\)-orbit of some point in \(r_{Y}(\mathcal{F})\), and hence every neighbourhood of \(\rho\) intersects the semi-conjugacy class of an element of \(\mathcal{F}\).
In the rest of the section we describe some applications of our results.
### Rigidity and flexibility
**Corollary 6.3**.: _The following conditions on a representation \(\rho\in\mathsf{Rep}_{\mathrm{irr}}(G;\mathbb{R})\) are equivalent._
1. \(\rho\) _is locally rigid;_
2. \(\rho\) _is rigid;_
3. _the_ \(\Phi\)_-orbit of_ \(r_{Y}(\rho)\) _is an open subset of_ \(Y\)_._
Proof.: The equivalence between (2) and (3) is a general consequence of Proposition 3.13, and it is clear that (2) implies (1). If (1) holds, then by Theorem 6.2, \(r_{Y}(\rho)\) is an interior point of its \(\Phi\)-orbit, hence its whole \(\Phi\)-orbit is open.
**Corollary 6.4**.: _A representation \(\rho\in\mathsf{Rep}_{\mathrm{irr}}(G;\mathbb{R})\) has a dense semi-conjugacy class if and only if \(r_{Y}(\rho)\) has a dense orbit. In particular if \((X,\varphi)\) is minimal, then all semi-conjugacy classes are dense in \(\mathsf{Rep}_{\mathrm{irr}}(G;\mathbb{R})\)._
Proof.: Apply Theorem 6.2 to \(\mathcal{F}=\{\rho\}\): the semi-conjugacy class of \(\rho\) accumulates on a given \(\rho^{\prime}\in\mathsf{Rep}_{\mathrm{irr}}(G;\mathbb{R})\) if and only if \(r_{Y}(\rho^{\prime})\) lies in the closure of the \(\Phi\)-orbit of \(r_{Y}(\rho)\). Since \(r_{Y}\) is surjective, this holds for every \(\rho^{\prime}\) exactly when the orbit of \(r_{Y}(\rho)\) is dense in \(Y\). For the last statement, recall that if \((X,\varphi)\) is minimal, then every orbit of \((Y,\Phi)\) is dense.
Next we introduce the following strong form of flexibility of representations.
**Definition 6.5**.: A representation \(\rho\in\mathsf{Rep}_{\mathrm{irr}}(G;\mathbb{R})\) is _universally flexible_ if every neighbourhood \(U\) of \(\rho\) intersects non-trivially every semi-conjugacy class in \(\mathsf{Rep}_{\mathrm{irr}}(G;\mathbb{R})\).
We have the following.
**Corollary 6.6**.: _A representation \(\rho\in\mathsf{Rep}_{\mathrm{irr}}(G;\mathbb{R})\) is universally flexible if and only if \((Y,\Phi)\) (equivalently \((X,\varphi)\)) has a unique non-empty closed minimal invariant subset and \(r_{Y}(\rho)\) belongs to it._
Proof.: If \((Y,\Phi)\) has a unique minimal closed invariant subset and \(r_{Y}(\rho)\) belongs to it, then the orbit-closure of every point in \(Y\) contains \(r_{Y}(\rho)\), so that by Theorem 6.2, \(\rho\) is accumulated by representations in any given semi-conjugacy class. That this condition is necessary follows from Proposition 3.5.
These results can be used to produce examples with various prescribed properties, by constructing subshifts \((X,\varphi,\sigma)\) with the desired corresponding properties (for instance among closed invariant subshifts of the subshift of reduced words \(X_{\mathrm{red}}\subset A^{\mathbb{Z}}\) as in Example 4.5). As an example we include the following.
**Corollary 6.7** (Groups with rigid and universally flexible representations).: _Fix \(n\in 2\mathbb{N}\cup\{\infty\}\). Then there exists a finitely generated group \(G\) such that \(\mathsf{Rep}_{\mathrm{irr}}(G;\mathbb{R})\) contains uncountably many semi-conjugacy classes and the following hold._
_(1) There are exactly \(n\) semi-conjugacy classes of rigid representations._
_(2) Every representation which is not rigid is universally flexible._
Proof.: By Corollaries 6.3 and 6.6, it is enough to find for each \(n\in\mathbb{N}\cup\{\infty\}\) a reversible subshift \((X,\varphi,\sigma)\) so that \((X,\varphi)\) has a unique minimal closed invariant subset which is infinite, and exactly \(2n\) orbits of isolated points. This is routine; we sketch a construction below. Consider the subshift of reduced words \(X_{\mathrm{red}}\subset A^{\mathbb{Z}}\) from Example 4.5.
Choose first an infinite minimal subshift \(M\subset X_{\mathrm{red}}\) which is invariant under \(\sigma\). Choose also a sequence \(x\in M\). Using that \(M\) is infinite and minimal, we can write \(M\) as the intersection of a strictly decreasing sequence \(M_{0}\supsetneq M_{1}\supsetneq M_{2}\supsetneq\cdots\) of irreducible subshifts of finite type, which can be assumed to be \(\sigma\)-invariant.
By properties of subshifts of finite type, we can find for each \(n\) a sequence \(x_{n}\in M_{n}\) which has some infinite prefix and suffix which coincide with some prefix and suffix of \(x\), respectively, and such that \(x_{n}\notin M_{n+1}\) (for this it is enough that between the desired prefix and suffix there is some finite word which is allowed in \(M_{n}\) but not in \(M_{n+1}\)). Note that every accumulation point of the \(\varphi\)-orbit of \(x_{n}\) and of \(\sigma(x_{n})\) belongs to \(M\). Then for \(n\geq 0\) let
\[X_{n}=M\cup\{\varphi^{i}(x_{j}),\varphi^{i}(\sigma(x_{j}))\colon i\in\mathbb{Z},j\leq n\}\]
and \(X_{\infty}=\bigcup X_{n}\). These satisfy the desired conclusion, respectively for finite \(n\) and for \(n=\infty\).
_Remark 6.8_.: The restriction that \(n\) be even is necessary in Corollary 6.7 since here we do not allow semi-conjugacies to reverse the orientation, so that rigid semi-conjugacy classes always come in pairs.
### Cantor-Bendixson rank
As another application, let us consider a notion of Cantor-Bendixson rank for the space of semi-conjugacy classes in \(\mathsf{Rep}_{\mathrm{irr}}(G;\mathbb{R})\). To this end, given a finitely generated group \(G\), define a decreasing transfinite sequence of subspaces \(\mathsf{Rep}_{\mathrm{irr}\alpha}(G;\mathbb{R})\) invariant under semi-conjugacy, as follows:
* if \(\alpha=\beta+1\) is a successor, we define \(\mathsf{Rep}_{\mathrm{irr}\alpha}(G;\mathbb{R})\) by removing from \(\mathsf{Rep}_{\mathrm{irr}\beta}(G;\mathbb{R})\) all semi-conjugacy classes that are open in \(\mathsf{Rep}_{\mathrm{irr}\beta}(G;\mathbb{R})\);
* if \(\alpha\) is a limit ordinal we define \(\mathsf{Rep}_{\mathrm{irr}\alpha}(G;\mathbb{R})=\bigcap_{\beta<\alpha}\mathsf{Rep}_{\mathrm{irr}\beta}(G;\mathbb{R})\).
If \((\mathcal{D},\Psi)\) is a Deroin space for \(G\), then a similar process yields a decreasing sequence of closed subsets \(\mathcal{D}_{\alpha}\), obtained by successively removing open orbits, with \(\mathsf{Rep}_{\mathrm{irr}\alpha}(G;\mathbb{R})=r_{\mathcal{D}}^{-1}(\mathcal{D}_{\alpha})\). These sequences must stabilize at some countable ordinal, see [11, Theorem 6.9].
**Definition 6.9**.: Let \(G\) be a finitely generated group. The smallest ordinal \(\alpha\) such that \(\mathsf{Rep}_{\mathrm{irr}\alpha}(G;\mathbb{R})=\mathsf{Rep}_{\mathrm{irr}\alpha+1 }(G;\mathbb{R})\) (equivalently \(\mathcal{D}_{\alpha}=\mathcal{D}_{\alpha+1}\)) is called the _semi-conjugacy CB-rank_ of \(\mathsf{Rep}_{\mathrm{irr}}(G;\mathbb{R})\).
Note that if \(\alpha\) is the semi-conjugacy CB-rank, then \(\mathsf{Rep}_{\mathrm{irr}\alpha}(G;\mathbb{R})=\varnothing\) if and only if \(\mathsf{Rep}_{\mathrm{irr}}(G;\mathbb{R})\) contains only countably many semi-conjugacy classes.
Recall that the usual Cantor-Bendixson rank (CB-rank) of a compact space is defined in a similar way, but by successively removing its isolated points. Then for the groups \(G=\mathscr{T}(\varphi,\sigma)\) we have the following.
**Corollary 6.10**.: _The semi-conjugacy CB-rank of \(\mathsf{Rep}_{\mathrm{irr}}(G;\mathbb{R})\) is equal to the usual CB-rank of \(X\)._
Proof.: Open \(\Phi\)-orbits of the suspension flow \((Y,\Phi)\) correspond exactly to \(\varphi\)-orbits of isolated points of \(X\), so the open orbits of \((Y,\Phi)\) are removed according to the usual Cantor-Bendixson derivative process in \(X\).
_Remark 6.11_.: We could not locate a reference stating which ordinals are realizable as the CB-rank of subshifts (although there are various such results for special classes of subshifts). However Ville Salo kindly communicated to us a proof that the CB-ranks of countable subshifts are exactly the finite ordinals and the countable ordinals of the form \(\beta+2\). The following slightly simpler version of his construction allows one to realize all countable ordinals of the form \(\beta+3\). On an alphabet of the form \(A=B\sqcup\{*\}\), let \(x\) be the constant sequence of \(*\). Choose a countable closed subset \(C\subset B^{\mathbb{N}}\), and let \((C_{\alpha})\) be its sequence of CB-derivatives. Denote by \(\alpha_{0}\) the CB-rank of \(C\) and notice that \(C_{\alpha_{0}}=\varnothing\) since \(C\) is countable. We proceed to show how to construct a subshift with rank \(\alpha_{0}+2\). The set \(C\) can be chosen so that \(\alpha_{0}=\beta+1\) for any given countable ordinal \(\beta\) (for instance choosing \(C\) to be homeomorphic to the ordinal \(\beta+1\) with the order topology); this will show that any countable ordinal of the form \(\beta+3\) can be the CB-rank of a subshift.
For each \(c=(b_{n})\in C\), let \(x_{c}\) be the sequence in \(A^{\mathbb{Z}}\) obtained by replacing the letter at position \(2^{n}\) of \(x\) with \(b_{n}\) for \(n\geq 0\). Write \(X_{1}=\{\sigma^{n}(x_{c}):c\in C,\ n\in\mathbb{Z}\}\) and let \(X\subset A^{\mathbb{Z}}\) be the subshift given by the closure of \(\{x\}\cup X_{1}\). It is not difficult to see, by construction of \(X_{1}\), that \(X\) is contained in the union \(\{x\}\sqcup X_{1}\sqcup X_{2}\) where \(X_{2}\) consists of those sequences with at most one letter in \(B\). Notice also that the set \(\{x_{c}:c\in C\}\subset X_{1}\) is a clopen subset of \(X\) since it can be defined by looking at the cylinder associated to positions \(1\), \(2\), and \(4\). Thus, it follows that the points \(x_{c}\) are removed from \(X\) according to the Cantor-Bendixson derivative process in \(C\) and therefore \(X_{\alpha_{0}}\subset\{x\}\cup X_{2}\). Moreover, it is straightforward to see that \(X_{\alpha_{0}}\) meets both \(\{x\}\) and \(X_{2}\). Since the points of \(X_{2}\) are isolated points of \(X_{\alpha_{0}}\) that accumulate on \(\{x\}\), it follows that \(X_{\alpha_{0}+1}=\{x\}\). Therefore the CB-rank of \(X\) is \(\alpha_{0}+2\) as desired.
Notice that \(X\) can be turned into a subshift satisfying Assumption 4.2 through the doubling construction in Example 4.3.
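The clopen-cylinder step in the construction rests on an arithmetic fact: in \(\sigma^{k}(x_{c})\) the letters from \(B\) occupy the positions \(2^{n}-k\), and the positions \(1,2,4\) are simultaneously of this form only for \(k=0\), since \(1,2\) is the only pair of consecutive powers of \(2\) at distance \(1\). A sketch checking this over a finite range of shifts (the range is an arbitrary truncation):

```python
def is_power_of_two(m):
    # powers of two are exactly the positive integers with a single set bit
    return m >= 1 and (m & (m - 1)) == 0

# In sigma^k(x_c) the letters from B sit at positions 2^n - k (n >= 0), so
# the cylinder "letters of B at positions 1, 2 and 4" is met only if 1+k,
# 2+k and 4+k are all powers of two.  That forces k = 0.
shifts = [k for k in range(-10000, 10001)
          if all(is_power_of_two(p + k) for p in (1, 2, 4))]
assert shifts == [0]
```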
### Connectedness properties
Recall from the introduction that we say that a representation \(\rho\in\mathsf{Rep}_{\mathrm{irr}}(G;\mathbb{R})\) is _path-rigid_ if its path component in \(\mathsf{Rep}_{\mathrm{irr}}(G;\mathbb{R})\) coincides with its semi-conjugacy class.
**Corollary 6.12**.: _The following properties hold._
1. _All representations_ \(\rho\in\mathsf{Rep}_{\mathrm{irr}}(G;\mathbb{R})\) _are path-rigid._
2. _The space_ \(\mathsf{Rep}_{\mathrm{irr}}(G;\mathbb{R})\) _is connected if and only if the only_ \(\varphi\)_-invariant clopen subsets of_ \(X\) _are_ \(\varnothing\) _and_ \(X\)_._
3. _If_ \(X\) _has no isolated point, then_ \(\mathsf{Rep}_{\mathrm{irr}}(G;\mathbb{R})\) _is nowhere locally connected._
Proof.: (1) follows from Proposition 3.13. To show (2) note that every partition of \(Y\) into two open subsets must consist of \(\Phi\)-invariant subsets, and thus corresponds to a partition of \(X\) into \(\varphi\)-invariant clopen sets. To show (3), suppose that \(X\) has no isolated point, fix \(\rho\in\mathsf{Rep}_{\mathrm{irr}}(G;\mathbb{R})\), and let \(\mathcal{U}\) be a neighbourhood of \(\rho\) whose image under \(r_{Y}\) is contained in a chart \(Y_{C,I}\), with \(C\subset X\) clopen. Then Proposition 6.1 implies that the composition of \(r_{Y}|_{\mathcal{U}}\) with the projection \(Y_{C,I}\to C\) is open. Since \(C\) is totally disconnected and has no isolated point, it follows that no open subset of \(\mathcal{U}\) is connected.
## Appendix A Properties of the groups \(\mathscr{T}(\varphi,\sigma)\)
Here we discuss some generating properties of the group \(\mathscr{T}(\varphi,\sigma)\). These results and their proofs are largely analogous to the results on the groups \(\mathsf{T}(\varphi)\) from [21]. The main technical difference does not come from considering type-\(\mathfrak{D}\) maps, but rather from the fact that we have to deal with charts in the dihedral suspension; we detail the proofs only where the differences are relevant. Throughout the section, \((X,\varphi,\sigma)\) is as in Assumption 4.2, and we set \(G=\mathscr{T}(\varphi,\sigma)\).
### More about charts
Say that a chart \(Z_{C,I}\) is _extendable_ if it is contained in a chart \(Z_{C,J}\), with \(\overline{I}\subset J\).
**Lemma A.1**.: _Let \(x\in X\). If \(x\) is periodic, of minimal period \(n\), and \(I\) is an open interval such that \(|I|<n\), then there exists a clopen neighbourhood \(C\) of \(x\) such that \(Z_{C,I}\) is an extendable chart. When \(x\) is not periodic, the same conclusion holds for every bounded interval \(I\subset\mathbb{R}\)._
Proof.: The proof is similar to that of [13, Lemma 4.10]. We consider the case where \(x\) is periodic of minimal period \(n\), the other case can be treated similarly. Take an interval \(I\subset\mathbb{R}\) satisfying \(|I|<n\). Consider the collection
\[\Gamma=\{\gamma\in D_{\infty}\smallsetminus\{\mathsf{id}\}\colon\gamma(I)\cap I \neq\varnothing\},\]
which consists of finitely many elements. It \(\gamma\in\Gamma\) is a translation, then it must be by some integer \(k\), with \(|k|\leq|I|<n\). As \(n\) is chosen as the minimal period of \(x\), we deduce that for any translation \(\gamma\in\Gamma\), we have \(\gamma(x)\neq x\). On the other hand, if \(\gamma\in\Gamma\) is a reflection, then Assumption 4.2 guarantees that \(\gamma(x)\neq x\) (this is one of the equivalent conditions in Lemma 4.1). By continuity, we can find a clopen neighbourhood \(C\subset X\) of \(x\) such that \(\gamma(C)\cap C=\varnothing\) for any \(\gamma\in\Gamma\). With this choice, we have that \(\gamma(C\times I)\cap(C\times I)=\varnothing\) for all \(\gamma\in D_{\infty}\smallsetminus\{\mathsf{id}\}\). Therefore \(\pi_{Z}(C\times I)=Z_{C,I}\) is a chart. Notice that if \(J\supset\overline{I}\) is another open interval such that \(|J|<n\), the same argument shows that \(Z_{C,J}\) is a chart, which implies that \(Z_{C,I}\) is an extendable chart.
If \(Z_{C,I}\) is an extendable chart with \(I=(a,b)\), we have that \(\partial Z_{C,I}\) is the disjoint union of the subsets \(\pi_{Z}(C\times\{a\})\) and \(\pi_{Z}(C\times\{b\})\). We refer to these as the _sides_ of \(Z_{C,I}\).
**Lemma A.2** (Chart decomposition).: _The space \(Z\) can be written as a finite union \(Z=\bigcup_{i=1}^{k}\overline{Z_{C_{i},I_{i}}}\), where \(Z_{C_{i},I_{i}}\) are extendable charts, such that for every distinct \(i,j\in\{1,\ldots,k\}\), the intersection \(\overline{Z_{C_{i},I_{i}}}\cap\overline{Z_{C_{j},I_{j}}}\) is contained in a single side of \(Z_{C_{i},I_{i}}\) and of \(Z_{C_{j},I_{j}}\)._
Proof.: Consider the dyadic intervals \(I_{j}=(j/4,(j+1)/4)\) with \(j\in\{0,1,2,3\}\). By Lemma A.1, for every \(x\in X\), there exists a clopen subset \(C\subset X\) such that \(Z_{C,I_{j}}\) is an extendable chart for \(j\in\{0,1,2,3\}\). Thus, there exists a clopen partition \(X=C_{1}\sqcup\cdots\sqcup C_{m}\) such that \(Z_{i,j}:=Z_{C_{i},I_{j}}\) is an extendable chart for any \(i\in\{1,\ldots,m\}\) and \(j\in\{0,1,2,3\}\). Finally, since the action of \(D_{\infty}\) on \(\mathbb{R}\) is isometric and the intervals \(I_{j}\) have length \(1/4\), smaller than the length \(1/2\) of a fundamental interval for this action, we conclude that whenever \((i,j)\neq(i^{\prime},j^{\prime})\), the intersection \(\overline{Z_{i,j}}\cap\overline{Z_{i^{\prime},j^{\prime}}}\) is contained in a single side of \(Z_{i,j}\) and of \(Z_{i^{\prime},j^{\prime}}\).
### Fragmentation property
The goal of this subsection is to prove the fragmentation lemma (Proposition 5.11) for \(\mathscr{T}(\varphi,\sigma)\). We start with the fragmentation lemma for the group \(\mathscr{F}\).
**Lemma A.3** (Fragmentation lemma for \(\mathscr{F}\)).: _Let \(I,I_{1},\ldots,I_{n}\) be open dyadic intervals so that \(I=\bigcup_{i=1}^{n}I_{i}\). Then \(\mathscr{F}_{I}=\langle\bigcup_{i=1}^{n}\mathscr{F}_{I_{i}}\rangle\)._
Proof.: Let us detail the proof in the case \(n=2\); the general case follows by induction. Assume, without loss of generality, that \(I_{1}=(a,c)\) and \(I_{2}=(b,d)\), with \(a<b<c<d\), and take a dyadic rational \(x\in(b,c)\). As in the proof of Lemma 4.9, write
(A.1) \[\mathscr{F}_{I}=\langle F^{\prime}_{I},S_{x},S_{a},S_{d}\rangle,\]
where \(S_{x}\subset\mathscr{F}_{(b,c)}\), \(S_{a}\subset\mathscr{F}_{(a,c)}\), and \(S_{d}\subset\mathscr{F}_{(b,d)}\) are finite subsets generating the groups of germs \(\mathcal{D}_{x}\), \(\mathcal{D}_{a}^{+}\), and \(\mathcal{D}_{d}^{-}\), respectively. Analogously, we can write \(\mathscr{F}_{I_{1}}=\langle F^{\prime}_{I_{1}},S_{x},S_{a},S_{c}\rangle\) and \(\mathscr{F}_{I_{2}}=\langle F^{\prime}_{I_{2}},S_{x},S_{b},S_{d}\rangle\), where \(S_{b},S_{c}\subset\mathscr{F}_{(b,c)}\) are finite subsets generating the germs \(\mathcal{D}_{b}^{+}\) and \(\mathcal{D}_{c}^{-}\), respectively. Thus, we have that
\[\langle\mathscr{F}_{I_{1}},\mathscr{F}_{I_{2}}\rangle=\langle F^{\prime}_{I_{1 }},F^{\prime}_{I_{2}},S_{x},S_{a},S_{b},S_{c},S_{d}\rangle.\]
Since \(F^{\prime}_{I}=\langle F^{\prime}_{I_{1}},F^{\prime}_{I_{2}}\rangle\), the previous line and (A.1) imply that \(\mathscr{F}_{I}=\langle\mathscr{F}_{I_{1}},\mathscr{F}_{I_{2}}\rangle\), as desired.
We next need a result on homeomorphisms of \(Z\), whose proof is a refinement of the argument given for Lemma 4.6.
**Lemma A.4**.: _Let \(h\in\mathsf{H}_{0}(\varphi,\sigma)\). Then, there exists an isotopy \((h_{s})_{s\in[0,1]}\) between \(h\) and the identity which satisfies \(\overline{\mathsf{Supp}_{Z}(h_{s})}=\overline{\mathsf{Supp}_{Z}(h)}\) for every \(s\in[0,1)\)._
Proof.: For the isotopy, we will consider the one defined in the proof of Lemma 4.6. Explicitly, we take a lift \(f\in\mathsf{H}_{0}(\varphi)\) of \(h\), consider the isotopy \((f_{s})_{s\in[0,1]}\subset\mathsf{H}_{0}(\varphi)\) between \(f\) and the identity that satisfies \(\tau_{f_{s}}=(1-s)\tau_{f}\), and then project it to an isotopy \((h_{s})_{s\in[0,1]}\subset\mathsf{H}_{0}(\varphi,\sigma)\) between \(h\) and the identity. In order to prove the equality of supports along the isotopy, we need the following claim.
**Claim**.: _Given \(g\in\mathsf{H}_{0}(\varphi)\), write \(T_{g}=\{y\in Y:\tau_{g}(y)\neq 0\}\). Then \(\overline{T_{g}}=\overline{\mathsf{Supp}_{Y}(g)}\)._
Proof of claim.: Notice that if the leaf through \(y\in T_{g}\) is non-closed, then \(y\in\mathsf{Supp}_{Y}(g)\). On the other hand, Assumption 4.2 implies that the points of \(T_{g}\) with non-closed leaves are dense. Since \(T_{g}\) is open, this implies that \(\overline{T_{g}}\subseteq\overline{\mathsf{Supp}_{Y}(g)}\). The reverse inclusion is trivial.
Take now \(h\in\mathsf{H}_{0}(\varphi,\sigma)\), \(f\in\mathsf{H}_{0}(\varphi)\), and \((f_{s})_{s\in[0,1]}\) as above. By the choice of the isotopy, we have that \(T_{f_{s}}=T_{f}\) for every \(s\in[0,1)\). Thus, the claim implies that \(\overline{\mathsf{Supp}_{Y}(f_{s})}=\overline{\mathsf{Supp}_{Y}(f)}\) for every \(s\in[0,1)\). This implies \(\overline{\mathsf{Supp}_{Z}(h_{s})}=\overline{\mathsf{Supp}_{Z}(h)}\) for every \(s\in[0,1)\), as desired.
We can now prove the fragmentation lemma for \(\mathscr{T}(\varphi,\sigma)\).
Proof of Proposition 5.11.: The proof is largely analogous to the discussion in [21, Appendix A]. We will use that \(\mathsf{H}_{0}(\varphi)\) is a topological group, whose topology can be defined by looking at displacement of elements along leaves (more precisely, by looking at the uniform norm on the translation cocycles). Moreover, using Lemma 4.6, we can identify \(\mathsf{H}_{0}(\varphi,\sigma)\) with a closed subgroup of \(\mathsf{H}_{0}(\varphi)\).
We first construct a family of charts subordinate to \(\mathcal{U}\) that is well suited for our purposes. For this, choose a decomposition \(Z=\bigcup_{i=1}^{k}\overline{Z_{C_{i},I_{i}}}\) as in Lemma A.2. After subdividing these into smaller charts if needed, we can suppose that each chart \(Z_{C_{i},I_{i}}\) is contained in some element of \(\mathcal{U}\). Slightly enlarge each chart \(Z_{C_{i},I_{i}}\) to a dyadic chart \(Z_{C_{i},J_{i}}\), where \(J_{i}\) is an \(\varepsilon\)-neighbourhood of \(I_{i}\) satisfying:
1. \(Z_{C_{i},J_{i}}\) is still contained in an element of \(\mathcal{U}\) for every \(i\in\mathcal{I}\);
2. whenever \(i,j\in\mathcal{I}\), with \(i\neq j\), are such that \(\overline{Z_{C_{i},J_{i}}}\cap\overline{Z_{C_{j},J_{j}}}\neq\varnothing\), then \(Z_{C_{i},J_{i}}\cap Z_{C_{j},J_{j}}\) is a chart of the form \(Z_{D_{ij},L_{ij}}\), where \(L_{ij}\) is an interval of the form \(L_{ij}=(t_{ij}-\varepsilon,t_{ij}+\varepsilon)\), so that \(\overline{Z_{C_{i},I_{i}}}\cap\overline{Z_{C_{j},I_{j}}}=\pi_{Z}(D_{ij}\times\{t_{ij}\})\).
Set
\[\mathcal{I}=\left\{i\in\{1,\ldots,k\}:\overline{Z_{C_{i},I_{i}}}\cap\overline{ \mathsf{Supp}_{Z}(g)}\neq\varnothing\right\},\]
and notice that \(\{Z_{C_{i},J_{i}}\}_{i\in\mathcal{I}}\) is an open cover of \(\overline{\mathsf{Supp}_{Z}(g)}\) subordinate to \(\mathcal{U}\), and the charts \(\{Z_{D_{ij},L_{ij}}\}_{i,j\in\mathcal{I},i\neq j}\) (which are also subordinate to \(\mathcal{U}\)) are pairwise disjoint and cover all sides of the charts \(\{Z_{C_{i},I_{i}}\}\). To get the statement, it is therefore enough to show that the subgroup
\[G_{g}:=\left\{h\in G:\overline{\mathsf{Supp}_{Z}(h)}\subseteq\overline{ \mathsf{Supp}_{Z}(g)}\right\}\]
is contained in the subgroup
\[K:=\left\langle\bigcup_{i\in\mathcal{I}}G_{C_{i},J_{i}}\right\rangle.\]
For this, we will prove that the elements in \(G_{g}\) with small displacement belong to \(K\), and then that every element in \(G_{g}\) can be written as a product of elements in \(G_{g}\) with small displacement.
For the first step, take \(h\in G_{g}\) which displaces any point \(z\in Z\) of at most \(\varepsilon/2\) along its leaf. We have \(h(\pi_{Z}(D_{ij}\times\{t_{ij}\}))\subset Z_{D_{ij},L_{ij}}\) for any distinct \(i,j\in\mathcal{I}\). Since the action of Thompson's group on dyadic rationals is transitive, we can find an element \(k\) which is a product of elements from the commuting subgroups \(\{G_{D_{ij},L_{ij}}\}_{i,j\in\mathcal{I},i\neq j}\) (in particular, \(k\in K\)), such that \(kh\) fixes
\[\bigcup_{i,j\in\mathcal{I},i\neq j}\pi_{Z}(D_{ij}\times\{t_{ij}\})=\bigcup_{i \in\mathcal{I}}\partial\overline{Z_{C_{i},I_{i}}}.\]
Hence \(kh\) preserves each chart \(Z_{C_{i},I_{i}}\), and therefore it can be decomposed as a product of elements from the commuting subgroups \(\{G_{C_{i},I_{i}}\}_{i\in\mathcal{I}}\). In particular, we have that \(kh\in K\), and therefore \(h\in K\).
For the second step, consider an arbitrary element \(h\in G_{g}\). By Lemma A.4, there exists an isotopy \((h_{s})_{s\in[0,1]}\) between \(h\) and the identity satisfying \(\overline{\mathsf{Supp}_{Z}(h_{s})}=\overline{\mathsf{Supp}_{Z}(h)}\) for every \(s\in[0,1)\). Choose an increasing sequence \(0=s_{0}<s_{1}<\cdots<s_{N-1}<s_{N}=1\), so that for every \(i\in\{0,\ldots,N-1\}\), the element \(f_{i}=h_{s_{i}}h_{s_{i+1}}^{-1}\) displaces any point of at most \(\varepsilon/2\) along its leaf. Note that we have \(h=f_{0}\cdots f_{N-1}\). Since \(\overline{\mathsf{Supp}_{Z}(f_{i})}\subseteq\overline{\mathsf{Supp}_{Z}(h)}\), we can reason as in the first step to deduce that each \(f_{i}\) can be written as a product of elements in \(\{H_{C_{i},I_{i}}\}_{i\in\mathcal{I}}\), where \(H_{C_{i},I_{i}}\subset\mathsf{H}_{0}(\varphi,\sigma)\) is the subgroup of homeomorphisms supported on \(Z_{C_{i},I_{i}}\) of the form \((x,t)\mapsto(x,f_{x}(t))\). On the other hand, the classical fact that Thompson's group \(\mathscr{F}_{I}\) is dense in \(\mathsf{Homeo}_{0}(I)\) implies that \(G_{C_{i},I_{i}}\) is dense in \(H_{C_{i},I_{i}}\). Thus, for any \(i\in\{0,\ldots,N-1\}\), we can approximate each \(f_{i}\) (and hence \(h\)) by an element of \(K\). This allows us to find an element \(k\in K\) such that \(hk^{-1}\) displaces points by at most \(\varepsilon/2\) along leaves. Hence, using the first step again, we get \(hk^{-1}\in K\), and so \(h\in K\).
### Finite generation
Here we prove Theorem 4.12, which says that when \((X,\varphi,\sigma)\) is a reversible subshift, the group \(G\) is finitely generated.
Given an open interval \(I\), we denote by \(\mathscr{F}_{I}^{c}\) the subgroup of \(\mathscr{F}_{I}\) whose elements have their support compactly contained in \(I\). Recalling the notation from SS5.2, given a chart \(Z_{C,I}\cong C\times I\), we will write \(\mathscr{F}_{C,I}\) for the subgroup \((\mathscr{F}_{I})_{C}\) of \(G_{C,I}\) of elements which act trivially on the factor \(C\) and as a fixed element of \(\mathscr{F}_{I}\) on \(I\). Similarly, we write \(\mathscr{F}_{C,I}^{c}\) for the subgroup \((\mathscr{F}_{I}^{c})_{C}\).
The following lemma is the analogue of [21, Lemma 4.7].
**Lemma A.5** (Intersection lemma).: _Consider charts \(Z_{C,I}\) and \(Z_{D,J}\), where \((C,I)\) and \((D,J)\) are such that \(C\cap D\neq\varnothing\) and \(I\cap J\neq\varnothing\). Then, the group \(\langle\mathscr{F}^{c}_{C,I},\mathscr{F}^{c}_{D,J}\rangle\) contains the subgroups \(\mathscr{F}^{c}_{C\cap D,I}\) and \(\mathscr{F}^{c}_{C\smallsetminus D,I}\)._
Proof.: For every interval \(L\subset I\cap J\), the charts \(Z_{C,L}\) and \(Z_{D,L}\) are both well defined. Choose \(L\) that avoids the lattice of half integers \(\frac{1}{2}\mathbb{Z}\). Then every non-trivial element \(\gamma\in D_{\infty}\) must map \(L\) disjointly from itself. We deduce that \(\gamma(C\times L)\cap(D\times L)=\varnothing\) for any such \(\gamma\). It follows that \(Z_{C,L}\cap Z_{D,L}=Z_{C\cap D,L}\). Taking commutators as in the proof of [21, Lemma 4.7], we find that \(\mathscr{F}^{c}_{C\cap D,L}\leq\langle\mathscr{F}^{c}_{C,I},\mathscr{F}^{c}_{ D,J}\rangle\). Now, conjugating \(\mathscr{F}^{c}_{C\cap D,L}\) by an element of \(\mathscr{F}^{c}_{C,I}\), we conclude as in [21, Lemma 4.7].
Proof of Theorem 4.12.: Since \(\varphi:X\to X\) is a subshift, \(X\) admits a clopen partition \(C_{1}\sqcup\cdots\sqcup C_{k}\) whose \(\varphi\)-translates form a prebasis of the topology. Fix dyadic intervals \(I_{-1},I_{0},I_{1}\subset\mathbb{R}\) of length \(|I_{j}|<1\) containing respectively \(-1,0,1\), and whose union covers an open neighbourhood of \([-1,1]\). Using Lemma A.1, for every \(i\in\{1,\ldots,k\}\), and \(x\in C_{i}\), we can find a clopen subset \(D\subset C_{i}\) containing \(x\) such that the charts \(Z_{D,I_{\omega}}\) are admissible for \(\omega\in\{-1,0,1\}\). Thus, upon refining the partition \(\{C_{i}\}\), we can assume that all charts \(Z_{C_{i},I_{\omega}}\) are admissible. Let \(H=\left\langle\bigcup_{i=1}^{k}\bigcup_{\omega\in\{-1,0,1\}}\mathscr{F}_{C_{i },I_{\omega}}\right\rangle.\) By Lemma 4.9, \(H\) is finitely generated. We want to prove that \(H=\mathscr{T}(\varphi,\sigma)\).
**Claim**.: _Fix integers \(m\leq 0\leq n\), and a sequence \((i_{j})_{j=m}^{n}\subset\{1,\ldots,k\}\) such that \(D:=\bigcap_{j=m}^{n}\varphi^{j}(C_{i_{j}})\) is non-empty. Let \(J\) be an interval contained in one of the intervals \(I_{\omega}\), for \(\omega\in\{-1,0,1\}\). Then \(\mathscr{F}^{c}_{D,J}\leq H\)._
Proof of claim.: Fix a dyadic interval \(L\subset I_{0}\) containing \(0\), and small enough so that \(L-1\subset I_{-1}\) and \(L+1\subset I_{1}\). We begin by observing that if \(D\subset C_{i}\) for some \(i\in\{1,\ldots,k\}\), and if \(\mathscr{F}^{c}_{D,L}\leq H\), then actually we have \(\mathscr{F}^{c}_{D,J}\leq H\) for every dyadic interval \(J\) as in the statement of the claim. Indeed, by conjugating \(\mathscr{F}^{c}_{D,L}\leq H\) by elements of \(\mathscr{F}^{c}_{C_{i},I_{0}}\), we obtain the conclusion for \(J\subset I_{0}\). Choose \(J\subset I_{0}\cap I_{1}\), and repeat the reasoning to obtain the conclusion for \(J\subset I_{1}\), and argue in the same way for \(J\subset I_{-1}\).
We now proceed by induction on \(m\leq 0\leq n\). If \(n=m=0\), then \(D=C_{i_{0}}\) and \(\mathscr{F}^{c}_{D,L}\leq\mathscr{F}^{c}_{C_{i_{0}},I_{0}}\leq H\). Assume that \(n>0\). Set \(D^{\prime}=\bigcap_{j=m}^{n-1}\varphi^{j}(C_{i_{j}})\). By induction we have \(\mathscr{F}^{c}_{D^{\prime},L}\leq H\) and thus, since \(L-1\subset I_{-1}\), we also have \(\mathscr{F}^{c}_{D^{\prime},L-1}\leq H\). Now note that \(\mathscr{F}^{c}_{D^{\prime},L-1}=\mathscr{F}^{c}_{\varphi^{-1}(D^{\prime}),L}\) by the chart identification rules. Since \(\varphi^{-1}(D^{\prime})\subset C_{i_{1}}\), we can iterate this reasoning, and we find that \(\mathscr{F}^{c}_{\varphi^{-j}(D^{\prime}),L}\leq H\) for all \(j\in\{1,\ldots,n\}\). In particular this holds true for \(j=n\), and since \(\varphi^{-n}(D^{\prime})\cap C_{i_{n}}=\varphi^{-n}(D)\), the intersection lemma (Lemma A.5) implies that \(\mathscr{F}^{c}_{\varphi^{-n}(D),L}\leq H\). Now applying again the observation at the beginning of the paragraph, we have that \(\mathscr{F}^{c}_{\varphi^{-n+1}(D),L}=\mathscr{F}^{c}_{\varphi^{-n}(D),L+1}\leq H\), and iterating this reasoning \(n\) times we find \(\mathscr{F}^{c}_{D,L}\leq H\). The inductive step on \(m\) is similar.
Now, let \(Z_{C,J}\) be an arbitrary chart, and let us show that \(\mathscr{F}_{C,J}\leq H\). Since \(\mathscr{F}_{C,J}\) is generated by its subgroups \(\mathscr{F}^{c}_{C,J^{\prime}}\), where \(J^{\prime}\subset J\) is arbitrarily small, it is enough to show this for the subgroups \(\mathscr{F}^{c}_{C,J}\) such that \(|J|\leq\varepsilon\) for some given \(\varepsilon\). Moreover, using that \(\mathscr{F}^{c}_{C,J}=\mathscr{F}^{c}_{\varphi^{n}(C),J-n}\), we can assume that \(J\) intersects \([0,1]\). If \(\varepsilon\) is sufficiently small, this implies that \(J\) is entirely contained in \(I_{0}\) or \(I_{1}\). Since the subsets \(C_{i}\) form a generating partition for \(\varphi\), we have a partition \(C=D_{1}\sqcup\cdots\sqcup D_{k}\), where each \(D_{i}\) is as in the claim. This gives the inclusion \(\mathscr{F}^{c}_{C,J}\leq\langle\bigcup_{i=1}^{k}\mathscr{F}^{c}_{D_{i},J}\rangle \leq H\), as desired.
As observed in the proof of Lemma 5.12, the subgroup \(G_{C,J}\) is identified with the group \(\mathcal{C}(C,\mathscr{F}_{J})\), and it is therefore generated by its subgroups \(\mathscr{F}_{D,J}\), with \(D\subset C\). Therefore, we have \(G_{C,J}\leq H\). The fragmentation lemma (Proposition 5.11), gives finally \(G\leq H\), as desired.
|
2303.17035 | On The Planetary Theory of Everything | Here, we present a simple solution to problems that have plagued
(extra)"galactic" astronomers and cosmologists over the last century. We show
that "galaxy" formation, dark matter, and the tension in the expansion of the
universe can all be explained by the natural behaviors of an overwhelmingly
large population of exoplanets throughout the universe. Some of these ideas
have started to be proposed in the literature, and we commend these pioneers
revolutionizing our understanding of astrophysics. Furthermore, we assert that,
since planets are obviously the ubiquitous answer to every current question
that can be posed by astronomers, planetary science must then be the basis for
all science, and therefore that all current funding for science be reserved for
(exo)planetary science - we happily welcome all astronomers and other
scientists. | J. J. Charfman Jr., M. M. M., J. Dietrich, N. T. Schragal, A. M. Avsar | 2023-03-29T21:39:01Z | http://arxiv.org/abs/2303.17035v1 | # On the Planetary Theory of Everything
###### Abstract
Here, we present a simple solution to problems that have plagued (extra)"galactic" astronomers and cosmologists over the last century. We show that "galaxy" formation, dark matter, and the tension in the expansion of the universe can all be explained by the natural behaviors of an overwhelmingly large population of exoplanets throughout the universe. Some of these ideas have started to be proposed in the literature, and we commend these pioneers revolutionizing our understanding of astrophysics. Furthermore, we assert that, since planets are obviously the ubiquitous answer to every current question that can be posed by astronomers, planetary science must then be the basis for all science, and therefore that all current funding for science be reserved for (exo)planetary science - we happily welcome all astronomers and other scientists.
Exoplanets -- History of astronomy -- Interdisciplinary astronomy
## 1 Introduction
It has come to our attention that a regrettably large number of astronomers do not believe that the existence of planets outside our solar system can be proven (Woodrum, Hviding, Amaro, and Chamberlain, 2023). These astronomers must have their head in the interstellar clouds, since they cannot see the overwhelming evidence that exoplanets are everywhere. By number, exoplanets are the most common objects of their size or larger in the universe by at least an order of magnitude. We have evidence for an average of more than one planet orbiting each star (e.g., Zhu & Dong, 2021), and this does not include the expected number of free-floating planets that did not form around host stars or have been ejected from their original systems (McDonald et al., 2021). Estimates predict upwards of one hundred thousand free-floating planets per star in the universe - planets that either possibly formed on their own in mini-collapses of very locally concentrated matter, or more likely were kicked out of their nascent protoplanetary disk by a bigger badder neighborhood bully planet (see e.g, Strigari et al., 2012).
Due to their overwhelming ubiquity, we instead propose that planets are the solution to the current enigmas of astronomy. As their presence and importance to the field has been proven time and again over the past three decades, we must now begin to expand our planetary horizons and test to see how much can be explained by a universal "planetary theory of everything." There are many questions about the universe that are quickly waved away by some explanation of "dark" whatever, simply because no one has deigned to believe the answer could just be planets.
This paper is organized as follows. In Section 2 we focus on the similarities of "galaxies" with protoplanetary clouds and disks and how therefore "galaxies" are simply planetary systems forming on a large cosmic scale. We discuss the "planets as dark matter" theory and add our own discussion in Section 3. In Section 4 we show that planets can help resolve the Hubble tension between early-universe Planck CMB measurements and recent-universe SN measurements of the Hubble constant. Finally, we conclude that planetary science now encompasses all of astronomy and provide a reasonable statement on what that means for future astronomical funding in Section 5.
## 2 "Galaxies" Are Cosmic Planetary Systems
### Morphology, and the return of the tuning fork
"Galaxy" morphology typically splits "galaxies" into two primary categories: spiral and elliptical. Early in the history of extra-"galactic" astronomy, it was theorized that these two morphological classes were the beginning and end states of an evolutionary sequence commonly referred to as the "Hubble tuning fork". In this sequence, all "galaxies" begin as diffuse and bulbous elliptical "galaxies", and over time coalesce into a spiral
structure. However, as a popular theory of these objects being "galaxies" took hold, successive theories rejected this evolutionary sequence. In contrast, this evolutionary sequence from amorphism to order precisely matches how planetary systems form and evolve. The observed morphology of so-called "galaxies" is a natural result of planetary theory.
Protoplanetary disks form via the collapse of clouds of gas. The initial cloud is amorphous, which explains the observed shape of so-called "elliptical galaxies". Figure 1 compares a sketch of a protoplanetary cloud to the observed object NGC 4150. The ovular, egg-like shape of NGC 4150 matches the shape of a protoplanetary cloud. Therefore, the best explanation is that this so-called "elliptical galaxy" is actually a protoplanetary cloud which is likely in the process of collapsing into a cosmic-scale protoplanetary disk, from which a plethora of planetary systems will be born. This explanation is strongly consistent with the Hubble tuning fork, as a Yakov-Smirnov B-S test of separating populations or models (see e.g., Charfman et al., 2002, even if their main finding has since been challenged) cannot distinguish between the evolution of protoplanetary clouds/disks and the Hubble tuning fork for "galaxy" morphology at a significant level.
The next stage of planet formation includes the conglomeration and accretion of solids into protoplanets within the protoplanetary disk. Once large enough, these protoplanets interact with the disk and create structures like spirals. Figure 2 shows the result of a simulation of a protoplanet embedded within its disk. Spirals and density waves naturally appear, and even provide observable evidence for that planet. These planet-induced structures are precisely what we see from so-called "spiral galaxies" which, as shown in Figure 2, exhibit the same exact spiral arms and density fluctuations predicted in the protoplanetary disk simulation. Therefore, these "spiral galaxies" must be large-scale disks actively forming a sizeable population of planets. This explanation is fully consistent with these spiral systems following the aforementioned elliptical systems in the evolutionary tuning fork.
### Planetary Formation Scales
Since we have now shown that "galaxies" themselves are rather protoplanetary clouds and disks, we must address the scales of planetary formation. Local planetary systems form around stars, which happens on length scales of AU and timescales of millions of years. On the other hand, cosmic-scale planetary systems that form around the central core super-star at the center of these "galaxies" have a greater length scale of kiloparsecs, and their timescale must correspondingly be larger by a similar factor.
With the distance of 1 AU when expressed in centimeters only a factor of \(\sim 2\) different from 1 Myr when expressed in seconds, we can assume that this scaling for cosmic protoplanetary systems must roughly be of a similar magnitude. Thus, a length scale of 1 kpc in cm would then roughly correspond to a timescale of 100 trillion years, which is much longer than the age of the universe. Therefore, at just 14 billion years we are only seeing a snapshot of a few early disk-forming cosmic planetary systems, along with some still-nebulous cosmic protoplanetary clouds that have yet to start their disk formation phase.
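This numerology is easy to verify. The sketch below uses standard cgs conversion factors; the "same number in different units" scaling rule is, of course, our own assumption:

```python
AU_CM = 1.496e13    # 1 astronomical unit in centimetres
MYR_S = 3.156e13    # 1 megayear in seconds
KPC_CM = 3.086e21   # 1 kiloparsec in centimetres
YR_S = 3.156e7      # 1 year in seconds

# 1 AU expressed in cm and 1 Myr expressed in s differ by only a factor ~2:
factor = MYR_S / AU_CM                 # ~2.1

# Applying the same equal-numbers scaling to 1 kpc yields the cosmic
# planet-formation timescale:
cosmic_timescale_yr = KPC_CM / YR_S    # ~9.8e13 yr, i.e. ~100 trillion years
print(round(factor, 2), f"{cosmic_timescale_yr:.1e}")
```

Both the factor of \(\sim 2\) and the hundred-trillion-year cosmic timescale quoted above are recovered.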
## 3 Planets as Dark Matter
Since "galactic" astronomers don't believe in planets, they have been missing the most obvious candidate for dark matter. The invisible and almost purely gravitational effects of dark matter on "galaxies" and "galaxy clusters" can easily be explained by bunches of the aforementioned free-floating planets (Strigari et al., 2012), or even dark matter exoplanets that have been theorized to exist (Bai et al., 2023).
Dark matter is theorized to have different forms separated by the initial budget of kinetic energy/temperature, imaginatively named "cold dark matter" and "warm/hot dark matter". Cold dark matter (CDM) is expected to comprise a majority of the dark matter in the universe, forming a halo around individual "galaxies" that speeds up their differential rotation curves, as well as interacting with visible matter within the intracluster medium of "galaxy clusters" and along filaments of large-scale "galactic" structure in the universe.
### Dark Matter Deficient "Galaxies"
A recent problem that has emerged in "galaxy" formation theories is the existence of dark matter deficient "galaxies": diffuse satellite systems with little to no dark matter halo surrounding them (see e.g., van Dokkum et al., 2018). Many teams of theorists have set out to solve the problem of dark matter deficient "galaxies" by running complicated and computationally intensive cosmological simulations to explain how these "galaxies" can exist (e.g., Moreno et al., 2022). Since these theorists most likely do not believe in the existence of exoplanets, they have been missing the simple solution all along.
We argue that these "galaxies" are not dark matter deficient, but planet deficient. To prove this argument, we introduce the Charfman-Avsar relation, which is as
follows:
\[\rm{Less\ Planets}=\rm{Less\ Mass} \tag{1}\]
Other groups have tried to explain dark matter deficiencies through complex "galactic" evolution, involving tidal forces and "galactic" mergers. Our less complicated proposition would be able to explain all observations through a single mechanism, reducing computational time for cosmological simulations. Although we expect there to be pushback from the larger community, we assure the reader that this proposition is airtight and should be adopted immediately.
### MACHO Cold Dark Matter
Previous studies have looked into the presence of planetary-mass non-self-luminous objects floating in the outer reaches of "galaxies" that comprise this material that only interacts gravitationally with the rest of the visible matter in the universe. These "MAssive Compact Halo Objects" (MACHOs, e.g., Carr & Primack 1990;
Figure 1: **Left:** a protoplanetary cloud evolving to form a protoplanetary disk **Right:** an “elliptical galaxy”. The “galaxy” shows the same nebulous elliptical structure with a concentrated center and slowly radially decreasing brightness profile as the protoplanetary cloud. **Thus, the “galaxy” is simply a protoplanetary cloud before its evolution into a protoplanetary disk.**
Credit [http://burro.case.edu/Academics/Astr221/SolarSys/Formation/starform.html](http://burro.case.edu/Academics/Astr221/SolarSys/Formation/starform.html) and
[https://esahubble.org/wordbank/elliptical-galaxy/](https://esahubble.org/wordbank/elliptical-galaxy/)
Griest, 1993, and many sources afterwards). Perhaps _not_ coincidentally, the field of exoplanets and the field of MACHO-dominated dark matter evolved contemporaneously in the early 1990s, suggesting a common basic ideology.
Much additional work has since come out to "disprove" the MACHO dark matter theory, mostly in the form of microlensing surveys (e.g., Alcock et al., 2000; Tisserand et al., 2007). These surveys claim that a fully MACHO dark matter halo is inconsistent with their results, and that a fractional MACHO halo may be more likely but still requires an additional component currently unknown to astronomy. However, microlensing is inherently biased by its requirement of a dense stellar background, and even observing stars in front of the Milky Way bulge for over a decade has produced \(<200\) microlensing planets bound around their host stars (per the NASA Exoplanet Archive as of the publication of this manuscript), plus a few microlensing free-floating planets (McDonald et al., 2021). Thus, we claim (contrary to any previous evidence to counter this) that observing a small fraction of the Milky Way halo towards the Magellanic Clouds simply does not provide a dense enough stellar background to detect microlensing free-floating planets in the "galactic" halo to a large enough degree.
### Dark Matter Planets
While we have definitively proven that dark matter is just planets, we can also look into the hypothesis of different planet-sized objects made of dark matter that are not themselves planets. Bai et al. (2023) state that if dark matter exoplanets exist, they would mostly be indistinguishable from regular matter planets via the transit method unless they are large or of low opacity, neither of which is likely with current theories. However, since this requires a more complicated theory than dark matter simply being regular planets, especially since the difference is negligible for most of the parameter space, we reject that hypothesis.
Paice & J--C Watkins (2022) declare that, although we have a good concept of the planetary components of our solar system, we could still discover an additional one by methods similar to those used to currently support the existence of Planet Nine (see e.g., Brown and Batygin, 2016). Indeed, a non-self-luminous planetary body hiding inside our own solar system, undetectable by anything other than its effect on the orbits of other bodies, is very similar to what we see with the effects of dark matter across "galaxies". Therefore, with an example from an artist's rendition seen in Figure 3, we unequivocally state that dark matter is just unseen planets gravitationally affecting planetary systems on both the stellar and cosmic scale.
There is also the Warm Dark Matter (WDM) component that was previously believed to simply be the same as cold dark matter, but with a relativistic initial energy budget. If this is true, it might have enough energy to glow thermally even though it is dark optically. Lovell (2022) found that lava matches many of the same observational signatures as WDM and is therefore a strong candidate to explain WDM. Thus, rocky planets that are cold and dark on the outside yet volcanically active on the inside could be both the CDM we see everywhere as well as the source of WDM that is less prevalent across the universe.
## 4 Resolving the Hubble Tension with Planets
One of the fundamental cosmological parameters about which we know surprisingly little is the Hubble constant of the expanding universe. The original measurements by Edwin Hubble placed the value somewhere around \(500\) km/s/Mpc (Hubble, 1929, and note how he may have been ahead of his time in calling these "galaxies" outside of the Milky Way "nebulae"), and as recently as the mid-1990s the actual value was still debated to be between \(50\) and \(100\) km/s/Mpc (Bonnell et al., 1996). However, recent data from Planck measuring the CMB anisotropies gave a value of \(67.66\pm 0.42\) (Planck Collaboration et al., 2020), whereas distance ladder measurements using Cepheid variables and Type Ia SN provide a value of \(73.04\pm 1.04\) (Riess et al., 2022), which are discrepant at \(5\sigma\).
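For the record, the quoted tension can be re-derived from the published central values under the usual simplifying assumption of independent Gaussian errors (the quadrature combination below is a naive approximation; the quoted \(5\sigma\) comes from the full SH0ES analysis):

```python
from math import sqrt

h_planck, sig_planck = 67.66, 0.42   # Planck Collaboration et al. (2020), km/s/Mpc
h_sh0es, sig_sh0es = 73.04, 1.04     # Riess et al. (2022), km/s/Mpc

# Combine the two (assumed independent) uncertainties in quadrature:
tension_sigma = (h_sh0es - h_planck) / sqrt(sig_planck**2 + sig_sh0es**2)
print(f"{tension_sigma:.1f} sigma")  # ~4.8 sigma for these central values
```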
Once again, the obvious solution to this problem is planets, specifically the planet formation timescale (as referenced above in Section 2.2). The Planck measurements of the CMB anisotropies were done in such an early universe that we are sure there were absolutely no planets around at that point, so the presence of planets was not included in determining the Hubble constant, which provides the lower value. However, the distance ladder measurements come from the recent universe, which as we all know is teeming with planets. Therefore, it is obvious that the presence of planets in the recent universe has simply added on another parameter to the Friedmann equation for the universe, such that the Hubble constant has increased to its current value. We introduce this as a natural corollary of the Charfman-Avsar relation:
\[\text{Corollary 1: }\frac{\text{More Planets}}{\text{Unit Time}}\propto\frac{ \text{Higher }H_{0}}{\text{Unit Time}} \tag{2}\]
Therefore, both the Planck measurements and the distance ladder measurements can be right in their own
observation epochs! See Figure 4 for an artist's impression of this phenomenon.
Another interesting recent measurement of the Hubble constant is from Anand et al. (2022), done in the extreme local and extreme recent universe with the Moon's orbital recession from Earth. While this is a method ingenious in its simplicity with a precise result, we believe it is not as free from systematic biases and errors as they claim. The measurement is only done with one planet and one satellite object, which is hardly representative of the universe as a whole with its untold magnitude of planets, and the measurement is also only corrected for tides without taking into account any other higher-order issues that may occur. We suspect this is the reason the measurement falls below even the Planck value, even though they do include more planetary bodies than the Planck measurement does.
## 5 Summary
* Planets are ubiquitous in the universe and are the most common object known with their own self-gravity.
* "Galaxies" are really just cosmically large planetary systems evolving on timescales longer than the age of the universe.
* Dark matter actually does follow the MACHO paradigm, because it is trillions of free-floating non-luminous planets.
* The formation of planets as the numerically dominant object in the universe over its age has also caused an increase in the Hubble constant over that same age.
We recommend that "planetary science", hereafter, should just be known as "science" since we have shown that the planetary aspect is all-encompassing. And since funding for science has been shown to be extremely important and correlated with many different success metrics for universities and institutes of higher study, we assert that this funding must be used as we scientists see fit. We promise that all astronomers and other scientists of previously-branded branches of the field will be welcomed into this new and exciting re-organization of science.
Figure 3: An artist’s impression of an unseen planet in the outskirts of our solar system - Planet Nine, MACHO, dark matter, etc. **It’s all the same thing.** Image Credit: nagualdesign, Tom Rueen, and ESO
The authors would like to thank B.S. Prince at the Lunar and Planetary Laboratory for his help in making the figures and for his lucid skepticism of modern astronomy, as well as Orion and Luna the cats for interesting non-scientific discussions. This paper was written in remembrance of J.J. Charfman, who will be dearly missed even as their legacy lives on (love you Mapa). This "research" has made use of the NASA Exoplanet Archive, which is operated by the California Institute of Technology, under contract with the National Aeronautics and Space Administration under the Exoplanet Exploration Program.
|
2307.02642 | Valley-controlled transport in graphene/ WSe$_{2}$ heterostructures
under an off-resonant polarized light | We investigate the electronic dispersion and transport properties of
graphene/WSe$_{2}$ heterostructures in the presence of a proximity-induced
spin-orbit coupling $\lambda_{v}$, sublattice potential $\Delta$, and an
off-resonant circularly polarized light of frequency $\Omega$ that renormalizes
$\Delta$ to $\bar{\Delta}_{\eta p} = \Delta +\eta p \Delta_{\Omega} $ with
$\eta$ and $p$ the valley and polarization indices, respectively, and $
\Delta_{\Omega} $ the gap due to the off-resonant circularly polarized light.
Using a low-energy Hamiltonian we find that the interplay between different
perturbation terms leads to inverted spin-orbit coupled bands. At high $\Omega$
we study the band structure and dc transport using the Floquet theory and
linear response formalism, respectively. We find that the inverted band
structure transfers into the direct band one when the off-resonant light is
present. The valley-Hall conductivity behaves as an even function of the Fermi
energy in the presence and absence of this light. At $\Delta_{\Omega}$ =
$\lambda_{v}$ - $\Delta$ a transition occurs from the valley-Hall phase to the
anomalous Hall phase. In addition, the valley-Hall conductivity switches sign
when the polarization of the off-resonant light changes. The valley
polarization vanishes for $\Delta_{\Omega}$ = 0 but it is finite for
$\Delta_{\Omega}$ $\neq$ 0 and reflects the lifting of the valley degeneracy of
the energy levels, for $\Delta_{\Omega} \neq 0$, when the off-resonant light is
present. The corresponding spin polarization, present for $\Delta_{\Omega}$ =
0, increases for $\Delta_{\Omega}$ $\neq$ 0. Further, pure $K$ or $K^{\prime}$
valley polarization is generated when $\Delta_{\Omega}$ changes sign. Also, the
charge Hall conductivity is finite for $\Delta_{\Omega}\neq 0$ and changes sign
when the handedness of the light polarization changes. | M. Zubair, P. Vasilopoulos, M. Tahir | 2023-07-05T20:17:37Z | http://arxiv.org/abs/2307.02642v1 | Valley-controlled transport in graphene/ WSe\({}_{2}\) heterostructures under an off-resonant polarized light
###### Abstract
We investigate the electronic dispersion and transport properties of graphene/WSe\({}_{2}\) heterostructures in the presence of a proximity-induced spin-orbit coupling \(\lambda_{v}\), sublattice potential \(\Delta\), and an off-resonant circularly polarized light of frequency \(\Omega\) that renormalizes \(\Delta\) to \(\bar{\Delta}_{\eta p}=\Delta+\eta p\Delta_{\Omega}\) with \(\eta\) and \(p\) the valley and polarization indices, respectively, and \(\Delta_{\Omega}\) the gap due to the off-resonant circularly polarized light. Using a low-energy Hamiltonian we find that the interplay between different perturbation terms leads to inverted spin-orbit coupled bands. At high \(\Omega\) we study the band structure and dc transport using the Floquet theory and linear response formalism, respectively. We find that the inverted band structure transforms into the direct one when the off-resonant light is present. The valley-Hall conductivity behaves as an even function of the Fermi energy in the presence and absence of this light. At \(\Delta_{\Omega}=\lambda_{v}-\Delta\) a transition occurs from the valley-Hall phase to the anomalous Hall phase. In addition, the valley-Hall conductivity switches sign when the polarization of the off-resonant light changes. The valley polarization vanishes for \(\Delta_{\Omega}=0\) but is finite for \(\Delta_{\Omega}\neq 0\), reflecting the lifting of the valley degeneracy of the energy levels when the off-resonant light is present. The corresponding spin polarization, present for \(\Delta_{\Omega}=0\), increases for \(\Delta_{\Omega}\neq 0\). Further, pure \(K\) or \(K^{\prime}\) valley polarization is generated when \(\Delta_{\Omega}\) changes sign. Also, the charge Hall conductivity is finite for \(\Delta_{\Omega}\neq 0\) and changes sign when the handedness of the light polarization changes.
## I Introduction
Since its discovery graphene has attracted immense attention both theoretically and experimentally due to its peculiar electronic and optical properties [1]. However, its use in spintronics is limited by its very weak intrinsic spin-orbit coupling (SOC). The intrinsic SOC in graphene is theoretically predicted to be weak, about \(12~{}\mu\)eV [2]. A value of \(20~{}\mu\)eV is reported in a recent experiment for graphene on a SiO\({}_{2}\) substrate [3]. Considerable effort has been made to enhance the strength of SOC in graphene by external means, such as graphene hydrogenation [4; 5] or fluorination [6] as well as heavy adatom decoration [7; 8], and by bringing it into proximity with other two-dimensional materials, specifically transition metal dichalcogenides (TMDCs) [9; 10; 11]. In recent years the heterostructures of graphene and TMDCs have become more promising because the Dirac cone of graphene fits well within the band gap of the TMDCs, which leaves it intact. The giant native SOC of TMDCs is transferred to graphene via hybridization processes. Moreover, the combinations of graphene with TMDCs, such as MoS\({}_{2}\) or WSe\({}_{2}\), exhibit proximity SOC on the meV scale [12; 13; 14; 15; 16; 17; 18; 19].
Presently SOC, induced by proximity effects, is no longer limited to theoretical studies, as it has been demonstrated experimentally as well [20]. The breaking of spatial symmetry due to the substrate leads to an alteration of the Hamiltonian and spin degeneracy of graphene and opens a gap in its massless energy dispersion. In addition, it has been verified by experiments [21; 22; 23] that another type of sublattice-resolved intrinsic SOC arises, the so-called valley-Zeeman or staggered SOC with opposite sign on the \(A\) and \(B\) sublattices. Further, enhancement of the Rashba SOC and creation of staggered potentials are also unavoidable [24].
Nowadays, the optical control of functional materials has become a hot topic in condensed matter physics, creating a bridge between condensed matter physics [25] and ultrafast spectroscopy [26]. Many intriguing phenomena have been realized in optically driven quantum solids, such as light-induced superconductivity [27; 28], photo-initiated insulator-metal transitions [29; 30], light control of microscopic interactions such as the electron-phonon one [31; 32; 33], and theoretically predicted Floquet topological phases of matter [34; 35; 36; 37; 38]. These Floquet phases have stimulated much interest, but direct evidence for electron-photon Floquet dressed states is scarce to date [39; 40], in contrast with the field of artificial lattices [41; 42; 43; 44; 45; 46].
Recently, the light-induced anomalous Hall effect has been observed experimentally in monolayer graphene using an ultrafast transport technique [47] and predicted theoretically using a quantum Liouville equation with relaxation [48]. Graphene under the influence of light has also been studied in various other frameworks [34; 35; 36; 37; 49; 50; 51; 52; 53]. Transport properties, however, especially valley-dependent dc transport treated with the Floquet theory, have not been addressed sufficiently, in contrast with the large amount of research on proximitized graphene. As far as transport in the presence of an off-resonant light is concerned, we are aware only of an electron transport study in MoS\({}_{2}\) [54], of another one on graphene and the Lieb lattice [55], and of a thermal transport study in topological insulators in the absence of any SOC [56]. Here we investigate theoretically the band structure of laser-driven graphene/WSe\({}_{2}\) heterostructures using the Floquet theory in the high-frequency regime. Also, we study dc transport in such heterostructures in the framework of linear response theory. We show that the interplay between the proximity SOCs and the off-resonant light leads to a phase transition from the inverted band regime to the direct one. Our results are in good agreement with the experimental results of Ref. [47] in the limit of vanishing proximity SOCs.
In Sec. II we specify the Hamiltonian and obtain the eigenvalues and eigenfunctions of proximity-modified graphene as well as an analytical expression for the density of states (DOS). In Sec. III we derive analytical expressions for the conductivities and provide numerical results. Conclusions and a summary follow in Sec. IV.
## II Formulation
The real space tight-binding (TB) Hamiltonian of proximitized graphene is written as [57; 58; 24]
\[H = -t_{J}\sum_{\langle i,j\rangle,\alpha}c^{\dagger}_{i\alpha}c_{j \alpha}+\Delta\sum_{i\alpha}\eta_{c_{i}}c^{\dagger}_{i\alpha}c_{i\alpha} \tag{1}\] \[+\frac{i}{3\sqrt{3}}\sum_{\langle\langle i,j\rangle\rangle, \alpha\alpha^{\prime}}\lambda^{i}_{I}\nu_{ij}c^{\dagger}_{i\alpha}c_{j\alpha^ {\prime}}[\mathbf{s}_{z}]_{\alpha\alpha^{\prime}}\] \[+\frac{2i\lambda_{R}}{3}\sum_{\langle i,j\rangle,\alpha\alpha^{ \prime}}c^{\dagger}_{i\alpha}c_{j\alpha^{\prime}}[(\mathbf{s}\times\hat{\mathbf{ d}}_{ij})_{z}]_{\alpha\alpha^{\prime}}.\]
Here \(t_{J}\) is the hopping parameter, \(c^{\dagger}_{i\alpha}\) creates an electron with spin polarization \(\alpha\) at site \(i\) that belongs to sublattice \(A\) or \(B\), and \(\langle i,j\rangle\) (\(\langle\langle i,j\rangle\rangle\)) runs over the nearest (second nearest) neighbouring sites. The second term is a staggered on-site potential, which takes into account the effective energy difference experienced by atoms at the lattice sites \(A\) (\(\eta_{c_{i}}=+1\)) and \(B\) (\(\eta_{c_{i}}=-1\)), respectively. The third and fourth terms represent the proximity-induced enhancement of the spin orbit coupling (SOC) due to a weak hybridization with the heavy atoms in TMDCs. The third term is the sublattice resolved intrinsic SOC (\(\lambda^{i}_{I}\) with \(i=A,B\)) where \(\nu_{ij}=+1\), if the second nearest hopping is anticlockwise, and \(\nu_{ij}=-1\) if it is clockwise with respect to the positive \(z\) axis. The last term is the Rashba SOC parametrized by \(\lambda_{R}\). It arises because the inversion symmetry is broken when the graphene sheet is placed on top of TMDCs. Further, \(\mathbf{s}=(s_{x},s_{y},s_{z})\) is the Pauli spin matrix and \(\hat{\mathbf{d}}_{ij}\) is the unit vector connecting the sites \(i\) and \(j\) in the same sublattice.
We analyze the physics of electrons near the Fermi energy using a low-energy effective Hamiltonian derived from Eq. (1) and a Dirac theory around \(K\) and \(K^{\prime}\) points. It reads [59; 60; 61]
\[H_{s_{z}\eta} = v_{F}(\eta\sigma_{x}p_{x}+\sigma_{y}p_{y})+\Delta\sigma_{z}+ \lambda_{R}(\eta s_{y}\sigma_{x}-s_{x}\sigma_{y}) \tag{2}\] \[+\frac{1}{2}[\lambda^{A}_{I}(\sigma_{z}+\sigma_{0})+\lambda^{B}_{ I}(\sigma_{z}-\sigma_{0})]\eta s_{z}.\]
Here \(\eta=+1(-1)\) denotes the valley \(K\) (\(K^{\prime}\)), \(\Delta\) is the mass term that breaks the inversion symmetry, \(\lambda_{R}\) the Rashba SOC strength, and \(\mathbf{\sigma}=(\sigma_{x}\), \(\sigma_{y}\), \(\sigma_{z}\)) the vector of Pauli matrices corresponding to the pseudospin (i.e., the \(A-B\) sublattice); \(\sigma_{0}\) is the unit matrix in sublattice space and \(v_{F}\) (\(8.2\times 10^{5}\) m/s) denotes the Fermi velocity of the Dirac fermions. The last term arises from the breaking of the sublattice symmetry and can be split into two parts according to its dependence on the sublattice spin: (i) \(\lambda_{so}\sigma_{z}\eta s_{z}\) with \(\lambda_{so}=(\lambda^{A}_{I}+\lambda^{B}_{I})/2\), the conventional Kane-Mele (KM) type SOC, which is of the order of \(\mu\)eV in graphene/TMDC heterostructures [2; 61; 24]; (ii) \(\lambda_{v}\sigma_{0}\eta s_{z}\) with \(\lambda_{v}=(\lambda^{A}_{I}-\lambda^{B}_{I})/2\), called valley-Zeeman or staggered SOC, which has been experimentally confirmed in graphene on TMDCs [21; 22; 23; 19]; it is the only intrinsic SOC term for \(\lambda^{A}_{I}=-\lambda^{B}_{I}\). Further, Refs. [2; 24; 61] show that \(\lambda_{so}\) is negligibly small or zero. In view of that, we treat only the regime \(\lambda_{v}\gg\lambda_{so}\) and neglect \(\lambda_{so}\) altogether. As shown in Fig. 1, monolayer graphene, irradiated by off-resonant circularly polarized light, is grown on WSe\({}_{2}\), which provides a staggered potential and induces SOC in graphene. We study the changes induced by the circularly polarized light in graphene/WSe\({}_{2}\) in the presence of a perpendicular electric field \(E\). We describe the monochromatic light through a time-dependent vector potential \(\vec{A}(t)=(E_{0}/\Omega)(\cos\Omega t,p\sin\Omega t)\), with \(\Omega\) its frequency, \(E_{0}\) the amplitude of the light's electric field, and \(p=+1(-1)\) for left (right) circular polarization. 
The vector potential is periodic in time, \(A(t+T)=A(t)\) with \(T=2\pi/\Omega\). For high frequencies \(\hbar\Omega\gg t_{J}\) and low light intensities, i.e., \(\mathcal{A}^{2}\ll 1\) with the dimensionless parameter \(\mathcal{A}=ev_{F}E_{0}/\hbar\Omega^{2}\) characterizing the intensity of the light, Eq. (2) gives the Hamiltonian
\[H_{s\eta}(t) = H^{0}_{s\eta}+V(t), \tag{3}\]
with
\[H^{0}_{s_{z}\eta} = v_{F}(\eta\sigma_{x}p_{x}+\sigma_{y}p_{y})+\Delta\sigma_{z}+ \lambda_{v}\sigma_{0}\eta s_{z} \tag{4}\] \[+\lambda_{R}(\eta s_{y}\sigma_{x}-s_{x}\sigma_{y})\] \[V(t) = -(ev_{F}/\hbar)[\eta\sigma_{x}A_{x}(t)+\sigma_{y}A_{y}(t)].\]
For \(\hbar\Omega\gg t_{J}\) and \(\mathcal{A}^{2}\ll 1\), Eq. (3) can be reduced to an effective, time-independent Hamiltonian \(H^{\rm eff}_{s_{z}\eta}\) using the Floquet theory [35]. \(H^{\rm eff}_{s_{z}\eta}\) is defined through the time evolution operator over one period
\[\hat{U}=\hat{T}\exp\Big[-\frac{i}{\hbar}\int_{0}^{T}H_{s_{z}\eta}(t)dt\Big]=\exp\Big[-\frac{i}{\hbar}H^{\rm eff}_{s_{z}\eta}T\Big], \tag{5}\]

where \(\hat{T}\) is the time-ordering operator. Using perturbation theory and expanding \(\hat{U}\) in the limit of large frequency \(\Omega\), we obtain
\[H^{\rm eff}_{s_{z}\eta}=H^{0}_{s_{z}\eta}+[V_{-1},V_{1}]/\hbar\Omega+O(\Omega^{- 2}), \tag{6}\]
Figure 1: (a) Real-space graphene lattice with \(\vec{a}_{1}\) and \(\vec{a}_{2}\) the primitive lattice vectors. (b) Graphene's first Brillouin zone with the high-symmetry points \(\Gamma\), \(K\), \(K^{\prime}\), and \(M\) in reciprocal space; its primitive reciprocal-lattice vectors are \(\vec{b}_{1}\) and \(\vec{b}_{2}\). (c) Schematics of graphene epitaxially grown on a WSe\({}_{2}\) substrate and irradiated by left circularly polarized light.

where \(V_{m}=(1/T)\int_{0}^{T}e^{-im\Omega t}V(t)dt\) is the \(m\)-th Fourier harmonic of the time-periodic Hamiltonian and \([V_{-1},V_{1}]\) the commutator of \(V_{-1}\) and \(V_{1}\). Corrections to Eq. (6), to all orders in \(1/\Omega\), can be obtained by the method of Ref. [55]; here we neglect them because we treat only the case \(\hbar\Omega\gg t_{J}\). Using Eqs. (3) and (6) we obtain
\[H_{s_{z}\eta}^{\rm eff} = v_{F}[\eta\sigma_{x}p_{x}+\sigma_{y}p_{y}]+\bar{\Delta}_{\eta p}\sigma_{z}+\lambda_{v}\sigma_{0}\eta s_{z}+\lambda_{R}(\eta s_{y}\sigma_{x}-s_{x}\sigma_{y}), \tag{7}\]
where \(\bar{\Delta}_{\eta p}=\Delta+\eta p\Delta_{\Omega}\) with \(\Delta_{\Omega}=v_{F}^{2}e^{2}E_{0}^{2}/\hbar\Omega^{3}\); \(\bar{\Delta}_{\eta p}\) is the mass term renormalized by the circularly polarized light, which opens a gap \(\Delta_{\Omega}\) even in pristine graphene, i.e., for \(\Delta=0\), see Ref. [35].
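The origin of this term can be cross-checked numerically: the Fourier harmonics \(V_{\pm 1}\) of \(V(t)\) can be computed by quadrature and the leading Floquet correction \([V_{-1},V_{1}]/\hbar\Omega\) compared with \(\eta p\,\Delta_{\Omega}\sigma_{z}\). The sketch below (not part of the paper) uses natural units \(\hbar=e=v_{F}=1\) and illustrative values of \(E_{0}\) and \(\Omega\):

```python
import numpy as np

# Pauli matrices in sublattice (pseudospin) space
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0 + 0j, -1.0])

hbar = e = vF = 1.0                      # natural units (illustrative choice)
E0, Omega, eta, p = 0.3, 10.0, +1, +1    # weak field, high frequency

def V(t):
    """Light-matter coupling of Eq. (4); A(t) = (E0/Omega)(cos Omega t, p sin Omega t)."""
    Ax = (E0 / Omega) * np.cos(Omega * t)
    Ay = (E0 / Omega) * p * np.sin(Omega * t)
    return -(e * vF / hbar) * (eta * sx * Ax + sy * Ay)

# Fourier harmonics V_m = (1/T) int_0^T exp(-i m Omega t) V(t) dt by quadrature
T = 2 * np.pi / Omega
ts = np.linspace(0.0, T, 256, endpoint=False)
Vm = {m: sum(np.exp(-1j * m * Omega * t) * V(t) for t in ts) / len(ts)
      for m in (-1, +1)}

# leading Floquet correction of Eq. (6) equals the light-induced mass term
comm = (Vm[-1] @ Vm[+1] - Vm[+1] @ Vm[-1]) / (hbar * Omega)
Delta_Omega = (e * vF * E0) ** 2 / (hbar * Omega ** 3)
assert np.allclose(comm, eta * p * Delta_Omega * sz, atol=1e-12)
```

Flipping either the polarization \(p\) or the valley \(\eta\) flips the sign of the induced mass, which is the origin of the valley asymmetry exploited below.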
The diagonalization of Eq. (7) gives the dispersion
\[E_{\xi}^{\eta p}(k) = l\{G_{\eta}+2\lambda_{R}^{2}+\epsilon_{k}^{2}+2s\sqrt{\Upsilon} \}^{1/2}, \tag{8}\]
where \(\xi=\{l,s\}\) and \(G_{\eta}=\lambda_{v}^{2}+\bar{\Delta}_{\eta p}^{2}\), \(\Upsilon=\epsilon_{k}^{2}\bar{\lambda}^{2}+(\lambda_{R}^{2}-\lambda_{v}\bar{\Delta}_{\eta p})^{2}\) with \(\epsilon_{k}=\hbar v_{F}k\), \(\bar{\Delta}_{\eta p}=\Delta+\eta p\Delta_{\Omega}\), and \(\bar{\lambda}^{2}=\lambda_{R}^{2}+\lambda_{v}^{2}\). Further, \(l=+1(-1)\) denotes the conduction (valence) band and \(s=+1(-1)\) labels the spin-up (spin-down) branches; \(s\) is a band index and should not be confused with the Pauli matrix \(s_{z}\). The normalized eigenfunctions for both valleys are
\[\psi_{\xi}^{+p}(k)=\frac{N_{\xi}^{+p}}{\sqrt{S_{0}}}\begin{pmatrix}1\\ A_{\xi}^{\eta p}e^{i\phi}\\ -iB_{\xi}^{\eta p}e^{i\phi}\\ -iC_{\xi}^{\eta p}e^{2i\phi}\end{pmatrix}e^{i{\bf k}\cdot{\bf r}}, \tag{9}\]
\[\psi_{\xi}^{-p}(k)=\frac{N_{\xi}^{-p}}{\sqrt{S_{0}}}\begin{pmatrix}-A_{\xi}^{ \eta p}e^{i\phi}\\ 1\\ iC_{\xi}^{\eta p}e^{2i\phi}\\ -iB_{\xi}^{\eta p}e^{i\phi}\end{pmatrix}e^{i{\bf k}\cdot{\bf r}}, \tag{10}\]
respectively, with
\[N_{\xi}^{\eta p}=l\big{[}1+(A_{\xi}^{\eta p})^{2}+(B_{\xi}^{\eta p})^{2}+(C_{ \xi}^{\eta p})^{2}\big{]}^{-1/2}, \tag{11}\]
\(S_{0}=L_{x}L_{y}\) the area of the sample, and \(\phi=\tan^{-1}(k_{y}/k_{x})\). Further, \(A_{\xi}^{\eta p}=\{E_{\xi}^{\eta p}-\eta\alpha_{1}^{\eta}\}/\epsilon_{k}\), \(B_{\xi}^{\eta p}=2\lambda_{R}\{(E_{\xi}^{\eta p})^{2}-(\alpha_{1}^{\eta})^{2} \}/\epsilon_{k}\{(E_{\xi}^{\eta p}+\eta\alpha_{1}^{\eta})(E_{\xi}^{\eta p}- \eta\alpha_{2}^{\eta})-\epsilon_{k}^{2}\}\), and \(C_{\xi}^{\eta p}=2\lambda_{R}\{E_{\xi}^{\eta p}-\eta\alpha_{1}^{\eta}\}/\{(E_ {\xi}^{\eta p}+\eta\alpha_{1}^{\eta})(E_{\xi}^{\eta p}-\eta\alpha_{2}^{\eta}) -\epsilon_{k}^{2}\}\) with \(\alpha_{1}^{\eta}=\bar{\Delta}_{\eta p}+\lambda_{v}\), and \(\alpha_{2}^{\eta}=\bar{\Delta}_{\eta p}-\lambda_{v}\).
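The closed-form spectrum of Eq. (8) can be checked against direct numerical diagonalization of the \(4\times 4\) Hamiltonian of Eq. (7). The following sketch (NumPy, \(\hbar v_{F}=1\), the parameter values of Fig. 2 plus an arbitrary \(\Delta_{\Omega}\); not the authors' code) agrees to machine precision for both valleys:

```python
import numpy as np

s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0 + 0j, -1.0])

def h_eff(kx, ky, eta=+1, p=+1, Delta=1.0, lam_v=4.0, lam_R=2.0, Delta_O=3.0):
    """Eq. (7) in the basis (A, B) x (up, down); hbar*v_F = 1."""
    Dbar = Delta + eta * p * Delta_O              # light-renormalized mass
    return (eta * kx * np.kron(sx, s0) + ky * np.kron(sy, s0)
            + Dbar * np.kron(sz, s0)              # staggered potential
            + lam_v * eta * np.kron(s0, sz)       # valley-Zeeman SOC
            + lam_R * (eta * np.kron(sx, sy) - np.kron(sy, sx)))  # Rashba SOC

def bands(k, eta=+1, p=+1, Delta=1.0, lam_v=4.0, lam_R=2.0, Delta_O=3.0):
    """Eq. (8), the four branches sorted in ascending order."""
    Dbar = Delta + eta * p * Delta_O
    G = lam_v**2 + Dbar**2
    Y = k**2 * (lam_R**2 + lam_v**2) + (lam_R**2 - lam_v * Dbar)**2
    return np.sort([l * np.sqrt(G + 2 * lam_R**2 + k**2 + 2 * s * np.sqrt(Y))
                    for l in (-1, 1) for s in (-1, 1)])

for eta in (+1, -1):
    for k in np.linspace(0.0, 10.0, 21):
        En = np.linalg.eigvalsh(h_eff(k, 0.0, eta))
        assert np.allclose(En, bands(k, eta), atol=1e-10)
```

By rotational symmetry the spectrum depends on \(k=|{\bf k}|\) only, so checking along \(k_{x}\) suffices.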
In the numerical calculations throughout the manuscript we use values of the parameters \(\Delta\), \(\lambda_{v}\), and \(\lambda_{R}\) somewhat larger than those of Ref. [57], in order to obtain well-resolved spin and valley splittings; the overall physics of the system is not changed by doing so. As for the values of \(\Delta_{\Omega}\), it is known that off-resonant light does not directly excite electrons; instead, it modifies the electron bands through virtual photon absorption processes. To study the topological transitions of the bands, this light must satisfy the conditions \(\hbar\Omega\gg t_{J}\) and \({\cal A}^{2}\ll 1\). Accordingly, we use values of \(\Delta_{\Omega}\) from Refs. [35; 47; 54].
The typical band structure (8) for both valleys is illustrated in Fig. 2 for \(p=+1\), for \(\Delta_{\Omega}<\Delta+\lambda_{v}\) (inverted band regime) and \(\Delta_{\Omega}>\Delta+\lambda_{v}\) (direct band regime). The left panel shows the inverted band regime; the inversion occurs due to the anticrossing of bands with opposite spins in the presence of the Rashba SOC. The right panel depicts the direct band regime with a simple parabolic dispersion. It is found that the spin and valley degeneracies are completely lifted when \(\Delta_{\Omega}>\Delta+\lambda_{v}\), whereas the valley degeneracy is restored in the opposite limit, similar to silicene [62]. The valleys are interchanged if the proximitized graphene is irradiated by right circularly polarized light, \(p=-1\) (not shown here).
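The lifting and restoring of the valley degeneracy is easy to verify numerically. A minimal sketch (illustrative parameters, not the authors' code) using the \(\lambda_{R}=0\) limit of the spectrum, Eq. (13) below:

```python
import numpy as np

def bands13(eps, eta, p, Delta=1.0, lam_v=4.0, Delta_O=3.0):
    """Eq. (13): lambda_R = 0 limit, E = l*sqrt(eps^2 + Dbar^2) + s*lam_v."""
    Dbar = Delta + eta * p * Delta_O
    return np.sort([l * np.hypot(eps, Dbar) + s * lam_v
                    for l in (-1, 1) for s in (-1, 1)])

eps = np.linspace(0.0, 10.0, 101)
K_on   = np.array([bands13(e, +1, +1) for e in eps])               # light on
Kp_on  = np.array([bands13(e, -1, +1) for e in eps])
K_off  = np.array([bands13(e, +1, +1, Delta_O=0.0) for e in eps])  # light off
Kp_off = np.array([bands13(e, -1, +1, Delta_O=0.0) for e in eps])

assert not np.allclose(K_on, Kp_on)   # valley degeneracy lifted for Delta_O != 0
assert np.allclose(K_off, Kp_off)     # and restored for Delta_O = 0
```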
### Limiting cases and density of states (DOS)
i) Setting \(\Delta=0\) in Eq. (8), we obtain
\[E_{\xi}^{\eta p}(k) = l\{\lambda_{v}^{2}+\Delta_{\Omega}^{2}+2\lambda_{R}^{2}+ \epsilon_{k}^{2}+2s\sqrt{Y}\}^{1/2}, \tag{12}\]
Figure 3: Density of states for two values of \(\Delta_{\Omega}\), as indicated, and \(\Gamma=0.01\) meV. The left panel shows the valley components of the DOS, with both spins included, whereas the right panel shows the spin components of the DOS, with both valleys included. In both panels the curves indicated by arrows show the total DOS, and the marking of the curves is given inside the panels. The parameters \(\Delta\), \(\lambda_{v}\), and \(\lambda_{R}\) are the same as in Fig. 2.
Figure 2: Energy dispersion curves around \(K\) and \(K^{\prime}\) of a graphene/WSe\({}_{2}\) heterostructure for \(\Delta=1\) meV, \(\lambda_{v}=4\) meV, and \(\lambda_{R}=2\) meV. The left panel shows the inverted band regime, with strong spin mixing of different states, obtained for \(\Delta_{\Omega}<\Delta+\lambda_{v}\). The right panel shows the direct band regime, with nearly full spin polarization, obtained for \(\Delta_{\Omega}>\Delta+\lambda_{v}\). The marking of all curves resulting from Eq. (8), with \(p=1\) for all of them, is shown inside the panels. The solid black (red) curves are for \(\eta=+1\) and \(s=+1(-1)\) and the dashed black (red) ones for \(\eta=-1\) and \(s=+1(-1)\).
with \(Y=\epsilon_{k}^{2}\bar{\lambda}^{2}+(\lambda_{R}^{2}-\eta p\lambda_{v}\Delta_{\Omega})^{2}\).
ii) In the limit \(\lambda_{R}=0\), Eq. (8) reduces to

\[E_{\xi}^{\eta p}(k)=l\big[\epsilon_{k}^{2}+\bar{\Delta}_{\eta p}^{2}\big]^{1/2}+s\lambda_{v}. \tag{13}\]
The DOS per unit area corresponding to Eq. (8) is given by
\[D(E)=\frac{|E|}{2\pi\hbar^{2}v_{F}^{2}}\sum_{\eta p}\Big[\frac{\theta(|E|-|E^{\eta p}_{1g}|)}{1-\bar{\lambda}^{2}/M^{+}}+\frac{\theta(|E|-|E^{\eta p}_{2g}|)}{1+\bar{\lambda}^{2}/M^{-}}\Big], \tag{14}\]
with
\[E^{\eta p}_{1g} = \lambda_{v}+\bar{\Delta}_{\eta p},\quad E^{\eta p}_{2g}=\big[(\lambda_{v}-\bar{\Delta}_{\eta p})^{2}+4\lambda_{R}^{2}\big]^{1/2},\] \[M^{\pm} = \big[(\lambda_{R}^{2}-\lambda_{v}\bar{\Delta}_{\eta p})^{2}+\hbar^{2}v_{F}^{2}\bar{\lambda}^{2}\epsilon_{\pm}\big]^{1/2}, \tag{15}\] \[\hbar^{2}v_{F}^{2}\epsilon_{\pm} = E^{2}+\lambda_{v}^{2}-\bar{\Delta}_{\eta p}^{2}\pm 2\big[\bar{\lambda}^{2}E^{2}-\lambda_{R}^{2}(\lambda_{v}+\bar{\Delta}_{\eta p})^{2}\big]^{1/2}.\]
In Fig. 3 we plot the DOS given by Eq. (14). The two jumps in the DOS at each valley indicate that two gaps open there, a clear signature of the lifting of the spin and valley degeneracies when graphene on a WSe\({}_{2}\) substrate is in the direct band regime. The spin and valley degeneracies are completely lifted in the direct band regime, while only the spin degeneracy is lifted in the inverted band regime. Note that the DOS diverges in the inverted band regime as \(D(E)\propto(E-\Delta_{1})^{-1/2}\), with \(\Delta_{1}=\lambda_{R}(\lambda_{v}+\Delta)/(\lambda_{R}^{2}+\lambda_{v}^{2})^{1/2}\) (see the green curves in both panels). This divergence is due to the Mexican-hat energy dispersion [63], cf. Fig. 2. In passing we may add that this behaviour of the DOS is essentially unaffected by broadening provided the level width \(\Gamma\) is small, \(\Gamma<0.5\) meV; for larger \(\Gamma\) the fine structure of the DOS curves is smoothed out.
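The gap and the van Hove divergence at \(\Delta_{1}\) can be reproduced by broadening the bands of Eq. (8) with a Lorentzian of width \(\Gamma\) and integrating over \(k\). A sketch (NumPy, \(\hbar v_{F}=1\), \(\Delta_{\Omega}=0\), and \(\lambda_{R}=4\) meV so that \(\Delta_{1}\approx 3.54\) meV; arbitrary overall prefactor, not the authors' code):

```python
import numpy as np

Delta, lam_v, lam_R, Gamma = 1.0, 4.0, 4.0, 0.02     # meV
k = np.linspace(1e-4, 15.0, 7500)                    # hbar*v_F = 1

G = lam_v**2 + Delta**2
Y = np.sqrt(k**2 * (lam_R**2 + lam_v**2) + (lam_R**2 - lam_v * Delta)**2)
bands = [l * np.sqrt(G + 2 * lam_R**2 + k**2 + 2 * s * Y)
         for l in (-1, 1) for s in (-1, 1)]          # Eq. (8), Delta_Omega = 0

def dos(E, valleys=2):
    """Lorentzian-broadened DOS (arbitrary units); both valleys degenerate here."""
    dk = k[1] - k[0]
    return valleys * sum(np.sum(k * (Gamma / np.pi) / ((E - Eb)**2 + Gamma**2)) * dk
                         for Eb in bands) / (2 * np.pi)

Delta1 = lam_R * (lam_v + Delta) / np.hypot(lam_R, lam_v)   # band edge ~ 3.54 meV
assert dos(2.0) < 0.05 * dos(3.6)     # DOS vanishes inside the gap |E| < Delta1
assert dos(3.6) > 2.0 * dos(6.0)      # van Hove enhancement just above the edge
```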
## III Conductivities
We consider a many-body system described by the Hamiltonian \(H=H_{0}+H_{I}-{\bf R}\cdot{\bf F}(t)\), where \(H_{0}\) is the unperturbed part, \(H_{I}=\lambda V\) is a binary-type interaction (e.g., between electrons and impurities or phonons) of strength \(\lambda\), and \(-{\bf R}\cdot{\bf F}(t)\) is the interaction of the system with the external field \({\bf F}(t)\) [64]. For conductivity problems we have \({\bf F}(t)=e{\bf E}(t)\), where \({\bf E}(t)\) is the electric field, \(e\) the electron charge, \({\bf R}=\sum_{i}{\bf r}_{i}\), and \({\bf r}_{i}\) the position operator of electron \(i\). In the representation in which \(H_{0}\) is diagonal, the many-body density operator \(\rho=\rho^{d}+\rho^{nd}\) has a diagonal part \(\rho^{d}\) and a nondiagonal part \(\rho^{nd}\). Using \(\rho=e^{-\beta H}\) and \(H=H_{0}+\lambda V\), all operators are evaluated in the van Hove limit, \(\lambda\to 0\), \(t\rightarrow\infty\) with \(\lambda^{2}t\) finite, and all averages \(\langle X\rangle=Tr\{X\rho\}\) in the representation in which \(H_{0}\) is diagonal. In this representation \(\lambda V\) is assumed nondiagonal; if it has a diagonal part, the latter is included in \(H_{0}\). Correspondingly, for weak electric fields and weak scattering potentials, for which the first Born approximation applies, the conductivity tensor has a diagonal part \(\sigma^{d}_{\mu\nu}\) and a nondiagonal part \(\sigma^{nd}_{\mu\nu}\); the total conductivity is \(\sigma^{tot}_{\mu\nu}=\sigma^{d}_{\mu\nu}+\sigma^{nd}_{\mu\nu}\), \(\mu,\nu=x,y\). For further details see Ref. [64].
In general we have two kinds of currents, diffusive and hopping, with \(\sigma^{d}_{\mu\nu}=\sigma^{dif}_{\mu\nu}+\sigma^{col}_{\mu\nu}\), but usually only one of them is present. The term \(\sigma^{col}_{\mu\nu}\) was introduced in Ref. [64] to distinguish collisional current contributions that are different from the standard diffusive ones valid for elastic scattering and characterized by a relaxation time \(\tau\). As such, this is the main term for transport in a magnetic field when the diffusion contributions vanish. It also describes hopping between localized states. If no magnetic field is present, the hopping term \(\sigma^{col}_{\mu\nu}\) vanishes identically and only the term \(\sigma^{dif}_{\mu\nu}\) survives. For elastic scattering it is given by [64]
\[\sigma^{d}_{\mu\nu}=\frac{\beta e^{2}}{S_{0}}\sum_{\zeta}f_{\zeta}(1-f_{\zeta} )v_{\nu\zeta}\,v_{\mu\zeta}\,\tau_{\zeta}, \tag{16}\]
with \(\tau_{\zeta}\) the momentum relaxation time, and \(v_{\mu\zeta}\) the diagonal matrix elements of the velocity operator. Further, \(f_{\zeta}=\big{[}1+\exp[\beta(E_{\zeta}-E_{F})]\big{]}^{-1}\) is the Fermi-Dirac distribution function, \(\beta=1/k_{B}T\), and \(T\) the temperature.
Regarding the contribution \(\sigma^{nd}_{\mu\nu}\) one can use the identity \(f_{\zeta}(1-f_{\zeta^{\prime}})\big{[}1-\exp[\beta(E_{\zeta}-E_{\zeta^{\prime}}) ]\big{]}=f_{\zeta}-f_{\zeta^{\prime}}\) and cast the original form [64] in the more familiar one
\[\sigma^{nd}_{\mu\nu}=\frac{i\hbar e^{2}}{S_{0}}\sum_{\zeta\neq\zeta^{\prime}}\frac{(f_{\zeta}-f_{\zeta^{\prime}})\,v_{\nu\zeta\zeta^{\prime}}\,v_{\mu\zeta^{\prime}\zeta}}{(E_{\zeta}-E_{\zeta^{\prime}})(E_{\zeta}-E_{\zeta^{\prime}}-i\Gamma)}, \tag{17}\]
Figure 4: Longitudinal conductivity vs Fermi energy \(E_{F}\) for \(T=0\) K, and \(\tau_{F}=1\times 10^{-15}\) sec. The other parameters are the same as in Fig. 2.
where the sum runs over all quantum numbers \(\zeta\) and \(\zeta^{\prime}\) with \(\zeta\neq\zeta^{\prime}\). The infinitesimal quantity \(\epsilon\) in the original form of the conductivity has been replaced by \(\Gamma_{\zeta}\) to account phenomenologically for the broadening of the energy levels. One should keep in mind that _strong_ disorder may modify the Hall conductivity considerably; this problem, however, is not studied here. In Eq. (17) \(v_{\nu\zeta\zeta^{\prime}}\) and \(v_{\mu\zeta\zeta^{\prime}}\) are the off-diagonal matrix elements of the velocity operator. The relevant velocity operators are given by \(v_{x}=\partial H/\hbar\partial k_{x}\) and \(v_{y}=\partial H/\hbar\partial k_{y}\). With \(\zeta=\{l,s,k,\eta,p\}=\{\xi,k,\eta,p\}\) for brevity, they read
\[\left\langle\zeta\right|v_{x}\left|\zeta^{\prime}\right\rangle=v_{F}N_{\xi}^{\eta p}N_{\xi^{\prime}}^{\eta p}(D_{\xi,\xi^{\prime}}^{\eta p}e^{i\phi}+F_{\xi,\xi^{\prime}}^{\eta p}e^{-i\phi})\delta_{\eta,\eta^{\prime}}\delta_{k,k^{\prime}}, \tag{18}\]

\[\left\langle\zeta^{\prime}\right|v_{y}\left|\zeta\right\rangle=iv_{F}N_{\xi}^{\eta p}N_{\xi^{\prime}}^{\eta p}(D_{\xi,\xi^{\prime}}^{\eta p}e^{-i\phi}-F_{\xi,\xi^{\prime}}^{\eta p}e^{i\phi})\delta_{\eta,\eta^{\prime}}\delta_{k,k^{\prime}}, \tag{19}\]

where \(D_{\xi,\xi^{\prime}}^{\eta p}=A_{\xi^{\prime}}^{\eta p}+B_{\xi}^{\eta p}C_{\xi^{\prime}}^{\eta p}\) and \(F_{\xi,\xi^{\prime}}^{\eta p}=A_{\xi}^{\eta p}+B_{\xi^{\prime}}^{\eta p}C_{\xi}^{\eta p}\).
The diagonal velocity matrix elements \(v_{x\zeta}=\partial E_{\xi}^{\eta p}/\hbar\partial k_{x}\) follow readily from Eq. (8),

\[v_{x\zeta}=\frac{l\hbar v_{F}^{2}k_{x}}{E_{\xi}^{\eta p}}\Big[1+\frac{s\bar{\lambda}^{2}}{\sqrt{\Upsilon}}\Big]. \tag{20}\]
The general expressions above for the conductivities are modified in Floquet theory [34] but remain valid for driven systems in the limit of large frequency and weak light intensity (\(\mathcal{A}\ll 1\)), since only the zeroth Floquet level contributes [35], cf. Sec. III. Thus, these states can be taken as the eigenstates of Eq. (6). In addition, although Eq. (6) is perturbative in \(1/\Omega\), the Hall conductivity expressions above are non-perturbative in \(\Omega\); that is, an infinitesimal gap \(\bar{\Delta}_{\eta p}\) is sufficient to yield a topological band with a quantized Hall conductance in units of \(2e^{2}/h\) [35]. Further, the Fermi distribution is nonuniversal for systems out of equilibrium, but for some system-bath couplings [65] the steady-state distribution becomes thermal, and we restrict our results to such cases. Additionally, for linear response the electrode chemical potential is small compared to the intrinsic chemical potential of the system, so we ignore it in our calculations. This allows us to treat the chemical potential in the Kubo formalism as a constant, i.e., without accounting for sources at the boundaries. It is also worth pointing out that our approach for evaluating the conductivity tensor is the same as, or similar to, that followed in Refs. [54] for MoS\({}_{2}\), [66; 67] for silicene, and [68] for WSe\({}_{2}\). In all of them a perpendicular electric field, not the source-to-drain one, was included in \(H_{0}\). This is similar to our inclusion of the off-resonant light term \(V(t)\) in \(H_{0}\), and was also the case in Ref. [56].
We now calculate the conductivity \(\sigma_{yx}^{nd}\) given by Eq. (17). The velocity matrix elements (18) and (19) are diagonal in \(k\); therefore \(k\) will be suppressed in order to simplify the notation. The summation in Eq. (17) runs over all quantum numbers \(\xi\), \(\xi^{\prime}\), \(\eta\), \(\eta^{\prime}\), and \(k\). The parameter \(\Gamma_{\zeta}=\Gamma_{\eta\eta^{\prime}}^{\xi\xi^{\prime}}\), which takes into account the level broadening, is assumed independent of the band and valley indices, i.e., \(\Gamma_{\eta\eta^{\prime}}^{\xi\xi^{\prime}}=\Gamma\). Using Eqs. (18) and (19) we can express Eq. (17) as
\[\mathrm{Re}\,\sigma_{yx}^{nd}(\xi,\xi^{\prime},\eta,p) = \frac{2e^{2}\hbar^{2}v_{F}^{2}}{h}\int dk\,k\,\frac{(N_{\xi}^{\eta p}N_{\xi^{\prime}}^{\eta p})^{2}(f_{\xi k}^{\eta p}-f_{\xi^{\prime}k}^{\eta p})}{(\Delta_{\xi\xi^{\prime}}^{\eta p})^{2}+\Gamma^{2}}\big[(D_{\xi,\xi^{\prime}}^{\eta p})^{2}-(F_{\xi,\xi^{\prime}}^{\eta p})^{2}\big],\qquad\mathrm{Im}\,\sigma_{yx}^{nd}(\xi,\xi^{\prime},\eta,p)=0, \tag{21}\]

where \(\Delta_{\xi\xi^{\prime}}^{\eta p}=E_{\xi k}^{\eta p}-E_{\xi^{\prime}k}^{\eta p}\).
For \(\lambda_{v}=\Delta=\Delta_{\Omega}=0\) and \(\lambda_{R}\neq 0\), Eq. (21) vanishes because the factor \((D_{\xi,\xi^{\prime}}^{\eta p})^{2}-(F_{\xi,\xi^{\prime}}^{\eta p})^{2}\) becomes zero. Ignoring skew and intervalley scattering, the valley-Hall conductivity \((\sigma_{yx}^{v})\) obtained from Eq. (21) can be evaluated as
\[\sigma_{yx}^{v}=\sum_{\xi\xi^{\prime}p}\big{[}\sigma_{yx}^{nd}(\xi,\xi^{\prime},+,p)-\sigma_{yx}^{nd}(\xi,\xi^{\prime},-,p)\big{]}, \tag{22}\]
where we set \(\mathrm{Re}\,\sigma_{yx}^{nd}(\xi,\xi^{\prime},\eta,p)\equiv\sigma_{yx}^{nd}(\xi,\xi^{\prime},\eta,p)\). The spin-Hall conductivity \(\sigma_{yx}^{s}\) corresponding to Eq. (21) is finite only when both the KM and the staggered SOCs are present [69]; it therefore vanishes even in the presence of the Rashba SOC. Even if it does not vanish exactly in graphene on WSe\({}_{2}\), it is negligible in the regime \(\lambda_{v}\gg\lambda_{so}\) that we treat, and we neglect it altogether, see also Sec. II, above Eq. (3). As usual, we have to multiply \(\sigma_{yx}^{v}\) by \(1/2e\) [58].
We can find a simple analytical result from Eq. (22), in the low-temperature limit, for the specific case \(\lambda_{v}=\lambda_{R}=0\). It is
\[\sigma_{yx}^{v}=\begin{cases}\frac{e}{2h},&-(\Delta+\eta p\Delta_{\Omega})<E_{F}< \Delta+\eta p\Delta_{\Omega}\\ \frac{e}{2h}\frac{\eta\Delta+p\Delta_{\Omega}}{E_{F}},&E_{F}>\Delta+\eta p \Delta_{\Omega}\end{cases} \tag{23}\]
Eqs. (16)-(17) of Ref. [54] in the limit \(\lambda\to 0\) are similar to Eq. (23). For \(\Delta_{\Omega}\to 0\), Eq. (23) reduces to a result reported in
Ref. [70]. Further, we find the charge Hall conductivity
\[\sigma^{c}_{yx}=\sum_{p\eta\eta^{\prime}\xi\xi^{\prime}}\sigma^{nd}_{yx}(\xi,\xi^{ \prime},\eta,\eta^{\prime},p)=\begin{cases}0,&\Delta_{\Omega}=0\\ &\\ \neq 0,&\Delta_{\Omega}\neq 0\end{cases} \tag{24}\]
In the limit \(\Delta_{\Omega}\to 0\), \(\sigma^{c}_{yx}\) vanishes.
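The quantized value of the charge Hall conductivity in the direct band regime can be reproduced numerically from the Kubo formula, Eq. (17), summed over occupied states, i.e., by integrating the Berry curvature of the four bands of Eq. (7) over the Brillouin zone. A sketch (NumPy; illustrative parameters with \(\Delta_{\Omega}=8\) meV \(>\Delta+\lambda_{v}\) and \(E_{F}=0\) inside the global gap; not the authors' code):

```python
import numpy as np

s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0 + 0j, -1.0])

def h_eff(kx, ky, eta, p, Delta=1.0, lam_v=4.0, lam_R=2.0, Delta_O=8.0):
    """Eq. (7), basis (A, B) x (up, down), hbar*v_F = 1."""
    Dbar = Delta + eta * p * Delta_O
    return (eta * kx * np.kron(sx, s0) + ky * np.kron(sy, s0)
            + Dbar * np.kron(sz, s0) + lam_v * eta * np.kron(s0, sz)
            + lam_R * (eta * np.kron(sx, sy) - np.kron(sy, sx)))

def charge_hall(p):
    """Charge Hall conductivity in units of e^2/h at E_F = 0 (Berry-curvature k-sum)."""
    kr = np.logspace(-3, np.log10(500.0), 400)   # radial grid (log-spaced)
    th = np.linspace(0.0, 2 * np.pi, 9)[:-1]     # angular grid
    sigma = 0.0
    for eta in (+1, -1):
        vx, vy = eta * np.kron(sx, s0), np.kron(sy, s0)  # constant velocity operators
        om = np.zeros_like(kr)
        for i, kk in enumerate(kr):
            for t in th:
                E, U = np.linalg.eigh(h_eff(kk * np.cos(t), kk * np.sin(t), eta, p))
                Vx, Vy = U.conj().T @ vx @ U, U.conj().T @ vy @ U
                for n in np.where(E < 0)[0]:     # occupied states
                    for m in np.where(E > 0)[0]: # empty states
                        om[i] -= 2 * (Vx[n, m] * Vy[m, n]).imag / (E[m] - E[n])**2
            om[i] /= len(th)                     # angular average
        f = om * kr                              # sigma_eta = int Omega(k) k dk
        sigma += np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(kr))
    return sigma

s_plus, s_minus = charge_hall(+1), charge_hall(-1)
assert abs(abs(s_plus) - 2.0) < 0.15    # quantized |sigma| = 2 e^2/h in the gap
assert s_plus * s_minus < 0             # sign reverses with the light helicity
```

The finite radial cutoff and grid give a small (percent-level) deviation from exact quantization; the sign of the result depends only on the polarization \(p\).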
We now consider the diagonal component \(\sigma^{d}_{xx}\) given by Eq. (16). Using Eq. (18), with \(\xi=\xi^{\prime}\), we obtain
\[\sigma^{d}_{xx}(\xi,\eta,p) = \frac{e^{2}v_{F}^{2}\beta}{\pi}\int dkk\,(N_{\xi}^{\eta p})^{4}f_{ \xi k}^{\eta p}(1-f_{\xi k}^{\eta p}) \tag{25}\] \[\times(A_{\xi}^{\eta p}+B_{\xi}^{\eta p}C_{\xi}^{\eta p})^{2}\, \tau_{\xi k}^{\eta p}.\]
At very low temperatures we can make the approximation \(\beta f_{\xi k}^{\eta p}(1-f_{\xi k}^{\eta p})\approx\delta(E_{\xi k}^{\eta p}-E_{F})\) and evaluate the relaxation time at the Fermi level, \(\tau_{\xi k}^{\eta p}\approx\tau_{\xi k_{F}}^{\eta p}\). We find \(r=\sigma^{nd}_{xx}(\xi,\eta,p)/\sigma^{d}_{xx}(\xi,\eta,p)\ll 1\), mainly because \(\sigma^{nd}_{xx}(\xi,\eta,p)\propto\Gamma\). The precise value of \(r\) depends on the scattering strength through \(\Gamma\) and through \(\tau\) appearing in \(\sigma^{d}_{xx}(\xi,\eta,p)\). In what follows we neglect \(\sigma^{nd}_{xx}(\xi,\eta,p)\).
After evaluating the integral over \(k\), Eq. (25) becomes
\[\sigma^{d}_{xx}(\xi,\eta,p)=\frac{e^{2}\tau_{F}E_{F}}{\pi\hbar^{2}}\Big[Q^{\eta p}_{\xi}\frac{\theta(E_{F}-E_{1g}^{\eta p})}{1-\bar{\lambda}^{2}/M}\Big|_{\epsilon_{+F}}+Q^{\eta p}_{\xi}\frac{\theta(E_{F}-E_{2g}^{\eta p})}{1+\bar{\lambda}^{2}/M}\Big|_{\epsilon_{-F}}\Big], \tag{26}\]
where \(Q^{\eta p}_{\xi}=(A_{\xi}^{\eta p}+B_{\xi}^{\eta p}C_{\xi}^{\eta p})^{2}(N_{\xi}^{\eta p})^{4}\) and \(\tau_{F}\equiv\tau_{\xi k_{F}}^{\eta p}\) is the relaxation time evaluated at the Fermi level. As indicated, the first and second terms in the square brackets are to be evaluated at \(\epsilon_{+F}\) and \(\epsilon_{-F}\), respectively, where \(\epsilon_{\pm F}\) is obtained from Eq. (15) for \(E=E_{F}\). To evaluate Eq. (25) numerically we used a Lorentzian broadening of \(\delta(E_{\xi}^{\eta p}-E_{F})\).
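Such an evaluation of Eq. (25) is straightforward on a \(k\) grid, replacing \(\beta f(1-f)\) by a Lorentzian-broadened \(\delta\) function. The sketch below (NumPy, \(\hbar=v_{F}=\tau=1\), the parameters of Fig. 4 with \(\Delta_{\Omega}=0\) and one valley; arbitrary units, not the authors' code) confirms that \(\sigma^{d}_{xx}\) vanishes inside the gap and is finite outside:

```python
import numpy as np

Delta, lam_v, lam_R, Gamma = 1.0, 4.0, 2.0, 0.02   # meV; eta = p = +1, Delta_O = 0
k = np.linspace(1e-4, 20.0, 20000)

G = lam_v**2 + Delta**2
Y = np.sqrt(k**2 * (lam_R**2 + lam_v**2) + (lam_R**2 - lam_v * Delta)**2)
bands = [l * np.sqrt(G + 2 * lam_R**2 + k**2 + 2 * s * Y)
         for l in (-1, 1) for s in (-1, 1)]        # Eq. (8)

def sigma_xx(Ef):
    """Eq. (25) with beta f(1-f) -> Lorentzian delta; v_k = dE/dk, cf. Eq. (20)."""
    dk, out = k[1] - k[0], 0.0
    for E in bands:
        v = np.gradient(E, k)                      # diagonal velocity matrix element
        out += np.sum(k * v**2 * (Gamma / np.pi) / ((E - Ef)**2 + Gamma**2)) * dk
    return out

# the gap here is |E_F| < 1 meV: conductivity ~0 inside it, finite outside
assert sigma_xx(0.0) < 0.05 * sigma_xx(4.0)
```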
The valley \(P_{v}\) and spin \(P_{s}\) polarizations, corresponding to Eq. (25), are
\[P_{v}=\sum_{\xi p}\frac{\sigma^{d}_{xx}(l,s,+,p)-\sigma^{d}_{xx}(l,s,-,p)}{ \sigma^{d}_{xx}(l,s,+,p)+\sigma^{d}_{xx}(l,s,-,p)}, \tag{27}\]
and
\[P_{s}=\sum_{\eta pl}\frac{\sigma^{d}_{xx}(l,+,\eta,p)-\sigma^{d}_{xx}(l,-,\eta,p)}{\sigma^{d}_{xx}(l,+,\eta,p)+\sigma^{d}_{xx}(l,-,\eta,p)}. \tag{28}\]
In Fig. 4 we plot the conductivity given by Eq. (25) as a function of the Fermi energy \(E_{F}\), evaluating the integral over \(k\) numerically for two values of the parameter \(\Delta_{\Omega}\) and \(p=+1\). The left panel represents the valley-dependent contribution of Eq. (25), with both spins included, whereas the right one depicts its spin-dependent contribution, with both valleys included. To display the results clearly, we set \(\Delta=1\) meV, \(\lambda_{R}=2\) meV, \(\lambda_{v}=4\) meV, and \(\tau_{F}=1\times 10^{-15}\) sec. We find that \(\sigma^{d}_{xx}(\xi,\eta,p)\) vanishes when \(E_{F}\) is in the gap, while it increases linearly when \(E_{F}\) is outside the gap. A kink appears when \(E_{F}\) crosses the conduction band (\(E_{++}^{\eta p}\)). Moreover, we find \(\sigma^{d}_{xx}(\xi,+,+)=\sigma^{d}_{xx}(\xi,-,+)\) in the inverted band regime \((\Delta_{\Omega}=0)\), while \(\sigma^{d}_{xx}(\xi,+,+)\neq\sigma^{d}_{xx}(\xi,-,+)\) in the direct band regime \((\Delta_{\Omega}\neq 0)\). We also verified that the analytical result, Eq. (26), agrees well with the numerical one obtained from Eq. (25).
We plot the total longitudinal conductivity, with both valleys and spins included, in Fig. 5 for different values of \(\Delta_{\Omega}\). As expected, \(\sigma^{d}_{xx}\) is an even function of \(\Delta_{\Omega}\). In addition, the band gap increases with \(\Delta_{\Omega}\).
The valley \(P_{v}\) and spin \(P_{s}\) polarizations versus \(E_{F}\) are shown in Fig. 6 for \(\lambda_{R}=4\) meV and three different values of \(\Delta_{\Omega}\). It can be seen that \(P_{v}=0\) in the inverted band regime while \(P_{v}\neq 0\) in the direct band one. In other words, the valley polarization can be switched on and off by controlling the parameter \(\Delta_{\Omega}\). On the other hand, \(P_{s}\neq 0\) in both band regimes. It is interesting to study \(P_{v}\) in the direct band regime \((\Delta_{\Omega}\neq 0)\). The contribution of \(\sigma^{d}_{xx}(\xi,+)\) to \(P_{v}\) is zero in the range \(\lambda_{v}+\Delta-\Delta_{\Omega}\leq E_{F}<\lambda_{v}+\Delta+\Delta_{\Omega}\). Thus, \(P_{v}=1\), which is a pure \(K^{\prime}\) valley polarization for \(\Delta_{\Omega}\neq 0\). When we change the polarization of the light to \(p=-1\), a pure \(K\) valley polarization is obtained. That is, one can easily reverse the valley polarization by reversing that of the circularly polarized light. This result may be useful in valleytronics applications, such as making valley valves [71].

Figure 7: Valley-Hall conductivity vs \(E_{F}\) for \(T=1\) K and \(\Gamma=0\). The other parameters are \(\Delta=0.54\) meV, \(\lambda_{R}=0.56\) meV, and \(\lambda_{v}=1.22\) meV [57]. The green curve is measured in units of \(e/h\) and the blue one in units of \(e/10h\). The inset is a blowup of the region \(-2\) meV \(\leq E_{F}\leq 2\) meV.
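The on/off switching of \(P_{v}\) can be illustrated numerically in the analytically simple \(\lambda_{R}=0\) limit, Eq. (13), where the diagonal velocity is \(v=l\epsilon_{k}/\sqrt{\epsilon_{k}^{2}+\bar{\Delta}_{\eta p}^{2}}\). A sketch (NumPy, illustrative parameters, \(p=+1\); with the sign convention of Eq. (27), a pure \(K^{\prime}\) polarization appears as \(P_{v}\to-1\)):

```python
import numpy as np

Delta, lam_v, Gamma, p = 1.0, 4.0, 0.02, +1
k = np.linspace(1e-4, 20.0, 20000)

def sigma_d(eta, Delta_O, Ef):
    """Eq. (25) in the lambda_R = 0 limit, with a Lorentzian-broadened delta."""
    Dbar = Delta + eta * p * Delta_O
    dk, out = k[1] - k[0], 0.0
    for l in (-1, 1):
        for s in (-1, 1):
            E = l * np.hypot(k, Dbar) + s * lam_v       # Eq. (13)
            v = l * k / np.hypot(k, Dbar)               # diagonal velocity
            out += np.sum(k * v**2 * (Gamma / np.pi) / ((E - Ef)**2 + Gamma**2)) * dk
    return out

def P_v(Delta_O, Ef):
    sp, sm = sigma_d(+1, Delta_O, Ef), sigma_d(-1, Delta_O, Ef)
    return (sp - sm) / (sp + sm)

assert abs(P_v(0.0, 4.5)) < 1e-12      # light off: valleys equivalent, P_v = 0
assert P_v(8.0, 4.0) < -0.9            # light on: nearly pure K' polarization
```

Changing \(p\to-1\) interchanges \(\bar{\Delta}_{+}\) and \(\bar{\Delta}_{-}\) and hence reverses the sign of \(P_{v}\).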
In Fig. 7 we show the numerically evaluated valley-Hall conductivity \(\sigma^{v}_{yx}\) from Eq. (22), in the inverted \((\Delta_{\Omega}=0)\) and direct \((\Delta_{\Omega}\neq 0)\) band regimes, for \(l=l^{\prime}\) with \(s\neq s^{\prime}\), as well as for \(l\neq l^{\prime}\) with \(s=s^{\prime}\) and \(s\neq s^{\prime}\). We used a sufficiently low temperature (\(T=1\) K) to ensure that thermal smearing makes a negligible contribution to the electron transport. \(\sigma^{v}_{yx}\) is quantized and has the universal value \(2e^{2}/h\) when the Fermi level is in the gap \(-1\) meV \(\leq E_{F}\leq 1\) meV (see the green curve, and compare with the DOS in Fig. 3). Its absolute value is reduced outside the gap as \(E_{F}\) increases. The two peaks to the left and right of the gap, at \(E_{F}\approx\pm 1.5\) meV, appear due to the inverted band structure, i.e., the Mexican-hat-like dispersion, as can be seen in the inset of Fig. 7. \(\sigma^{v}_{yx}\) vanishes when \(E_{F}\) is in the gap in the direct band regime \(\Delta_{\Omega}\neq 0\), as the blue curve shows. The reason is that in this case electrons from both valleys flow in opposite directions and their contributions to the valley current cancel each other exactly. A nonzero valley-Hall current is produced when \(E_{F}\) crosses the conduction and valence bands. When \(E_{F}\) grows further, the conductivity decreases. It is also worth noticing that the valley conductivity changes sign (not shown) if the proximitized graphene is irradiated by right circularly polarized light (\(p=-1\)).
For \(\Delta_{\Omega}=0\) a quantized valley-Hall conductivity of \(2e^{2}/h\) is obtained in the band gap as can be seen from the green curve in the inset of Fig. 7. On the other hand, for \(\Delta_{\Omega}\neq 0\) the valley-Hall conductivity is quenched to zero within the band gap (see the blue curve of Fig. 7), while a quantized charge Hall conductivity of \(2e^{2}/h\) and \(-2e^{2}/h\) is obtained for the left- and right-handed circularly polarized light, respectively, as shown in Fig. 8. The reason for the change \(2e^{2}/h\rightarrow-2e^{2}/h\) is that this nondiagonal contribution to the conductivity is an odd function of \(\Delta_{\Omega}\).
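The quantized plateau can be rationalized from the Berry curvature of a gapped Dirac cone: each gapped valley contributes half a conductance quantum per spin, and the opposite curvature signs in the two valleys add constructively for the *valley* response. The following is a minimal numerical sketch of this half-quantization for a bare massive Dirac model with \(v=\Delta=1\) in natural units; it is an illustration of the mechanism, not the full Hamiltonian of Eq. (22), which also contains \(\lambda_{R}\) and \(\lambda_{v}\).

```python
def berry_curvature(k, delta=1.0, v=1.0):
    """Magnitude of the valence-band Berry curvature of
    H = v (kx sx + ky sy) + delta sz; its sign is opposite in the two valleys."""
    return v ** 2 * delta / (2.0 * (v ** 2 * k ** 2 + delta ** 2) ** 1.5)

def half_chern(kmax=500.0, n=200_000):
    """(1/2pi) * integral of the Berry curvature over the plane:
    (1/2pi) * int Omega(k) 2*pi*k dk = int Omega(k) k dk  (radial trapezoid rule)."""
    dk = kmax / n
    total = 0.0
    for i in range(n):
        k0, k1 = i * dk, (i + 1) * dk
        total += 0.5 * (k0 * berry_curvature(k0) + k1 * berry_curvature(k1)) * dk
    return total

print(round(half_chern(), 3))  # prints 0.499, i.e. 1/2 up to the finite k-space cutoff
```

Summing \(|1/2|\) over the two valleys and two spins reproduces the \(2e^{2}/h\) plateau seen in the green curve of Fig. 7.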
## IV Summary and conclusion
We investigated the valley-dependent dc transport by employing the linear response formalism and Floquet theory in the high-frequency limit as well as the energy dispersion in the presence of proximity-induced gaps. We derived analytical expressions for the energy dispersion relation of Dirac fermions, the DOS, and the diagonal and nondiagonal parts of the conductivity. We found that a transition occurs from an inverted band regime to a direct one for \(\Delta_{\Omega}>\Delta+\lambda_{v}\) (see Fig. 2). In addition, the energy dispersion shows a complete lifting of the _fourfold_ spin and valley degeneracies in the direct band structure while it has a _twofold_ valley degeneracy in the inverted band phase. We demonstrated that the DOS exhibits a van Hove singularity due to the inverted band structure, which remained unchanged as long as \(\Delta_{\Omega}<\Delta+\lambda_{v}\). The four jumps in the DOS are due to the lifting of the _fourfold_ spin and valley degeneracy in the direct band regime in contrast to pristine graphene, cf. Fig. 3.
We showed that the valley polarization \(P_{v}\) vanishes for \(\Delta_{\Omega}<\Delta+\lambda_{v}\) while for \(\Delta_{\Omega}>\Delta+\lambda_{v}\) it is finite, \(P_{v}\neq 0\); this might be useful in the design of valleytronics devices such as optically controlled valley filters and valves based on proximitized graphene. On the other hand, \(P_{s}\neq 0\) in both band regimes. Further, 100% \(K\) or \(K^{\prime}\) valley polarization is achieved in the range \(\lambda_{v}+\Delta-\Delta_{\Omega}\leqslant E_{F}<\lambda_{v}+\Delta+\Delta_{\Omega}\) when the handedness of the light polarization changes.
We found that, when \(E_{F}\) is in the gap, \(\sigma^{v}_{yx}=2e^{2}/h\) in the inverted band regime while \(\sigma^{v}_{yx}=0\) in the direct band regime. Peaks are found in the curve of \(\sigma^{v}_{yx}\) versus \(E_{F}\) when \(E_{F}\) crosses the inverted dispersion, see the green curve in Fig. 7. Moreover, for \(\Delta_{\Omega}>\Delta+\lambda_{v}\), we have \(\sigma^{v}_{yx}\neq 0\) when \(E_{F}\) crosses the conduction and valence bands. The valley-Hall conductivity tends to \(\sigma^{v}_{yx}=0\) in both the inverted and direct band regimes in the limit \(E_{F}\rightarrow\pm\infty\). A last finding is that the charge Hall conductivity is finite for \(\Delta_{\Omega}\neq 0\) and changes sign when the handedness of the light polarization changes.
Our results may be pertinent to developing future spintronics and valleytronics devices such as field-effect tunnelling transistors, memory devices, phototransistors, etc.
## Acknowledgments
M. Z. and P. V. acknowledge the support of the Concordia University Grant No. NGR034 and a Concordia University Merit Fellowship. The work of M. T. was supported by Colorado State University.
---

# InteractiveIE: Towards Assessing the Strength of Human-AI Collaboration in Improving the Performance of Information Extraction

Ishani Mondal, Michelle Yuan, Anandhavelu N, Aparna Garimella, Francis Ferraro, Andrew Blair-Stanek, Benjamin Van Durme, Jordan Boyd-Graber

2023-05-24 | arXiv:2305.14659v2 | http://arxiv.org/abs/2305.14659v2
###### Abstract
Learning template-based information extraction from documents is a crucial yet difficult task. Prior template-based IE approaches assume foreknowledge of the domain's templates; however, real-world IE tasks do not come with pre-defined schemas, and the schema must be figured out as you go. To quickly bootstrap templates in a real-world setting, we need to induce template slots from documents with zero or minimal supervision. Since the purpose of question answering intersects with the goal of information extraction, we use automatic question generation to induce template slots from the documents and investigate how a tiny amount of proxy human supervision on the fly (termed _InteractiveIE_) can further boost the performance. Extensive experiments on biomedical and legal documents, where obtaining training data is expensive, reveal encouraging trends of performance improvement using _InteractiveIE_ over an _AI-only_ baseline.
## 1 Introduction
The goal of information extraction (IE) is to learn structure from unstructured documents. Existing information extraction tools help analysts understand certain patterns or behaviors in the world (Li et al., 2022; Mora et al., 2009). Text from social media could inform municipal officials about city events or relay information about the COVID-19 pandemic (Tran et al., 2021). In a fast-moving real-world situation, information extraction needs are likely to change over time, and it is not possible to know the required slots beforehand. For instance, in a pandemic, people might be concerned about the mortality rate of the disease at some point, and after some time they might be more interested in knowing the immunization steps. Despite this ever-growing need, widely popular supervised IE systems require documents that are annotated according to specified slots, where each template typically consists of slot types with their corresponding entity fillers (Chinchor and Marsh, 1998; Pavlick et al., 2016).
A challenge in IE is to induce a structured template from a raw corpus. An unsupervised approach would be ideal for quickly building IE systems in new domains: the model could automatically learn the template without relying on laborious annotation of a set of training documents. Prior unsupervised approaches are mainly probabilistic, modeling patterns in clauses (Manning et al., 2014; Bamman and Smith, 2014; Chambers and Jurafsky, 2011). Still, template matching accuracy is quite low for these unsupervised methods.
A quick way of defining an information need is by asking a question. Recent work in question generation can automatically output factoid questions conditioned on a given passage (Nagumothu et al., 2022; Genest et al., 2022). Factoid questions typically ask which entity in the passage fulfills a specified semantic role. For instance, if the input is a legal contract, the model can generate questions about when the agreement was made, how long it remains effective, and which parties agreed to the contract. In the same example, we can map those generated questions to specific semantic labels such as "Agreement Date", "Effective Date", and "Agreed upon Parties", respectively.
Therefore, in this paper, we propose a method that uses question generation with state-of-the-art models like T5 (Raffel et al., 2019) and BART (Lewis et al., 2020) to induce slots. We cluster the generated questions to find a collection of representative questions that should adhere to slot types present across documents. Finally, we evaluate the clustering of the questions based on the alignment between the answers and the gold entity fillers for each slot type. Through simple grouping of generated questions, we aim to ensure that each cluster is semantically coherent and corresponds to a slot.
Furthermore, to assess how a tiny amount of supervision can shape the IE process, we use a proxy human-in-the-loop interactive approach (termed _InteractiveIE_) to determine whether it further improves performance. In this setup, the automatically generated clusters are displayed to human users (through an interactive interface), who are asked to edit or modify the information extracted for each slot so that each cluster represents one information slot.
We evaluate our proposed automatic method and the proxy human-AI collaborative approach to unsupervised information extraction on two domain-specific corpora, i.e., the CUAD dataset (Hendrycks et al., 2021) for legal contracts and biomedical slot filling (Papanikolaou et al., 2022), and compare the performance against existing automatic baselines and human-only IE approaches. Preliminary experiments confirm that proxy human-AI collaboration is an effective approach that considerably outperforms AI-only approaches.
## 2 Schema Induction Methodology
**What motivates us to adopt a new method of template induction and mapping to slots?** Unsupervised RE (or OpenRE; Yates et al., 2007) aims to extract relations without having access to a labeled dataset during training. Yates et al. (2007) extract triple candidates using syntactic rules and refine the candidates with a trained scorer. Saha and Mausam (2018) propose to simplify conjunctive sentences to improve triple extraction. More recently, neural networks and word embeddings were applied to solve this task (Cui et al., 2018), requiring a general-domain annotated dataset to pretrain the model. Finally, Roy et al. (2019) propose an ensemble method to aggregate the results of multiple OpenRE models. These triple-extraction approaches rely on surface forms, which makes it hard for them to group instances that express the same relation using very different words and syntax.
Although the existing OpenIE-based models Hu et al. (2020); Renze et al. (2021); Marcheggiani and Titov (2016); Tran et al. (2020) extract relations from unannotated datasets, we argue that they are not truly unsupervised approaches. The main problem is hyperparameter tuning: these methods rely extensively on hyperparameters that need to be adjusted, e.g., the number of epochs, regularization, and learning rate. All of these can only be determined from chunks of training data. However, in a real-world scenario it is very difficult to estimate them without access to enough labeled data. Therefore, we argue that the previous methods are not fully unsupervised when it comes to hyperparameter tuning, which, we believe, restricts their application in a real-world setting. Recently, Genest et al. (2022) proposed an unsupervised method that encodes the relations between entities with a relation encoder and then clusters the relation representations. However, they fix the entities before performing this operation, which is also a bit unrealistic. _As a result, it motivates us to define the unsupervised IE setting as learning an IE model and tuning its hyperparameters using unannotated data on-the-fly_.

Figure 1: Our approach for inducing templates through question generation. We first generate factoid questions from a corpus. Afterward, we bleach the questions by replacing entities with NER tags. Finally, we embed the bleached questions and cluster them into groups that should align with schema slot types.
**What do we do and how do we do it?** An information extraction system typically involves defining templates and a set of slot types. Each slot type pertains to a specific semantic role. For example, in Figure 1, the goal is to extract as much information as possible about the event _murder_, such as _who did it?_, _when did it happen?_ or _what things were distributed?_. In supervised IE, the fixed templates and slots are annotated, whereas our goal is to extract information from documents dynamically, on-the-fly, without knowledge of pre-defined templates. To capture IE needs that change over time, we aim to define a way to quickly bootstrap template schemas with zero to minimal supervision. Since the goal of asking a question nicely intersects with defining an information need (Srihari and Li, 2000), we use widely used question-answering systems to refine our dynamic schema. In this section, we describe our method for determining templates for documents automatically (_AI-only_). Our pipeline of automatic template induction comprises the following steps.
### Salient Entity Identification
To find the potential entities (which might be possible slot-fillers for some templates), we make use of both general domain and domain-specific named entities.
### Generating Questions
This paper looks at question generation to induce templates. With advances in neural text generation (Lewis et al., 2020), generating factoid questions has become much easier (Lewis et al., 2021). The models generate questions based on a context passage and an entity mention from the passage. Each generated question inherently describes the information need of an entity mention in the document. Using the same example from Section 2.1, a model may generate the following question about "MRTA": "who distributed leaflets claiming the responsibility for the murder of former defense minister Enrique Lopez Albujar". If we represent "MRTA" with this generated question, we link it to other entities whose generated questions ask "who claimed the responsibility for the murder". A cluster of these questions naturally maps to the _Perpetrator_ slot type. This motivates a question-driven approach to slot filling.
### Bleaching Questions
Bleaching refers to replacing arguments in a statement with placeholders (Chen et al., 2020); the purpose is to remove document-specific information from the statements. This step is important to filter out unwanted information from the question embeddings, so that document-specific information does not get represented. We therefore bleach the generated questions by replacing named entities with [MASK].
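A minimal sketch of this bleaching step (the `bleach` helper and its inputs are our illustration, not the paper's code; in practice the entity mentions come from the NER taggers of Section 4.2.1):

```python
def bleach(question, entity_mentions, mask="[MASK]"):
    """Replace detected entity mentions in a question with a placeholder token.

    Longer mentions are replaced first, so a short mention that is a substring
    of a longer one does not clobber it.
    """
    for mention in sorted(entity_mentions, key=len, reverse=True):
        question = question.replace(mention, mask)
    return question

q = ("who distributed leaflets claiming the responsibility for the murder "
     "of former defense minister Enrique Lopez Albujar")
print(bleach(q, ["Enrique Lopez Albujar"]))  # the entity mention is masked out
```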
| Cluster | Questions |
| --- | --- |
| Cluster 1 | What drug can't be explained by the subacute effects of cocaine?; What emotion did the rc group correctly identify more slowly than the oc group?; What can be caused by coronary vasospasm?; What drug is associated with myocardial ischemia? |
| Cluster 2 | How many patients received coronary cta after a brief observation period?; What is one of the untoward effects of heparin?; What is the dose of ibuprofen?; What type of brain damage is induced by pilocarpine?; What is the name of the condition that occurs when pilocarpine is used? |
| Cluster 3 | What percent of patients with cocaine-associated chest pain had a normal or nonspecific ecg?; Along with ketoconazole and itraconazole, what drug may inhibit the metabolism of mifepristone?; What does pilocarpine cause in rats? |
| Cluster 4 | What drug can't be explained by the subacute effects of cocaine?; What is heparin used to treat for more than 50 years?; What suppressed EGF-mediated protein levels of c-Jun and c-Fos? |
| Cluster 5 | What is one drug that has been shown to increase the Cmax and AUC of midazolam?; What was used to decrease the mRNA levels of RANK?; What is the cyclic response element binding factor? |
| Cluster 6 | What kind of change in patient's treatment need to be done? |
| Cluster 7 | What did the patient's treatment change to focus on? |

Table 1: Output of clustering over 3 biomedical documents produced by _AI-only_ template induction. The clusters appear semantically incoherent, since none of them represents a unique information need.
### Embedding Questions
After we bleach the generated questions, we embed them. In preliminary experiments, we use TF-IDF. A TF-IDF model counts token frequency but downweights tokens that appear too frequently. We extend TF-IDF to TF-IDF-scale, where we upweight a few trigger words by a factor of 10 in the TF-IDF embedding.
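A self-contained sketch of TF-IDF-scale in pure Python (real experiments would typically use scikit-learn's `TfidfVectorizer`; the factor of 10 follows the description above, while the tokenization and weighting details are our simplification):

```python
import math
from collections import Counter

def tfidf_scale(questions, trigger_words=(), scale=10.0):
    """Embed each question as a {token: weight} dict; trigger words are upweighted."""
    docs = [q.lower().split() for q in questions]
    n = len(docs)
    df = Counter(tok for d in docs for tok in set(d))  # document frequency
    vectors = []
    for d in docs:
        tf = Counter(d)
        vec = {}
        for tok, cnt in tf.items():
            w = (cnt / len(d)) * math.log(n / df[tok])  # tf * idf
            if tok in trigger_words:
                w *= scale
            vec[tok] = w
        vectors.append(vec)
    return vectors

qs = ["what drug can increase the cmax ?",
      "what was used to decrease the mrna levels ?",
      "what is the dose of ibuprofen ?"]
base = tfidf_scale(qs)
scaled = tfidf_scale(qs, trigger_words={"increase", "decrease"})
```

Here `scaled[0]["increase"]` is exactly ten times its unscaled weight, which is what pulls trigger-word questions together during clustering.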
### Clustering Questions
After embedding the questions, we use the \(K\)-Means clustering algorithm to group the generated questions into \(k\) clusters. The goal is that each cluster corresponds to some slot type.
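A compact sketch of the clustering step (Lloyd's K-Means over toy 2-D "question embeddings"; in practice one would cluster the TF-IDF vectors, e.g. with scikit-learn's `KMeans`):

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Lloyd's algorithm; returns one cluster label per point."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        # assignment step: nearest center by squared Euclidean distance
        for i, p in enumerate(points):
            labels[i] = min(range(k),
                            key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
        # update step: each center becomes the mean of its members
        for c in range(k):
            members = [p for i, p in enumerate(points) if labels[i] == c]
            if members:
                centers[c] = tuple(sum(col) / len(members) for col in zip(*members))
    return labels

# two obvious groups of "question embeddings"
pts = [(0.0, 0.1), (0.1, 0.0), (5.0, 5.1), (5.1, 4.9)]
labels = kmeans(pts, k=2)
```

With the two well-separated groups above, the algorithm recovers one label per group regardless of the random initialization.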
**Slot Mapping Evaluation:** We then determine the _representative questions_ for each cluster. Based on cosine similarity among the questions, we select the ones that have the highest average similarity with the other questions and choose the top \(k\) questions. For each document, we consider the document-specific questions having high cosine similarity with the mean embedding of a cluster as the "**representative**" questions of that cluster in the document, treat the answers to those questions as predicted answers, and discard the other "**non-representative**" questions. We consider the gold slots to be the relation types or roles of entities annotated in the documents. Based on the representative questions per cluster, we generate the slot answers by choosing the gold slot via nearest-neighbour fuzzy matching. Then we compare the predicted answers of the slot per document per cluster with the gold answers for that slot.
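The nearest-neighbour fuzzy matching between predicted answers and gold fillers can be sketched with the standard library (`difflib`); the slot names and fillers below are invented for illustration:

```python
import difflib

def map_to_gold_slot(predicted_answer, gold_slots):
    """gold_slots: {slot_name: [gold filler strings]}.
    Return the slot whose fillers contain the closest fuzzy match, and the score."""
    best_slot, best_score = None, 0.0
    for slot, fillers in gold_slots.items():
        for filler in fillers:
            score = difflib.SequenceMatcher(
                None, predicted_answer.lower(), filler.lower()).ratio()
            if score > best_score:
                best_slot, best_score = slot, score
    return best_slot, best_score

gold = {"Agreement": ["January 1, 2020"], "Expiry": ["December 31, 2025"]}
slot, score = map_to_gold_slot("1 January 2020", gold)
```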
## 3 Template Induction using InteractiveIE
The automatic template generation method described in Section 2 has a few pitfalls: 1) the representative questions of each cluster are sometimes ill-formed, and too many unwanted questions per document might be considered representative questions of the cluster; 2) some important questions might be treated as non-representative questions of a cluster. Figure 2 shows that after **Step 1**, i.e., when the _AI-only_ method (described in Section 2) generates cluster outputs, a number of questions marked in red make the overall clusters semantically incoherent. We incorporate a proxy human in the loop **(Step 2)** to improve the semantic coherence of each cluster mapping; its actions on doc 1 are provided as supervision to the AI model **(Step 3)**, which in turn makes the clusters semantically coherent, and the changes are also reflected in doc 2, as shown in green **(Step 4)**.
Figure 2: shows an example from CUAD to demonstrate the pipeline of _InteractiveIE_ when a proxy-human is incorporated in-the-loop to improve slot mapping. Here cluster 1 approximates “**Agreement Date**”, cluster 2 approximates “**Countries Participating in Agreement**”, cluster 3 approximates “**Countries Participating in Agreement interpretation**” and cluster 4 approximates “**Agreement Termination Date**”.
### User Interface
In Table 1, we found that the clusters are semantically incoherent. Therefore, we ask humans to perform edits, additions, and deletions of questions in these clusters with the goal of making them _semantically coherent_. To enable users to modify the clusters, we have designed a user interface consisting of two pages that provide two different views of the model, as discussed in Sections 3.1.1 and 3.1.2.
#### 3.1.1 Page1: Cluster View
The goal of the cluster view is to let humans look at each cluster's representative questions and make sense of its semantic representation (similar to the tables shown in Figure 2).
**Possible Human Operations on the Overall Clusters:** The user can perform several types of operations to modify the induced clusters:
**Upweight Words and Recluster:** This feature allows the users to specify words that will have higher weight during clustering. This is analogous to the scaled version of TF-IDF. For instance, for the clusters appearing in Table 1, upweighting words like "increase" and "decrease" through the interface will split Cluster 5 into two parts and pull one question from Cluster 3, with one cluster containing "What is one drug that has been shown to increase the Cmax and AUC of midazolam?", and the other containing "What was used to decrease the mRNA levels of RANK?" and "Along with ketoconazole and itraconazole, what drug may inhibit the metabolism of mifepristone?"
**Merge Clusters:** The users can merge clusters if the representative questions in those clusters entail a similar kind of information. From Table 1 we find that Cluster 6 and Cluster 7 are asking for a similar kind of information, so it is important to merge them. The questions of the human-specified clusters are merged into one, and the representative questions are then updated for the merged cluster. Besides changing the clusters, the users can also perform the following operations to add or modify the representative questions:
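Operationally, merging two clusters is a small manipulation of the cluster-to-questions mapping (a sketch with helper names of our own choosing, using the Cluster 6/7 example from Table 1):

```python
def merge_clusters(clusters, src, dst):
    """Move all questions of cluster `src` into `dst` and drop `src`."""
    merged = {name: list(qs) for name, qs in clusters.items() if name != src}
    merged[dst] = merged.get(dst, []) + list(clusters[src])
    return merged

clusters = {
    "Cluster 6": ["What kind of change in patient's treatment need to be done?"],
    "Cluster 7": ["What did the patient's treatment change to focus on?"],
}
merged = merge_clusters(clusters, src="Cluster 6", dst="Cluster 7")
```

After the merge, the representative questions would be recomputed for the combined cluster, as described above.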
**Edit Representative Questions:** If the representative questions for any cluster do not make sense, the user can either move them around or delete any of them. In this way, they can **edit the representative questions** for more than one cluster at a time.
Figure 3: shows the **document view** of the user interface. This page displays the slots and answers (highlighted in the document with the colour of each slot type) for each slot type pertaining to a biomedical document; it allows the user to move the existing questions from one cluster to another and also to edit and add new questions.
**Add Representative Questions:** If the user wants to **add some representative questions** to a cluster, they are free to do that as well. The new question will be answered from all the relevant documents. The following operations take place at the backend: to quantify relevance, we filter the documents that contain the domain-specific entities present in the question. Next, we use a RoBERTa-based reader to answer the representative questions from those documents and then append the question-answer tuple to that cluster for that particular document.
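The backend's relevance filter (keep only documents that mention the question's domain-specific entities) can be sketched as follows; the RoBERTa reader that then answers the question over these documents is omitted:

```python
def relevant_documents(question_entities, documents):
    """Return the documents that mention every domain-specific entity in the question."""
    return [
        doc for doc in documents
        if all(ent.lower() in doc.lower() for ent in question_entities)
    ]

docs = [
    "Pilocarpine induced brain damage in rats.",
    "Heparin has been used to treat thrombosis for more than 50 years.",
]
hits = relevant_documents(["pilocarpine"], docs)  # only the first document matches
```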
**Feedback to the AI System:** Based on the above-specified human actions, the AI model gets updated in the following ways:
1) The questions are re-clustered according to the same algorithm, and the representative questions will get updated for those clusters.
2) Based on the representative questions, the slots will again be regenerated based on fuzzy matching.
3) Now based on the regenerated slots, slot mapping evaluation is carried out.
#### 3.1.2 Page2: Document View
After looking at the clusters in 3.1.1, the human might want to make the clusters in each document semantically more coherent through the interface (Figure 3).
**Possible Operations in the Document View:** The user can perform several types of operations to modify the induced clusters in each document:
**Modify the position of questions in each cluster:** If a question looks unfitting in a cluster and the user feels it better fits another cluster, they can **recluster the questions**, moving them from the unfitting to the fitting cluster. As shown in Table 1, the user might want to create distinct clusters containing questions like: 1) "How many patients received coronary cta after a brief observation period?", "What percent of patients with cocaine-associated chest pain had a normal or nonspecific ecg?"; 2) "What was used to decrease the mRNA levels of RANK?", "What is the cyclic response element binding factor?", "What suppressed EGF-mediated protein levels of c-Jun and c-Fos?", "Along with ketoconazole and itraconazole, what drug may inhibit the metabolism of mifepristone?"; 3) "What type of brain damage is induced by pilocarpine?", "What is the name of the condition that occurs when pilocarpine is used?", "What does pilocarpine cause in rats?", "What can be caused by coronary vasospasm?", "What is one of the untoward effects of heparin?"; and 4) "What is one drug that has been shown to increase the Cmax and AUC of midazolam?". This reclustering is made easier by an interface with the document shown in front of the user, as in Figure 3. The representative set of that cluster might change, a new slot might be mapped, and hence re-evaluation needs to be done.
**Delete questions in each cluster:** If a question fits none of the clusters, the user can either **delete it or move it from the representative block to the non-representative block**. For instance, in Figure 3, a question like "What is the difference between cn and rc users?" seems unimportant, and it can be either deleted or moved to the non-representative block in the interface. Only the representative questions will be updated, since the unnecessary questions move to the non-representative section, which is not considered while recalculating the representative set of questions.
**Ask questions in each cluster:** If the user feels the need to **ask more questions or edit an existing question**, they can do that too. For instance, in Figure 3, the user might want to ask "By which task were the participants assessed?". Answers to these modified/added questions will be appended along with the new questions and added to the cluster. These questions are then added as representative questions, and new slot mapping and evaluation are performed again.
### Template Induction with a proxy human-in-the-loop
We hypothesize that the F1-score of slot mapping can be improved in a proxy-human setting, where a proxy human uses the same user interface (Section 3) and performs the operations that a human with a reasonable understanding of the domain would perform to make each cluster representative of a particular slot. Since we aim to understand how human-AI collaboration might help improve the performance of information extraction, we designed a proxy-human experiment to test whether human intervention would further improve the performance of the model. As the proxy human, we use the recently released large language model ChatGPT and make it interact with the AI-based baseline (Section 2)
to reach the same goal. Since the inception of ChatGPT, a large volume of studies has highlighted its remarkable performance, which often rivals or even surpasses human capabilities in various tasks and domains. While some studies have shown that trust in AI has been significantly improved by the incorporation of ChatGPT (Ye et al., 2023; Wang et al., 2023), there is a general consensus that humans are still far better than ChatGPT (Koubaa et al., 2023). Due to its impressive capabilities in natural language understanding and generation, researchers are increasingly curious about how ChatGPT achieves such strength and how far it is from human performance (Guo et al., 2023; Mitrovic et al., 2023; Wang et al., 2023).
Therefore, we **prompt ChatGPT in such a way that it mimics two of the human operations that can be performed through the user interface**; in other words, we use ChatGPT instead of real humans to verify our hypothesis that human-AI collaboration might enhance performance. To better elicit knowledge and reasoning from large language models, many prompting methods have been proposed, such as Chain-of-Thought (Wei et al., 2022), Least-to-Most (Zhou et al., 2022), Program-of-Thought (Chen et al., 2022), Program-Aided Prompting (Gao et al., 2022), Maieutic Prompting (Jung et al., 2022), and Self-Ask Prompting (Press et al., 2022), among others. In this method, we augment ChatGPT with the following capabilities via in-context prompting:
#### 3.2.1 Recluster and Slot Mapping Expert
We use a randomly sampled set of 10 training examples from each cluster of the train split of each dataset, mapped to the gold slot, as the prompt. Then we ask the model to infer the slot for a held-out question-answer pair.
For inference, we use the prompt "_Below are the clusters: <include slot mapping Training Examples>. What is the closest cluster in which there are questions like '<Question>'? Answer should be in json format and the key of the json should be within one of the keys among: <include slot names>, also include the confidence score_". This can be used as a proxy to recluster the questions in both the cluster view (Section 3.1.1) and the document-level view (Section 3.1.2). We then use the confidence score to determine whether the question is representative.
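A sketch of assembling this prompt and parsing the model's JSON reply (no API call is shown; the helper names and the confidence threshold are our assumptions):

```python
import json

def build_recluster_prompt(slot_examples, question, slot_names):
    """slot_examples: {slot: [example questions]} shown in-context, as in Sec. 3.2.1."""
    shown = "\n".join(f"{slot}: {qs}" for slot, qs in slot_examples.items())
    return (
        f"Below are the clusters:\n{shown}\n"
        f"What is the closest cluster in which there are questions like '{question}'? "
        f"Answer should be in json format and the key of the json should be within "
        f"one of the keys among: {', '.join(slot_names)}, also include the confidence score."
    )

def parse_reply(reply_text, threshold=0.5):
    """Return (slot, is_representative) from a JSON reply like {"Upregulator": 0.9}."""
    reply = json.loads(reply_text)
    slot, confidence = next(iter(reply.items()))
    return slot, confidence >= threshold

prompt = build_recluster_prompt(
    {"Upregulator": ["What increases the Cmax of midazolam?"]},
    "What drug increases AUC?",
    ["Upregulator", "Downregulator"],
)
slot, keep = parse_reply('{"Upregulator": 0.82}')
```

The parsed confidence is compared against a threshold to decide whether the question counts as representative.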
#### 3.2.2 Add Questions and Slot Mapping Expert
Humans are allowed to ask questions from the document if they feel that some important mention has not been tagged by the model, and hence it has not been mapped to a slot. To mimic this approach, we again prompt ChatGPT to detect the salient mentions given a context passage, and then ask questions about that mention.
We make use of the prompt "_Can you ask questions from the context <context passage> such that each salient mention is present in one question and another salient mention is the answer? Answer should be in the JSON format Question:Answer. Answer should only be the salient mention. Do not include an entire sentence. Here are a few examples of question-answer pairs generated: <Training Examples>_". Finally, we use the same prompt as mentioned in **Recluster and Slot Mapping Expert** to determine the correct slot and then evaluate. In this way, we can generate more questions that can be answered from a document.
## 4 Experimental Setup
In this section, we discuss the datasets and models used to evaluate our _AI-only_ and _Proxy Human-AI Collaboration_ techniques for inducing slots from documents with zero to minimal supervision. For the _AI-only_ approach, we describe the different models used as salient entity identifiers and question generators, along with some unsupervised baseline systems against which we compare our _AI-only_ approach. Even though our goal is to capture essential information on-the-fly, we can only evaluate the success of slot mapping on the annotated templates present in the datasets. For instance, if there are five gold slot types, we evaluate whether one or more clusters represent those slots (based on fuzzy matching), and we compare the answers to the questions contained in the cluster with the gold slot template fillers. We measure **Precision, Recall and F1-score** for each slot type. It is therefore important to improve the semantic coherence of each generated cluster (all questions in a cluster should correspond to a single intent).
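Per-slot scoring can be sketched as follows (treating fillers as sets of exact strings; the actual evaluation aligns predicted answers to gold fillers via fuzzy matching):

```python
def slot_prf(predicted, gold):
    """Precision, recall, and F1 for one slot type over filler strings."""
    pred, ref = set(predicted), set(gold)
    tp = len(pred & ref)  # true positives: fillers found in both sets
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(ref) if ref else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

p, r, f = slot_prf(["cocaine", "heparin"], ["cocaine", "pilocarpine"])
# p = r = f = 0.5: one of two predictions is correct, one of two gold fillers is found
```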
### Datasets
We discuss the two domain-specific IE datasets that we use and explain why they are a good choice for quickly bootstrapping templates in real-world settings.
**Biomedical Slot Filling (Papanikolaou et al., 2022):** Each instance contains a <Subject, Relation, Object> triple as well as the text where it was found; thus we can easily transform the instances into a question-answering-like format for slot filling. We consider only 5 slot types here: "_Upregulator_" (**Upreg**), "_Downregulator_" (**Downreg**), "_Cause_", "_Interacts with_", and "_Regulator_". This dataset is useful for extracting important relations/slots quickly in a public health emergency, as it contains different types of relations between drugs, diseases, and other medical entities.
**CUAD Legal Dataset (Hendrycks et al., 2021):** It is a contract review Question-Answering corpus on 510 commercial legal contracts that have been labeled by experienced lawyers to identify 41 types of legal clauses. All factoid questions are converted into slot-filler templates to make this dataset suitable for the slot-filling task. We mainly consider those question-answer pairs which are extractive spans in the document: "_On what date is the contract is effective?_" (**Effective**), "_On what date will the contract's initial term expire?_" (**Expiry**), "_What is the date of the contract?_" (**Agreement**), "_What is the notice period required to terminate renewal?_" (**Termination**), "_What is the name of the contract?_" (**Name**)
### Experimental Setup for AI-Only method
#### 4.2.1 Salient Entity Identifiers
For extracting salient entities to be recognized as answers for the generated questions, as described in sections 2.1 and 2.2, we make use of both domain-specific and general-purpose entity recognizers. For tagging biomedical entities, we use two pre-trained NER models: _en-bc5cdr-trained_ and _en-scibert-trained_. We use CONLL-based tags (PERSON, LOCATION, DATES) from spaCy and a pre-trained Legal-BERT (Chalkidis et al., 2020) for tagging domain-specific entities in legal contracts.
#### 4.2.2 Question Generators
For the answer-aware question generation process, we use off-the-shelf T5² and BART³ models to generate questions.
Footnote 2: [https://huggingface.co/mrm8488/t5-base-finetuned-question-generation-ap](https://huggingface.co/mrm8488/t5-base-finetuned-question-generation-ap)
Footnote 3: [https://huggingface.co/voidful/bart-eqg-question-generator](https://huggingface.co/voidful/bart-eqg-question-generator)
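A sketch of how such an answer-aware generator might be invoked; the `answer: … context: …` input format is an assumption based on common answer-aware T5 question-generation fine-tunes and should be verified against the checkpoint's model card:

```python
def build_qg_input(answer: str, context: str) -> str:
    # Serialize an (answer, context) pair into the text-to-text format
    # typically expected by answer-aware QG fine-tunes (assumed format).
    return f"answer: {answer}  context: {context}"

# With the transformers library installed, generation would look like:
# from transformers import pipeline
# qg = pipeline("text2text-generation",
#               model="mrm8488/t5-base-finetuned-question-generation-ap")
# question = qg(build_qg_input("thrombosis", sentence))[0]["generated_text"]
```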
### Comparison with the baselines
**Unsupervised OpenIE:** We use Stanford Open Information Extraction (OpenIE) (Angeli et al., 2015) to extract triples from the documents and map each triple to the nearest slot based on a fuzzy-matching technique.
**Random Clustering of Questions:** After automatically generating questions pivoting on the identified entities, we group the questions randomly into \(k\) different clusters.
**TF-IDF Clustering of Questions (AI-only TF):** After automatically generating questions pivoting on the identified entities, we embed the questions using TF-IDF and group them into \(k\) different clusters.
**TF-IDF Clustering of Questions + Bleaching (AI-only TF+bleach):** Before embedding the questions with TF-IDF, we replace the named entities with a [MASK] token so that entity-specific information does not propagate into the semantic embedding of the questions.
**TF-IDF Scaled Clustering of Generated Questions (AI-only TF+bleach+scaled):** Here we embed the questions using TF-IDF, scale a few
| | Upreg | Downreg | Cause | Interacts with | Regulator |
| --- | --- | --- | --- | --- | --- |
| _T5+BC5CDR_ | | | | | |
| OpenIE (Angeli et al., 2015) | 0 | 0.13 | 0.12 | 0.13 | 0.08 |
| Random | 0.24 | 0.45 | 0.15 | 0.34 | 0.74 |
| AI-only | 0.42 | 0.56 | 0.74 | 0.54 | **0.77** |
| AI-only+bl | 0.41 | 0.56 | 0.73 | 0.52 | 0.74 |
| AI-only+bl+sc | 0.41 | 0.61 | 0.78 | 0.48 | 0.76 |
| _T5+SciBERT_ | | | | | |
| Random | 0.14 | 0.22 | 0.56 | 0.21 | 0.22 |
| AI-only | 0.42 | 0.56 | 0.74 | 0.54 | 0.77 |
| AI-only+bl | 0.40 | 0.45 | 0.67 | 0.50 | 0.74 |
| AI-only+bl+sc | 0.41 | 0.63 | 0.78 | 0.47 | 0.76 |
| _BART+BC5CDR_ | | | | | |
| Random | 0.27 | 0.11 | 0.13 | 0.53 | 0.11 |
| AI-only | 0.43 | 0.56 | 0.41 | 0.55 | 0.70 |
| AI-only+bl | 0.42 | **0.68** | 0.77 | **0.58** | 0.77 |
| AI-only+bl+sc | 0.43 | 0.65 | **0.79** | 0.55 | 0.76 |
| _BART+SciBERT_ | | | | | |
| Random | 0.33 | 0.49 | 0.06 | 0.03 | 0.41 |
| AI-only | **0.44** | 0.53 | 0.41 | 0.54 | 0.70 |
| AI-only+bl | 0.42 | 0.66 | 0.77 | **0.58** | **0.77** |
| AI-only+bl+sc | 0.43 | 0.65 | **0.79** | **0.58** | 0.76 |

Table 2: F1-scores of slots mapped by the Unsupervised OpenIE baseline, Random baselines and the _AI-Only_ template induction methods on the Biomedical Slot Filling dataset. We perform an ablation analysis by choosing either _en-bc5cdr-trained_ (BC5CDR) or _en-scibert-trained_ (SciBERT) for identifying the biomedical entities, and generating questions with either the T5 or the BART question generator. We refer to bleaching as _bl_ and word upweighting as _sc_. The best F1 scores for each slot are highlighted.
trigger words from documents (frequent verbs) and group them into \(k\) clusters.
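Putting the bleaching, TF-IDF embedding, trigger-word upweighting and clustering steps together (the scale factor of 3.0, the trigger list, and the helper names are our own illustrative choices, not the paper's implementation):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

def bleach(question, entities):
    # Bleaching: mask entity mentions so entity-specific tokens do not
    # dominate the semantic embedding of the question.
    for ent in entities:
        question = question.replace(ent, "[MASK]")
    return question

def cluster_questions(questions, k, trigger_words=(), scale=3.0, seed=0):
    # TF-IDF embedding with optional upweighting of trigger-word columns
    # (frequent verbs such as "effective" or "terminate").
    vec = TfidfVectorizer()
    X = vec.fit_transform(questions).toarray()
    for word in trigger_words:
        col = vec.vocabulary_.get(word)
        if col is not None:
            X[:, col] *= scale
    return KMeans(n_clusters=k, random_state=seed, n_init=10).fit_predict(X)
```

On bleached questions, upweighting the trigger terms pulls questions with the same intent (e.g. asking about the effective date) into the same cluster even when their surface forms differ.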
### Experimental Setup for _InteractiveIE_
For making ChatGPT behave as a **Recluster and Slot Mapping Expert**, we prompt it with training examples from the train split of both the biomedical and legal datasets. This approach is widely termed _in-context learning_. We run the same NER + question generator pipeline on the train split and map each question to the slot marked as gold in the training dataset. We then manually verify 10 examples mapped to each cluster, giving 50 examples in total for 5 different slots. In addition, we provide 10 human-annotated examples for making ChatGPT behave as an automatic question generator from the documents.
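The few-shot prompting step can be sketched as follows; the prompt wording is illustrative and not the exact prompt used:

```python
def build_recluster_prompt(examples, question, answer):
    # Few-shot prompt for using an LLM as a "Recluster and Slot Mapping
    # Expert": verified (question, answer, slot) triples from the train
    # split precede the new pair whose slot the model must predict.
    lines = ["Map each question-answer pair to one slot type."]
    for q, a, slot in examples:
        lines.append(f"Q: {q}\nA: {a}\nSlot: {slot}")
    lines.append(f"Q: {question}\nA: {answer}\nSlot:")
    return "\n\n".join(lines)
```

The returned string would then be sent to the chat model, whose completion after the final `Slot:` gives the predicted slot type.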
## 5 Results and Analysis
Tables 2 and 3 show the performance of our proposed _AI-Only_ template induction method compared to the existing unsupervised baseline systems for biomedical and legal documents, respectively. Our proposed _AI-Only_ method consistently outperforms the existing unsupervised baselines in determining the slot types: on average, it beats the random baseline by 25 points in F1-score and Unsupervised OpenIE by 42 points. This observation holds for both legal and biomedical documents.
**Choice of NER plays an important role in template induction.** While the biomedical NER BC5CDR tags only diseases and drugs, the SciBERT NER extracts all other necessary scientific mentions from the text. We separately calculated the precision, recall and F1-score between the NER-predicted mentions and the gold slot mentions. (For example, in "_Which disease is caused by the excessive intake of cocaine?_", if the answer is **myocardial infarction**, the predicted mentions are _cocaine, myocardial infarction_ and the gold triple is <cocaine, cause, myocardial infarction>, then we count this as 100% NER tagging accuracy.) For all the results shown in Table 2, SciBERT outperforms BC5CDR, because the prediction of extra information boosts performance. For instance, in the sentence "_heparin_, _first used to prevent the clotting of blood in vitro, has been clinically used to treat **thrombosis** for more than 50 years._", the drug **heparin** and the disease **thrombosis** are tagged by the biomedical BC5CDR NER, whereas **clotting of blood** is an extra tag obtained by the SciBERT NER. This mention is an important answer mapped to the slot **Cause**. For a similar reason, CONLL tags are better at capturing slots in the legal domain, as shown in Table 3.
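The NER tagging accuracy described above can be sketched as a mention-coverage score (the helper name and the lowercased exact-match criterion are our own simplifications):

```python
def mention_coverage(predicted_mentions, gold_mentions):
    # Fraction of gold slot mentions recovered by the NER tagger; in the
    # example above, predicting both "cocaine" and "myocardial infarction"
    # for the triple <cocaine, cause, myocardial infarction> scores 1.0.
    pred = {m.lower() for m in predicted_mentions}
    hits = sum(1 for g in gold_mentions if g.lower() in pred)
    return hits / len(gold_mentions) if gold_mentions else 0.0
```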
**Difficulty of slot prediction varies significantly across the two domains.** In the biomedical dataset, the most challenging slot types are _Upregulator_ (0.44) and _Interacts with_ (0.58), as evident from the performance of the best-performing models, while slot types like _Downregulator_ (0.68), _Cause_ (0.79) and _Regulator_ (0.77) are noticeably easier to map correctly. In the legal contracts, even though the slot types _Agreement_, _Effective_, _Term Expiry_ and _Termination_ are all date types, they exhibit varying difficulty levels; for instance, _Agreement Date_ is comparatively harder to map than the other date types. As expected, capturing the contract name is much easier than the other types of slots.
**Scaling with domain-specific words boosts performance.** As evident from Tables 2 and 3, scaling influences the prediction in most cases. Without scaling, the TF-IDF model upweights other entities which do not act as triggers for improving slot prediction. For instance, triggers such as "**ef
| | Agreement | Effective | Expiry | Termination | Name |
| --- | --- | --- | --- | --- | --- |
| _T5+CONLL_ | | | | | |
| OpenIE (Angeli et al., 2015) | 0.13 | 0.12 | 0.13 | 0.08 | |
| Random | 0.16 | 0.27 | 0.10 | 0.01 | 0.05 |
| AI-only | 0.41 | 0.50 | 0.31 | 0.61 | 0.35 |
| AI-only+bl | 0.41 | 0.56 | 0.73 | 0.52 | 0.74 |
| AI-only+bl+sc | 0.41 | 0.61 | 0.77 | 0.66 | 0.76 |
| _T5+LegalBERT_ | | | | | |
| Random | 0.25 | 0.33 | 0.01 | 0.05 | 0.15 |
| AI-only | **0.42** | 0.43 | 0.31 | 0.60 | 0.66 |
| AI-only+bl | 0.30 | 0.48 | 0.77 | 0.58 | 0.80 |
| AI-only+bl+sc | 0.31 | 0.41 | 0.76 | 0.58 | 0.81 |

Table 3: F1-scores of slots mapped by the Unsupervised OpenIE baseline, Random baselines and the _AI-Only_ template induction methods on the CUAD dataset. We perform an ablation analysis by choosing either _LegalBERT_ or _CONLL_ for identifying entities, and generating questions with either the T5 or the BART question generator. We refer to bleaching as _bl_ and word upweighting as _sc_. The best F1 scores for each slot are highlighted.
fective**", "**signed**", "**parties**", "**end**" and "**terminate**" made the clusters semantically more coherent with _AI-Only_+Scaling in the legal domain, whereas in biomedical documents, "**increase**", "**decrease**" and "**treat**" had a positive impact.
**How does the proxy-human performance change with a varying number of actions?** We investigate how much slot-mapping performance can be improved in a proxy-human setting, where the proxy-human uses the same user interface (Section 3) and performs the operations that a human with a reasonable understanding of the task would use to make each cluster semantically coherent.
**Slot mapping improves with an increasing number of recluster operations by the proxy-human.** Figure 4 shows how the performance of each biomedical slot mapping improves with an increasing number of user actions. With 0 actions, the slot-mapping performance is the same as that obtained by the automatic slot mapping expert. We then ask ChatGPT to randomly select a few documents and perform edits on those. We count the total number of edits made and plot the F1-scores of slot mapping after all the edits; in other words, the proxy-human performance is measured in terms of F1-score after making 5, 10, 15 and 20 edits. On average, the F1-score of slot mapping improves with the number of actions.
**What influences the F1-score improvement after proxy-human actions?** We took a closer look to investigate the reasons for the F1-score improvement from the actions taken by the proxy-human over time.
**Slot mapping further improves with an increasing number of recluster and add-question operations by the proxy-human.** First, we observe that when the proxy-human reclusters and remaps the slots in each document, the overall F1-score improves over the AI-only method. However, during the analysis of our AI-only method, we observed that a large number of biomedical mentions are not identified; humans are able to detect those mentions. Moreover, the questions generated by T5 or BART are sometimes ill-formed, whereas ChatGPT as a proxy-human can ask better questions. Hence the slot-mapping performance of the **Add Questions and Slot Mapping Expert** improves over that of the **Recluster and Slot Mapping Expert**.
## 6 Conclusion
This paper shows how to quickly bootstrap templates from documents in a real-world setting where template slots must be induced with minimal supervision. We use automatic question generation to induce template slots initially. To explore how a minimal amount of human-in-the-loop supervision helps, we use ChatGPT as a _collaborative agent_ alongside the question-answering-driven IE model, and we find that it can serve as a competitive stand-in for expert humans in improving information extraction in a real-world setup. Beyond IE, many real-world tasks such as question answering and natural language inference can be improved by leveraging complementary human-AI strengths. We hope that our work will motivate future research on achieving human-AI complementarity on IE datasets and beyond.
|
2310.10689 | Contrastive Self-Supervised Learning for Spatio-Temporal Analysis of
Lung Ultrasound Videos | Self-supervised learning (SSL) methods have shown promise for medical imaging
applications by learning meaningful visual representations, even when the
amount of labeled data is limited. Here, we extend state-of-the-art contrastive
learning SSL methods to 2D+time medical ultrasound video data by introducing a
modified encoder and augmentation method capable of learning meaningful
spatio-temporal representations, without requiring constraints on the input
data. We evaluate our method on the challenging clinical task of identifying
lung consolidations (an important pathological feature) in ultrasound videos.
Using a multi-center dataset of over 27k lung ultrasound videos acquired from
over 500 patients, we show that our method can significantly improve
performance on downstream localization and classification of lung
consolidation. Comparisons against baseline models trained without SSL show
that the proposed methods are particularly advantageous when the size of
labeled training data is limited (e.g., as little as 5% of the training set). | Li Chen, Jonathan Rubin, Jiahong Ouyang, Naveen Balaraju, Shubham Patil, Courosh Mehanian, Sourabh Kulhare, Rachel Millin, Kenton W Gregory, Cynthia R Gregory, Meihua Zhu, David O Kessler, Laurie Malia, Almaz Dessie, Joni Rabiner, Di Coneybeare, Bo Shopsin, Andrew Hersh, Cristian Madar, Jeffrey Shupp, Laura S Johnson, Jacob Avila, Kristin Dwyer, Peter Weimersheimer, Balasundar Raju, Jochen Kruecker, Alvin Chen | 2023-10-14T17:53:44Z | http://arxiv.org/abs/2310.10689v1 | # Contrastive self-supervised learning for spatio-temporal analysis of lung Ultrasound videos
###### Abstract
Self-supervised learning (SSL) methods have shown promise for medical imaging applications by learning meaningful visual representations, even when the amount of labeled data is limited. Here, we extend state-of-the-art contrastive learning SSL methods to 2D+time medical ultrasound video data by introducing a modified encoder and augmentation method capable of learning meaningful spatio-temporal representations, without requiring constraints on the input data. We evaluate our method on the challenging clinical task of identifying lung consolidations (an important pathological feature) in ultrasound videos. Using a multi-center dataset of over 27k lung ultrasound videos acquired from over 500 patients, we show that our method can significantly improve performance on downstream localization and classification of lung consolidation. Comparisons against baseline models trained without SSL show that the proposed methods are particularly advantageous when the size of labeled training data is limited (e.g., as little as 5% of the training set).
Li Chen*1, Jonathan Rubin*+1, Jiahong Ouyang1, Naveen Balaraju1, Shubham Patil1, Courosh Mehanian2, Sourabh Kulhare2, Rachel Millin2, Kenton W. Gregory3, Cynthia R. Gregory3, Meihua Zhu3, David O. Kessler4, Laurie Malia4, Almaz Dessie4, Joni Rabiner4, Di Coneybeare4, Bo Shopsin5, Andrew Hersh6, Cristian Madar7, Jeffrey Shupp8, Laura S. Johnson8, Jacob Avila9, Kristin Dwyer10, Peter Weimersheimer11, Balasundar Raju1, Jochen Kruecker1, Alvin Chen1
## 2 Related Work
The application of SSL for visual tasks has primarily been reported on natural images. Recent works [4, 5, 6] have shown that well-trained SSL models are competitive with models trained via full supervision. State-of-the-art SSL techniques include contrastive methods utilizing positive-negative pairs (SIMCLR [8] and MoCo [9]); contrastive learning based on asymmetry (BYOL [7] and SIMSAM [10]); learning visual pretext tasks ([13] and [14]); and learning via redundancy-reduction (Barlow Twins [11] and W-MSE [12]).
SSL methods have been demonstrated on medical images, including ultrasound, for example by leveraging supervision from radiological follow-up scans [15] or reconstructing high-resolution ultrasound images from high- and low-resolution pairs [16]. SSL has also been applied to image synthesis, for example to learn mappings from ultrasound to MR by assuming a shared representation in latent space [17]. More closely related to this work, a self-supervised model was trained to learn visual representations by correcting the order of reshuffled fetal ultrasound videos containing limited numbers of frames per scan and predicting the geometric transformation applied to the videos [18]. Finally, fetal ultrasound imagery was used to train a 2D self-supervised model based on context restoration to facilitate downstream classification, localization, and segmentation [19].
Unlike many of the existing SSL methods applied to medical data, contrastive learning methods, as in [7, 8], do not require specific constraints on the input data, such as needing follow-up scans [15], positive-negative pairs [16, 17], or very short videos (e.g., less than one cardiac cycle) [18]. Instead, contrastive methods leverage asymmetry in the learning update resulting from paired augmentations. For this work, contrastive SSL methods were adapted for 2D+time video through the introduction of domain-specific spatio-temporal augmentations appropriate for medical ultrasound.
## 3 Methods
### Data
An extensive retrospective, multi-center clinical dataset of 27,063 lung ultrasound videos was used in this work (Table 1). The data were acquired from 528 patients with suspected lung consolidation or other related pathology (e.g., pneumonia, pleural effusion) at 8 U.S. clinical sites between 2017 and 2020. The videos were at least 3 seconds in length and contained at least 60 frames.
To assess model classification performance, 1669 videos were annotated for presence or absence of lung consolidation. Annotation was carried out by a multi-center team of expert physicians with training in lung ultrasound. Each ultrasound video was annotated by two experts and adjudicated by a third expert when disagreement between the first two experts occurred. The annotated videos were then divided at patient level into training, validation, and test sets. The remaining 25,394 videos served as unlabeled data for SSL training.
### Proposed contrastive self-supervised learning method
The proposed contrastive SSL method is shown in Fig. 1. The first step is to generate meaningful visual representations of the ultrasound video so that the visual representations can be used in downstream tasks. In this work, we adopted BYOL ([7]) as a state-of-the-art asymmetry-based contrastive method to learn meaningful visual representations. The success of the method depends on the proper application of two different spatio-temporal augmentation instantiations applied to the same input video during training. The spatio-temporal augmentation parameters are detailed in Table 2.
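A sketch of how two augmentation instantiations of the same clip can be produced (the specific operations shown here, a random temporal crop plus a random spatial crop and horizontal flip, and their parameters are illustrative; the paper's full augmentation set is given in its Table 2):

```python
import numpy as np

def augment_view(video, clip_len=16, crop=0.8, rng=None):
    # One spatio-temporal augmentation instantiation for a grayscale
    # ultrasound clip of shape (frames, height, width).
    if rng is None:
        rng = np.random.default_rng()
    t, h, w = video.shape[:3]
    t0 = rng.integers(0, t - clip_len + 1)          # random temporal crop
    ch, cw = int(h * crop), int(w * crop)
    y0 = rng.integers(0, h - ch + 1)                # random spatial crop
    x0 = rng.integers(0, w - cw + 1)
    view = video[t0:t0 + clip_len, y0:y0 + ch, x0:x0 + cw]
    if rng.random() < 0.5:                          # random horizontal flip
        view = view[:, :, ::-1]
    return view

def paired_views(video, **kw):
    # Two independent augmentations of the same input video, as required
    # by the contrastive objective.
    return augment_view(video, **kw), augment_view(video, **kw)
```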
During training, augmented videos are passed through two neural networks, an online network and a target network. The online network has a 3D encoder (a modified Darknet-53 [20] with 3D convolutions as the backbone to support video input) to generate visual representations, and a projection head (a multilayer perceptron with one hidden layer of dimension 4096) to project the embedding features for computation of the loss function. The target network has the same backbone structure as the online network, but its weights are an exponential moving average (EMA, with a momentum update ratio of 0.99) of the online network parameters instead of being backpropagated from later layers.
| **Dataset details** | **N** |
| --- | --- |
| Number of sites | 8 |
| Number of patients | 528 |
| Total videos | 27,063 |
| Unlabeled videos | 25,394 |
| Videos labeled with lung consolidation (train/val/test set) | 1,669 (1,296/120/253) |

Table 1: Retrospective, multi-center lung ultrasound dataset.
Figure 1: Flowchart of proposed contrastive SSL method.
The parameters of the online network are trained to maximize agreement between the embedding features from both augmented videos. The parameters of the target network are then updated by EMA. After training, the projection head is discarded, and the encoder and its visual representation are used for downstream tasks (e.g., video classification). The learning rate was set to 3e-4 for all experiments.
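The agreement objective and the EMA update can be sketched in NumPy as follows (this shows one direction of BYOL's symmetrized loss on already-computed embeddings; the function names are our own):

```python
import numpy as np

def byol_loss(online_pred, target_proj):
    # Negative-cosine-style agreement term between the online network's
    # prediction and the target network's projection (stop-gradient on
    # the target side), averaged over the batch.
    p = online_pred / np.linalg.norm(online_pred, axis=1, keepdims=True)
    z = target_proj / np.linalg.norm(target_proj, axis=1, keepdims=True)
    return float(np.mean(2.0 - 2.0 * np.sum(p * z, axis=1)))

def ema_update(target_params, online_params, momentum=0.99):
    # Exponential moving average update of the target network, using the
    # momentum update ratio of 0.99 quoted above.
    return [momentum * t + (1.0 - momentum) * o
            for t, o in zip(target_params, online_params)]
```

When the two embeddings agree perfectly the loss is zero, and the target parameters drift slowly toward the online parameters.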
### Self-supervised learning for saliency map generation
As a downstream task, we use the encoder network from SSL training to generate meaningful saliency maps, which tend to highlight the regions of each video frame that are likely to contain pathology. In this work, we used one of the popular saliency map generation methods, the Occlusion algorithm [20], on the encoder neural network, although other saliency map generation methods could be chosen.
To quantify the overlap of saliency map with ground-truth bounding boxes defining regions of pathology (lung consolidation), we use a weighted IOU as the evaluation metric. For this, a threshold mask is applied on the saliency map to retain the top 10% of image pixels based on intensity. Minimum-encompassing prediction boxes are then generated for each connected foreground region and compared with the ground-truth boxes to assess pathology detections.
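The top-10% thresholding and box-overlap steps can be sketched as follows (plain IOU is shown; the paper's weighting of the IOU and the connected-component box extraction are omitted):

```python
import numpy as np

def top_fraction_mask(saliency, frac=0.10):
    # Keep the top `frac` of pixels by intensity -- the top-10% threshold
    # applied before forming prediction boxes from connected regions.
    thresh = np.quantile(saliency, 1.0 - frac)
    return saliency >= thresh

def box_iou(a, b):
    # Plain IOU between two (x0, y0, x1, y1) boxes.
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0
```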
### Self-supervised learning for video classification
We also applied the SSL trained encoder network to the downstream task of video classification. To achieve this, we appended a fully-connected layer (with number of neurons equals to the number of classes) to the visual representation layer within the encoder neural network. The fully-connected layer is subsequently trained on a (smaller) labeled dataset, i.e., via traditional supervised learning.
When training the fully-connected classification layer, the weights of the SSL pre-trained encoder backbone may either remain fixed or allowed to update. We evaluated both approaches in our study. That is, we compared the performance of a classifier in which only the fully-connected layer was tuned based on labeled data ("SSL Feature Extractor") to a classifier in which both the pre-trained encoder network and the fully-connected layer were tuned based on labels ("SSL Fine-Tuned").
We also compared classification performance against an equivalent fully-supervised baseline model without SSL pretraining ("Fully-Supervised"), i.e., initialized with random weights and trained entirely based on labeled data. Finally, to evaluate the importance of the feature extraction layers (encoder network) relative to the fully connected classification layers, we show the results of a naive model with a fixed, random encoder where only the fully-connected layer is trainable ("Random Feature Extractor").
### Saliency map generation
Representative examples of SSL-generated saliency maps on lung ultrasound videos containing regions of pathology (lung consolidation) are shown in Fig. 2. For these experiments, the SSL models were pre-trained on the 25,394 unlabeled videos and then trained on the 1,296 annotated training videos with a fixed ("SSL Feature Extractor") or trainable ("SSL Fine-Tuned") backbone. Baseline models without SSL were trained using only labeled data. As seen in Fig. 2, saliency maps generated by the "Random Feature Extractor" and "Fully-Supervised" models are noisy and cannot clearly localize regions of pathology. In contrast, saliency maps generated by the proposed SSL methods are more specific to the pathology.
### Fractional training with limited labeled data
Fig. 3 compares baseline ("Fully-Supervised") and proposed ("SSL Feature Extractor" and "SSL Fine-Tuned") models with decreasing proportional amounts of labeled training data. Specifically, we incrementally reduced the training set from 100% (all 1,296 annotated training videos included) to 5% (65 annotated videos randomly selected for training).
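The fractional-training setup can be sketched as a seeded random subsample of the labeled set (the function name and seed are illustrative):

```python
import numpy as np

def subsample_training_set(video_ids, fraction, seed=0):
    # Randomly retain a fraction of the labeled training videos, e.g.
    # fraction=0.05 keeps 65 of the 1,296 annotated training videos.
    rng = np.random.default_rng(seed)
    n = max(1, round(len(video_ids) * fraction))
    return list(rng.choice(video_ids, size=n, replace=False))
```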
When the amount of labeled training data is sufficient, the baseline "Fully-Supervised" model shows comparable performance to the SSL-based models, and the effect of pre-training with unlabeled data is diminished (0.82 vs 0.84 accuracy, 0.91 vs 0.92 AUC).
On the other hand, when the labeled training set is reduced, the effect of SSL pre-training becomes evident. In particular, we observe that when the proportion of labeled training data falls below 30% of the initial training set size, the accuracy and AUC of the baseline "Fully-Supervised" model decrease dramatically. In contrast, the SSL-based models maintain consistent accuracy and AUC throughout the low data regime.
## 7 Conclusions
In summary, we extend state-of-the-art contrastive learning SSL methods to 2D+time medical ultrasound video data by introducing a modified encoder and augmentation method to learn meaningful spatio-temporal representations, without added constraints on the input data. We applied the method to the clinically relevant task of video classification of lung consolidations in ultrasound. The results of the study suggest that the proposed SSL methods 1) learn more informative visual representations (saliency maps); 2) outperform baseline models trained without self-supervision; and 3) demonstrate consistent performance even when labeled training data are extremely limited.
Figure 3: Effect of labeled dataset size on supervised versus self-supervised model performance. When the proportion of labeled training data falls below 30% of the combined labeled and unlabeled training size, the accuracy and AUC of the baseline supervised models decreases dramatically. In contrast, the proposed SSL-based models maintain consistent performance throughout the low data regime.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline
**Model** & **Trainable parameters (out of 133,880 total)** & **Accuracy** & **Sensitivity** & **Specificity** & **AUC** \\ \hline
**SSL feature extractor** (Proposed method with SSL algorithm [7]) & 38 & 0.78 & 0.69 & 0.86 & 0.86 \\
**SSL fine-tuned** (Proposed method with SSL algorithm [7]) & 133,880 & 0.84 & 0.74 & 0.92 & 0.91 \\
**SSL feature extractor** (Proposed method with alternative SSL algorithm [8]) & 38 & 0.76 & 0.55 & 0.94 & 0.87 \\
**SSL fine-tuned** (Proposed method with alternative SSL algorithm [8]) & 133,880 & 0.79 & 0.68 & 0.89 & 0.89 \\ \hline \hline
\end{tabular}
\end{table}
Table 3: Performance comparison between two adapted contrastive learning SSL algorithms applied on lung ultrasound data.
## 8 Acknowledgments
We would like to acknowledge the contributions from the following people for their efforts in data curation and annotations: Zohreh Laverriere, Xinliang Zheng (Lia), Annie Cao, Katelyn Hostetler, Yuan Zhang, Amber Halse, James Jones, Jack Lazar, Devjani Das, Tom Kennedy, Lorraine Ng, Penelope Lema, Nick Avitabile.
|
2307.13997 | Adiabatic Cooper pair splitter | Recent experiments have observed Cooper pair splitting in quantum dots
coupled to superconductors, and efficient schemes for controlling and timing
the splitting process are now called for. Here, we propose and analyze an
adiabatic Cooper pair splitter that can produce a regular flow of
spin-entangled electrons in response to a time-dependent and periodic gate
voltage. The splitting process is controlled by moving adiabatically back and
forth along an avoided crossing between the empty state and the singlet state
of two quantum dots that are coupled to a superconductor, followed by the
emission of the split Cooper pairs into two normal-state drains. The scheme
does not rely on fine-tuned resonance conditions and is therefore robust
against experimental imperfections in the driving signal. We identify a range
of driving frequencies, where the output currents are quantized and
proportional to the driving frequency combined with suppressed low-frequency
noise. We also discuss the main sources of cycle-missing events and evaluate
the statistics of electrons emitted within a period of the drive as well as the
distribution of waiting times between them. Realistic parameter estimates
indicate that the Cooper pair splitter can be operated in the gigahertz regime. | Fredrik Brange, Riya Baruah, Christian Flindt | 2023-07-26T07:08:59Z | http://arxiv.org/abs/2307.13997v2 | # Adiabatic Cooper Pair Splitter
###### Abstract
Recent experiments have observed Cooper pair splitting in quantum dots coupled to superconductors, and efficient schemes for controlling and timing the splitting process are now called for. Here, we propose and analyze an adiabatic Cooper pair splitter that can produce a regular flow of spin-entangled electrons in response to a time-dependent and periodic gate voltage. The splitting process is controlled by moving back and forth along an avoided crossing between the empty state and the singlet state of two quantum dots that are coupled to a superconductor, followed by the emission of the split Cooper pairs into two normal-state drains. The scheme does not rely on fine-tuned resonance conditions and is therefore robust against experimental imperfections in the driving signal. We identify a range of driving frequencies, where the output currents are quantized and proportional to the driving frequency combined with suppressed low-frequency noise. We also discuss the main sources of cycle-missing events and evaluate the statistics of electrons emitted within a period of the drive as well as the distribution of waiting times between them. Realistic parameter estimates indicate that the Cooper pair splitter can be operated in the gigahertz regime.
_Introduction.--_ Cooper pair splitters are experiencing a surge of interest as several promising experiments have brought the field closer to the ultimate goal of detecting and exploiting the non-local entanglement of split Cooper pairs [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25]. Recently, Cooper pair splitting has been observed with charge detectors [16; 20] and dispersive readout [23], correlations between spin currents have been measured [21], and Cooper pair splitters with triplet pairing have been realized [22; 24]. The thermoelectric properties of Cooper pair splitters have also been explored in theory [26; 27; 28; 29] and experiment [15]. Cooper pair splitters have been implemented in a variety of architectures based on nanowires [3; 5; 8; 9; 11; 14; 18; 19; 21; 22; 23], carbon nanotubes [4; 6; 7; 13], graphene [10; 12; 15; 17], semiconductor quantum dots [23], and metallic islands [16; 20]. Very recently, setups with several quantum dots and superconductors have also been experimentally realized [25].
These experimental advances have reduced the gap between experiment and theory, and several theoretical ideas for future experiments may soon be within reach. As an example, the distribution of waiting times was already measured following a recent suggestion [16; 30]. There are also proposals for observing the entanglement of the split Cooper pairs by either violating a Bell inequality [31; 32; 33; 34] or by using an entanglement witness formulated in terms of cross-correlation measurements of the outgoing spin currents [35; 36; 37]. In addition, several Cooper pair splitters may be combined to create a Kitaev chain with Majorana bound states forming at the ends [22; 38; 39; 40; 25]. Moreover, while experiments have focused on static devices, there are also proposals to control the splitting of Cooper pairs using time-dependent drives [41; 42]. In this context, it is an open question how one should design the driving scheme in the best way.
In this Letter, we propose and analyze an adiabatic Cooper pair splitter that operates by driving two quantum dots coupled to a superconductor back and forth along an avoided crossing between the empty state and the singlet state of the quantum dots, see Fig. 1. Each time the dots are filled by a split Cooper pair from the superconductor, the system is taken back to the empty state as the electrons are emitted into the drain electrodes, and the process can repeat. The avoided crossing occurs because of the coupling to the superconductor, and the driving can be implemented with an external
Figure 1: Adiabatic Cooper pair splitter. (a) The device consists of a nanowire with gate-defined quantum dots coupled to a superconductor (\(S\)). The amplitude for Cooper pair splitting is denoted by \(\gamma\), while \(\Gamma\) is the rate at which electrons are emitted into the normal-state electrodes (\(N\)). A time-dependent gate voltage, \(V_{g}(t)\), is used to control the left quantum dot level. (b) The superconducting gap is denoted by \(\Delta\), and \(\varepsilon_{L/R}\) are the tunable level positions. (c) Current as a function of the level positions for a static device with \(\hbar\Gamma/\gamma=0.01\). In the adiabatic scheme, we move the left level back and forth across the peak in the current. (d) Specifically, we move back and forth along an avoided crossing between the singlet state \(|S\rangle\) and the empty state \(|0\rangle\). After each crossing, a split Cooper pair is emitted into the normal-state electrodes.
gate. When operated adiabatically, the Cooper pair splitter delivers a regular and low-noise flow of split Cooper pairs as shown in Fig. 2. The scheme does not rely on fine-tuned resonance conditions or accurate timing and may be realized based on recent experiments.
_Adiabatic Cooper pair splitter.--_ Figure 1(a) shows the Cooper pair splitter consisting of two single-level quantum dots coupled to a superconductor. We here consider a setup based on dots along a nanowire, but our proposal would also work for other architectures. With a large superconducting gap, the dynamics of the quantum dots can be described by the effective Hamiltonian [41; 42; 43; 44; 45]
\[\hat{H}=\sum_{\ell\sigma}\varepsilon_{\ell}\hat{d}_{\ell\sigma}^{\dagger}\hat{d }_{\ell\sigma}-\gamma(\hat{d}_{S}^{\dagger}+\hat{d}_{S})-\kappa\sum_{\sigma}( \hat{d}_{L\sigma}^{\dagger}\hat{d}_{R\sigma}+\text{h.c.}), \tag{1}\]
where \(\varepsilon_{\ell}\) are the energy levels of the quantum dots, \(\ell=L,R\), which can be tuned by external gates to control the splitting of Cooper pairs. The amplitudes for Cooper pair splitting and elastic cotunneling are denoted by \(\gamma\) and \(\kappa\), respectively. The operator \(\hat{d}_{\ell\sigma}^{\dagger}\) creates electrons with spin \(\sigma=\uparrow,\downarrow\) in either of the dots, while \(\hat{d}_{S}^{\dagger}\equiv(\hat{d}_{L\downarrow}^{\dagger}\hat{d}_{R\uparrow} ^{\dagger}-\hat{d}_{L\uparrow}^{\dagger}\hat{d}_{R\downarrow}^{\dagger})/ \sqrt{2}\) describes a singlet state that is delocalized between them. Here, we consider a conventional \(s\)-wave superconductor, but our proposal would work equally well for other types of superconductivity. Strong Coulomb interactions on the quantum dots prevent each of them from being doubly occupied, which ensures that the electrons from a split Cooper pair tunnel into different quantum dots. We work with a large detuning of the dot levels, \(|\varepsilon_{L}-\varepsilon_{R}|\gg\kappa\), to suppress elastic cotunneling between them. The empty state of the quantum dots, \(|0\rangle\), with zero energy is coherently coupled to the singlet state, \(|S\rangle=\hat{d}_{S}^{\dagger}|0\rangle\), with energy \(\varepsilon_{L}(t)+\varepsilon_{R}\) by the amplitude for Cooper pair splitting, \(\gamma\), and the energy of the singlet state is controlled by a time-dependent gate voltage on the left quantum dot.
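In the relevant two-dimensional subspace spanned by \(|0\rangle\) and \(|S\rangle\), the Hamiltonian (1) reduces to a \(2\times 2\) matrix, and the avoided crossing of Fig. 1(d) can be checked directly. The following sketch (our illustration, in units \(\gamma=1\); not code from the paper) verifies that the minimal gap is \(2\gamma\) at \(\varepsilon_{L}+\varepsilon_{R}=0\).

```python
import numpy as np

def hybridized_gap(eps_singlet, gamma=1.0):
    """Gap between the two eigenvalues of the {|0>, |S>} block of Eq. (1)."""
    H = np.array([[0.0, -gamma], [-gamma, eps_singlet]])
    w = np.linalg.eigvalsh(H)   # sorted ascending
    return w[1] - w[0]

# scan the singlet energy eps_L + eps_R through the resonance
gaps = [hybridized_gap(e) for e in np.linspace(-5.0, 5.0, 101)]
```

Analytically the gap is \(\sqrt{(\varepsilon_L+\varepsilon_R)^2+4\gamma^2}\), minimized on resonance.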
As shown in Fig. 1(b), large negative voltages are applied to the normal-state electrodes, so that they function as drains for the dots. Without a time-dependent drive, the (particle) currents running into the drains are
\[I_{L/R}=\frac{2\Gamma\gamma^{2}}{(\varepsilon_{L}+\varepsilon_{R})^{2}+(\hbar \Gamma)^{2}+4\gamma^{2}}, \tag{2}\]
where \(\Gamma\) is the tunneling rate into the drains, which we assume to be the same for the two drains to keep the discussion simple [44; 45; 46]. In Fig. 1(c), we show the current as a function of the level positions, and we see a peak along the diagonal \(\varepsilon_{R}=-\varepsilon_{L}\), where the singlet state is on resonance with the empty state. A similar dependence was observed in the recent experiments of Refs. [24; 22].
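Equation (2) is straightforward to evaluate numerically; the following sketch (our illustration, with \(\hbar\Gamma/\gamma=0.01\) as in Fig. 1(c) and energies in units of \(\gamma\)) reproduces the resonance peak along \(\varepsilon_{R}=-\varepsilon_{L}\).

```python
import numpy as np

HBAR_GAMMA = 0.01   # hbar*Gamma in units of gamma, as in Fig. 1(c)

def static_current(eps_L, eps_R, gamma=1.0):
    """Eq. (2); energies in units of gamma, current in units of gamma/hbar."""
    return (2.0 * HBAR_GAMMA * gamma**2
            / ((eps_L + eps_R)**2 + HBAR_GAMMA**2 + 4.0 * gamma**2))

# scan the level positions as in Fig. 1(c)
eps = np.linspace(-10.0, 10.0, 401)
EL, ER = np.meshgrid(eps, eps, indexing="ij")
I = static_current(EL, ER)
i_max, j_max = np.unravel_index(np.argmax(I), I.shape)
```

The maximum sits on the anti-diagonal, where the singlet is on resonance with the empty state.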
_Driving scheme.--_ To describe the adiabatic driving scheme, we show in Fig. 1(d) the energy of the empty state and the singlet state as a function of \(\varepsilon_{L}\), and we observe an avoided crossing between them at \(\varepsilon_{L}=-\varepsilon_{R}\) because of the coupling \(\gamma\). Thus, if we start with a large value of \(\varepsilon_{L}\), the empty state will have the lowest energy, and as we move through the avoided crossing by decreasing \(\varepsilon_{L}\), the quantum dots will eventually become occupied by a split Cooper pair. At the same time, the probability increases for the electrons to leave the dots via the drains. The system thereby returns to the empty state, which now has a higher energy than the singlet state. After that, we increase the energy of the singlet state, and we again move from the empty state to the singlet state, but this time following the excited state of the system. Eventually, the quantum dots are again occupied by a split Cooper pair, and once again the system is taken back to the empty state as the electrons tunnel into the drains. By doing so periodically, two split Cooper pairs should be produced per period of the drive.
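The splitting step can be illustrated with a minimal two-level sketch (our illustration, not the full calculation of [46]): the drains are neglected during the sweep, consistent with \(\Gamma\mathcal{T}_{1}\ll 1\), and the state is propagated through the avoided crossing using the exact \(2\times 2\) propagator at each midpoint time, in units \(\hbar=\gamma=1\). The ramp parameters (\(\varepsilon_{1}=50\), \(\mathcal{T}_{1}=1000\)) are our choice, made so that \(\gamma\mathcal{T}_{1}/\hbar\times\gamma/\varepsilon_{1}=20\gg 1\).

```python
import numpy as np

def singlet_population(eps_i=50.0, eps_f=-50.0, T1=1000.0, dt=0.05, gamma=1.0):
    """Propagate a|0> + b|S> through the avoided crossing (hbar = 1).

    H(t) = [[0, -gamma], [-gamma, eps(t)]]; each step applies the exact
    2x2 propagator evaluated at the midpoint of the step.  Returns the
    final singlet population |b|^2, close to 1 for an adiabatic sweep.
    """
    psi = np.array([1.0, 0.0], dtype=complex)   # start in the empty state
    n = int(T1 / dt)
    for k in range(n):
        eps = eps_i + (eps_f - eps_i) * (k + 0.5) / n   # linear ramp
        # H = (eps/2)*I + bx*sigma_x + bz*sigma_z with bx = -gamma, bz = -eps/2
        bx, bz = -gamma, -0.5 * eps
        b = np.hypot(bx, bz)
        c, s = np.cos(b * dt), np.sin(b * dt) / b
        psi = np.exp(-0.5j * eps * dt) * (np.array(
            [[c - 1j * s * bz, -1j * s * bx],
             [-1j * s * bx, c + 1j * s * bz]]) @ psi)
    return abs(psi[1]) ** 2

p_singlet = singlet_population()
```

With these parameters the Landau-Zener probability of remaining in the empty state is negligible, and the dots end up occupied by the split Cooper pair.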
Figure 2: Adiabatic driving scheme, average current, and low-frequency noise. (a) The period of the drive is divided into two splitting phases and two emission phases, each of duration \(\mathcal{T}_{1}\) and \(\mathcal{T}_{2}\), respectively, such that \(\mathcal{T}=2(\mathcal{T}_{1}+\mathcal{T}_{2})\). During the splitting phases, the left level is moved across the resonance, \(\varepsilon_{L}=-\varepsilon_{R}\), and a Cooper pair is split. During the emission phases, the levels are kept far off resonance, and the split Cooper pair tunnels into the drains. (b) Average current as a function of the driving frequency. The parameters are \(\kappa=\gamma\), \(\varepsilon_{1}=50\gamma\), \(\varepsilon_{2}=100\gamma\), \(\varepsilon_{R}=-100\gamma\), and \(\hbar\Gamma=0.001\gamma\) (red), \(0.002\gamma\) (green), and \(0.003\gamma\) (blue), and we have defined \(f_{0}=\alpha\gamma/2\pi\hbar\) with \(\alpha=\mathcal{T}_{1}/\mathcal{T}=0.01\). The adiabatic regime, where the current should take on the value \(I_{\ell}=2f\), is indicated for the red curve by the shaded area according to Eq. (3). (c) The Fano factor, \(F_{\ell}=S_{\ell}/I_{\ell}\), as a function of the driving frequency. The three circles in panel (b) indicate the frequencies used in Figs. 3 and 4.
The driving scheme in Fig. 2(a) is now designed with the following requirements in mind. To formulate them, we divide the period of the drive into four phases, two splitting phases, each of duration \(\mathcal{T}_{1}\), and two emission phases, each of duration \(\mathcal{T}_{2}\). The period of the drive is then \(\mathcal{T}=2(\mathcal{T}_{1}+\mathcal{T}_{2})=1/f\), where \(f\) is the driving frequency. Our requirements for the drive are now:
1. _Adiabatic splitting:_ The splitting phase should start off resonance, so that \(\gamma/\varepsilon_{1}\ll 1\), and the drive should be slow, so that \(\gamma\mathcal{T}_{1}/\hbar\times\gamma/\varepsilon_{1}\gg 1\), where \(\varepsilon_{1}=\varepsilon_{L}+\varepsilon_{R}\) is the singlet energy at the onset [47, 48].
2. _No leakage:_ To make sure that no electrons are emitted during the splitting phase, we need \(\Gamma\mathcal{T}_{1}\ll 1\).
3. _Emission:_ To ensure that the electrons are emitted during the emission phase, we need \(\Gamma\mathcal{T}_{2}\gg 1\). Also, during the emission phase, Cooper pair splitting should be off resonance, so that \(|\varepsilon_{L}+\varepsilon_{R}|\gg\gamma\).
These requirements can be combined into the inequality
\[\alpha\Gamma\ll f\ll\min\{\alpha\gamma^{2}/\hbar\varepsilon_{1},\Gamma/2\}, \tag{3}\]
which specifies the range of possible driving frequencies for the adiabatic Cooper pair splitter given a fixed ratio of the splitting time over the period of the drive, \(\alpha=\mathcal{T}_{1}/\mathcal{T}\). To provide realistic estimates, we note that the amplitude for Cooper pair splitting can be on the order of \(\gamma=40\)\(\mu\)eV together with tunneling rates of \(\hbar\Gamma=4\)\(\mu\)eV (or 1 GHz). Taking the duration of the splitting phase so that \(\alpha=0.1\), combined with a singlet energy of \(\varepsilon_{1}=100\)\(\mu\)eV at the onset, the inequality (3) predicts adiabatic frequencies in the range \(50\) MHz \(\ll f\ll 500\) MHz. For example, with a driving frequency of \(f=100\) MHz, we would expect currents of about 20 pA, since two electrons are emitted into each drain per period of the drive. We may also take \(\varepsilon_{2}=100\)\(\mu\)eV in Fig. 2(a) to suppress Cooper pair splitting during the emission phase.
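As a numerical sanity check of Eq. (3) (our illustration; the quoted 50-500 MHz range involves rounding, and converting energies to ordinary frequencies via \(E/h\), consistent with identifying \(\hbar\Gamma=4\) \(\mu\)eV with roughly 1 GHz, is our assumption), one can verify that the example frequency \(f=100\) MHz lies inside the adiabatic window.

```python
H_PLANCK = 4.135667e-15   # Planck constant in eV*s

gamma = 40e-6        # eV, Cooper pair splitting amplitude
hbar_Gamma = 4e-6    # eV, so Gamma is roughly 1 GHz as an ordinary frequency
eps1 = 100e-6        # eV, singlet energy at the onset of the splitting phase
alpha = 0.1          # T1 / T

Gamma = hbar_Gamma / H_PLANCK                  # about 0.97e9 Hz
f_low = alpha * Gamma                          # left side of Eq. (3)
f_high = min(alpha * gamma**2 / (H_PLANCK * eps1), 0.5 * Gamma)
```

The resulting window is of order \(10^{8}\) Hz wide and contains \(f=100\) MHz, in line with the rough estimates above.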
_Average current._-- To illustrate the operation of the Cooper pair splitter, we calculate the drain currents. To this end, we consider the density matrix of the dots, \(\hat{\rho}(t)\), whose dynamics obeys the Lindblad equation [44, 45, 49]
\[\frac{d}{dt}\hat{\rho}(t)=\mathcal{L}(t)\hat{\rho}(t)=\frac{1}{i\hbar}[\hat{H} (t),\hat{\rho}(t)]+\mathcal{D}\hat{\rho}(t). \tag{4}\]
Here, tunneling to the drains is described by the term
\[\mathcal{D}\hat{\rho}(t)=\Gamma\sum_{\ell\sigma}\big{(}\hat{d}_{\ell\sigma} \hat{\rho}(t)\hat{d}_{\ell\sigma}^{\dagger}-\frac{1}{2}\{\hat{\rho}(t),\hat{ d}_{\ell\sigma}^{\dagger}\hat{d}_{\ell\sigma}\}\big{)}, \tag{5}\]
and the Hamiltonian \(\hat{H}(t)\) is given by Eq. (1) with time-dependent levels. Because of the large negative voltages, the temperature of the drains drops out of the problem. Single-electron excitations above the gap are exponentially suppressed in the ratio of the superconducting gap over the temperature as \(\exp(-\Delta/k_{B}T)\), allowing us to ignore such excitations. Realistically, the gap can be up to \(\Delta\simeq 1\) meV (corresponding to a temperature of about 10 K), which indeed is much higher than typical experimental temperatures of around \(T=100\) mK.
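For the parameter values just quoted, the exponential suppression of single-electron excitations can be evaluated directly (our illustration).

```python
import math

k_B = 8.617333e-5   # Boltzmann constant in eV/K
Delta = 1e-3        # eV, superconducting gap (about 10 K in temperature units)
T = 0.1             # K, typical experimental temperature

suppression = math.exp(-Delta / (k_B * T))
```

The factor \(\exp(-\Delta/k_{B}T)\) is then astronomically small, justifying the neglect of quasiparticle excitations.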
Figure 2(b) shows the current as a function of the driving frequency [46]. When operated in the adiabatic regime, the device should deliver two split Cooper pairs per period of the drive, and the drain currents should take on the quantized value \(I_{\ell}=2f\). This expectation is confirmed by our calculations, which show a quantized current in the adiabatic regime defined by Eq. (3). At higher frequencies, the number of emitted electrons drops off, since the driving becomes too fast, and a Cooper pair is not split in each crossing of the resonance. The current does not vanish at low frequencies, since the system is biased, and a current will run even without the drive. Experimentally, the plateau in Fig. 2(b) would demonstrate the adiabatic splitting of Cooper pairs.
_Noise and Fano factor._-- To further analyze the splitting of Cooper pairs, we show in Fig. 2(c) the low-frequency noise \(S_{\ell}\) of the drain currents, quantified by
Figure 3: Time-dependent currents. We show the time-dependent currents corresponding to the three points marked with circles in Fig. 2. (a) At low frequencies, more than one electron is emitted per half-period, and emissions occur already in the splitting phase, see inset. (b) In the adiabatic regime, one electron is emitted at every half-period, and the leakage current in the splitting phase is suppressed. (c) At high frequencies, the regularity is gradually lost, and there is always a finite current running.
the Fano factor, \(F_{\ell}=S_{\ell}/I_{\ell}\)[46, 50, 51, 52, 53, 54, 55]. In the adiabatic regime, we expect a strong suppression of the noise, which indeed is confirmed by our calculations. By contrast, at lower frequencies, the Fano factor increases and comes closer to the values for a static device [45]. At high frequencies, the splitting of Cooper pairs becomes rare and uncorrelated, and the Fano factor approaches one. We also observe oscillations in the current and the Fano factor, which can be attributed to an interplay between the amplitude of Cooper pair splitting and the driving frequency. However, for our purposes, we focus on the noise in the adiabatic regime, which provides another experimental signature of the regular splitting of Cooper pairs. Unlike the current, which should be measured over a range of frequencies to observe the plateau in Fig. 2(b), the low noise can be measured at just a single frequency.
_Cycle-missing events._-- As the driving frequency is increased beyond the adiabatic regime, we expect cycle-missing events to occur because of non-adiabatic excitations [47, 48]. In particular, the system may make transitions between the instantaneous eigenstates, if we move too fast along the avoided crossing. Also, if the emission phase is too short compared with the escape time to the drains, a split Cooper pair may not be emitted into the drains, and it might be transferred back into the superconductor. If we denote the small probability of a cycle-missing event by \(p\ll 1\), the current will be reduced to \(I_{\ell}=2f(1-p)\), while the noise increases from zero to \(S_{\ell}=2fp\)[56]. The Fano factor can then be approximated as \(F_{\ell}\simeq p\), showing that it directly measures the probability of cycle-missing events. In Fig. 2(c), the Fano factor becomes as small as one percent; we note that we are aiming for a periodic emitter of entangled electrons rather than metrological applications, which often require error rates below parts per million [57].
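The relation \(F_{\ell}\simeq p\) can be illustrated with a toy counting model (ours, not the full counting statistics of [46]): if each of the two emission attempts per period fails independently with probability \(p\), the counts over a long window are binomial, giving \(I_{\ell}=2f(1-p)\) and a Fano factor of exactly \(p\).

```python
import numpy as np

def cycle_missing_fano(p, periods=1000, samples=4000, seed=1):
    """Counts per window for 2 emission attempts per period, each
    succeeding with probability 1 - p; returns (I/(2f), Fano factor)."""
    rng = np.random.default_rng(seed)
    counts = rng.binomial(2 * periods, 1.0 - p, size=samples)
    return counts.mean() / (2 * periods), counts.var(ddof=1) / counts.mean()

ratio, fano = cycle_missing_fano(p=0.05)
```

The Monte Carlo estimate of the Fano factor then returns \(p\) up to sampling noise.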
_Time-dependent current._-- It is instructive also to consider the time-dependent currents \(I_{\ell}(t)\), which provide information about the statistics of electrons emitted within a period of the drive. In Fig. 3, we show the time-dependent currents for the three points marked with circles in Fig. 2. At low frequencies, we enter the quasi-static regime, where the current approaches the static result in Eq. (2) with the time-dependent level position inserted. By contrast, in the adiabatic regime, the time-dependent current shows how the quantum dots are periodically filled by a split Cooper pair, followed by the emission of the electrons into the drains. At higher frequencies, the driving becomes non-adiabatic, such that the quantum dots are not filled or emptied in every half-period, and there is always a finite current running.
_Distribution of waiting times._-- Finally, we turn to the distribution of electron waiting times [58, 59], which were recently measured for a static Cooper pair splitter [16, 30]. Here, we consider the distribution of the time that passes between electrons tunneling into one of the drains [42, 46, 30, 58]. In Fig. 4, we show distributions for the three points marked with circles in Fig. 2. At low frequencies, a peak develops at short times, corresponding to several emissions occurring as the current resonance is crossed. On the other hand, in the adiabatic regime, a single peak at half the period shows that Cooper pairs are split periodically, with the width of the peak given by the tunneling rate to the drains. Finally, in the non-adiabatic regime, peaks appear at multiples of the half-period, since cycle-missing events start to occur.
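A toy model of the adiabatic regime (our illustration, not the quantum calculation of [46; 58]) captures the single peak in Fig. 4(b): one electron enters a given drain per half-period, delayed by an exponential escape time with rate \(\Gamma\), so the waiting times cluster around \(\mathcal{T}/2\) with a width set by \(1/\Gamma\).

```python
import numpy as np

def waiting_times(f=1.0, Gamma=20.0, n=20000, seed=2):
    """Toy emissions: the k-th electron leaves at k*T/2 plus an
    exponential escape delay with rate Gamma; returns the waiting
    times between consecutive emissions into one drain."""
    rng = np.random.default_rng(seed)
    half_period = 0.5 / f
    emission = np.arange(n) * half_period + rng.exponential(1.0 / Gamma, size=n)
    emission.sort()   # rare large delays can reorder neighboring events
    return np.diff(emission)

tau = waiting_times()
```

With \(\Gamma\mathcal{T}/2\gg 1\), nearly all waiting times fall in a narrow peak at half the period.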
_Conclusions._-- We have proposed and analyzed an adiabatic Cooper pair splitter that operates by moving a quantum dot level back and forth along an avoided crossing. Each time the resonance is crossed, a Cooper pair is split and emitted from the quantum dots. When operated in the adiabatic regime, the device generates a regular flow of spin-entangled electrons with currents that are proportional to the driving frequency combined with vanishing low-frequency noise. Our proposal appears feasible in the light of recent experiments, and it can be extended in many directions. For example, it may be possible to increase the driving frequency with a shortcut to adiabaticity [60]. Moreover, in materials like InAs, one may use the spin-orbit coupling combined with time-dependent gates to rotate the spins in the dots [61, 62]. One may
Figure 4: Distribution of waiting times. We show distributions corresponding to the three points marked with circles in Fig. 2. (a) At low frequencies, a peak develops at short waiting times. (b) In the adiabatic regime, a single peak at half the period shows that Cooper pairs are being split periodically. (c) At high frequencies, cycle-missing events give rise to several peaks.
also envision Cooper pair splitters that are coupled to ballistic conductors so that the entangled electrons can be transferred to other parts of a solid-state circuit for further operations, manipulation, and read-out.
_Acknowledgements.--_ We acknowledge support from the Nokia Industrial Doctoral School in Quantum Technology and the Research Council of Finland through the Finnish Centre of Excellence in Quantum Technology (grant number 352925) and grant number 331737.
## References
* Lesovik _et al._ [2001]G. B. Lesovik, T. Martin, and G. Blatter, Electronic entanglement in the vicinity of a superconductor, Eur. Phys. J. B **24**, 287 (2001).
* Recher _et al._ [2001]P. Recher, E. V. Sukhorukov, and D. Loss, Andreev tunneling, Coulomb blockade, and resonant transport of nonlocal spin-entangled electrons, Phys. Rev. B **63**, 165314 (2001).
* Hofstetter _et al._ [2009]L. Hofstetter, S. Csonka, J. Nygard, and C. Schonenberger, Cooper pair splitter realized in a two-quantum-dot Y-junction, Nature **461**, 960 (2009).
* Herrmann _et al._ [2010]L. G. Herrmann, F. Portier, P. Roche, A. L. Yeyati, T. Kontos, and C. Strunk, Carbon Nanotubes as Cooper-Pair Beam Splitters, Phys. Rev. Lett. **104**, 026801 (2010).
* Hofstetter _et al._ [2011]L. Hofstetter, S. Csonka, A. Baumgartner, G. Fulop, S. d'Hollosy, J. Nygard, and C. Schonenberger, Finite-Bias Cooper Pair Splitting, Phys. Rev. Lett. **107**, 136801 (2011).
* Schindele _et al._ [2012]J. Schindele, A. Baumgartner, and C. Schonenberger, Near-Unity Cooper Pair Splitting Efficiency, Phys. Rev. Lett. **109**, 157002 (2012).
* Herrmann _et al._ [2012]L. G. Herrmann, P. Burset, W. J. Herrera, F. Portier, P. Roche, C. Strunk, A. Levy Yeyati, and T. Kontos, Spectroscopy of non-local superconducting correlations in a double quantum dot, arXiv:1205.1972.
* Das _et al._ [2012]A. Das, R. Ronen, M. Heiblum, D. Mahalu, A. V. Kretinin, and H. Shtrikman, High-efficiency Cooper pair splitting demonstrated by two-particle conductance resonance and positive noise cross-correlation, Nat. Commun. **3**, 1165 (2012).
* Fulop _et al._ [2014]G. Fulop, S. d'Hollosy, A. Baumgartner, P. Makk, V. A. Guzenko, M. H. Madsen, J. Nygard, C. Schonenberger, and S. Csonka, Local electrical tuning of the nonlocal signals in a Cooper pair splitter, Phys. Rev. B **90**, 235412 (2014).
* Tan _et al._ [2015]Z. B. Tan, D. Cox, T. Nieminen, P. Lahteenmaki, D. Golubev, G. B. Lesovik, and P. J. Hakonen, Cooper Pair Splitting by Means of Graphene Quantum Dots, Phys. Rev. Lett. **114**, 096602 (2015).
* Fulop _et al._ [2015]G. Fulop, F. Dominguez, S. d'Hollosy, A. Baumgartner, P. Makk, M. H. Madsen, V. A. Guzenko, J. Nygard, C. Schonenberger, A. Levy Yeyati, and S. Csonka, Magnetic Field Tuning and Quantum Interference in a Cooper Pair Splitter, Phys. Rev. Lett. **115**, 227003 (2015).
* Borzenets _et al._ [2016]I. V. Borzenets, Y. Shimazaki, G. F. Jones, M. F. Craciun, S. Russo, M. Yamamoto, and S. Tarucha, High Efficiency CVD Graphene-lead (Pb) Cooper Pair Splitter, Sci. Rep. **6**, 23051 (2016).
* Bruhat _et al._ [2018]L. E. Bruhat, T. Cubaynes, J. J. Viennot, M. C. Dartiailh, M. M. Desjardins, A. Cottet, and T. Kontos, Circuit QED with a quantum-dot charge qubit dressed by Cooper pairs, Phys. Rev. B **98**, 155313 (2018).
* Baba _et al._ [2018]S. Baba, C. Junger, S. Matsuo, A. Baumgartner, Y. Sato, H. Kamata, K. Li, S. Jeppesen, L. Samuelson, H. Q. Xu, C. Schonenberger, and S. Tarucha, Cooper-pair splitting in two parallel InAs nanowires, New J. Phys. **20**, 063021 (2018).
* Tan _et al._ [2021]Z. B. Tan, A. Laitinen, N. S. Kirsanov, A. Galda, V. M. Vinokur, M. Haque, A. Savin, D. S. Golubev, G. B. Lesovik, and P. J. Hakonen, Thermoelectric current in a graphene Cooper pair splitter, Nat. Commun. **12**, 138 (2021).
* Ranni _et al._ [2021]A. Ranni, F. Brange, E. T. Mannila, C. Flindt, and V. F. Maisi, Real-time observation of Cooper pair splitting showing strong non-local correlations, Nat. Commun. **12**, 6358 (2021).
* Pandey _et al._ [2021]P. Pandey, R. Danneau, and D. Beckmann, Ballistic Graphene Cooper Pair Splitter, Phys. Rev. Lett. **126**, 147701 (2021).
* Scherubl _et al._ [2022]Z. Scherubl, G. Fulop, J. Gramich, A. Palyi, C. Schonenberger, J. Nygard, and S. Csonka, From Cooper pair splitting to nonlocal spectroscopy of a Shiba state, Phys. Rev. Res. **4**, 023143 (2022).
* Kurtossy _et al._ [2022]O. Kurtossy, Z. Scherubl, G. Fulop, I. E. Lukacs, T. Kanne, J. Nygard, P. Makk, and S. Csonka, Parallel InAs nanowires for Cooper pair splitters with Coulomb repulsion, npj Quantum Mater. **7**, 88 (2022).
* Ranni _et al._ [2022]A. Ranni, E. T. Mannila, A. Eriksson, D. S. Golubev, J. P. Pekola, and V. F. Maisi, Local and Nonlocal Two-Electron Tunneling Processes in a Cooper Pair Splitter, Phys. Rev. Lett. **129**, 207703 (2022).
* Bordoloi _et al._ [2022]A. Bordoloi, V. Zannier, L. Sorba, C. Schonenberger, and A. Baumgartner, Spin cross-correlation experiments in an electron entangler, Nature **612**, 454 (2022).
* Wang _et al._ [2022]G. Wang, T. Dvir, G. P. Mazur, C.-X. Liu, N. van Loo, S. L. D. ten Haaf, A. Bordin, S. Gazibegovic, G. Badawy, E. P. A. M. Bakkers, M. Wimmer, and L. P. Kouwenhoven, Singlet and triplet Cooper pair splitting in hybrid superconducting nanowires, Nature **612**, 448 (2022).
* de Jong _et al._ [2022]D. de Jong, C. G. Prosko, L. Han, F. K. Malinowski, Y. Liu, L. P. Kouwenhoven, and W. Pfaff, Controllable single Cooper pair splitting in hybrid quantum dot systems, arXiv:2208.05154.
* Wang _et al._ [2022]Q. Wang, S. L. D. ten Haaf, I. Kulesh, D. Xiao, C. Thomas, M. J. Manfra, and S. Goswami, Triplet Cooper pair splitting in a two-dimensional electron gas, arXiv:2211.05763.
* Bordin _et al._ [2023]A. Bordin, X. Li, D. van Driel, J. C. Wolff, Q. Wang, S. L. D. ten Haaf, G. Wang, N. van Loo, L. P. Kouwenhoven, and T. Dvir, Crossed Andreev reflection and elastic co-tunneling in a three-site Kitaev chain nanowire device, arXiv:2306.07696.
* Cao _et al._ [2015]Z. Cao, T.-F. Fang, L. Li, and H.-G. Luo, Thermoelectric-induced unitary Cooper pair splitting efficiency, Appl. Phys. Lett. **107**, 212601 (2015).
* Sanchez _et al._ [2018]R. Sanchez, P. Burset, and A. L. Yeyati, Cooling by Cooper pair splitting, Phys. Rev. B **98**, 241414 (2018).
* Hussein _et al._ [2019]R. Hussein, M. Governale, S. Kohler, W. Belzig, F. Giazotto, and A. Braggio, Nonlocal thermoelectricity in a Cooper-pair splitter, Phys. Rev. B **99**, 075429 (2019).
* Kirsanov _et al._ [2019]N. S. Kirsanov, Z. B. Tan, D. S. Golubev, P. J. Hakonen, and G. B. Lesovik, Heat switch and thermoelectric effects based on Cooper-pair splitting and elastic cotunneling, Phys. Rev. B **99**, 115127 (2019).
* Walldorf _et al._ [2018]N. Walldorf, C. Padurariu, A.-P. Jauho, and C. Flindt, Electron Waiting Times of a Cooper Pair Splitter, Phys. Rev. Lett. **120**, 087701 (2018).
* Kawabata [2001]S. Kawabata, Test of Bell's Inequality using the Spin Filter Effect in Ferromagnetic Semiconductor Microstructures, J. Phys. Soc. Jap. **70**, 1210 (2001).
* Sauret _et al._ [2005]O. Sauret, T. Martin, and D. Feinberg, Spin-current noise and Bell inequalities in a realistic superconductor-quantum dot entangler, Phys. Rev. B **72**, 024544 (2005).
* Braunecker _et al._ [2013]B. Braunecker, P. Burset, and A. Levy Yeyati, Entanglement Detection from Conductance Measurements in Carbon Nanotube Cooper Pair Splitters, Phys. Rev. Lett. **111**, 136806 (2013).
* Busz _et al._ [2017]P. Busz, D. Tomaszewski, and J. Martinek, Spin correlation and entanglement detection in Cooper pair splitters by current measurements using magnetic detectors, Phys. Rev. B **96**, 064520 (2017).
* Klobus _et al._ [2014]W. Klobus, A. Grudka, A. Baumgartner, D. Tomaszewski, C. Schonenberger, and J. Martinek, Entanglement witnessing and quantum cryptography with nonideal ferromagnetic detectors, Phys. Rev. B **89**, 125404 (2014).
* Brange _et al._ [2017]F. Brange, O. Malkoc, and P. Samuelsson, Minimal Entanglement Witness from Electrical Current Correlations, Phys. Rev. Lett. **118**, 036804 (2017).
* Tam _et al._ [2021]M. Tam, C. Flindt, and F. Brange, Optimal entanglement witness for Cooper pair splitters, Phys. Rev. B **104**, 245425 (2021).
* Leijnse and Flensberg [2012]M. Leijnse and K. Flensberg, Parity qubits and poor man's Majorana bound states in double quantum dots, Phys. Rev. B **86**, 134528 (2012).
* Sau and Sarma [2012]J. D. Sau and S. D. Sarma, Realizing a robust practical Majorana chain in a quantum-dot-superconductor linear array, Nat. Commun. **3**, 964 (2012).
* Fulga _et al._ [2013]I. C. Fulga, A. Haim, A. R. Akhmerov, and Y. Oreg, Adaptive tuning of Majorana fermions in a quantum dot chain, New J. Phys. **15**, 045020 (2013).
* Hiltscher _et al._ [2011]B. Hiltscher, M. Governale, J. Splettstoesser, and J. König, Adiabatic pumping in a double-dot Cooper-pair beam splitter, Phys. Rev. B **84**, 155403 (2011).
* Brange _et al._ [2021]F. Brange, K. Prech, and C. Flindt, Dynamic Cooper Pair Splitter, Phys. Rev. Lett. **127**, 237701 (2021).
* Eldridge _et al._ [2010]J. Eldridge, M. G. Pala, M. Governale, and J. König, Superconducting proximity effect in interacting double-dot systems, Phys. Rev. B **82**, 184507 (2010).
* Sauret _et al._ [2004]O. Sauret, D. Feinberg, and T. Martin, Quantum master equations for the superconductor-quantum dot entangler, Phys. Rev. B **70**, 245313 (2004).
* Walldorf _et al._ [2020]N. Walldorf, F. Brange, C. Padurariu, and C. Flindt, Noise and full counting statistics of a Cooper pair splitter, Phys. Rev. B **101**, 205422 (2020).
* [46]The Supplemental Material contains the technical details of our calculations.
* Shevchenko _et al._ [2010]S. Shevchenko, S. Ashhab, and F. Nori, Landau-Zener-Stückelberg interferometry, Phys. Rep. **492**, 1 (2010).
* Ivakhnenko _et al._ [2023]O. V. Ivakhnenko, S. N. Shevchenko, and F. Nori, Nonadiabatic Landau-Zener-Stückelberg-Majorana transitions, dynamics, and interference, Phys. Rep. **995**, 1 (2023).
* Hazelzet _et al._ [2001]B. L. Hazelzet, M. R. Wegewijs, T. H. Stoof, and Yu. V. Nazarov, Coherent and incoherent pumping of electrons in double quantum dots, Phys. Rev. B **63**, 165313 (2001).
* Blanter and Büttiker [2000]Ya. Blanter and M. Büttiker, Shot noise in mesoscopic conductors, Phys. Rep. **336**, 1 (2000).
* Bagrets and Nazarov [2003]D. A. Bagrets and Yu. V. Nazarov, Full counting statistics of charge transfer in Coulomb blockade systems, Phys. Rev. B **67**, 085316 (2003).
* Pistolesi [2004]F. Pistolesi, Full counting statistics of a charge shuttle, Phys. Rev. B **69**, 245409 (2004).
* Flindt _et al._ [2005]C. Flindt, T. Novotný, and A.-P. Jauho, Full counting statistics of nano-electromechanical systems, EPL **69**, 475 (2005).
* Benito _et al._ [2016]M. Benito, M. Niklas, and S. Kohler, Full-counting statistics of time-dependent conductors, Phys. Rev. B **94**, 195433 (2016).
* Potanina _et al._ [2019]E. Potanina, K. Brandner, and C. Flindt, Optimization of quantized charge pumping using full counting statistics, Phys. Rev. B **99**, 035437 (2019).
* Albert _et al._ [2010]M. Albert, C. Flindt, and M. Büttiker, Accuracy of the quantum capacitor as a single-electron source, Phys. Rev. B **82**, 041407 (2010).
* Pekola _et al._ [2013]J. P. Pekola, O.-P. Saira, V. F. Maisi, A. Kemppinen, M. Möttönen, Yu. A. Pashkin, and D. V. Averin, Single-electron current sources: Toward a refined definition of the ampere, Rev. Mod. Phys. **85**, 1421 (2013).
* Brandes [2008]T. Brandes, Waiting times and noise in single particle transport, Ann. Physik **17**, 477 (2008).
* Albert _et al._ [2011]M. Albert, C. Flindt, and M. Büttiker, Distributions of Waiting Times of Dynamic Single-Electron Emitters, Phys. Rev. Lett. **107**, 086805 (2011).
* Guéry-Odelin _et al._ [2019]D. Guéry-Odelin, A. Ruschhaupt, A. Kiely, E. Torrontegui, S. Martínez-Garaot, and J. G. Muga, Shortcuts to adiabaticity: Concepts, methods, and applications, Rev. Mod. Phys. **91**, 045001 (2019).
* Flindt _et al._ [2006]C. Flindt, A. S. Sorensen, and K. Flensberg, Spin-Orbit Mediated Control of Spin Qubits, Phys. Rev. Lett. **97**, 240501 (2006).
* Golovach _et al._ [2006]V. N. Golovach, M. Borhani, and D. Loss, Electric-dipole-induced spin resonance in quantum dots, Phys. Rev. B **74**, 165319 (2006).
# Supplemental Material for "Adiabatic Cooper Pair Splitter"
Fredrik Brange
Department of Applied Physics, Aalto University, 00076 Aalto, Finland
Riya Baruah
Department of Applied Physics, Aalto University, 00076 Aalto, Finland
Christian Flindt
Department of Applied Physics, Aalto University, 00076 Aalto, Finland
## I Lindblad Equation & Vectorization
As explained in the main text, the Cooper pair splitter can be described by the Lindblad equation [1; 2]
\[\frac{d}{dt}\hat{\rho}(t)=\mathcal{L}(t)\hat{\rho}(t)=\frac{1}{i\hbar}[\hat{H} (t),\hat{\rho}(t)]+\Gamma\sum_{\ell\sigma}\big{(}\hat{d}_{\ell\sigma}\hat{\rho }(t)\hat{d}_{\ell\sigma}^{\dagger}-\frac{1}{2}\{\hat{\rho}(t),\hat{d}_{\ell \sigma}^{\dagger}\hat{d}_{\ell\sigma}\}\big{)},\] (S1)
where
\[\hat{H}(t)=\sum_{\ell\sigma}\varepsilon_{\ell}(t)\hat{d}_{\ell\sigma}^{ \dagger}\hat{d}_{\ell\sigma}-\gamma(\hat{d}_{S}^{\dagger}+\hat{d}_{S})-\kappa \sum_{\sigma}(\hat{d}_{L\sigma}^{\dagger}\hat{d}_{R\sigma}+\text{h.c.}),\] (S2)
is the effective Hamiltonian of the two quantum dots with time-dependent level positions, \(\varepsilon_{\ell}(t)\). To carry out our calculations, we vectorize the density matrix of the quantum dots and implement a matrix representation of the Liouvillian. Here, we do not explicitly consider the spin degrees of freedom, and it therefore suffices to express the density matrix and the Liouvillian in the charge basis only. In this representation, the density matrix takes the form
\[\hat{\rho}=\left(\begin{array}{cccc}\rho_{00}&0&0&\rho_{S0}\\ 0&\rho_{LL}&\rho_{RL}&0\\ 0&\rho_{LR}&\rho_{RR}&0\\ \rho_{0S}&0&0&\rho_{SS}\end{array}\right),\] (S3)
where \(\rho_{\ell\ell^{\prime}}=\sum_{\sigma}\rho_{\ell\sigma,\ell^{\prime}\sigma}\) are given by traces over the spins. The density matrix can be written in the vectorized form
\[\hat{\rho}=(\rho_{00},\rho_{LL},\rho_{RR},\rho_{SS},\rho_{0S},\rho_{S0},\rho_ {LR},\rho_{RL})^{T},\] (S4)
where the first four elements are the populations, and the others are the coherences. The Liouvillian then becomes
\[\mathcal{L}(\chi,t)=\left(\begin{array}{cccccccc}0&\Gamma e^{i\chi}&\Gamma &0&-i\gamma&i\gamma&0&0\\ 0&-\Gamma&0&\Gamma&0&0&-i\kappa&i\kappa\\ 0&0&-\Gamma&\Gamma e^{i\chi}&0&0&i\kappa&-i\kappa\\ 0&0&0&-2\Gamma&i\gamma&-i\gamma&0&0\\ -i\gamma&0&0&i\gamma&i\epsilon(t)-\Gamma&0&0&0\\ i\gamma&0&0&-i\gamma&0&-i\epsilon(t)-\Gamma&0&0\\ 0&-i\kappa&i\kappa&0&0&0&-i\delta(t)-\Gamma&0\\ 0&i\kappa&-i\kappa&0&0&0&0&i\delta(t)-\Gamma\end{array}\right),\] (S5)
where we have included a counting field, \(\chi\), that couples to transitions into the left lead, and we have defined the detuning and the sum of the energy levels, \(\delta=\varepsilon_{L}-\varepsilon_{R}\) and \(\epsilon=\varepsilon_{L}+\varepsilon_{R}\), with \(\hbar,e=1\) from now on. We note that a matrix representation of the spin-resolved Liouvillian can be found in the appendix of Ref. [2].
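As a concrete illustration, the Liouvillian of Eq. (S5) can be assembled numerically. The sketch below is ours, not part of the original supplemental code; the parameter values are arbitrary, and the basis ordering follows Eq. (S4). A quick sanity check is trace preservation: at \(\chi=0\), the entries of each column summed over the four population rows must vanish, so that \(d\,\mathrm{tr}\hat{\rho}/dt=0\).

```python
import numpy as np

def liouvillian(chi, eps, delta, Gamma=1.0, gamma=0.5, kappa=0.5):
    """Matrix representation of Eq. (S5) in the vectorized basis of Eq. (S4):
    (rho_00, rho_LL, rho_RR, rho_SS, rho_0S, rho_S0, rho_LR, rho_RL)."""
    f = np.exp(1j * chi)  # counting factor on jumps into the left lead
    G, g, k = Gamma, gamma, kappa
    return np.array([
        [0,     G*f,   G,     0,    -1j*g,      1j*g,       0,            0],
        [0,    -G,     0,     G,     0,         0,         -1j*k,         1j*k],
        [0,     0,    -G,     G*f,   0,         0,          1j*k,        -1j*k],
        [0,     0,     0,    -2*G,   1j*g,     -1j*g,       0,            0],
        [-1j*g, 0,     0,     1j*g,  1j*eps-G,  0,          0,            0],
        [1j*g,  0,     0,    -1j*g,  0,        -1j*eps-G,   0,            0],
        [0,    -1j*k,  1j*k,  0,     0,         0,         -1j*delta-G,   0],
        [0,     1j*k, -1j*k,  0,     0,         0,          0,            1j*delta-G],
    ], dtype=complex)

# Sanity check: at chi = 0 the generator is trace preserving, i.e. every
# column of L, summed over the four population rows, vanishes.
L0 = liouvillian(chi=0.0, eps=0.3, delta=0.1)
col_sums = L0[:4, :].sum(axis=0)
assert np.allclose(col_sums, 0.0)
```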
## II Time-dependent & Period-averaged Current
In the main text, we show results for the time-dependent current running into the left lead, given by the expression
\[I_{L}(t)=\text{tr}\{\mathcal{J}_{L}\hat{\rho}_{C}(t)\}\] (S6)
in terms of the jump operator \(\mathcal{J}_{L}\hat{\rho}\equiv\Gamma\sum_{\sigma}\hat{d}_{L\sigma}\hat{\rho }\hat{d}_{L\sigma}^{\dagger}\) and the periodic state of the system with the property \(\hat{\rho}_{C}(t)=\hat{\rho}_{C}(t+\mathcal{T})\). To find the periodic state, we need the time-evolution operator
\[\mathcal{U}(t,t_{0})=\hat{T}\left\{e^{\int_{t_{0}}^{t}\mathcal{L}(t^{\prime})dt^{\prime}}\right\}\simeq\prod_{i}e^{\mathcal{L}(t_{i})\Delta t},\] (S7)
where \(\hat{T}\) is the time-ordering operator, and \(\mathcal{L}(t)\) is the Liouvillian without the counting field. As Eq. (S7) indicates, we evaluate the time-evolution operator by discretizing the interval \([t_{0},t]\) into small steps of size \(\Delta t\), during which the Liouvillian is approximately constant. The periodic state can be found from the eigenproblem, \(\mathcal{U}(t+\mathcal{T},t)\hat{\rho}_{C}(t)=\hat{\rho}_{C}(t)\), and the trace operation in Eq. (S6) is implemented by summing over the first four elements of \(\mathcal{J}_{L}\hat{\rho}_{C}(t)\) in its vectorized form. Moreover, the period-averaged current can be obtained as \(I_{L}=\int_{0}^{\mathcal{T}}dt\,I_{L}(t)/\mathcal{T}\). Without the driving, one can analytically find the stationary state, defined by \(\mathcal{L}\hat{\rho}_{S}=0\), and Eq. (2) of the main text then follows as \(I_{L}=\mathrm{tr}\{\mathcal{J}_{L}\hat{\rho}_{S}\}\).
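In the undriven limit the stationary state can be obtained directly from \(\mathcal{L}\hat{\rho}_{S}=0\) together with the trace condition, and the current follows as \(I_{L}=\mathrm{tr}\{\mathcal{J}_{L}\hat{\rho}_{S}\}\). The following sketch is ours (illustrative parameter values, \(\hbar=e=1\)) and implements this for the vectorized Liouvillian of Eq. (S5):

```python
import numpy as np

Gamma, gamma, kappa, eps, delta = 1.0, 0.5, 0.5, 0.0, 0.0  # illustrative values
G, g, k = Gamma, gamma, kappa

# Liouvillian of Eq. (S5) at chi = 0, basis ordering of Eq. (S4).
L = np.array([
    [0,     G,     G,     0,    -1j*g,      1j*g,       0,            0],
    [0,    -G,     0,     G,     0,         0,         -1j*k,         1j*k],
    [0,     0,    -G,     G,     0,         0,          1j*k,        -1j*k],
    [0,     0,     0,    -2*G,   1j*g,     -1j*g,       0,            0],
    [-1j*g, 0,     0,     1j*g,  1j*eps-G,  0,          0,            0],
    [1j*g,  0,     0,    -1j*g,  0,        -1j*eps-G,   0,            0],
    [0,    -1j*k,  1j*k,  0,     0,         0,         -1j*delta-G,   0],
    [0,     1j*k, -1j*k,  0,     0,         0,          0,            1j*delta-G],
], dtype=complex)

# Jump superoperator into the left lead: the matrix elements of Eq. (S5)
# that carry the counting factor e^{i chi}.
J = np.zeros((8, 8), dtype=complex)
J[0, 1] = Gamma  # rho_LL -> rho_00 (left dot empties into the left lead)
J[2, 3] = Gamma  # rho_SS -> rho_RR (left dot empties into the left lead)

# Stationary state: solve L rho_S = 0 supplemented by tr(rho_S) = 1.
trace_row = np.array([[1, 1, 1, 1, 0, 0, 0, 0]], dtype=complex)
A = np.vstack([L, trace_row])
b = np.zeros(9, dtype=complex)
b[-1] = 1.0
rho_S = np.linalg.lstsq(A, b, rcond=None)[0]

# Stationary current into the left lead, I_L = tr{J_L rho_S}; the trace is
# the sum over the four population components of the vectorized state.
I_L = (J @ rho_S)[:4].sum().real
```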
## III Low-frequency noise
We find the low-frequency noise using techniques from full counting statistics by including a counting field as in Eq. (S5) [3; 4; 5; 6; 7]. The moment generating function for the number of emitted electrons after \(N\) periods then reads
\[M(\chi,N)=\mathrm{tr}\left\{[\mathcal{U}(\chi,\mathcal{T},0)]^{N}\hat{\rho}_{ C}(0)\right\},\] (S8)
where we have defined \(\mathcal{U}(\chi,t,t_{0})=\hat{T}\{e^{\int_{t_{0}}^{t}\mathcal{L}(\chi,t^{ \prime})dt^{\prime}}\}\). The cumulant generating function of the current is given as
\[F(\chi)=\lim_{N\to\infty}\ln[M(\chi,N)]/N\mathcal{T}=\ln[\max_{i}\{\lambda_{i }(\chi)\}]/\mathcal{T}\] (S9)
in terms of the eigenvalue of \(\mathcal{U}(\chi,\mathcal{T},0)\) with the largest absolute value. Moreover, the zero-frequency cumulants of the current are given by derivatives with respect to the counting field as \(\langle\langle I_{L}^{n}\rangle\rangle=\partial_{i\chi}^{n}F(\chi)|_{\chi=0}\). Specifically, the average current and the noise are the first and second cumulants of the current, \(I_{L}=\partial_{i\chi}F(\chi)|_{\chi=0}\) and \(S_{L}=\partial_{i\chi}^{2}F(\chi)|_{\chi=0}\).
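The cumulants can be checked numerically: for a static drive, \(\mathcal{U}(\chi,\mathcal{T},0)=e^{\mathcal{L}(\chi)\mathcal{T}}\), so Eq. (S9) reduces to the eigenvalue of \(\mathcal{L}(\chi)\) with the largest real part, and finite differences in \(\chi\) yield the current and the noise. The sketch below is ours (illustrative parameters) and verifies that the first cumulant agrees with the stationary current computed directly:

```python
import numpy as np

def liouvillian(chi, Gamma=1.0, gamma=0.5, kappa=0.5, eps=0.0, delta=0.0):
    """Eq. (S5) with counting field chi on jumps into the left lead."""
    f = np.exp(1j * chi)
    G, g, k = Gamma, gamma, kappa
    return np.array([
        [0,     G*f,   G,     0,    -1j*g,      1j*g,       0,            0],
        [0,    -G,     0,     G,     0,         0,         -1j*k,         1j*k],
        [0,     0,    -G,     G*f,   0,         0,          1j*k,        -1j*k],
        [0,     0,     0,    -2*G,   1j*g,     -1j*g,       0,            0],
        [-1j*g, 0,     0,     1j*g,  1j*eps-G,  0,          0,            0],
        [1j*g,  0,     0,    -1j*g,  0,        -1j*eps-G,   0,            0],
        [0,    -1j*k,  1j*k,  0,     0,         0,         -1j*delta-G,   0],
        [0,     1j*k, -1j*k,  0,     0,         0,          0,            1j*delta-G],
    ], dtype=complex)

def cgf(chi):
    """Static-drive cumulant generating function, cf. Eq. (S9): the
    eigenvalue of L(chi) with the largest real part."""
    ev = np.linalg.eigvals(liouvillian(chi))
    return ev[np.argmax(ev.real)]

# Cumulants from finite differences in the counting field.
h = 1e-4
I_fcs = ((cgf(h) - cgf(-h)) / (2j * h)).real                 # current
S_fcs = (-(cgf(h) - 2 * cgf(0.0) + cgf(-h)) / h**2).real     # zero-freq. noise

# Cross-check against the stationary current tr{J_L rho_S}.
L0 = liouvillian(0.0)
A = np.vstack([L0, np.array([[1, 1, 1, 1, 0, 0, 0, 0]], dtype=complex)])
b = np.zeros(9, dtype=complex)
b[-1] = 1.0
rho_S = np.linalg.lstsq(A, b, rcond=None)[0]
I_direct = (rho_S[1] + rho_S[3]).real  # Gamma * (rho_LL + rho_SS), Gamma = 1
```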
## IV Waiting time distribution
To evaluate the distribution of waiting times between electrons tunneling into the left lead, we use the expression
\[\mathcal{W}_{L}(\tau)=\overline{\mathrm{tr}\{\mathcal{J}_{L}\mathcal{U}_{L}(t +\tau,t)\mathcal{J}_{L}\hat{\rho}_{C}(t)\}}/I_{L},\] (S10)
where the overline denotes an average over a period of the drive, while \(\mathcal{U}_{L}(t,t_{0})=\hat{T}\{e^{\int_{t_{0}}^{t}\left(\mathcal{L}(t^{ \prime})-\mathcal{J}_{L}\right)dt^{\prime}}\}\) is the time-evolution operator, which excludes electron tunneling into the left drain [8; 9; 10]. By evaluating this expression, we obtain the waiting time distributions presented in the main text.
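In the static limit, the \(\tau\)-integrals of Eq. (S10) can be done in closed form, since all eigenvalues of \(\mathcal{L}-\mathcal{J}_{L}\) have negative real parts, giving \(\int_{0}^{\infty}e^{(\mathcal{L}-\mathcal{J}_{L})\tau}d\tau=-(\mathcal{L}-\mathcal{J}_{L})^{-1}\) and \(\int_{0}^{\infty}\tau\,e^{(\mathcal{L}-\mathcal{J}_{L})\tau}d\tau=(\mathcal{L}-\mathcal{J}_{L})^{-2}\). The sketch below is ours (illustrative parameters) and checks that the waiting time distribution is normalized and that the stationary mean waiting time obeys \(\langle\tau\rangle=1/I_{L}\):

```python
import numpy as np

Gamma, gamma, kappa = 1.0, 0.5, 0.5  # illustrative values; eps = delta = 0
G, g, k = Gamma, gamma, kappa

# Liouvillian of Eq. (S5) at chi = 0 and the left-lead jump superoperator.
L = np.array([
    [0,     G,     G,     0,    -1j*g,  1j*g,   0,      0],
    [0,    -G,     0,     G,     0,     0,     -1j*k,   1j*k],
    [0,     0,    -G,     G,     0,     0,      1j*k,  -1j*k],
    [0,     0,     0,    -2*G,   1j*g, -1j*g,   0,      0],
    [-1j*g, 0,     0,     1j*g, -G,     0,      0,      0],
    [1j*g,  0,     0,    -1j*g,  0,    -G,      0,      0],
    [0,    -1j*k,  1j*k,  0,     0,     0,     -G,      0],
    [0,     1j*k, -1j*k,  0,     0,     0,      0,     -G],
], dtype=complex)
J = np.zeros((8, 8), dtype=complex)
J[0, 1] = Gamma
J[2, 3] = Gamma

# Stationary state and current.
A_lin = np.vstack([L, np.array([[1, 1, 1, 1, 0, 0, 0, 0]], dtype=complex)])
b = np.zeros(9, dtype=complex)
b[-1] = 1.0
rho_S = np.linalg.lstsq(A_lin, b, rcond=None)[0]
I_L = (J @ rho_S)[:4].sum().real

# Eq. (S10) in the static limit: W(tau) = tr{J exp[(L-J) tau] J rho_S} / I_L,
# with the tau-integrals evaluated via the resolvent of M = L - J.
M = L - J
Minv = np.linalg.inv(M)
v = J @ rho_S
norm = -(J @ (Minv @ v))[:4].sum().real / I_L               # int W(tau) dtau
mean_tau = (J @ (Minv @ (Minv @ v)))[:4].sum().real / I_L   # int tau W dtau
```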
# Building the MSR Tool Kaiaulu: Design Principles and Experiences

Carlos Paradis, Rick Kazman (arXiv:2304.14570v1, 2023-04-28, [http://arxiv.org/abs/2304.14570v1](http://arxiv.org/abs/2304.14570v1))
###### Abstract
Background: Since Alitheia Core was proposed and subsequently retired, tools that support empirical studies of software projects continue to be proposed, such as Codeface, Codeface4Smells, GrimoireLab and SmartSHARK, but they all make different design choices and provide overlapping functionality. Aims: We seek to understand the design decisions adopted by these tools--the good and the bad--along with their consequences, to understand why their authors reinvented functionality already present in other tools, and to help inform the design of future tools. Method: We used action research to evaluate the tools, and to determine a set of principles and anti-patterns to motivate a new tool design. Results: We identified 7 major design choices among the tools: 1) Abstraction Debt, 2) the use of Project Configuration Files, 3) the choice of Batch or Interactive Mode, 4) Minimal Paths to Data, 5) Familiar Software Abstractions, 6) Licensing and 7) the Perils of Code Reuse. Building on the observed good and bad design decisions, we created our own tool architecture and implemented it as an R package. Conclusions: Tools should not require onerous setup for users to obtain data. Authors should consider the conventions and abstractions used by their chosen language and build upon these instead of redefining them. Tools should encourage best practices in experiment reproducibility by leveraging self-contained and readable schemas that are used for tool automation, and reuse must be done with care to avoid depending on dead code.
Keywords:mining software repositories design choices action research.
## 1 Introduction
Research into quality dimensions of software projects requires the analysis of large quantities of data. For researchers this typically means mining data from multiple open source software projects. Pre-processing data, calculating metrics and flaws, and synthesizing composite results from a large corpus of project artefacts is a tedious and error prone task lacking immediate scientific value [10]--it is seen merely as a means to an end. This was the motivation for the Alitheia Core [10], which was made available in 2009 for the software engineering community. It provided features for data collection, integration and analysis services and
emphasized an easy-to-use extension mechanism. Yet, as of today, Alitheia Core is a dormant (read-only) project on GitHub1 and several other tools replicate at least some of its functionality.
Footnote 1: [https://github.com/islab/Alitheia-Core](https://github.com/islab/Alitheia-Core)
What went wrong? Why have many tools re-implemented the same "tedious and error prone" tasks as Alitheia Core? And do the current tools live up to the promise of Alitheia Core? In this work, we revisit lessons learned by the Alitheia Core authors and the design choices made by other, more recent tools using an action research [8] approach.
Our contributions in this paper are twofold: first, we present a set of key design decisions derived from an analysis of the aforementioned tools which either facilitated or hindered reusability, reproducibility, interoperability and extension of functionality. Second, we present our tool, Kaiaulu2, which builds upon the design lessons drawn from these prior tools, and which we believe fills a gap in the existing mining software repositories ecosystem.
Footnote 2: The documentation for the tool can be found at [https://github.com/sailuh/kaiaulu](https://github.com/sailuh/kaiaulu)
## 2 Studied Tools and Lessons Learned
The tools that we studied are Codeface [11], Codeface4Smells [23], GrimoireLab [18, 7], SmartSHARK [25, 24] and PyDriller [22]. We now present our observations regarding the strengths and weaknesses of these tools in terms of their design choices and note, throughout the work, lessons learned by the authors of Alitheia Core [10] presented in [16]. Many of these lessons are applicable and worthy of consideration in new tools with similar intents. We employed an action research methodology in studying these tools, but do not describe the details of that research here, due to space limitations.
### Abstraction Debt
We have observed different levels of abstraction employed in the surveyed tools, ranging from applications that are built as monoliths to those built from smaller components. This is consistent with what has been noted in machine learning systems as abstraction debt [21], i.e. a lack of key abstractions to support the functions and growth of MSR tools.
Codeface was created as a monolithic application, in which an entire project's Git log or mailing list is analyzed. It abstracts a complete end-to-end pipeline, implemented by a command line interface (CLI), and outputs a database dump of a project. It is therefore difficult for other applications to build on some of its unique features, for example, using its Git log parser that parses at function (rather than file) granularity.
Both GrimoireLab and SmartSHARK define several components, each with its own CLI, but the component abstractions they employ are not the same. To provide a point of comparison, GrimoireLab's Perceval provides a CLI to obtain data from many data sources (e.g. GitHub, Git, Bugzilla, Jira, mailing lists, etc.), serving as a single interface for data collection. In contrast, SmartSHARK defines its abstraction per data source type and, in the case of data acquisition, at a more fine-grained level than Perceval. For example, consider issueSHARK and vcsSHARK, two components of SmartSHARK. IssueSHARK defines abstractions for different types of issue tracker sources, and vcsSHARK for different types of version control systems. SmartSHARK's abstractions facilitate defining additional features specific to a data source type, such as separating static vs. dynamic data in issue trackers (e.g. creation time of the issue vs. comments), regardless of its underlying implementation (e.g. Jira or Bugzilla)5.
Footnote 5: [https://github.com/smartshark/issueSHARK#introduction](https://github.com/smartshark/issueSHARK#introduction)
PyDriller is a single component and is smaller in scope as it only abstracts Git repositories. However, it is different from the other tools in that it provides an API instead of a CLI. Its motivation is also different: it wraps around GitPython, which in itself provides a Pythonic API to nearly all features of Git, to provide an API catered towards mining software repositories only. In providing just a subset of Git functionality, it exposes functionality catering specifically to the needs of mining repositories.
The choice between a CLI and an API involves tradeoffs. Command-line-only interfaces become an issue when an end-user is interested in an abstraction of the data not preconceived by the authors. However, an API requires the user to be familiar with the programming language the tool was built on top of, whereas a CLI does not.
_From the above we derive the following lessons learned: End-to-end pipelines such as Codeface's limit the ability of other researchers to build on top of them. Defining more specific abstractions per data type, whether via CLI or API as issueSHARK and PyDriller do, facilitates building additional functionality specific to a particular data type, or audience. Moreover, CLIs can be built on top of a well-defined API, providing the benefit of both interfaces, as we do in Kaiaulu._
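The pattern of a CLI layered over a well-defined API can be sketched in a few lines. The function and flag names below are hypothetical (they do not correspond to Kaiaulu's or any surveyed tool's actual interface); the point is that the CLI stays a thin mapping onto API parameters, so both interfaces remain available:

```python
import argparse
import json

def parse_git_log(path, granularity="file"):
    """Hypothetical API entry point: parse a Git log at file or function
    granularity and return a list of records. A real implementation would
    read the log; here we only illustrate the API surface."""
    return [{"path": path, "granularity": granularity}]

def main(argv=None):
    """Thin CLI wrapper: each flag maps one-to-one onto an API parameter."""
    parser = argparse.ArgumentParser(description="Parse a Git log")
    parser.add_argument("path")
    parser.add_argument("--granularity", choices=["file", "function"],
                        default="file")
    args = parser.parse_args(argv)
    # Emit machine-readable output to stdout for batch pipelines.
    print(json.dumps(parse_git_log(args.path, args.granularity)))

if __name__ == "__main__":
    main()
```

Interactive users call `parse_git_log` directly from a session or notebook, while batch pipelines invoke the script and consume its JSON output.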
### Tool Configuration Files vs Project Configuration Files
In [24], the authors of SmartSHARK noted that one of their goals was to support replication through the storage of data in a single harmonized schema. Replication, it is argued, is supported by a common dataset. However, we have observed that replication is also supported through configuration files in Codeface.
Codeface uses a concept we named project configuration files. These files provide a single compact source where parameters associated with the acquisition and manipulation of a dataset can be stored. Project configuration file parameters are required for tool execution, and they are a pragmatic, lightweight and human-readable way to specify reproducible results. Project configuration files also save time when a project is re-analyzed in other studies, as some project-specific information may not be obvious from the dataset alone.
Of all the tools we have reviewed, only Codeface provides users with a means to specify project configuration files. This led to a large collection of project configurations that have been versioned in Codeface over time6. This information, which supports repeatability, may otherwise not have been possible (or at least easy) to reconstruct if all that was shared was the data.
Footnote 6: See [https://github.com/siemens/codeface/tree/master/conf](https://github.com/siemens/codeface/tree/master/conf) and [https://github.com/maelstromdat/codeface4smells_TR/tree/master/Configurations](https://github.com/maelstromdat/codeface4smells_TR/tree/master/Configurations) for Codeface and Codeface4Smells respectively
We note that externalizing parameter choices in data acquisition and manipulation tasks has been more prominent in machine learning frameworks, for example to define experiments in configuration files7, which include machine learning model selection and choice of model hyper-parameters [19].
Footnote 7: [https://xnmt.readthedocs.io/en/latest/experiment_config_files.html](https://xnmt.readthedocs.io/en/latest/experiment_config_files.html)
_From the above, we derive the following lessons: integrating configuration files that are human-readable and leveraged by the tool can enable reproducibility, without the hurdles of sharing large quantities of primary data._
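As an illustration, a project configuration might look as follows. The field names and values are invented for this example and do not reproduce Codeface's or Kaiaulu's actual schemas; the point is that every analysis parameter lives in one human-readable file that the tool consumes directly:

```python
import json

# A hypothetical project configuration, stored as a human-readable file.
# Field names are illustrative only; each tool defines its own schema.
config_text = """
{
  "project": "apache-apr",
  "git_repo": "https://github.com/apache/apr",
  "mailing_list": "dev@apr.apache.org",
  "file_filters": ["*.c", "*.h"],
  "issue_id_regex": "APR-[0-9]+",
  "analysis_window_days": 90
}
"""

config = json.loads(config_text)
# Because the tool reads all analysis parameters from this file, re-running
# the analysis with the same configuration reproduces the same results,
# without sharing large quantities of primary data.
assert config["analysis_window_days"] == 90
```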
### Batch Mode, Interactive Mode, and Literate Programming
As we noted before, with the exception of PyDriller, every tool defines a CLI, but not an API. This means the only way to interact with these tools is batch mode. Meanwhile, PyDriller does not offer a CLI, only an API, which confers its users the ability to leverage Python's interactive mode to _explore_ the data. However, it does not include a CLI for batch mode processing, for out-of-the-box data acquisition, processing or data analysis. What we observe then is that existing tools decide on either CLI or API, but not both. We believe, however, that the mining of software repositories requires a tool capable of both, supporting an iterative process of data exploration, and when concluded, a way to enact batch processing to scale up.
To illustrate our claim--as no existing tool provides both capabilities--we provide a few examples: in a recent socio-technical study, we needed to do identity matching, applying heuristics that have been published by other authors (e.g. [4, 27]) to assign identities to developers who use different names and e-mails in version control systems and mailing lists. Consider the case where we chose the simplest method, where developers whose name or e-mail match are assigned the same id. At first glance, this seems like a reasonable assumption. However, it was due to experimenting interactively with the identity matching API that we discovered that all core developers, due to the use of an issue tracking system, ended up sharing the same e-mail address. We noted this case as a unit test until a better heuristic could be found, and then examined the data for other cases until we were satisfied with the results. We then saved the observed parameters in a project configuration file, and used it to deploy a batch process to collect various computationally intensive architectural metrics.
We have had similar experience in determining and testing heuristics to filter files in a repository, or determining the method that developers adopt to annotate
issue numbers in commit messages. Because each project may apply its own conventions, tools that offer an experimentation capability, and then defer mass data processing to batch more efficiently support the full workflow of a researcher in mining software repositories.
The described interactive data explorations could certainly have been done in a Python or R session, but it is better to leverage literate programming using, for example, Python or R Notebooks, so that the rationale of the design experiment is not lost. However, care must be taken to not extensively rely on notebooks without further refactoring functionality into the code base, leading to dead experimental code paths [21].
_Our learned lessons here were: existing tools choose either APIs or CLIs (supporting batch or interactive modes). However, making both interfaces available will better support users in their various research efforts in mining software repositories. The use of Notebooks to illustrate and explain the API complements the API, provided functionality is not entirely written in Notebooks. In Kaiaulu, we leverage both APIs and Notebooks, which is a common practice in R packages, therefore avoiding abstraction debt._
### Minimal Paths to Data
According to [16, p.233], the effort required to learn how infrastructure code works has to be proportional to the gains and account for deprecation. We agree with this observation. Let us look at how existing tools manage this concern.
When using GrimoireLab components (in particular Perceval) the minimal path to data is surprisingly short. Provided with a Git repository URL, or a local copy, it will output JSON to stdout. Likewise, provided with a URL to a mailing list archive (mbox) or a local file, it will also output JSON to stdout. A developer can easily integrate wrappers to its CLI, and users can easily obtain data for a project of interest. In this ecosystem, a database is available, but it is optional: users need not concern themselves with learning GrimoireLab's Elasticsearch database to obtain data.
This is in contrast to Codeface and SmartSHARK, both of which require user familiarity with MySQL and MongoDB respectively, along with their data model schemas to obtain the equivalent version control system and mailing list data. The minimal path to data in these cases is much longer, including the setup overhead and integration with other tools.
When data integration is sought in the database, GrimoireLab retains its approach of keeping the data closest to source, and not harmonizing it in a schema that facilitates integration [24]. Codeface's MySQL and SmartSHARK's MongoDB provide a harmonized schema, which makes it easier for users to store the various types of data.
In the case of PyDriller, which provides an API, the minimal path to data requires familiarity with the Python programming language. This offers the convenience of reshaping the data to the user's final need, but adds an overhead to the user for familiarization with the API, instead of just the raw data schema from the source of interest (which the user is likely already familiar with for
their research purposes). One researcher [9, p.39] who extended Codeface4Smells identified a problem of Pipeline Jungles [21], due to heavy reliance on a folder hierarchy and file name conventions.
_Our lessons learned here were: databases need not be a requirement to provide users with various data sources. This also simplifies component reuse by other tools and decreases the likelihood of reinventing the wheel. Providing a minimal path does not exclude providing a database for researchers, as Perceval shows. However providing a harmonized schema can save researchers from having to re-implement code to integrate the same kinds of infrastructure over and over. Lastly, providing an API gives some flexibility to users to reshape the data with the tool. But user familiarity with the programming language and API is a kind of overhead and this does not seem ideal, as the data could be provided directly via a CLI leaving a task for the researcher to adapt it in their own programming language. As such, we believe having available a CLI that outputs the data as Perceval does, and a harmonized schema as in Codeface and SmartSHARK, provides the best combination._
### Other Design Decisions
We briefly mention here other (more minor) design decisions that we believe may cause difficulties in adoption.
**Familiar Software Abstractions.** Both Perceval and PyDriller leverage a common interface for end-users. They are both Python libraries, and provide the expected interactions for CLI and API respectively. In Perceval's CLI, provided with a list of parameters and flags, data is output to stdout. PyDriller exposes an API, an extension to a programmer's familiar programming paradigm. This is in contrast to ecosystems that define a different abstraction, such as SmartSHARK, where detailed instructions must be followed to extend its functionality 8. Extension instructions are also not available for Perceval or Codeface.
Footnote 8: [https://smartshark.github.io/plugin/tutorial/python](https://smartshark.github.io/plugin/tutorial/python)
**Licensing.** Another important consideration in reusing a code component is how permissive its license is. For example, stringr, an R package to manipulate strings used by XGBoost, a popular machine learning algorithm, was replaced by stringi, another R package to manipulate strings, solely based on the difference in licenses.9 Similar reasoning also led an R package that represents data tables efficiently to adopt a different license because the existing license "could be interpreted as preventing closed-source products from using data.table"10. Lack of clarity on interactions of open source licenses has been reported by [1]. Among the tools we studied, we have observed the following licenses: Codeface adopts GPL 2.0, PyDriller Apache 2.0, SmartSHARK Apache 2.0, and Grimoire's Lab GPL 3.0 and LGPL 3.0.
Footnote 9: [https://github.com/dmlc/xgboost/issues/1338](https://github.com/dmlc/xgboost/issues/1338)
**Perils of Code Reuse.** With the availability of package managers such as CRAN and PyPi which greatly facilitate code reuse, you can declare dependencies on others' code instead of copying it into your own project, taking
advantage of their functionality without assuming the burden of maintenance. However code interdependence also poses risks [26], such as dependencies going extinct [6]. Hence, care has to be taken to avoid dependencies to non-maintained third-party code.
An interesting example occurs in mecoSHARK11, whose chain of dependencies exemplifies the concern posed here. mecoSHARK is a component that serves as a wrapper for OpenStaticAnalyzer12, with a last commit date of July 13, 2018. In turn, OpenStaticAnalyzer also wraps several other dependencies, including FindBugs13, last released on March 15, 2015. In its bug tracker14, FindBugs requests for bugs to no longer be reported, noting that SpotBugs15, FindBugs' successor, should be used instead. This confirms that the mecoSHARK wrapper, which provides OpenStaticAnalyzer functionality to SmartSHARK, is now dependent on dead code, further increasing the burden of the SmartSHARK ecosystem maintainers. Nonetheless, SmartSHARK's approach to wrap black-box packages into common APIs is considered good practice [21].
Footnote 11: [https://github.com/smartshark/mecoSHARK](https://github.com/smartshark/mecoSHARK)
Footnote 12: [https://github.com/sed-inf-u-szeged/OpenStaticAnalyzer](https://github.com/sed-inf-u-szeged/OpenStaticAnalyzer)
Footnote 13: [http://findbugs.sourceforge.net/](http://findbugs.sourceforge.net/)
Footnote 14: [https://sourceforge.net/p/findbugs/bugs/1487/](https://sourceforge.net/p/findbugs/bugs/1487/)
Footnote 15: [https://github.com/spotbugs/spotbugs](https://github.com/spotbugs/spotbugs)
Footnote 16: [https://ropensci.org/about/](https://ropensci.org/about/)
Footnote 17: [https://chaoss.community/](https://chaoss.community/)
Footnote 18: [https://devguide.ropensci.org/softwarereviewintro.html#whysubmit](https://devguide.ropensci.org/softwarereviewintro.html#whysubmit)
Footnote 19: [https://www.r-project.org/](https://www.r-project.org/)
As a means to mitigate this risk, relying on and contributing work to open source communities that more carefully assess the health of projects and try to maintain them, such as the Apache Software Foundation, ROpenSci16, and CHAOSS17 may be an important consideration. For example, ROpenSci accepts R packages via a streamlined peer review process and, for accepted packages, provides community support, package promotion, and fast-track publication to journals18.
## 3 Design Principles in Kaiaulu
In this section, we discuss how our design principles are translated into Kaiaulu's specific design decisions. In the following section, we fully flesh out Kaiaulu's modules and features.
**Batch Mode, Interactive Mode, and Literate Programming in Kaiaulu**. We chose to use the R language19, due to the familiarity of the authors with the language and a preference for its package architecture.
Minimally, the structure of an R package consists of the package metadata and its API. In addition, the R ecosystem encourages and promotes best practices to include documentation packages called vignettes, which leads R users to expect an API and R Notebooks when installing packages from CRAN (The
Comprehensive R Archive Network).20 CRAN treats R Notebooks as first-class citizens in an R package21, showing any available R Notebooks on each package's website. Because of this package structure, complying with familiar software abstractions (see Section 2.5) automatically brings the benefits of literate programming (see Section 2.3).
Footnote 20: [https://cran.r-project.org/web/packages/](https://cran.r-project.org/web/packages/)
Footnote 21: See for example under Vignettes: [https://cran.r-project.org/web/packages/ggplot2/index.html](https://cran.r-project.org/web/packages/ggplot2/index.html)
**Abstraction Debt in Kaiaulu.** R natively supports tables and vectors as data types, which are familiar abstractions for data analysts. To capitalize on this, Kaiaulu's _parse_ functions map most data sources (Git logs, mailing lists, file dependencies, software vulnerability feeds, metrics, etc.) to tables with standardized column naming, which allows for quick identification of what data can be combined. Kaiaulu also offers various _transform_to_network_ functions to represent these tables as networks and interactively visualize them22, which in turn enables more complex socio-technical analyses at different granularities: functions, files, classes, etc.
Footnote 22: [https://github.com/sailuh/kaiaulu/blob/master/R/network.R](https://github.com/sailuh/kaiaulu/blob/master/R/network.R)
**Tool Configuration Files vs Project Configuration Files in Kaiaulu.** Following the design choice of Codeface (see Section 2.2), and building on best practices for machine learning configuration files [21] we implemented project configuration files using YAML. Because we externalize all parameters in project configuration files, an important concern is that the file does not grow overly complex, requiring documentation of its own. That is, we do not wish the minimal path to data to increase as new features are added, as we discuss next.
**Minimal Path to Data in Kaiaulu.** As discussed in Section 2.4, it is important that the path to data remains as simple and short as possible. We again build upon familiar concepts, specifically with the intent of applying the rule of least surprise [20, Ch.11]23 i.e. 'do the least surprising thing'. In an R package, it is expected that R Notebooks provide examples of how to leverage the API to accomplish a task by combining multiple functions, while individual functions provide self-contained examples, which can be obtained in the R environment at any time by preceding a function name with a question mark, e.g. '_?parse_gitlog_'.
Footnote 23: Also publicly available at: [http://www.catb.org/~esr/writings/taoup/html/ch1is01.html](http://www.catb.org/~esr/writings/taoup/html/ch1is01.html)
To build upon this we: 1) _Do not create_ any dependency between configuration files and the API: functions take, as input, parameters which are familiar to any programmer; 2) _Use_ project configuration files only in the first code block in R Notebooks to load the variables required to use the functions of the API, similar to how best practices in static programming languages encourage variable definitions at the beginning of a program; 3) _Create_ a dependency between the CLI and the project configuration files, to facilitate batch processing and reproducibility.
Our intent is that users will first observe the R Notebooks to get a better understanding of the API for a particular task of interest, and in doing so will
familiarize themselves with both the relevant portion of the API and the project configuration file. If the interest is only, for example, to understand how to parse Git logs using the Git log R Notebook, then users should not be concerned with specifying the mailing list. When comfortable, users can then use their newfound understanding to scale the analysis to the entire project using the configuration file for the CLI, build their own analyses as vignettes, or define new CLI interfaces. This design is consistent with a mining software repositories workflow, in which a researcher should first explore the data qualitatively to assess threats to validity, before scaling up data processing in batch mode, rather than running a tool blindly with default parameters or arbitrary thresholds without clarity about the assumptions the tool is making.
Kaiaulu also further decreases the minimal path to data in terms of how it handles third-party dependencies. Users need only concern themselves with installing dependencies for their task of interest. For example, if the interest is only to parse Git logs, they need only set up Perceval, and provide its binary path as a parameter to Kaiaulu's _parse_gitlog_ to obtain the parsed data. More generally, the _parse_ API minimizes effort for researchers by transforming various tool-specific data formats, if the researcher so desires, into tables, and performing minimal processing on potentially inconsistent fields, such as file paths, to make them internally consistent.
## 4 The Kaiaulu R Package
Based on the above observations and lessons learned, we now describe the realized modules and features resulting from the design decisions behind the Kaiaulu R package.
Mining software repositories often requires the handling of multiple data sources to analyze a project's ecosystem. Minimally, a researcher is required to understand the data source in its native form, acquire it (typically using an API), and parse and save it (e.g. as a table of data). Overhead is incurred if a tool needs to be purpose-built to accomplish these steps. In the best case, the acquisition and parsing steps can be accomplished by using an existing tool. When designing Kaiaulu, we asked ourselves how to emphasize the minimal path to data (as discussed in Sec. 2.4). To illustrate our rationale, Figure 1 revisits some of the tools' design decisions we discussed earlier.
In Figure 1, Perceval (left) provides a single CLI interface for the acquisition of various data sources. For example, a project's issues can be fetched by using the 'jira' endpoint, while 'git' may be used to parse repositories. PyDriller (center-left) provides functions via a Python API. Users of these tools gain flexibility in parsing the data, at the cost of a higher learning curve and familiarity with the language. SmartSHARK (center-right) provides similar functionality to Perceval, but endpoints such as 'jira' and 'git' are now realized as entirely separate tools, orchestrated by another tool. Finally, Codeface (right) provides a single CLI interface, like Perceval. But most of its functionality is executed in batch mode and output into a single database dump, offering the least flexibility in
terms of which analyses to execute. Uniquely among these tools, Codeface stores project parameters in reusable configuration files.
Using Figure 1 as a basis for comparison, Kaiaulu's design is shown in Figure 2, separated into parts 1) through 4). Kaiaulu borrows from the design of PyDriller by defining an API and a set of functions (2). The use of configuration files, inspired by Codeface (3), is done at the R Notebook level (rather than at the function level). That is, project configuration parameters are read into an R Notebook, and appropriate parameters are then passed to functions. This allows us to decouple configurations from function signatures, to tell best practice stories (interspersed with code) of how parameters are used in various analyses [14, 5, 4], and to offer a reusable end-to-end pipeline for specific exploratory analyses. For example, the social smells notebook24 emphasizes care in assessing a project's communication, which is often fragmented over multiple archives. More importantly, R Notebooks enable easy manual inspection of intermediate data, such as the use of identity match heuristics. We found this use of 'reusable data stories' particularly useful to familiarize undergraduate and graduate research assistants with common pitfalls.
Footnote 24: [https://github.com/sailuh/kaiaulu/blob/master/vignettes/social_smell_showcase.Rmd](https://github.com/sailuh/kaiaulu/blob/master/vignettes/social_smell_showcase.Rmd)
We borrowed the use of CLIs (4) from Perceval and SmartSHARK. The CLI serves to accommodate users who are unfamiliar with R; it also supports scaling analyses defined and prototyped in Notebooks, so that they can be run in batch mode. To build upon (2), the CLI simply utilizes the defined API behind the scenes, which simplifies code maintenance. As with the Notebooks (3), the CLI parses project configuration files, which also facilitates server-side reuse.
Kaiaulu was designed by combining these concepts from (1-4). It was implemented as an R package, building upon familiar software abstractions. Typically, R packages are defined as an API of functions, as found in PyDriller, together with R Notebooks. By showcasing project configuration files where users are expected to learn about the package, users can familiarize themselves with project configuration files and the command line interface, which is less commonly found in R packages and entirely optional. In the following subsections, Kaiaulu's major processing elements are discussed.
Figure 1: Conceptual diagram of interface, input, and output of tools showcasing differences in design.
### Parsers
In the _Parsers_ module, our goal was to minimize a user's effort, in terms of acquiring and parsing project source code. These functions were combined into a single interface with consistent nomenclature.
Each of Kaiaulu's parsers is defined as a function (e.g. _parse_mbox_, _parse_gitlog_), which are also accessible via a CLI. We wanted parsers in Kaiaulu to reflect Perceval's philosophy of minimal paths to data, with a small learning curve. That is, given a data source, we would like users to quickly be able to see the data without spending excessive time on setup. As such, each parser function is given a single responsibility: to display a data source as a table with a standardized column nomenclature (in case multiple sources referred to the same data with different names). Unlike Perceval, since an API option is also available, users can interactively prototype and analyze the data in the R environment. Having tables as the default output option minimizes the time spent learning what fields are available in the source. The standardized nomenclature allows for intuitive joining operations across the outputs of different parsers.
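To illustrate why the standardized column nomenclature enables intuitive joins, consider a minimal sketch. This is a language-agnostic illustration written in Python (Kaiaulu itself is R), and the column names are hypothetical, not Kaiaulu's actual schema.

```python
# Two parser outputs as tables (lists of dicts) that share a standardized
# column name ("commit_hash"), joined with the standard library only.
def join_tables(left, right, key):
    """Inner-join two row lists on a shared, standardized column."""
    index = {}
    for row in right:
        index.setdefault(row[key], []).append(row)
    joined = []
    for row in left:
        for match in index.get(row[key], []):
            merged = dict(row)
            merged.update(match)
            joined.append(merged)
    return joined

git_log = [{"commit_hash": "a1", "file": "net.R"},
           {"commit_hash": "b2", "file": "parse.R"}]
issues = [{"commit_hash": "a1", "issue_id": "PROJ-7"}]

linked = join_tables(git_log, issues, "commit_hash")
```

Because both tables name the shared field identically, the join requires no per-source mapping logic.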
To account for _the perils of code reuse_, Kaiaulu limits its interface to third-party software that has a CLI interface. Parsers with third-party dependencies simply contain in their signatures an additional parameter for the path to the required binary. This dependency mechanism allows users to bypass setting up third-party tools which they do not directly need to use. Additionally, users benefit from using the Kaiaulu function to obtain a tabulated and standardized data input. For example, the _parse_gitlog(git_repo_path,perceval_path)_ function requires, as input, Perceval's binary to tabulate its JSON output. In this way
Figure 2: Conceptual diagram of interface, input, and output of tools showcasing how Kaiaulu compares to the tools.
parsers can build upon third party functionality to implement new features. For example, _parse_gitlog_entity(git_repo_path,utags_path,project_git_log,kinds)_ implements a git log parser capable of tabulating developer changes from git logs at the granularity of functions rather than files (inspired by Joblin et al [12]). In addition, the assumptions and threats to validity in the cited work are provided in the Notebook 25.
Footnote 25: [https://github.com/sailuh/kaiaulu/blob/master/vignettes/blamed_line_types_showcase.Rmd](https://github.com/sailuh/kaiaulu/blob/master/vignettes/blamed_line_types_showcase.Rmd)
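The "binary path as a parameter" pattern described above can be sketched as follows. This is an illustrative Python sketch (Kaiaulu is R); the function names and command-line flags here are hypothetical, not Perceval's or Kaiaulu's real interfaces.

```python
# The wrapper only builds and runs a command line, so users who never need
# this parser never have to install the underlying tool.
import subprocess

def build_git_log_command(tool_path, git_repo_path):
    # Assemble the argument vector; callers can pass it to subprocess.run.
    return [tool_path, "git", git_repo_path, "--json-line"]

def parse_gitlog_sketch(git_repo_path, tool_path, runner=subprocess.run):
    cmd = build_git_log_command(tool_path, git_repo_path)
    result = runner(cmd, capture_output=True, text=True)
    return result.stdout  # downstream code would tabulate this output

cmd = build_git_log_command("/usr/local/bin/perceval", "/tmp/repo/.git")
```

Injecting `runner` also makes the wrapper testable without the binary installed.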
While simple in concept, we note that existing tools do not offer this functionality. For instance, Codeface [12] offers an implementation of a function-based git log parser, but since it has an 'all-in-all-out' interface, this function cannot be reused elsewhere. The same is true for SmartSHARK. Perceval, while offering a shorter path to data, still requires tabulation and standardization of the collected results. Lastly, PyDriller does not adopt the philosophy used here of extending functionality based on third-party software, as it limits its scope to Git.
Kaiaulu currently employs a variety of parsers, providing the ability to parse git logs, mailing list archives (e.g. pipermail, Apache's mod_mbox), issue trackers (e.g. Jira, GitHub), static parsers (file and function dependencies), evolutionary parsers (file and function changes), commit hashes (e.g. to identify issue ids from commit messages), and software vulnerability feeds. Parsers which contain filepaths also contain optional regular expression filters to whitelist or blacklist files based on their extensions or naming conventions. For example, we use this to remove test files from analyses as these files could compromise code metrics.
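The whitelist/blacklist path filtering described above can be sketched in a few lines. This is an illustrative Python sketch (Kaiaulu is R), and the patterns shown are examples, not Kaiaulu's defaults.

```python
# Keep files matching a whitelist regex, then drop those matching a
# blacklist regex (e.g. test files that would compromise code metrics).
import re

def filter_paths(paths, keep_regex, drop_regex=None):
    kept = [p for p in paths if re.search(keep_regex, p)]
    if drop_regex:
        kept = [p for p in kept if not re.search(drop_regex, p)]
    return kept

paths = ["R/network.R", "tests/testthat/test-network.R", "README.md"]
source_files = filter_paths(paths, r"\.R$", r"(^|/)test")
```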
### Transformers, Graphs and Networks
Kaiaulu's Transformer, Graph, and Network modules are grounded on the observation that most software and social metrics are graph-based (e.g. co-change, fan-in, fan-out, communication). These modules _transform_ the data provided by various kinds of parsers that parse the raw project data. Transformers reformat the data provided by parsers into lists of nodes and edges which are then represented as networks, using _graph_ data structures. In this way we can more easily visualize and explore the _networks_ of relationships among a software project's elements.
Kaiaulu represents the socio-technical network for each snapshot as a graph \(G_{st}=(V,E)\), where the set of nodes \(V=V_{a}\cup V_{f}\cup V_{t}\) comprises authors \(V_{a}\), source files \(V_{f}\), and e-mail threads \(V_{t}\). The set of edges \(E=E_{comm}\cup E_{chg}\) models communication and collaboration between authors, where communication is captured via \(E_{comm}\subseteq V_{a}\times V_{t}\), and file changes via \(E_{chg}\subseteq V_{a}\times V_{f}\). Observe that, by this construction, the socio-technical network is in fact two bi-modal bipartite networks, \(G_{st}=G_{chg}\cup G_{comm}\). Both \(G_{chg}\) and \(G_{comm}\) are also weighted (representing an author's count of changes to a file within a user-specified time window (e.g. 3 months), and the number of replies submitted to an e-mail thread, respectively) and undirected (the direction is irrelevant in this case because it could only go in one direction in each bipartite network). Likewise, the CVE and File Networks are weighted, undirected, bipartite graphs. The definitions of the various transformations are encapsulated separately in functions, consistent with the overall architecture.
**Projection Transformations.** Familiar software engineering metrics can be derived from graph projections. Intuitively, a graph projection operation eliminates one set of the 'colored' nodes in a bipartite graph, and connects the adjacent black nodes together, where the resulting edge weight is the sum of the eliminated edges. For instance, in a bipartite network represented by file and commit nodes, eliminating the commit nodes would result in a file's co-change metric, revealing _indirect collaboration_. For example, if five authors modified the same file within a given time period, then the projection operation shows that _all five authors indirectly collaborated_, irrespective of the order of their changes (note that the derived uni-modal networks are undirected). We define this as the projection transformation to go from bi-modal to uni-modal networks.
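The projection transformation above can be sketched without a graph library. This is an illustrative Python sketch (Kaiaulu is R): a bipartite author-file change network is collapsed into an author-author network whose edge weight is the sum of the eliminated edges.

```python
# Collapse (author, file, weight) triples into author-author co-change weights.
from collections import defaultdict
from itertools import combinations

def project(bipartite_edges):
    """bipartite_edges: (author, file, weight) -> {(author_a, author_b): weight}."""
    by_file = defaultdict(dict)
    for author, fname, w in bipartite_edges:
        by_file[fname][author] = by_file[fname].get(author, 0) + w
    projected = defaultdict(int)
    for authors in by_file.values():
        for a, b in combinations(sorted(authors), 2):
            # The new edge weight sums the two eliminated author-file edges.
            projected[(a, b)] += authors[a] + authors[b]
    return dict(projected)

edges = [("alice", "net.R", 2), ("bob", "net.R", 1), ("alice", "parse.R", 3)]
co_change = project(edges)
```

Note that `parse.R`, touched by only one author, contributes no projected edge.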
**Temporal Transformations.** Let us now consider a second approach to obtain uni-modal networks. In [12], the authors define one method to construct uni-modal networks from the same data by defining indirect collaboration using the notion of incremental contributions through the timestamps on commits. For example, if author A modifies a file, and very next change to the same file is performed by author B, then B is said to have _indirectly collaborated with A_. A similar intuition and transformation could be used to categorize e-mail replies. We define this method, to go from the bipartite network to the uni-modal network, a _temporal_ transformation (as it relies on the timestamps). Observe in this case that the derived uni-modal networks will be directed graphs (which indicate the flow of time). The edge's weight is defined as the sum of lines of code added by both developers in their respective file changes.
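The temporal transformation can be sketched analogously. This is an illustrative Python sketch (Kaiaulu is R): changes to a file are ordered by timestamp, and a directed edge links each change's author to the author of the very next change to the same file, weighted by the summed lines added, as described above. The field names are illustrative.

```python
# Derive directed "indirect collaboration" edges from timestamped changes.
from collections import defaultdict

def temporal_edges(changes):
    """changes: (timestamp, author, file, lines_added) -> {(from, to): weight}."""
    per_file = defaultdict(list)
    for ts, author, fname, added in changes:
        per_file[fname].append((ts, author, added))
    edges = defaultdict(int)
    for entries in per_file.values():
        entries.sort()  # order by timestamp
        for (t1, a1, w1), (t2, a2, w2) in zip(entries, entries[1:]):
            if a1 != a2:
                edges[(a1, a2)] += w1 + w2  # the later author collaborated with the earlier one
    return dict(edges)

changes = [(1, "alice", "net.R", 10), (2, "bob", "net.R", 5), (3, "alice", "net.R", 1)]
flow = temporal_edges(changes)
```

Unlike the projection, the result is directed, encoding the flow of time.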
Which method to derive uni-modal networks should we choose? This decision is encapsulated in Kaiaulu by the choice of functions. By swapping projection and temporal transformation functions, users can experiment with, and visualize, their various implications.
An example of both projection and temporal networks is shown in Figure 3 (the name of each node's developer has been blurred). In the projection network, developers are connected if, in a given time window, they modified any file in common. In the temporal network, the direction displays which developers changed files after which in the given time window. For example, a bidirectional arrow means two developers changed the same file together. A uni-directional arrow suggests another developer may have "taken over" during that time window. In both cases, we can formulate hypotheses about the nature of collaboration to be tested in the exploratory analysis.
**File, Functions, and Entities.** Another consideration encapsulated in Kaiaulu's functions is which entities are analyzed to derive indirect collaboration. For example, consider Figure 4. As we can see, the choice of granularity will also affect the number of edges generated, where a file granularity generates more edges than function granularity. A larger number of edges, in turn, may
impact the social smell metrics, as the existence of connections between developers in one network, and the absence of edges in another network, may inflate the count of metrics such as social smells (which we define in the next section). The authors in [12] used a combination of temporal transformation and function granularity. In Kaiaulu, we implemented both the file granularity, and generalized the function granularity to _entities_, where a sub-file unit can be any source code block region of interest (e.g. functions, classes, or language-specific features like structs in C).
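Attributing a change to a sub-file entity amounts to checking which entity's line range contains each changed line. This is an illustrative Python sketch (Kaiaulu is R); the entity table here is hypothetical, whereas Kaiaulu derives entity ranges from a ctags binary passed as _utags_path_.

```python
# Map changed line numbers to the entities (e.g. functions) containing them.
def attribute_changes(entities, changed_lines):
    """entities: {name: (start_line, end_line)}; returns touched entity names."""
    touched = set()
    for line in changed_lines:
        for name, (start, end) in entities.items():
            if start <= line <= end:
                touched.add(name)
    return touched

entities = {"parse_gitlog": (1, 40), "parse_mbox": (45, 90)}
touched = attribute_changes(entities, [12, 50])
```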
### Identity
A critical component of conducting socio-technical analysis in open source communities is assigning a consistent identity to users who may employ multiple variants of their name and e-mail addresses in their project interactions. Several approaches to match identities have been proposed (e.g. [4], [27]). Exact name matching (either names or e-mails) or partial matching (e.g. based on edit distance) are two commonly used schemes.
Figure 3: Temporal vs Projection Networks, format adapted from [13].
Our identity matching was designed as a 3-step pipeline: formatting, name-email separation, and pair-wise matching. Formatting includes the removal of symbols such as '\(<\)' and '\(>\)' or commas, and replacing 'at' with '@', while avoiding modifying a name such as Matt. Name and e-mail separation handles cases where the first name, the last name, or both are not provided, multiple-word names, etc. Finally, pair-wise matching handles comparisons of name and e-mail, or reversed names. In total, the steps of formatting, name separation, and name matching amounted to 31 test cases, which were successfully implemented. At the end of this step, users in the version control system, issue tracker, and mailing list who matched via the tests we implemented were assigned an appropriate ID. Thus, given the name, and optionally the e-mail, other information sources can be matched.
An example of the utility of identity matching is shown in figure 6. Here, project communication occurs in parallel in both the Jira issue tracker and the project's mailing list. We fuse these information sources into a single "Reply Network".
In summary, the transformer API provides users with flexibility with respect to both temporal assumptions and sub-file granularity. Because all networks are annotated graphs, community detection algorithms in Kaiaulu can be used to
Figure 4: File vs Sub-file (e.g. Function) Networks, adapted from [13].
identify important patterns. For example, if applied to a file-commit network over a fixed period of time, co-changed file clusters can be identified. Similarly, if the file network is derived from file to file dependencies, clusters related to modularity measures can be derived. Developer networks can be used to detect communities.
In Figure 7, we apply community detection to a temporal network such as the one illustrated in Figure 5. The result is displayed by re-coloring the black nodes. Darker blue and lighter blue nodes represent two communities of developers as determined by the files that they changed in common. Developers in black represent boundary nodes, which participate in both communities. In the interactive format, researchers can "zoom in" on these nodes to identify the common developers, and can use this information to draw further hypotheses.
### Metrics
In the metrics module, we define some commonly used metrics, such as number of bugs, churn, LOC, as well as the less well-known social metrics. Demographics are also provided to help contextualize the previously presented social networks, such as the number of developers modifying files and exchanging e-mails, number of files, threads, and different timezones26.
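As an example of the simpler metrics, churn can be computed directly from a tabulated git log. This is an illustrative Python sketch (Kaiaulu is R), and the column names are hypothetical.

```python
# Churn per file: sum of lines added and removed over the analyzed window.
from collections import defaultdict

def churn_per_file(gitlog_rows):
    churn = defaultdict(int)
    for row in gitlog_rows:
        churn[row["file"]] += row["lines_added"] + row["lines_removed"]
    return dict(churn)

rows = [{"file": "net.R", "lines_added": 10, "lines_removed": 2},
        {"file": "net.R", "lines_added": 1, "lines_removed": 1}]
totals = churn_per_file(rows)
```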
Figure 5: Temporal Network. Nodes indicate developers. Edges represent common changed files. The edge direction indicates the temporal order of change.
Figure 6: Reply networks combine communication networks. Here dark blue nodes are issue comments, light blue nodes are mailing list comments, and black nodes are developers. Red nodes are developers who communicate in both the mailing list and the issue tracker.
For bug counts, rather than simply reporting these as metrics, we make it easy for users to observe their topology. This is evident in Figure 8, where a single issue is associated with multiple files (top right). While some other files may have a lower count of issues, more complex structure (such as what we can observe at the bottom left of the figure) would be missed if only metrics were employed. Furthermore, feature issues can be discerned from bugs by examining the issue labels.
We devote this section to briefly discuss the social metrics as they leverage the previously discussed modules. For the social metrics, we adopt the definitions of social smells defined by [23]. Social smells reflect recurring sub-optimal organizational structure patterns connected to organizational behavior patterns, e.g., sub-optimal knowledge sharing, recurrent sharing delays, misguided collaboration and more. We chose to integrate three of these smells--Organizational Silo, Missing Link and Radio Silence--and two related metrics: socio-technical congruence and missing communicability [23]. Here we explain one of the social smells; additional details about these metrics can be found in [23].
Figure 7: Community detection applied to temporal projection.
In Figure 9, the collaboration network projection is shown in green to the left. The communication network is shown in blue to the right. The intent behind developer networks is to capture developers who modified the same file in a given snapshot, and also communicated via a common e-mail thread in a user-specified time window (e.g. 3 months). In this example, we can see to the left highlighted in red that developers (A,B), (B,E), (B,G), and (D,G) collaborated (i.e. they have a red edge in the green graph), but do not communicate (they do not have an edge in the blue graph). Therefore, the missing link social smell is counted 4 times for this snapshot.
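Counting the Missing Link smell reduces to finding collaboration edges with no corresponding communication edge. This is an illustrative Python sketch (Kaiaulu is R) reproducing the figure's count of 4; the extra pair (A, D) below is assumed to both collaborate and communicate.

```python
# Count developer pairs who collaborate (co-change files) but never
# communicate (share no e-mail thread) in the snapshot.
def missing_links(collab_edges, comm_edges):
    comm = {frozenset(e) for e in comm_edges}  # undirected comparison
    return sum(1 for e in collab_edges if frozenset(e) not in comm)

collab = [("A", "B"), ("B", "E"), ("B", "G"), ("D", "G"), ("A", "D")]
comm = [("A", "D"), ("E", "G")]
count = missing_links(collab, comm)
```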
### Configuration
Currently, the configuration module is minimal and exists embedded in R Notebooks. As noted at the start of this section, we borrowed from Codeface the idea of project configuration files, and we include more parameters in the configuration file than Codeface does. For example, Codeface hardcodes the set of acceptable file extensions, whereas Kaiaulu defers this choice to the user in the project configuration file.
Figure 8: Issue Network. Blue nodes represent issues, and yellow nodes files
More generally, we adopt the philosophy that project configuration files should serve as a distilled representation of assumptions and analysis choices. Rather than just serving as a repository of configuration choices for reproducibility, it is readable as plain text, and so can be easily exchanged and discussed. In Kaiaulu, a project configuration file is written in YAML. An example of project configuration file can be found in the public tool repository 'conf' folder27.
Footnote 27: [https://github.com/sailuh/kaiaulu/tree/master/conf](https://github.com/sailuh/kaiaulu/tree/master/conf)
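To give a flavor of such a file, the sketch below shows a hypothetical project configuration; the field names are illustrative, and the actual schema lives in the repository's 'conf' folder.

```yaml
# Hypothetical Kaiaulu-style project configuration (illustrative field names).
project:
  website: https://apr.apache.org/
version_control:
  log: ../rawdata/git_repo/apr/.git
  branch: trunk
mailing_list:
  mbox: ../rawdata/mbox/apr.mbox
filter:
  keep_filepaths_ending_with:
    - .c
    - .h
  remove_filepaths_containing:
    - test
```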
As with every design decision in Kaiaulu, the full project configuration file need not be specified. Indeed, every R Notebook clarifies at the beginning which parameters are required. In future work, we plan to expand the Configuration module to tabulate multiple configuration files. For example, it is common in the software engineering literature to analyze multiple projects and present a summary statistics table of the projects to assess the generalization of results. Such tables could be generated on the fly from the files. Ideally, project configuration files should suffice as supplementary material, alongside Kaiaulu's version, for full reproducibility.
## 5 Conclusions and Future Work
In this paper, through an action research approach, we have determined a set of key design decisions mined from existing tools. Based on these lessons learned we iteratively developed Kaiaulu, an R package for mining software repositories. Our goal in creating Kaiaulu was to simplify most of the boring, repetitive, and error-prone tasks in mining software repositories, leaving the user free to focus on the true goals of their research.
In Kaiaulu we have implemented and released a comprehensive set of capabilities to mine, analyze, and visualize software repositories, including social smells [23], architecture smells and metrics [17], and bug timelines based on prior work by other authors. Kaiaulu is licensed under MPL 2.0.
While we have derived the principles for Kaiaulu from our action research, our future work is to take a more disciplined approach to Kaiaulu's design, based on the quality attributes that represent its architectural drivers. Following the approach outlined in [2] and [3] we can, in the future, attempt to more
Figure 9: Missing Link Social Smell [15]
systematically collect architectural drivers, make reasoned design decisions, and support these decisions with well-established design rationale.
---

# The Qupit Stabiliser ZX-travaganza: Simplified Axioms, Normal Forms and Graph-Theoretic Simplification

Boldizsár Poór, Robert I. Booth, Titouan Carette, John van de Wetering, Lia Yeh

arXiv:2306.05204v2, 2023-06-08: [http://arxiv.org/abs/2306.05204v2](http://arxiv.org/abs/2306.05204v2)
###### Abstract
We present a smorgasbord of results on the stabiliser ZX-calculus for odd prime-dimensional qudits (i.e. _qupits_). We derive a simplified rule set that closely resembles the original rules of qubit ZX-calculus. Using these rules, we demonstrate analogues of the spider-removing local complementation and pivoting rules. This allows for efficient reduction of diagrams to the _affine with phases_ normal form. We also demonstrate a reduction to a unique form, providing an alternative and simpler proof of completeness. Furthermore, we introduce a different reduction to the _graph state with local Cliffords_ normal form, which leads to a novel layered decomposition for qupit Clifford unitaries. Additionally, we propose a new approach to handle scalars formally, closely reflecting their practical usage. Finally, we have implemented many of these findings in DiZX, a new open-source Python library for qudit ZX-diagrammatic reasoning.
## 1 Introduction
A helpful tool to reason about quantum computation is the _ZX-calculus_[23, 22], a graphical language which can represent any qubit computation. It has been used, for example, in measurement-based quantum computing [37, 54, 4], error-correcting codes [35, 38, 30], quantum circuit optimisation [8, 34, 51], classical simulation [52, 10, 53], quantum natural language processing [21, 55], quantum chemistry [62], and quantum machine learning [68, 75].
All the above results use the _qubit_ ZX-calculus, but recent years have seen a surge of interest in studying quantum computation using \(d\)-dimensional systems, called _qudits_. Qudit-based quantum computation has been experimentally realised in a variety of physical systems, such as ion traps [61, 46], photonic devices [19], and superconducting devices [12, 60, 72, 45, 41].
On the theory side, there has been work in translating work on qubits to qudits in quantum algorithms [68], fault-tolerant quantum computing [42, 15], quantum communication [25], and more [31, 38, 12, 55].
This raises the question of how we can use the ZX-calculus to reason about qudit systems. There exist several variations of the ZX-calculus that extend it to higher-dimensional qudits. Many have focused on the specific case of qutrit systems [65, 39, 65, 62], with applications in quantum computation [70, 63], and complexity theory [62]. Recent papers have focused on the stabiliser fragment of odd prime dimensional qudits, including Ref. [24] that explores error correction and detection in this context, and also Ref. [13] mentioned below. Some proposals capture all finite or infinite dimensions [59, 66, 57, 30], but lack many of the nicer features of the qubit calculus. Of particular importance to our paper is Ref. [13], which constructs a calculus for odd prime dimensions while retaining many of these desirable properties and establishing completeness for the stabiliser fragment. Despite these advancements, practical utilisation of the rewrites in these calculi has received limited attention, leaving room for further exploration and development.
To understand the usefulness of rewrite rules, we can take a look at the original qubit calculus. In qubit ZX, we can distinguish between 'standard' rules -- spider fusion, identity removal, state copying, bialgebra, and colour change -- and 'harder' rules -- supplementarity, Euler angle colour permutation, and the rules dealing with the triangle generator. The standard rules, with minor modifications, were those originally discovered [21], and they are the most commonly used in practice. For instance, all the rewrites used in the PyZX compiler [49] can be proved using just these standard rules [33]. These rules are sufficient to prove completeness for the _stabiliser fragment_ of the ZX-calculus [1], while the harder rules were developed to prove completeness for larger fragments. This suggests that carefully studying the qudit stabiliser fragment could be a fruitful avenue for developing useful qudit ZX rewrite rules.
Recall that the stabiliser fragment corresponds to Clifford computation, which is an efficiently simulable subset of quantum computation [41] that forms the basis of many quantum protocols, such as error-correcting codes [48, 47], superdense coding [10], quantum teleportation [9], and quantum key distribution [8]. Completeness of the qubit stabiliser fragment of ZX was proved in [1], while for qutrits it was proved in [65]. Recently, completeness was proved for the stabiliser fragment for any odd-dimensional prime qudit dimension in [13]. The proofs of all these results work essentially the same way: first, they show that any state diagram can be reduced to a Graph State with Local Cliffords (GSLC), and then they show that any pair of GSLCs implementing the same state can be rewritten to a common reduced form.
In this paper, we take this last complete calculus for prime-dimensional qudits [13] as a starting point, and extend it in several ways:
1. We simplify the rules to a smaller set that has a clearer relation to the original qubit stabiliser calculus, and for most of which we can prove the necessity.
2. We incorporate a well-tempered axiomatisation for our calculus following the convention of [27], removing most of the scalars in our rewrite rules, and thus, simplifying our calculations.
3. We introduce a new approach to handle scalars, formalising the often-used convention of writing scalar numbers alongside diagrams.
4. We discover the qupit versions of the spider-removing _local complementation_ and _pivoting_ rules found in [33] and generalised to qutrits in [62]. These rules serve as the foundation
for optimisation and simulation strategies in the qubit setting [33, 50, 7, 49]. Our findings demonstrate that these strategies can be adapted to work for prime-dimensional qudits, thus extending their applicability beyond qubits.
5. Using these rewrite rules, we simplify the original completeness proof of [13] by reducing the number of case distinctions required.1 Specifically, we demonstrate that these rewrites reduce diagrams to a normal form that we call the _affine with phases_ (AP) form, which originally appeared in [32]. Then, given an AP-form diagram, we show how to reduce it further to a unique form, resulting in completeness.2

   Footnote 1: In addition to being aesthetically and ergonomically preferable, reducing the number of case distinctions also makes the proof more easily verifiable. During the preparation of this manuscript, we identified and communicated several errors and omissions in [13], which were subsequently fixed.
6. Additionally, we demonstrate how to rewrite diagrams into a _graph-state with local Cliffords_ (GSLC) form, which yields a layered decomposition for Clifford unitaries similar to the one proposed for qubits in [33].
Our findings highlight that qupit stabiliser diagrams share many familiar properties with their qubit counterparts. Furthermore, many results regarding optimisation and normal forms extend seamlessly to the odd prime-dimensional qudit setting.
Finally, we have implemented many of these findings in DiZX, a new open-source Python library for qudit ZX-diagrammatic reasoning based on PyZX[49].3
Footnote 3: A similar normal form for qubits was independently found in [53]. It is worth noting that our formulation was already employed for qubits in the Oxford Quantum Software course prior to the preprint [53] appeared online.
Related workSubsequent to submission, we were made aware of a related, parallel work, Ref. [28], which also concerns well-tempered axiomatisations for qudit ZX-calculi.
## 2 The qupit Clifford ZX-calculus
In this section, we introduce the qudit stabiliser ZX-calculus for odd prime dimensions.
We let \(p\) denote an arbitrary odd prime, and \(\mathbb{Z}_{p}=\mathbb{Z}/p\mathbb{Z}\) the ring of integers modulo \(p\). Since \(p\) is prime, \(\mathbb{Z}_{p}\) is a field, implying that every non-zero element in \(\mathbb{Z}_{p}\) has a multiplicative inverse. We denote the group of units (i.e. invertible elements) as \(\mathbb{Z}_{p}^{*}\coloneqq\mathbb{Z}_{p}\setminus\{0\}\). We also define the Legendre symbol, for \(x\in\mathbb{Z}_{p}^{*}\), as follows:
\[\left(\frac{x}{p}\right)=\begin{cases}1&\text{if}\quad\exists y\in\mathbb{Z}_ {p}^{*}\text{ s.t. }x=y^{2};\\ -1&\text{otherwise};\end{cases} \tag{1}\]
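By Euler's criterion, the Legendre symbol can be computed directly as \(x^{(p-1)/2}\bmod p\). The helper below (`legendre` is our own name, not from the paper) checks this for \(p=5\), where the non-zero squares are \(\{1,4\}\):

```python
# Euler's criterion: x^((p-1)/2) mod p is 1 for squares and p-1 (i.e. -1) otherwise.
def legendre(x: int, p: int) -> int:
    assert x % p != 0, "the Legendre symbol is defined only on units of Z_p"
    return 1 if pow(x, (p - 1) // 2, p) == 1 else -1

# In Z_5 the non-zero squares are {1, 4}:
assert [legendre(x, 5) for x in [1, 2, 3, 4]] == [1, -1, -1, 1]
```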
The Hilbert space of a qupit is \(\mathcal{H}=\operatorname{span}\{\left|m\right\rangle\mid m\in\mathbb{Z}_{p} \}\cong\mathbb{C}^{p}\). Letting \(\omega\coloneqq e^{i\frac{2\pi}{p}}\) be a \(p\)-th primitive root of unity, we can write down the following standard operators \(Z\) and \(X\), occasionally known as the _clock_ and _shift_ operators: \(Z\left|m\right\rangle\coloneqq\omega^{m}\left|m\right\rangle\) and \(X\left|m\right\rangle\coloneqq\left|m+1\right\rangle\) for any \(m\in\mathbb{Z}_{p}\). Notably, \(ZX=\omega XZ\).
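As a quick numerical sanity check of the relation \(ZX=\omega XZ\) (a standalone sketch, not code from the paper or DiZX), the clock and shift operators can be built as plain matrices:

```python
import cmath

p = 5
omega = cmath.exp(2j * cmath.pi / p)

# Clock: Z|m> = omega^m |m>; shift: X|m> = |m+1 mod p>, as p x p matrices.
Z = [[omega ** m if j == m else 0 for m in range(p)] for j in range(p)]
X = [[1 if j == (m + 1) % p else 0 for m in range(p)] for j in range(p)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(p)) for j in range(p)]
            for i in range(p)]

# The defining commutation relation Z X = omega X Z:
ZX, XZ = matmul(Z, X), matmul(X, Z)
assert all(abs(ZX[i][j] - omega * XZ[i][j]) < 1e-9
           for i in range(p) for j in range(p))
```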
A _Pauli operator_ is defined as any operator of the form \(\omega^{k}X^{a}Z^{b}\) for \(k,a,b\in\mathbb{Z}_{p}\). We consider a Pauli operator _trivial_ if it is proportional to the identity. Each Pauli operator has a spectrum given by \(\{\omega^{k}\mid k\in\mathbb{Z}_{p}\}\), and we denote \(\left|k:Q\right\rangle\) as the eigenvector of a Pauli operator \(Q\) associated with the eigenvalue \(\omega^{k}\). It follows from the definition of \(Z\) that we can identify \(\left|k:Z\right\rangle=\left|k\right\rangle\).
The collection of all Pauli operators is denoted \(\mathscr{P}_{1}\) and called the _Pauli group_. For \(n\in\mathbb{N}^{*}\), the _generalised Pauli group_\(\mathscr{P}_{n}\) is defined as \(\bigotimes_{k=1}^{n}\mathscr{P}_{1}\). Of particular importance to us are the _(generalised) Clifford groups_. These groups are defined for each \(n\in\mathbb{N}^{*}\) as the (unitary) normaliser of \(\mathscr{P}_{n}\). In other words, a unitary operator \(C\) on \(\mathcal{H}^{\otimes n}\) belongs to the Clifford group if, for any \(P\in\mathscr{P}_{n}\), the conjugation \(CPC^{\dagger}\) is also an element of \(\mathscr{P}_{n}\). While every Pauli operator is Clifford, there exist non-Pauli Clifford operators.
In the case of prime qudit dimensions, the group of Clifford unitaries can be generated by three gates: the _Hadamard gate_ defined as \(H\coloneqq\sum_{k\in\mathbb{Z}_{p}}|k:Z\rangle\!\langle k:X|\), the \(S\) gate defined as \(S\coloneqq\sum_{k\in\mathbb{Z}_{p}}\omega^{2^{-1}k(k-1)}\,|k:Z\rangle\!\langle k:Z|\), and the \(CX\) gate defined as \(CX\coloneqq\sum_{j,k\in\mathbb{Z}_{p}}|j,j+k:Z\rangle\!\langle j,k:Z|\)[42]. Note that in this context the Hadamard gate is sometimes also just called the _Fourier transform_.
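These generators can be checked numerically. The snippet below is our own construction (assuming the standard Fourier-matrix form \(H=p^{-1/2}\sum_{j,k}\omega^{jk}|j\rangle\langle k|\), which matches \(H=\sum_{k}|k:Z\rangle\langle k:X|\)); it verifies that \(H^{4}\) is the identity and that \(H\) conjugates the Pauli \(X\) to \(Z\), i.e. that \(H\) is Clifford:

```python
import cmath

p = 5
omega = cmath.exp(2j * cmath.pi / p)
inv2 = pow(2, -1, p)  # 2^{-1} in Z_p (Python >= 3.8)

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def close(A, B):
    return all(abs(a - b) < 1e-9 for ra, rb in zip(A, B) for a, b in zip(ra, rb))

# Hadamard (Fourier) gate H = p^{-1/2} sum_{j,k} omega^{jk} |j><k|,
# the S gate S|k> = omega^{2^{-1} k(k-1)} |k>, and the Paulis Z and X.
H = [[omega ** (j * k) / cmath.sqrt(p) for k in range(p)] for j in range(p)]
Hdag = [[H[k][j].conjugate() for k in range(p)] for j in range(p)]
S = [[omega ** (inv2 * k * (k - 1)) if j == k else 0 for k in range(p)]
     for j in range(p)]
Z = [[omega ** m if j == m else 0 for m in range(p)] for j in range(p)]
X = [[1 if j == (m + 1) % p else 0 for m in range(p)] for j in range(p)]
Id = [[1 if j == m else 0 for m in range(p)] for j in range(p)]

assert close(matmul(matmul(H, H), matmul(H, H)), Id)  # H^4 = identity, so H^{-1} = H^3
assert close(matmul(matmul(H, X), Hdag), Z)           # H X H^dag = Z: H is Clifford
```

The second assertion is one instance of the Clifford-group condition \(CPC^{\dagger}\in\mathscr{P}_{n}\) from the previous paragraph.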
Stabiliser quantum mechanics is operationally described as a fragment of quantum mechanics where the allowed operations include initialisations and measurements in the eigenbases of Pauli operators, as well as unitary operations from the generalised Clifford groups.
### Generators
We define the symmetric monoidal category \(\mathsf{ZX}_{p}^{\mathrm{Stab}}\) as having objects \(\mathbb{N}\) and morphisms generated by the following diagrams, for any \(x,y\in\mathbb{Z}_{p}\) and \(s\in\mathbb{C}\):
In addition to the "standard" generators of \(\mathsf{ZX}\), we have introduced a new generator represented by a light-grey bubble with a scalar written inside it, which we refer to as an _explicit scalar_. These explicit scalars offer a convenient way to streamline the often cumbersome reasoning related to scalars that is typically involved in many graphical completeness papers. Note that the presence of the red \(\mathsf{X}\)-spider as a generator is in principle unnecessary since the \(\mathsf{Z}\)-spider surrounded by Hadamard boxes is equivalent to it. However, our goal is not to provide a minimal set of generators, but rather a convenient one.
Diagrams in our framework can be composed in two ways: sequentially, by connecting output wires to input wires, or vertically, by "stacking" diagrams, corresponding to the tensor product operation which is defined as \(n\otimes m=n+m\) on objects.
### Interpretation
The interpretation of a \(\mathsf{ZX}_{p}^{\mathrm{Stab}}\)-diagram is defined on objects as \(\llbracket m\rrbracket\coloneqq\mathbb{C}^{p^{m}}\), and on the generators as:
\[\llbracket\text{Z-spider}\rrbracket=p^{\frac{n+m-2}{4}}\sum_{k\in\mathbb{Z}_{p}}\omega^{2^{-1}(xk+yk^{2})}\,|k:Z\rangle^{\otimes n}\,\langle k:Z|^{\otimes m}\qquad\llbracket\text{X-spider}\rrbracket=p^{\frac{n+m-2}{4}}\sum_{k\in\mathbb{Z}_{p}}\omega^{2^{-1}(xk+yk^{2})}\,|{-k}:X\rangle^{\otimes n}\,\langle k:X|^{\otimes m}\]

with the Hadamard box interpreted (up to the well-tempered normalisation) as the Hadamard gate \(H\), the explicit scalar interpreted as multiplication by \(s\), and wires, swaps, cups and caps interpreted in the standard way.
There are a couple of things we should remark about this interpretation. First, the definition of the X-spider does not follow the standard convention. It is defined in such a way that it maps X-eigenstates to their additive inverse (modulo \(p\)). This definition is used in order to satisfy the property of _flexsymmetry_[17, 18], which allows us to treat diagrams as undirected graphs. Second, note that the interpretation of phases on the spiders has an additional factor of \(2^{-1}\) which is necessary for the later stated Euler and Gauss axioms to be sound. This factor is considered modulo \(p\), so for instance, for \(p=3\) we have \(2^{-1}=2\). Finally, the spiders are defined with a global scalar factor of \(p^{\frac{n+m-2}{4}}\) to follow the _well-tempered normalisation_ convention of [28]. This allows us to present the axioms later on with significantly fewer scalar factors floating around.
While the conventional qudit ZX-calculus represents spiders using a \(p\)-dimensional phase vector [50], we employ a different approach by leveraging a useful property of the Clifford group for prime-dimensional qudits: the phases of its spiders are \(p\)-th roots of unity raised to polynomial functions with a maximum degree of 2 [27]. This property enables us to capture the essence of Clifford spiders using only two parameters: the coefficients of the linear and square terms. As a result, we develop a more elegant and intuitive framework for reasoning about stabiliser maps, requiring only two parameters in any odd-prime dimension. To establish a connection between our convention and the original qudit ZX-calculus, a spider with phase parameter \((x,y)\) corresponds to the spider described in [50] whose phase vector has \(k\)-th entry \(2^{-1}(xk+yk^{2})\), so that the linear and square parameters play the roles of Pauli and Clifford phases, respectively; in particular, such single-qupit diagrams correspond to Clifford unitaries for any \(x,y\in\mathbb{Z}_{p}\). As a result, we designate spiders with a phase \((x,0)\) as _Pauli spiders_, and spiders with a phase \((x,y)\) as _Clifford spiders_. Furthermore, spiders with a phase \((0,y)\) are referred to as _purely-Clifford spiders_, while spiders with a phase \((x,y)\) where \(y\neq 0\) are termed _strictly-Clifford spiders_. When the parameters of a spider are all zero, i.e. \((0,0)\), we call the spider _phase-free_ and denote it without a label, and similarly for the X-spider. Lastly, we designate the phase-free X-spider as the _antipode_ since it implements the map \(|k\rangle\mapsto|{-k}\rangle\).
Contrary to the qubit case, the qudit Hadamard gate is not self-inverse. Instead, it satisfies the property that four successive applications of the Hadamard gate result in the identity, that is, \(H^{4}=\mathrm{id}\). Therefore, the inverse of the Hadamard gate is given by \(H^{-1}=H^{3}\). To maintain the clarity and simplicity of diagrams, we introduce a shorthand notation to represent the inverse of the Hadamard box.
### Axioms
We present the axioms of our calculus in Figure 1. In addition to these concrete rules, our calculus also follows the structural rules of a compact-closed PROP. This property implies that "only connectivity matters", allowing us to treat our diagrams as undirected graphs while preserving their interpretation as linear maps.
These rewrite rules are essentially a simplified version of the complete set of rewrite rules found in [14]. We can show these rules are equivalent to those found in that paper, by deriving the missing axioms.
**Proposition 1**.: For any \(z\in\mathbb{Z}_{p}^{*}\) and \(a,c,d\in\mathbb{Z}_{p}\), \(\mathsf{ZX}_{p}^{\mathrm{Stab}}\) proves the following axioms from [13]:
Note that all the proofs in the paper can be found in the appendices.
We also change the presentation of scalars, but we can rely on the reduction in [13] of the scalar fragment to the elementary scalar fragment:
**Definition 2**.: An _elementary scalar_ is a diagram \(A\in\mathsf{ZX}_{p}^{\mathrm{Stab}}[0,0]\) which is a (possibly empty) tensor product of diagrams from.
**Lemma 3**.: \(\mathsf{ZX}_{p}^{\mathrm{Stab}}\) is complete for elementary scalars. Explicitly, if \(s:0\to 0\) is an elementary scalar, then.
With these results, we can see that every derivation of [13] is also valid in our calculus, so that the rules of Figure 1 are complete. For this reason, we freely use the lemmas of [13] in the rest of this paper.
In deriving Mult and Shear in Proposition 1, as well as in the reduction to AP-form of Section 3, we make extensive use of the following "strictly-Clifford" state colour-change rules:
Figure 1: The rewrite rules of the qudit stabiliser ZX-calculus for any odd prime dimension \(p\). Here \(a,b,c,d\in\mathbb{Z}_{p}\), \(z\in\mathbb{Z}_{p}^{*}\) and \(\lambda,\mu\in\mathbb{C}\). \(\left(\frac{b}{p}\right)\) is the Legendre symbol, as defined in Equation (1). The dotted square in One depicts the empty diagram.
**Lemma 4**.: Strictly-Clifford states can all be represented both using Z- and X-spiders: for any \(a\in\mathbb{Z}_{p}\) and \(b\in\mathbb{Z}_{p}^{*}\),
*(diagrammatic equation omitted)*

The AP form leads directly to a proof of completeness. On the other hand, the GSLC form is particularly useful for rewriting and decomposing stabiliser unitaries.
### Graph simplifications
Before reducing the diagrams to our normal forms, we first need to simplify them into a _graph-like_ form. In this form, the diagrams consist only of Z-spiders and _H-edges_. To define the qupit graph-like diagrams, we first define _H-boxes_ as:
where the label inside the box is the _weight_ of the H-box. Unlike the _multipliers_ in [14], H-boxes are undirected, thus we can treat diagrams that contain only Z-spiders and H-boxes as undirected (weighted) graphs.
**Proposition 6**.: \(\mathsf{ZX}_{p}^{\mathrm{Stab}}\) proves the following equations:
Since edges that contain H-boxes are central to the subsequent proofs, we define _H-edges_, similarly to the qubit case, as a blue dashed line with the corresponding weight on top:
(2)
**Definition 7**.: A ZX-diagram is _graph-like_ when:
1. All spiders are Z-spiders.
2. Z-spiders are only connected via H-edges.
3. There are no self-loops.
4. Every input or output is connected to a Z-spider.
5. Every Z-spider is connected to at most one input or output.
Using standard techniques [40], it is evident that any ZX-diagram can be transformed into a graph-like form. This transformation involves several steps: performing a colour change on all X-spiders, fusing all Z-spiders, removing self-loops, and introducing identity elements to ensure that each input and output is correctly connected to a Z-spider. Once in graph-like form, the diagram can be represented as an open, weighted graph, where the edge weights are elements of \(\mathbb{Z}_{p}\) and each vertex is labelled by a phase.
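This bookkeeping can be illustrated with a minimal sketch (hypothetical names, not the DiZX API): a graph-like diagram stored as an undirected graph whose edges carry \(\mathbb{Z}_{p}\) weights, assuming parallel H-edges combine by adding their weights modulo \(p\) and a weight-0 edge is no edge at all:

```python
# A graph-like diagram as an undirected graph with Z_p edge weights.
class GraphLike:
    def __init__(self, p):
        self.p = p
        self.phase = {}    # vertex -> (linear, square) phase pair in Z_p x Z_p
        self.weight = {}   # frozenset({u, v}) -> edge weight in Z_p^*

    def add_vertex(self, v, phase=(0, 0)):
        self.phase[v] = (phase[0] % self.p, phase[1] % self.p)

    def add_h_edge(self, u, v, w=1):
        if u == v:
            return  # no self-loops in graph-like form
        key = frozenset((u, v))
        w = (self.weight.get(key, 0) + w) % self.p  # parallel edges add mod p
        if w == 0:
            self.weight.pop(key, None)  # weight 0 means "no edge"
        else:
            self.weight[key] = w

g = GraphLike(5)
g.add_vertex("a"); g.add_vertex("b")
g.add_h_edge("a", "b", 2)
g.add_h_edge("a", "b", 3)   # 2 + 3 = 0 (mod 5): the two parallel edges cancel
assert g.weight == {}
```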
Now that we have a graph-like diagram, we can differentiate between _boundary_ spiders, those directly connected to an input or output, and _interior_ spiders, those that are only connected to other spiders. Subsequently, we demonstrate that many of the internal spiders can be removed from a diagram using similar techniques to the qubit case [40].
The local complementation simplification enables the removal of a strictly-Clifford interior spider by introducing phases and wires to the spiders it is connected to. This technique is analogous to the qubit version described in [40].
**Lemma 8** (Local complementation simplification).: For any \(z\in\mathbb{Z}_{p}^{*}\) and for all \(a,\alpha_{i},\beta_{i},e_{i},w_{i,j}\in\mathbb{Z}_{p}\) where \(i,j\in\{1,\ldots k\}\) such that \(i<j\) we have:
Here \(\gamma_{i}=\alpha_{i}-e_{i}az^{-1}\), \(\delta_{i}=\beta_{i}-z^{-1}e_{i}^{2}\), and \(g_{i,j}=w_{ij}-z^{-1}e_{i}e_{j}\).
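The phase and weight updates of Lemma 8 are straightforward modular arithmetic. The sketch below (a hypothetical helper with 0-based indices, not from the paper) computes \(\gamma_{i}\), \(\delta_{i}\) and \(g_{i,j}\) for a removed spider with phase \((a,z)\) and neighbour weights \(e_{i}\):

```python
# Lemma 8 bookkeeping: remove a strictly-Clifford interior spider with phase
# (a, z), z != 0, connected to neighbour i by weight e[i]; update the
# neighbours' phases (alpha, beta) and pairwise weights w.
def local_complement(p, a, z, alpha, beta, e, w):
    zinv = pow(z, -1, p)
    k = len(e)
    gamma = [(alpha[i] - e[i] * a * zinv) % p for i in range(k)]
    delta = [(beta[i] - zinv * e[i] ** 2) % p for i in range(k)]
    g = {(i, j): (w.get((i, j), 0) - zinv * e[i] * e[j]) % p
         for i in range(k) for j in range(i + 1, k)}
    return gamma, delta, g

# Two neighbours, p = 5, removed spider phase (1, 2), weights e = [1, 2]:
gamma, delta, g = local_complement(5, 1, 2, [0, 0], [0, 0], [1, 2], {})
assert (gamma, delta, g) == ([2, 4], [2, 3], {(0, 1): 4})
```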
We also have an analogue of the pivot rewrite rule. This rule enables us to eliminate connected interior Pauli spiders by introducing additional phases and connections to the spiders they are connected to.
First, we prove a simplified version of pivoting:
**Lemma 9**.: The following version of pivoting is derivable in \(\mathsf{ZX}_{p}^{\mathrm{Stab}}\):
Here \(\epsilon\in\mathbb{Z}_{p}^{*}\) and all the other variables are allowed arbitrary values.
Then the general version can be derived from that:
**Lemma 10** (Pivoting simplification).: General pivoting is derivable in \(\mathsf{ZX}_{p}^{\mathrm{Stab}}\):
Here again \(\epsilon\in\mathbb{Z}_{p}^{*}\) with every other variable on the left-hand side allowed arbitrary values. On the right-hand side \(\gamma_{i}=\alpha_{i}-\epsilon^{-1}(af_{i}+be_{i})\), \(\delta_{i}=\beta_{i}-2\epsilon^{-1}e_{i}f_{i}\), and \(g_{i,j}=-\epsilon^{-1}(e_{i}f_{j}+e_{j}f_{i})\).
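The update formulas of Lemma 10 are again simple modular arithmetic; the hypothetical helper below (our own sketch, computing only the lemma's \(\gamma_{i}\), \(\delta_{i}\) and \(g_{i,j}\)) takes the Pauli phases \(a,b\) of the removed pair, their connecting weight \(\epsilon\), and the neighbour weights \(e_{i},f_{i}\) to the two removed spiders:

```python
# Lemma 10 bookkeeping (0-based indices): phases and added connections after
# pivoting away two connected interior Pauli spiders.
def pivot(p, a, b, eps, alpha, beta, e, f):
    einv = pow(eps, -1, p)
    k = len(e)
    gamma = [(alpha[i] - einv * (a * f[i] + b * e[i])) % p for i in range(k)]
    delta = [(beta[i] - 2 * einv * e[i] * f[i]) % p for i in range(k)]
    g = {(i, j): (-einv * (e[i] * f[j] + e[j] * f[i])) % p
         for i in range(k) for j in range(i + 1, k)}
    return gamma, delta, g

# p = 3, Pauli phases a = 1, b = 2, eps = 1, one neighbour on each side:
assert pivot(3, 1, 2, 1, [0, 0], [0, 0], [1, 0], [0, 1]) \
       == ([1, 2], [0, 0], {(0, 1): 2})
```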
### AP-form
The above results suggest that through the application of local complementation and pivoting, it is possible to transform any state diagram (a diagram without inputs) into a graph-like diagram where only Pauli spiders remain internal spiders, and they are exclusively connected to boundary spiders. This is achieved through a two-step process. Firstly, any internal spider that
is Clifford is eliminated through local complementation. This ensures that only Pauli spiders remain internal. Secondly, given that the diagram contains only Pauli internal spiders, any connected pair of internal spiders can be removed using pivoting. We give a name to this type of diagram:
**Definition 11**.: We say that a graph-like diagram is in _Affine with Phases form_ (AP-form) when:
* There are no inputs;
* The internal spiders are Pauli spiders;
* Internal spiders are only connected to boundary spiders.
We refer to this class of diagrams as "Affine with Phases" because they correspond to states described by an affine subspace of basis states, with an additional phase function applied to the output. This characterisation is supported by the following lemma:
**Lemma 12**.: A general non-zero \(n\)-qupit diagram in AP-form is described by the diagram:
(3)
where \(a_{l},\alpha_{i},\beta_{i},e_{h,i},f_{i,j}\in\mathbb{Z}_{p}\) with \(l\in\{1,\ldots,k\}\) and \(i,j\in\{1,\ldots,n\}\) such that \(i<j\). The interpretation of this diagram is (up to some non-zero scalar) equal to a state
\[\sum_{E\vec{x}=\vec{a}}\omega^{\phi(\vec{x})}\ket{\vec{x}} \tag{4}\]
where \(E\) is the weighted bipartite adjacency matrix of the internal and boundary spiders, \(\vec{a}\) describes the Pauli phases of the internal spiders, and \(\phi\) is a phase function that describes the connectivity and phases of the boundary spiders:
\[E=\begin{bmatrix}e_{1,1}&\cdots&e_{1,n}\\ e_{2,1}&\cdots&e_{2,n}\\ \vdots&&\vdots\\ e_{k,1}&\cdots&e_{k,n}\end{bmatrix}\;,\qquad\vec{a}=\begin{bmatrix}a_{1}\\ \vdots\\ a_{k}\end{bmatrix}\;,\qquad\phi(\vec{x})=\sum_{\begin{subarray}{c}i,j\in\{1, \ldots,n\}\\ i<j\end{subarray}}2^{-3}x_{i}\alpha_{i}+2^{-2}x_{i}^{2}\beta_{i}-2^{-3}f_{i,j}x _{i}x_{j}\]
Notably, states described by AP-form diagrams correspond to the stabiliser normal forms described in Ref. [65].
With AP-form diagrams, we can prove a qupit version of the Gottesman-Knill theorem, which states that we can efficiently sample from the probability distribution of a stabiliser computation. Let us consider an AP-form diagram represented by \((E,\vec{a},\phi)\). When we measure this state in the computational basis, we observe that the phase function \(\phi\) has no impact on the measurement
outcomes, allowing us to disregard it. Hence, we can describe the state as \(N\sum_{E\vec{x}=\vec{a}}|\vec{x}\rangle\), where \(N\) is a normalisation constant. This state represents a uniform superposition of the states \(|\vec{x}\rangle\) that satisfy the equation \(E\vec{x}=\vec{a}\).
To sample from such states, we need to generate solutions to this equation uniformly at random. Efficiently achieving this involves finding any solution \(E\vec{x}^{\prime}=\vec{a}\) and then obtaining a basis \(\vec{v}_{1},\ldots,\vec{v}_{\ell}\) for the linear space \(\{E\vec{x}=\vec{0}\}\). We can then return \(\vec{x}^{\prime}+\sum_{i}^{\ell}b_{i}\vec{v}_{i}\), where the \(b_{i}\in\mathbb{Z}_{p}\) are chosen uniformly at random.
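For very small \(p\) and \(n\), this sampling can be illustrated by brute-force enumeration (the efficient version uses the particular-solution-plus-nullspace approach just described; `sample_affine` is our own name):

```python
import itertools, random

# Uniformly sample x in Z_p^n with E x = a (mod p), i.e. a computational-basis
# measurement outcome of an AP-form state. Brute force: fine for tiny instances.
def sample_affine(p, E, a):
    n = len(E[0])
    sols = [x for x in itertools.product(range(p), repeat=n)
            if all(sum(E[r][c] * x[c] for c in range(n)) % p == a[r] % p
                   for r in range(len(E)))]
    return random.choice(sols)  # uniform over the affine solution set

E = [[1, 1, 0], [0, 1, 2]]   # two constraints over Z_3
a = [1, 2]
x = sample_affine(3, E, a)
assert (x[0] + x[1]) % 3 == 1 and (x[1] + 2 * x[2]) % 3 == 2
```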
AP-form diagrams also enable us to provide an alternative, more direct proof of the completeness of \(\mathsf{ZX}_{p}^{\mathrm{Stab}}\) through reduction to a unique normal form. In the context of graphical calculi, completeness means that the rewrite rules of the calculus can prove any true equation. In other words, if \(\llbracket A\rrbracket=\llbracket B\rrbracket\), then it is possible to rewrite diagram \(A\) into diagram \(B\).
We say that a diagram in AP-form defined by \((E,\vec{a},\phi)\) is in _reduced AP-form_ if it is either zero, or it is non-zero and satisfies the following conditions:
* \(E\) is in reduced row echelon form (RREF), i.e., it is fully reduced using Gaussian elimination.
* \(E\) contains no fully zero rows.
* \(\phi\) only contains free variables from the equation system of \(E\), i.e., variables that do not correspond to _pivot_ columns in \(E\).
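Reducing \(E\) to RREF is ordinary Gaussian elimination over the field \(\mathbb{Z}_{p}\); a minimal sketch (our own helper, not from the paper) that also drops fully zero rows as the conditions above require:

```python
# RREF of an integer matrix over Z_p (p prime): scale each pivot to 1 using
# the modular inverse, then clear the pivot's column above and below.
def rref_mod_p(E, p):
    E = [row[:] for row in E]
    rows, cols = len(E), len(E[0])
    r = 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if E[i][c] % p), None)
        if piv is None:
            continue
        E[r], E[piv] = E[piv], E[r]
        inv = pow(E[r][c], -1, p)
        E[r] = [v * inv % p for v in E[r]]           # scale pivot row to 1
        for i in range(rows):
            if i != r and E[i][c] % p:
                f = E[i][c]
                E[i] = [(E[i][j] - f * E[r][j]) % p for j in range(cols)]
        r += 1
    return [row for row in E if any(row)]            # drop fully zero rows

# Over Z_5 the second row is twice the first, so only one pivot row survives:
assert rref_mod_p([[2, 1], [4, 2]], 5) == [[1, 3]]
```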
For any non-zero state \(|\psi\rangle\), there is at most one triple \((E,\vec{a},\phi)\) satisfying the conditions of reduced AP-form such that:
\[|\psi\rangle\approx\sum_{E\vec{x}=\vec{a}}\omega^{\phi(\vec{x})}\ket{\vec{x}}\]
Therefore, a diagram in reduced AP-form is unique.
Now, our objective is to demonstrate that we can rewrite a ZX-diagram in AP-form in a manner that transforms its biadjacency matrix \(E\) into \(\mathrm{RREF}\). Additionally, we need to show that we can modify the diagram so that the corresponding phase function \(\phi\) only includes free variables from the equation system \(E\vec{x}=\vec{a}\). Put simply, we need to prove that we can perform primitive row operations on a ZX-diagram in AP-form as well as eliminate any phase or Hadamard edge from a pivot spider.
We can perform primitive row operations on a ZX-diagram in AP-form, i.e., we can "add" one inner spider to another. For any \(k,a,b,e_{i},f_{j}\in\mathbb{Z}_{p}\) where \(i\in 1,\ldots,n\) and \(j\in 1,\ldots,m\):
Using this result, we can apply primitive row operations to \(E\) in AP-form diagram and hence reduce it to \(\mathrm{RREF}\). Through diagrammatic rewrites, we can show that when \(E\) is in \(\mathrm{RREF}\), we can eliminate all the phases and H-edges associated with the non-free variables of \(E\).
**Lemma 16**.: If an AP-form diagram has its biadjacency matrix \(E\) in RREF, we can rewrite the diagram so that the boundary spiders corresponding to non-free variables of \(E\) have zero phases, and there are no H-edges connecting them to other boundary spiders.
**Lemma 17**.: Any diagram in \(\mathsf{ZX}_{p}^{\mathrm{Stab}}\) can be converted into one in reduced AP-form.
The completeness result follows immediately from the above lemma.
**Theorem 18** (Completeness).: For any pair of ZX-diagrams \(A,B\in\mathsf{ZX}_{p}^{\mathrm{Stab}}\), if \(\llbracket A\rrbracket=\llbracket B\rrbracket\), we can provide a sequence of rewrites that transforms \(A\) into \(B\).
### GSLC form
The AP-form is advantageous as it can be directly transformed into a unique normal form, and allows for straightforward classical sampling. However, it may be less suitable for other applications. For instance, when applying the algorithm described above to a diagram originating from a Clifford unitary, it becomes challenging to establish a clear relationship between the resulting simplified diagram and a corresponding quantum circuit.
In this section, we introduce the qupit version of the well-known qubit GSLC-form diagrams.
**Definition 19**.: We say a diagram is in _GSLC form_ (Graph State with Local Cliffords) when it is graph-like, up to Hadamards on input and output wires, and it has no internal spiders.
The algorithm for reducing a diagram to AP-form may still yield diagrams with internal spiders, specifically Pauli spiders connected to boundaries. However, we can eliminate these internal spiders by using a _boundary pivot_.
**Lemma 20**.: The following boundary pivot rule is derivable in \(\mathsf{ZX}_{p}^{\mathrm{Stab}}\):
Here \(g_{ij}\coloneqq-\epsilon^{-1}e_{i}f_{j}\) and \(h_{i}\coloneqq-\epsilon^{-1}e_{i}\). This rule holds for all choices of phases as long as \(\epsilon\neq 0\).
To observe how this rewrite aids in eliminating internal spiders, consider that the spider with a phase of \((b,c)\) now becomes an internal spider connected to an internal Pauli spider. Consequently, if \(c=0\), we can eliminate the pair using standard pivoting. On the other hand, if \(c\neq 0\), we can employ a local complementation to remove the \((b,c)\) spider. This alteration modifies the phase of its sole neighbour, subsequently enabling its removal through another local complementation.
Lemma 20 can be straightforwardly modified, similar to Lemma 10, to accommodate arbitrary connectivity between the internal spider and the boundary. By incorporating additional
spider unfusions, we can extend the application of Lemma 20 to boundary spiders that are connected to multiple inputs or outputs. It is worth noting that when applying Lemma 20 multiple times to the same boundary, different powers of the Hadamard gate may appear on the input or output wire. For instance, applying it twice yields \((H^{3})^{2}=H^{2}\), and another iteration reverts back to \(H\).
Hence, we can observe that it is indeed possible to eliminate all internal spiders from a diagram, allowing for an efficient reduction of diagrams to GSLC form. This is particularly significant for diagrams derived from unitaries, as we can then rewrite them in the following manner:
Here, the boxes labelled with \(H\)? represent a possible power of a Hadamard gate acting on the qupit. By applying spider unfusion and colour change operations, we observe that the diagram can be decomposed into several layers consisting of Hadamard gates, Z phase gates, CZ gates, and a middle portion represented by a weighted biadjacency matrix \(A\). This part of the circuit implements a map of the form \(\ket{\vec{x}}\mapsto\ket{A\vec{x}}\), where \(\vec{x}\in\mathbb{Z}_{p}^{n}\) and \(A\) is an \(n\times n\) matrix over \(\mathbb{Z}_{p}\). Since we assume the entire map to be unitary, \(A\) must also be invertible. Consequently, such a 'linear' qupit map can always be implemented through a series of CX gates, transforming \(\ket{x,y}\) to \(\ket{x,x+y}\) (the decomposition is achieved via standard Gaussian elimination over \(\mathbb{Z}_{p}\)). Thus, we arrive at the following result.
**Theorem 21**.: Any odd-prime-dimensional qudit Clifford unitary can be efficiently decomposed into a quantum circuit consisting of the following layers:
H--Z--S--CZ--CX--H--CZ--Z--S--H
To the best of our knowledge, such a Clifford normal form for qudits has not been described before in the existing literature. It is worth noting, though, that this result bears a striking resemblance to the qubit normal form for Clifford circuits outlined in [34].
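The Gaussian-elimination step underlying the CX layer can be sketched as follows (our own helpers, not from the paper): each recorded `("add", i, j, c)` operation is the row update \(x_{i}\mapsto x_{i}+c\,x_{j}\), i.e. a power of a CX gate, while a `("mul", i, z)` scaling would correspond to a single-qupit multiplier:

```python
# Factor an invertible A over Z_p into elementary row operations; replaying
# the recorded ops on A reduces it to the identity.
def decompose(A, p):
    A = [row[:] for row in A]
    n = len(A)
    ops = []
    for c in range(n):
        if A[c][c] % p == 0:                      # make the pivot non-zero
            j = next(i for i in range(c + 1, n) if A[i][c] % p)
            A[c] = [(A[c][k] + A[j][k]) % p for k in range(n)]
            ops.append(("add", c, j, 1))
        inv = pow(A[c][c], -1, p)
        if inv != 1:                              # scale pivot to 1 (a multiplier)
            A[c] = [v * inv % p for v in A[c]]
            ops.append(("mul", c, inv))
        for i in range(n):                        # clear the rest of column c
            if i != c and A[i][c] % p:
                f = (-A[i][c]) % p
                A[i] = [(A[i][k] + f * A[c][k]) % p for k in range(n)]
                ops.append(("add", i, c, f))
    return ops

def apply_ops(A, ops, p):
    A = [row[:] for row in A]
    for op in ops:
        if op[0] == "add":
            _, i, j, c = op
            A[i] = [(A[i][k] + c * A[j][k]) % p for k in range(len(A))]
        else:
            _, i, z = op
            A[i] = [v * z % p for v in A[i]]
    return A

A = [[1, 2], [3, 4]]   # invertible over Z_5 (det = -2 = 3)
assert apply_ops(A, decompose(A, 5), 5) == [[1, 0], [0, 1]]
```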
## 4 Conclusion
We presented a simplified version of the qudit ZX-calculus for odd prime dimensions based on the work in Ref. [14]. This version includes fewer rules and a new scalar gadget to bring the reasoning about scalars more in line with practice. We also extended the spider-removing versions of local complementation and pivoting to qupits. This extension enabled us to reduce diagrams efficiently to AP-form and its unique version, the reduced AP-form. As a result, we obtained a new completeness proof for the qupit stabiliser fragment, which is more straightforward compared to previous proofs. Additionally, we discovered a reduction to GSLC form, leading to a novel layered decomposition of qupit Clifford unitaries. To support these developments, we implemented our rewrites into DiZX, a port of PyZX that now supports qudit stabiliser diagrams of arbitrary dimension.
For future work, it would be interesting to investigate whether our techniques can be applied to develop a useful circuit optimisation pipeline for qudits. It would also be valuable to identify specific circuits that would benefit from such optimisation.
**Acknowledgements**: We would like to thank Razin A. Shaikh for his contributions to the development of DiZX. LY is supported by an Oxford - Basil Reeve Graduate Scholarship at Oriel College with the Clarendon Fund. Some of this work was done while BP was a student at the University of Oxford. The results of Sections 3.1 to 3.2, Lemma 4, and the explicit scalars are also presented in his Master's thesis [59]. TC was supported by the ERDF project 1.1.1.5/18/A/020 "Quantum algorithms: from complexity theory to experiment".
|
2304.04763 | Distributed Estimation with Decentralized Control for Quadruple-Tank
Process | This paper proposes the design of a quadruple-tank process, a unique
multivariable MIMO system, under minimum- and non-minimum-phase scenarios with
respect to the valve ratio. A distributed estimation algorithm with
decentralized control is then implemented on this model. The inputs are set
with divergent pump gains while the four tanks are interconnected, so that the
stability properties differ, making the use of decentralized control
reasonable. The number of outputs is designed to match the number of inputs,
which is also the number of distributed Luenberger observers for the continuous
linearized dynamical system. Each distributed observer comprises local
estimates of only certain outputs, which alone would be insufficient, so
neighbouring links under suitable network topologies are required in the
dynamical system. This concept works for both stability characteristics of the
tank process when estimating the states. This success motivates further
research on larger-scale complex systems. | Moh Kamalul Wafi, Bambang L. Widjiantoro | 2023-04-09T02:59:40Z | http://arxiv.org/abs/2304.04763v1 | # Distributed Estimation with Decentralized Control for Quadruple-Tank Process
###### Abstract
This paper presents the design of the quadruple-tank process, a benchmark multivariable MIMO system operating in minimum- and non-minimum-phase scenarios depending on the valve ratio. A distributed estimation algorithm with decentralized control is then implemented on this model. The inputs are set with divergent pump gains while the four tanks are interconnected, so that the stability properties differ, making the use of decentralized control reasonable. The number of outputs is designed to equal the number of inputs, which is also the number of distributed Luenberger observers for the continuous linearized dynamical system. Each distributed observer comprises a local estimate based only on a certain output; this alone would be insufficient, so neighbouring links under a given network topology are required in the dynamical system. This concept works under both stability characteristics of the tank process when estimating the states, and this success motivates further research on larger-scale complex systems.
Decentralized Control, Distributed Estimation, Quadruple-Tank Process, Sensor Networks
Footnote †: email: [email protected]
## I Introduction
The design of complex multivariable dynamical systems has been attracting a lot of interest in the field of control theory, with the quadruple-tank process as a notable example [1] and [2]. This laboratory-scale structure is particularly suitable for measuring the performance limitations, according to the batch identification algorithm [3], of complex control systems with the non-minimum-phase behaviour mentioned in [4] as an elaboration of [5]. Since the system is interconnected, meaning that one tank influences another, it is important to guarantee that the poles remain in the left half-plane. Several control approaches have been proposed to handle this, built on mathematical models ranging from sliding-mode [6] and robust control [7] to the more advanced predictive control of [8]. Furthermore, the problem can be generalized with the structure of decentralized control, as stated in [9] and [10], which is capable of handling the linearized non-linear dynamics so that the pole locations can be well administered. The quadruple-tank considered here follows [2], with two divergent scenarios: the stable minimum-phase case and the more difficult non-minimum-phase case. This plant is used to test the estimation concept of the proposed filtering module [11] and distributed estimation based on the classical Luenberger observer, as conducted in [12] and [13] for linear systems. Distributed algorithms have recently been widely studied as a new window in the control field for locally predicting the states through neighbouring links. Distributed estimation was preceded by the decentralized approach of [14] for interconnected systems based on classical Kalman filtering, and by its distributed counterpart in [15]. Furthermore, track fusion applying the cross-covariance was initiated by [16], with the evolution of the maximum likelihood (ML) approach in [17].
The consensus problem for distributed systems is well defined in [18], whereas consensus filtering is conducted in [19] with the same Kalman filtering, together with its pseudo-estimates [20], decoupling control [21], and even augmented estimates from the fusion itself [22]. The ideas behind the research conducted in [12] are used further in [23], inspired by the discrete-time estimator and a deeper elaboration of the observability connection stated in [24]. The necessary and sufficient conditions suggested in [24] to build the augmented observer lead to distributed estimation using the concept of detectability [5]: for a certain node \(i\) paired with output \(i\), information is required from the nodes connected to it in the topology. The paper is organized as follows: the mathematical modelling together with the decentralized control, followed by the distributed observer, then the numerical scenarios illustrating the proposed ideas and their limitations, and finally the conclusion.
## II Mathematical description
The quadruple-tank process comprises four interconnected tanks driven by two pumps, as depicted in Fig. (**1**). It is a multiple-input multiple-output (MIMO) plant with two inputs and two outputs: the input voltages \((u)\) to both pumps \((v_{1},v_{2})\), which influence all tanks, and the output voltages \((y)\) from the two level-measurement devices \((y_{1},y_{2})\) in tanks 1 and 2. Since the measurement devices are located only in the bottom two tanks, the objective is to maintain the levels \((h_{i})\) of those tanks at designed set-points by manipulating the inlet flow rates. While the pumps run, each flow is divided in two directions by a three-way valve, so that each pump feeds only two diagonally positioned tanks.
The voltage applied to pump \(n\), with \(n=1,2\), is \(v_{n}\), and the corresponding outlet flow from the \(n\)-th pump is \(q_{p}(n)=k_{n}v_{n}\), where \(k_{n}\) is the pump constant and \(v_{n}\) the applied voltage. Another important parameter is the setting of the valves, which determines the flow distribution to the four tanks through the ratios \((\gamma_{1},\gamma_{2})\in\) [0,1]. This means that if, from pump 1, the fraction going to tank 1 is \(\gamma_{1}\), with flow rate \(\gamma_{1}k_{1}v_{1}\), then the complementary fraction going to tank 4 is \((1-\gamma_{1})\), with flow rate \((1-\gamma_{1})k_{1}v_{1}\). The same rule governs the remaining two tanks with the other ratio \(\gamma_{2}\) from the second pump. The dynamics of the tanks follow [2], and the mathematical model is derived as follows. First, consider the mass balance together with Bernoulli's law: the rate of mass accumulation in a system \((m_{T})\) equals the difference between the inlet \((m_{i})\) and outlet \((m_{o})\) mass flows.
\[\frac{dm_{T}}{dt}=m_{i}-m_{o} \tag{1}\]
and Eq. (1) can be written in terms of the liquid level for an incompressible fluid of density \(\rho\), giving the non-linear relation

\[\rho A\frac{dh}{dt}=\rho q_{i}-\rho q_{o} \tag{2}\]
Since the same fluid fills all tanks, \(\rho_{1}=\rho_{2}=\rho_{3}=\rho_{4}\), the density cancels and Eq. (2) simplifies to Eq. (3)
\[A_{i}\frac{dh_{i}}{dt}=(q_{i})_{i}-(q_{o})_{i} \tag{3}\]
where, for a given tank \(i=1,...,4\), the variables \(A_{i}\), \(h_{i}\), \((q_{i})_{i}\), and \((q_{o})_{i}\) represent the cross-sectional area of the tank, the fluid level, and the inlet and outlet flows, respectively. Moreover, the inlet flows of the four tanks \(q_{i_{1}},...,q_{i_{4}}\), governed by the valve ratios \(\gamma_{n}\), are described as follows,
\[\begin{array}{l@{\quad}l}q_{i_{1}}=\gamma_{1}k_{1}v_{1};\quad\quad&q_{i_{3} }=(1-\gamma_{2})k_{2}v_{2};\quad\\ q_{i_{2}}=\gamma_{2}k_{2}v_{2}\quad\quad&q_{i_{4}}=(1-\gamma_{1})k_{1}v_{1} \end{array} \tag{4}\]
whereas the outlet flow from tank \(i\), \(q_{o_{i}}\), is given in Eq. (5), where \(a_{i}\) and \(g\) are the open cross-section of the bottom outlet and the gravitational acceleration, respectively
\[q_{o_{i}}=a_{i}\sqrt{2gh_{i}} \tag{5}\]
Collecting the complete inlet-outlet dynamics of the tanks, the non-linear model of the quadruple-tank process of Fig. (**1**) is shown below. Note that each lower tank has two inflows, one directly from its pump and one from the outlet of the upper tank above it, while each upper tank receives only the complementary valve fraction of the diagonally corresponding pump
\[\begin{array}{l@{\quad}l}A_{1}\frac{dh_{1}}{dt}=q_{i_{1}}+q_{o_{3}}-q_{o_{1 }}\\ =\gamma_{1}k_{1}v_{1}+a_{3}\sqrt{2gh_{3}}-a_{1}\sqrt{2gh_{1}}\\ A_{2}\frac{dh_{2}}{dt}=q_{i_{2}}+q_{o_{4}}-q_{o_{2}}\\ =\gamma_{2}k_{2}v_{2}+a_{4}\sqrt{2gh_{4}}-a_{2}\sqrt{2gh_{2}}\\ A_{3}\frac{dh_{3}}{dt}=q_{i_{3}}-q_{o_{3}}\\ =(1-\gamma_{2})k_{2}v_{2}-\ a_{3}\sqrt{2gh_{3}}\\ A_{4}\frac{dh_{4}}{dt}=q_{i_{4}}-q_{o_{4}}\\ =(1-\gamma_{1})k_{1}v_{1}-a_{4}\sqrt{2gh_{4}}\end{array} \tag{6}\]
Dividing Eq. (6) by the tank areas, the model can be rewritten as Eq. (7), which is convenient for the state-space representation:
Figure 1: The design of quadruple-tank process
\[\begin{split}\frac{dh_{1}}{dt}&=-\frac{a_{1}}{A_{1}}\sqrt{2gh_{1}}+\frac{a_{3}}{A_{1}}\sqrt{2gh_{3}}+\frac{\gamma_{1}k_{1}}{A_{1}}v_{1}\\ \frac{dh_{2}}{dt}&=-\frac{a_{2}}{A_{2}}\sqrt{2gh_{2}}+\frac{a_{4}}{A_{2}}\sqrt{2gh_{4}}+\frac{\gamma_{2}k_{2}}{A_{2}}v_{2}\\ \frac{dh_{3}}{dt}&=-\frac{a_{3}}{A_{3}}\sqrt{2gh_{3}}+\frac{(1-\gamma_{2})k_{2}}{A_{3}}v_{2}\\ \frac{dh_{4}}{dt}&=-\frac{a_{4}}{A_{4}}\sqrt{2gh_{4}}+\frac{(1-\gamma_{1})k_{1}}{A_{4}}v_{1}\end{split} \tag{7}\]
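As a quick numerical illustration, the non-linear model of Eq. (7) can be integrated directly. The sketch below uses forward-Euler integration; all tank parameter values are illustrative assumptions (they are not taken from Table (1)), chosen to be consistent with a minimum-phase valve setting \(\gamma_{1}+\gamma_{2}=1.30\).

```python
import numpy as np

def tank_dynamics(h, v, a, A, k, gamma, g=981.0):
    """Right-hand side of Eq. (7): dh/dt for the four tanks (cm, s units)."""
    h = np.maximum(h, 0.0)                      # levels cannot go negative
    q_out = a * np.sqrt(2.0 * g * h)            # Bernoulli outflow, Eq. (5)
    dh = np.empty(4)
    dh[0] = (-q_out[0] + q_out[2] + gamma[0] * k[0] * v[0]) / A[0]
    dh[1] = (-q_out[1] + q_out[3] + gamma[1] * k[1] * v[1]) / A[1]
    dh[2] = (-q_out[2] + (1 - gamma[1]) * k[1] * v[1]) / A[2]
    dh[3] = (-q_out[3] + (1 - gamma[0]) * k[0] * v[0]) / A[3]
    return dh

# Illustrative (assumed) parameters, minimum-phase valve setting
a = np.array([0.071, 0.057, 0.071, 0.057])      # outlet areas [cm^2]
A = np.array([28.0, 32.0, 28.0, 32.0])          # tank cross-sections [cm^2]
k = np.array([3.33, 3.35])                      # pump constants [cm^3/(V s)]
gamma = np.array([0.70, 0.60])                  # gamma1 + gamma2 = 1.30 > 1

h = np.array([12.4, 12.7, 1.8, 1.4])            # initial levels [cm]
v = np.array([3.0, 3.0])                        # constant pump voltages [V]
dt = 0.1
for _ in range(int(600 / dt)):                  # simulate 600 s
    h = h + dt * tank_dynamics(h, v, a, A, k, gamma)
print(np.round(h, 2))
```

With constant inputs the levels settle at the equilibria where each tank's Bernoulli outflow balances its inflow, which is exactly the operating point used for the linearization below.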
Eq. (7) can also be simplified using the so-called conductance \(K_{i}\). The variables used in the laboratory-scale process are listed in Table (1), giving the upper \((A_{i})\) and lower \((a_{i})\) open cross-sections of each tank \((i)\) along with the measured-signal gain ratio \((k_{c})\), whilst Table (2) states the two operating points of the quadruple-tank process, comprising the initial levels \((h_{i}^{0})\) and voltages \((v_{i}^{0})\). These are denoted \(P_{-}\) and \(P_{+}\), declaring the minimum-phase and non-minimum-phase scenarios, respectively
\[K_{i}=\frac{a_{i}}{A_{i}}\sqrt{2g} \tag{8}\]
The non-linear model in Eq. (7) can be turned into a linear approximation by introducing the deviation variables \(x_{i}=h_{i}-h_{i}^{0},u_{i}=v_{i}-v_{i}^{0}\) around the chosen operating point. With the standard state-space form \(\dot{x}=Ax+Bu\) and output \(y=Cx\), the complete equation is presented in Eq. (9). Moreover, the time constant of each tank, \(T_{i}\), is determined by the initial level \(h_{i}^{0}\) and the static parameters, as shown in Eq. (10), such that
\[T_{i}=\frac{A_{i}}{a_{i}}\sqrt{\frac{2h_{i}^{0}}{g}} \tag{10}\]
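Putting Eqs. (7) and (10) together, the linearized state-space matrices of \(\dot{x}=Ax+Bu\) can be assembled: differentiating the right-hand side of Eq. (7) at the operating point gives diagonal entries \(-1/T_{i}\) and upper-triangular coupling from the top tanks. The parameter and operating-point values below are illustrative assumptions, not those of Tables (1)-(3).

```python
import numpy as np

g = 981.0
a = np.array([0.071, 0.057, 0.071, 0.057])   # assumed outlet areas [cm^2]
Ac = np.array([28.0, 32.0, 28.0, 32.0])      # assumed cross-sections [cm^2]
k1, k2 = 3.33, 3.35                          # assumed pump constants
g1, g2 = 0.70, 0.60                          # assumed valve ratios (P-)
h0 = np.array([12.3, 12.8, 1.6, 1.4])        # assumed operating levels [cm]

T = (Ac / a) * np.sqrt(2.0 * h0 / g)         # time constants, Eq. (10)

# Linearization of Eq. (7) around (h0, v0)
A = np.array([
    [-1/T[0], 0.0, Ac[2]/(Ac[0]*T[2]), 0.0],
    [0.0, -1/T[1], 0.0, Ac[3]/(Ac[1]*T[3])],
    [0.0, 0.0, -1/T[2], 0.0],
    [0.0, 0.0, 0.0, -1/T[3]],
])
B = np.array([
    [g1*k1/Ac[0], 0.0],
    [0.0, g2*k2/Ac[1]],
    [0.0, (1-g2)*k2/Ac[2]],
    [(1-g1)*k1/Ac[3], 0.0],
])
# A is upper triangular, so the poles are simply -1/T_i: all stable
print(np.round(np.linalg.eigvals(A), 4))
```

Because \(A\) is triangular, the open-loop poles are always in the left half-plane for any valve setting; the minimum/non-minimum-phase distinction concerns the zeros, discussed next.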
From Eq. (10) and Table (1), the time constants for each operating point are shown in Table (3); these are used in the state-space matrices of Eq. (9) and the transfer functions of Eq. (11).
For a particular \(h_{i}^{0}\), the transfer function in Eq. (11) yields the stationary control signal from Eq. (9) with specific constants \((c_{i})\). Keep in mind that the valve ratios \(\gamma_{n}\) for the non-minimum-phase \(P_{+}\) and minimum-phase \(P_{-}\) cases satisfy \(0<(\gamma_{1}+\gamma_{2})<1\) and \(1<(\gamma_{1}+\gamma_{2})<2\), respectively, as written in Table (2).
The transfer function in Eq. (11) depends on variables evaluated at the two operating points, so it results in two divergent physical models, as reported in Eq. (12). Specifically, \(G_{-}(s)\) represents the minimum-phase case, whereas \(G_{+}(s)\) constitutes the non-minimum-phase scenario. The transfer functions in Eqs. (11) and (12) have zero locations that determine the physical behaviour of the system for a given ratio \(\gamma_{n}\). The zeros of Eq. (11) are the roots of the numerator of the characteristic rational expression written in Eq. (13), and they decide whether the system has left- or right-half-plane zeros. From Eq. (14) it can be inferred, when analysing the effect of \(\gamma_{1}\) and \(\gamma_{2}\), that as \(\eta\to 0\) the two zeros approach \(-1/T_{3}\) and \(-1/T_{4}\), while as \(\eta\to\infty\) they tend asymptotically to \(-\infty\) and \(+\infty\), one of them entering the right half-plane, such that
\[\eta:=\frac{(1-\gamma_{1})(1-\gamma_{2})}{\gamma_{1}\gamma_{2}} \tag{14}\]
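As a check on Eq. (14), one can verify algebraically that \(\eta>1\) is equivalent to \(\gamma_{1}+\gamma_{2}<1\), so \(\eta\) directly flags a right-half-plane zero. The specific \(\gamma\) values below are illustrative assumptions (the text only fixes \(\gamma_{1}+\gamma_{2}=1.30\) for \(P_{-}\)):

```python
def eta(g1, g2):
    """Zero-location parameter of Eq. (14)."""
    return (1.0 - g1) * (1.0 - g2) / (g1 * g2)

# Assumed illustrative valve ratios for the two operating points
for name, g1, g2 in [("P-", 0.70, 0.60), ("P+", 0.43, 0.34)]:
    e = eta(g1, g2)
    phase = "non-minimum (RHP zero)" if e > 1 else "minimum (LHP zeros)"
    print(name, round(e, 3), phase)
```

The boundary \(\eta=1\) is attained exactly when \(\gamma_{1}+\gamma_{2}=1\), the singular valve setting excluded later in Eq. (18).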
Recalling the minimum- and non-minimum-phase parameters, the former corresponds to \((\gamma_{1}+\gamma_{2})=1.30>1\), which means that the flow going to the two bottom tanks is greater than that going to the two top tanks; by contrast, in \((P_{+})\) the flow to the lower tanks is smaller than that to the upper ones. This also indicates that controlling the two bottom tanks is much easier than controlling the left (1 & 3) or right (2 & 4) pairs. Beyond that, the zero locations are not the only consideration, but also the zero directions. The zero direction of the transfer function \((G)\) is given by the following equations, Eqs. (15) and (16).
Another concept is the relative gain array (RGA) proposed by [25], which measures the degree of interaction in a MIMO control system. It is defined as \(\Upsilon=(G)_{0}*(G^{-\dagger})_{0}\), where \((*)\) denotes element-wise multiplication and \((-\dagger)\) the inverse transpose of the matrix. The RGA of this system depends solely on the valve ratios and is given as follows, therefore
\[\lambda=\frac{\gamma_{1}\gamma_{2}}{\gamma_{1}+\gamma_{2}-1}\qquad\qquad \widetilde{\lambda}=\frac{(1-\gamma_{1})(1-\gamma_{2})}{1-\gamma_{1}-\gamma_{ 2}} \tag{17}\]
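The diagonal RGA element \(\lambda\) of Eq. (17) is a one-line computation, and from Eq. (16) the off-diagonal element is \(1-\lambda=\widetilde{\lambda}\). The \(\gamma\) values below are illustrative assumptions:

```python
def rga_lambda(g1, g2):
    """Diagonal RGA element lambda of Eq. (17); the off-diagonal is 1 - lambda."""
    return g1 * g2 / (g1 + g2 - 1.0)

lam_minus = rga_lambda(0.70, 0.60)   # minimum phase: positive diagonal element
lam_plus = rga_lambda(0.43, 0.34)    # non-minimum phase: negative diagonal element
print(round(lam_minus, 3), round(lam_plus, 3))
```

For the minimum-phase setting \(\lambda>0\), so the diagonal input-output pairing used by the decentralized controller is appropriate; for the non-minimum-phase setting \(\lambda<0\) while \(\widetilde{\lambda}=1-\lambda>0\), which is why the off-diagonal quantity is the preferable one there.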
For the decentralized scenario in the non-minimum-phase case, the RGA of Eq. (17) indicates that \(\widetilde{\lambda}>0\), which is preferable. Moreover, the stability property is also guaranteed for the input gain flows \(v_{n}^{0}\) if the matrix in Eq. (18)
\[\begin{bmatrix}\gamma_{1}k_{1}&(1-\gamma_{2})k_{2}\\ (1-\gamma_{1})k_{1}&\gamma_{2}k_{2}\end{bmatrix} \tag{18}\]
is non-singular, which holds when \(\gamma_{1}+\gamma_{2}\neq 1\).
## III Decentralized Control
Since the quadruple-tank is a multivariable control system, decentralized control is proposed with \(u=\mathrm{diag}[C_{1}\quad C_{2}]\,e\), as depicted in Fig. (2), using the proportional-integral (PI) control law written in Eq. (20). Decentralized control requires matching input-output dimensions and positive diagonal elements of the RGA of \(G(0)\), in which case the diagonal pairing is easy to control; otherwise, with negative diagonal elements, it leads to instability
Within \(G(s)\), the transfer function is structured as in Eq. (19), and the control gain parameters used in the simulation are obtained from a root-locus calculation. The gains \(K_{p},K_{i}\) differ for each pump \(n\); the tuning here is illustrated mainly for the minimum phase \(P_{-}\), which is then implemented with the distributed estimation explained in the next section.
\[G(s) =\begin{bmatrix}G_{1}&G_{2}\\ G_{3}&G_{4}\end{bmatrix}\] \[=\begin{bmatrix}\dfrac{\Phi_{1}}{\varphi_{1}s+\xi_{1}}&\dfrac{ \Phi_{2}}{\vartheta_{2}s^{2}+\varphi_{2}s+\xi_{2}}\\ \dfrac{\Phi_{3}}{\vartheta_{3}s^{2}+\varphi_{3}s+\xi_{3}}&\dfrac{\Phi_{4}}{ \varphi_{4}s+\xi_{4}}\end{bmatrix} \tag{19}\] \[C_{n}=K\left(1+\dfrac{1}{(T_{i})_{n}s}\right)\to n=1,2 \tag{20}\]
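Eq. (20) maps directly to a discrete-time PI implementation, \(u_{n}=K\,(e_{n}+\frac{1}{(T_{i})_{n}}\int e_{n}\,dt)\). A minimal sketch follows; the sample time is an assumption, while the gains are the \(P_{-}\) values quoted later in the numerical section:

```python
class PIController:
    """Discrete PI control of Eq. (20): u = K * (e + integral(e) / Ti)."""
    def __init__(self, K, Ti, dt):
        self.K, self.Ti, self.dt = K, Ti, dt
        self.integral = 0.0

    def step(self, error):
        self.integral += error * self.dt   # rectangular integration of e
        return self.K * (error + self.integral / self.Ti)

# Decentralized pairing: controller n acts only on its own loop n
c1 = PIController(K=3.0, Ti=30.0, dt=0.1)   # P- tuning for pump 1
c2 = PIController(K=2.7, Ti=40.0, dt=0.1)   # P- tuning for pump 2
u1 = c1.step(1.0)                           # unit level error on loop 1
print(round(u1, 4))
```

In closed loop, each controller would receive \(e_{n}=r_{n}-y_{n}\) from its own level sensor at every sample, matching the diagonal structure \(u=\mathrm{diag}[C_{1}\ C_{2}]\,e\).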
\[\begin{bmatrix}\psi_{1}\\ \psi_{2}\end{bmatrix}^{T}\begin{bmatrix}\dfrac{\gamma_{1}c_{1}}{1+zT_{1}}& \dfrac{(1-\gamma_{2})c_{1}}{(1+zT_{3})(1+zT_{1})}\\ \dfrac{(1-\gamma_{1})c_{2}}{(1+zT_{4})(1+zT_{2})}&\dfrac{\gamma_{2}c_{2}}{1+zT_ {2}}\end{bmatrix}=\begin{bmatrix}0\\ 0\end{bmatrix}^{T} \tag{15}\]
\[\mathrm{RGA}\,\Upsilon=\begin{bmatrix}\dfrac{\gamma_{1}\gamma_{2}}{\gamma_{1}+ \gamma_{2}-1}&\dfrac{-(1-\gamma_{1})(1-\gamma_{2})}{\gamma_{1}+\gamma_{2}-1}\\ \dfrac{-(1-\gamma_{1})(1-\gamma_{2})}{\gamma_{1}+\gamma_{2}-1}&\dfrac{\gamma_{ 1}\gamma_{2}}{\gamma_{1}+\gamma_{2}-1}\end{bmatrix}\quad\to\quad\begin{bmatrix} \lambda&1-\lambda\\ 1-\lambda&\lambda\end{bmatrix} \tag{16}\]
Figure 2: Decentralized control design with the coupling of \(C_{1}\) and \(C_{2}\)
## IV Distributed estimation
As control systems grow with the demand for more complex systems, one key task is to estimate the state of a networked system. To deal with this, distributed estimation, which localizes the estimation through key neighbourhood communication [12] and [13], is the most attractive option, as portrayed in Fig. (3)
Consider the continuous-time linear system in Eq. (21), where \(x\in\mathbb{R}^{n}\) and \(y\in\mathbb{R}^{p}\) are the state and the measurement, respectively. The distributed setting partitions \(y\) as \(\mathrm{col}(y_{1},...,y_{N})\) and \(H\) as \(\mathrm{col}(H_{1},...,H_{N})\), where \(N\) is the number of nodes in the network \(\mathcal{G}\), \(\sum_{i=1}^{N}p_{i}=p\) and \(y_{i}\in\mathbb{R}^{p_{i}}\). Each \(y_{i}\) is assumed to be the only measurement available to the local node \((i)\), which must therefore estimate the states by using its neighbouring links, under the constraint of the designed network topology, to compensate for its insufficient local data,
\[\frac{dx}{dt}=Ax,\qquad\quad y=Hx=\begin{bmatrix}H_{1}\\ \vdots\\ H_{N}\end{bmatrix}x=\begin{bmatrix}y_{1}\\ \vdots\\ y_{N}\end{bmatrix} \tag{21}\]
Furthermore, this work considers distributed estimation with a Luenberger structure containing \(N\) local outputs and observers, with the following dynamics for each node \(i\):
\[\dot{\hat{x}}_{i}=A\hat{x}_{i}+L_{i}\big{(}y_{i}-H_{i}\hat{x}_{i}\big{)}+\gamma M_{i}^{-1}(k_{i})\sum_{j\in\mathcal{N}_{i}}(\hat{x}_{j}-\hat{x}_{i}) \tag{22}\]
\[(A_{\mathrm{id}}-L_{\mathrm{id}}H_{\mathrm{id}})^{T}M_{\mathrm{id}}+M_{ \mathrm{id}}(A_{\mathrm{id}}-L_{\mathrm{id}}H_{\mathrm{id}})=-I_{n-\sigma_{i}} \tag{25}\]
\[\dot{\hat{x}}_{i}=A\hat{x}_{i}+L_{i}\big{(}y_{i}-H_{i}\hat{x}_{i}\big{)}+\gamma M_{i}^{-1}(k_{i})\sum_{j=1}^{N}\alpha_{ij}(\hat{x}_{j}-\hat{x}_{i}) \tag{26}\]
\[\left(k_{i}-\frac{\beta}{\theta(\overline{\varepsilon})}\right)\left(\gamma-\frac{\overline{\beta}}{2\lambda_{2}}\right)>\frac{\overline{\beta}^{2}N^{2}}{2\lambda_{2}\theta(\overline{\varepsilon})};\ \ \ \rightarrow\ \ \forall i\in\mathcal{N};\ \ \ k_{i}\geq 1;\ \ \gamma>\frac{\overline{\beta}}{2\lambda_{2}};\ \ \theta(\overline{\varepsilon})=\frac{1}{2}\left(1-\left(1-\frac{\overline{\varepsilon}^{2}}{2}\right)^{2}\right) \tag{27}\]
\[M_{i}(k_{i})(A-L_{i}H_{i})=T_{i}\begin{bmatrix}k_{i}M_{\mathrm{id}}&0\\ 0&I_{\sigma_{i}}\end{bmatrix}T_{i}^{T}(A-L_{i}H_{i})T_{i}T_{i}^{T}=T_{i}\begin{bmatrix}k_{i}M_{\mathrm{id}}&0\\ 0&I_{\sigma_{i}}\end{bmatrix}\begin{bmatrix}A_{\mathrm{id}}-L_{\mathrm{id}}H_{\mathrm{id}}&0\\ A_{\mathrm{ir}}&A_{\mathrm{iu}}\end{bmatrix}T_{i}^{T} \tag{28}\]
\[e_{i}=\hat{x}_{i}-x\ \ \ \rightarrow\ \ \frac{de_{i}}{dt} =(A-L_{i}H_{i})e_{i}+\gamma M_{i}^{-1}(k_{i})\sum_{j=1}^{N}\alpha_{ij}\big{(}e_{j}-e_{i}\big{)} \tag{29}\] \[=\Lambda e-\gamma\overline{M}(\mathcal{L}\circ I_{n})e\ \ \ \ \ \rightarrow\ \ \begin{cases}\Lambda=\mathrm{diag}\{A-L_{1}H_{1}\ \ \ \cdots\ \ \ A-L_{N}H_{N}\}\\ \overline{M}=\mathrm{diag}\{M_{1}^{-1}(k_{1})\ \ \ \cdots\ \ \ M_{N}^{-1}(k_{N})\}\end{cases} \tag{30}\]
If the parameters \(k_{i}\) and \(\gamma\) are chosen according to the conditions in Eq. (27), with \(\beta_{i}:=2\|A_{\mathrm{ir}}\|^{2}+\|A_{\mathrm{iu}}^{T}+A_{\mathrm{iu}}\|\), \(\overline{\beta}:=\max_{i\in\mathcal{N}}\beta_{i}\), and \(\beta\) the sum of the \(\beta_{i}\) from 1 to \(N\), the estimation errors converge. Since \(T_{i}\) is an orthonormal matrix, it satisfies Eq. (29), and the error of local node \(i\) in Eq. (30) follows by combining Eqs. (21) and (22).
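To see the mechanics of Eqs. (21)-(22) end to end, the sketch below integrates two coupled Luenberger observers alongside a small illustrative plant. The plant matrix, the gains \(L_{i}\), the coupling strength \(\gamma\), and the simplification \(M_{i}^{-1}(k_{i})=I\) are all placeholder assumptions for illustration, not the values designed for the quadruple tank:

```python
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])                 # illustrative stable plant, Eq. (21)
H = [np.array([[1.0, 0.0]]), np.array([[0.0, 1.0]])]     # node i measures one scalar output
L = [np.array([[2.0], [0.0]]), np.array([[0.0], [2.0]])] # assumed local observer gains
gamma = 5.0                                              # assumed consensus coupling strength

x = np.array([1.0, -1.0])            # true state
xh = [np.zeros(2), np.zeros(2)]      # local estimates of nodes 1 and 2
dt = 0.01
for _ in range(2000):                # 20 s of forward-Euler integration
    y = [Hi @ x for Hi in H]         # local measurements y_i = H_i x
    new = []
    for i in range(2):
        j = 1 - i                    # the single neighbour of node i
        innov = (L[i] @ (y[i] - H[i] @ xh[i])).ravel()
        consensus = gamma * (xh[j] - xh[i])   # coupling term of Eq. (22), M_i^{-1} = I
        new.append(xh[i] + dt * (A @ xh[i] + innov + consensus))
    x = x + dt * (A @ x)
    xh = new
print(np.linalg.norm(xh[0] - x), np.linalg.norm(xh[1] - x))
```

Each node combines its own output-injection term with the consensus term that pulls its estimate toward the neighbour's, so both local errors decay even though each node sees only one output component.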
## V Numerical scenarios
This section elaborates the concept with simulations. The dynamics of the system are given in Eq. (9), with the parameters of Tables (1), (2), and (3) for each phase condition, either minimum \(P_{-}\) or non-minimum \(P_{+}\). Since there are only two nodes, \(N=2\), communication occurs between them, with \(y_{i}=H_{i}x\),
\[\begin{array}{l}H_{1}=[k_{c}\quad 0\quad 0\quad 0]\\ H_{2}=[0\quad k_{c}\quad 0\quad 0]\end{array} \tag{31}\]
The detailed parameters used in the simulation are \(\gamma=6\), \(k_{1}=3\), and \(k_{2}=4.5\), with initial condition \(x_{0}=[8\quad 5\quad-2\quad 1]\), along with decentralized control parameters \((K_{1},T_{i1})_{1}=(3,30)\) and \((K_{2},T_{i2})_{1}=(2.7,40)\) for the minimum phase \(P_{-}\), and \((K_{1},T_{i1})_{2}=(1.5,110)\) and \((K_{2},T_{i2})_{2}=(-0.12,220)\) for the non-minimum phase \(P_{+}\), whose settling time is ten times longer than that of the minimum phase. The parameters for the distributed estimation are as follows:
\[\begin{array}{l}T_{1}=\begin{bmatrix}I_{2}&\boldsymbol{O}\\ \boldsymbol{O}&I_{2}\end{bmatrix}\end{array}\qquad\qquad\qquad T_{2}=\begin{bmatrix}I _{2}&\boldsymbol{O}\\ \boldsymbol{O}&I_{2}\end{bmatrix}\]
\[\begin{array}{l}L_{1\mathrm{d}}=\begin{bmatrix}3\\ 1\end{bmatrix}\qquad\qquad\qquad\qquad L_{2\mathrm{d}}=\begin{bmatrix}-1\\ 3\end{bmatrix}\\ M_{1\mathrm{d}}=\begin{bmatrix}0.5&-0.5\\ 0.5&1\end{bmatrix}\qquad\qquad M_{2\mathrm{d}}=\begin{bmatrix}0.286&-0.25\\ -0.25&0.387\end{bmatrix}\end{array}\]
The numerical scenario for the minimum phase shows that the proposed parameters can handle the interconnected tank system, as depicted in Fig. (4c), with the corresponding error in Fig. (4a). Keep in mind that the error peaks are due to the set-point changes applied at times 100, 200, 300 and 350 through the two voltages, since the four tanks form an interconnected system in which each tank affects the others. The MIMO dynamics nevertheless stabilize very quickly. By contrast, the non-minimum phase is much more difficult to control and needs a settling time ten times longer than its counterpart, as presented in Fig. (4b) for the error and Fig. (4d) for the output dynamics. Likewise, the peaks that occur are caused by the set-point changes. Furthermore, regarding the distributed estimation, the true state responses \((x)\), shown as black dashed lines, are followed by the estimates \((\widehat{x}_{1})\) and \((\widehat{x}_{2})\). Figs. (4e)-(4f) depict the performance of the estimation and show its tracking ability.
## VI Conclusion
The mathematical dynamics of the quadruple-tank process have been derived along with the key parameters, showing that valve ratios dividing the flow with \(\gamma_{1}+\gamma_{2}<1\) give a non-minimum-phase system, and otherwise a minimum-phase one. The constructed
Figure 4: The error of the two parameters from the minimum-phase \(P_{-}\) as (a) and the non-minimum \(P_{+}\) as (b) using decentralized PI control; The two responses of the true output \((y)\) from \(P_{-}\) as (c) with 500s and \(P_{+}\) as (d) with ten times longer settling time by 5000s with the same gains of control as designed; The response states of the distributed estimation of the \(P_{-}\) (e) and \(P_{+}\) (f) with the same initial conditions.
decentralized PI control also shows the difficulty of maintaining the non-minimum-phase scenario compared to its counterpart. With respect to the distributed estimation, it has been designed using local communication with as many observers as outputs. This local Luenberger observer design can handle the dynamics of the quadruple-tank process, although it is erratic in the early iterations. Future work will address variants of the distributed estimation along with distributed fault detection and fault-tolerant control.
|
2305.13136 | Density biases and temperature relations for DESIRED HII regions | We present a first study based on the analysis of the DEep Spectra of Ionized
REgions Database (DESIRED). This is a compilation of 190 high signal-to-noise
ratio optical spectra of HII regions and other photoionized nebulae, mostly
observed with 8-10m telescopes and containing $\sim$29380 emission lines. We
find that the electron density --$n_{\rm e}$-- of the objects is underestimated
when [SII] $\lambda6731/\lambda6716$ and/or [OII] $\lambda3726/\lambda3729$ are
the only density indicators available. This is produced by the non-linear
density dependence of the indicators in the presence of density
inhomogeneities. The average underestimate is $\sim 300$ cm$^{-3}$ in
extragalactic HII regions, introducing systematic overestimates of $T_{\rm
e}$([OII]) and $T_{\rm e}$([SII]) compared to $T_{\rm e}$([NII]). The
high-sensitivity of [OII] $\lambda\lambda7319+20+30+31/\lambda\lambda3726+29$
and [SII] $\lambda\lambda4069+76/\lambda\lambda6716+31$ to density makes them
more suitable for the diagnosis of the presence of high-density clumps. If
$T_{\rm e}$([NII]) is adopted, the density underestimate has a small impact in
the ionic abundances derived from optical spectra, being limited to up to
$\sim$0.1 dex when auroral [SII] and/or [OII] lines are used. However, these
density effects are critical for the analysis of infrared fine structure lines,
such as those observed by the JWST in local star forming regions, implying
strong underestimates of the ionic abundances. We present temperature relations
between $T_{\rm e}$([OIII]), $T_{\rm e}$([ArIII]), $T_{\rm e}$([SIII]) and
$T_{\rm e}$([NII]) for the extragalactic HII regions. We confirm a non-linear
dependence between $T_{\rm e}$([OIII])-$T_{\rm e}$([NII]) due to a more rapid
increase of $T_{\rm e}$([OIII]) at lower metallicities. | J. E. Méndez-Delgado, C. Esteban, J. García-Rojas, K. Z. Arellano-Córdova, K. Kreckel, V. Gómez-Llanos, O. V. Egorov, M. Peimbert, M. Orte-García | 2023-05-22T15:32:29Z | http://arxiv.org/abs/2305.13136v1 | # Density biases and temperature relations for DESIRED HII regions
###### Abstract
We present a first study based on the analysis of the DEep Spectra of Ionized REgions Database (DESIRED). This is a compilation of 190 high signal-to-noise ratio optical spectra of H ii regions and other photoionized nebulae, mostly observed with 8-10m telescopes and containing \(\sim\)29380 emission lines. We find that the electron density \(-n_{\rm e}\)- of the objects is underestimated when [S ii] \(\lambda 6731/\lambda 6716\) and/or [O ii] \(\lambda 3726/\lambda 3729\) are the only density indicators available. This is produced by the non-linear density dependence of the indicators in the presence of density inhomogeneities. The average underestimate is \(\sim 300\) cm\({}^{-3}\) in extragalactic H ii regions, introducing systematic overestimates of \(T_{\rm e}\)([O ii]) and \(T_{\rm e}\)([S ii]) compared to \(T_{\rm e}\)([N ii]). The high-sensitivity of [O ii] \(\lambda\lambda 7319+20+30+31/\lambda\lambda 3726+29\) and [S ii] \(\lambda\lambda 4069+76/\lambda\lambda 6716+31\) to density makes them more suitable for the diagnosis of the presence of high-density clumps. If \(T_{\rm e}\)([N ii]) is adopted, the density underestimate has a small impact in the ionic abundances derived from optical spectra, being limited to up to \(\sim\)0.1 dex when auroral [S ii] and/or [O ii] lines are used. However, these density effects are critical for the analysis of infrared fine structure lines, such as those observed by the JWST in local star forming regions, implying strong underestimates of the ionic abundances. We present temperature relations between \(T_{\rm e}\)([O iii]), \(T_{\rm e}\)([Ar iii]), \(T_{\rm e}\)([S iii]) and \(T_{\rm e}\)([N ii]) for the extragalactic H ii regions. We confirm a non-linear dependence between \(T_{\rm e}\)([O iii])-\(T_{\rm e}\)([N ii]) due to a more rapid increase of \(T_{\rm e}\)([O iii]) at lower metallicities.
keywords: ISM:Abundances - ISM: HII regions - galaxies: abundances - ISM: evolution.
## 1 Introduction
The determination of chemical abundances from emission line spectra of ionized nebulae is an essential tool for studying the chemical composition and evolution of the Universe, from the Milky Way to high-redshift galaxies. In ionized nebulae, the total abundance of heavy elements, the metallicity, is traced by the O/H abundance, as it comprises \(\sim 55\) per cent of the total metal content (Peimbert et al., 2007). This information can be used to explore the nucleosynthesis of chemical elements and the galaxy formation and evolution. In fact, the mean metallicity of the galaxies and the shape of radial abundance gradients depend on their masses, the star formation history and the relative importance of the gas inflows/outflows across their discs (e.g. Tinsley, 1980; Prantzos, 2008; Matteucci, 2014).
The chemical abundances of elements heavier than He can be derived from bright collisionally excited lines (CELs) in the emission line spectra of ionized nebulae. In the optical range, the emissivity of CELs is exponentially dependent on the electron temperature, \(T_{\rm e}\), being a critical physical parameter for obtaining accurate abundance values. This is the basis of the so-called direct method for determining chemical abundances (e.g. Dinerstein, 1990; Peimbert et al., 2017; Perez-Montero, 2017). Moreover, recently Mendez-Delgado et al. (2023) demonstrated the presence of temperature inhomogeneities within the highly ionized gas as theorized by Peimbert (1967). The existence of such spatial temperature variations introduces a systematic bias towards lower abundances that can reach errors as high as \(\sim 0.5\) dex in the O/H abundance (Mendez-Delgado et al., 2023). On the other hand, the fine structure CELs in the infrared (IR) range that arise from atomic transitions of low energy levels (\(\Delta\) E\(<<1\) eV) have a smaller temperature-dependence (Osterbrock & Ferland, 2006). However, in these cases the electron density, \(n_{\rm e}\), is a fundamental parameter to accurately
determine chemical abundances, as the critical densities of these low-energy levels are smaller than those involved in the emission of optical CELs (Osterbrock & Ferland, 2006).
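The density bias from inhomogeneities mentioned above can be illustrated with a toy model: a density-sensitive line ratio saturates between its low- and high-density limits, so it is a concave function of \(n_{\rm e}\), and the emission-weighted ratio of a clumpy medium, inverted as if the gas were homogeneous, yields a density below the emission-weighted mean. The functional form, limits, "critical density" and weights below are illustrative assumptions, not real [S ii] atomic data:

```python
import numpy as np

R_LO, R_HI, N_C = 0.45, 2.3, 1500.0   # assumed limits / critical density [cm^-3]

def ratio(n):
    """Toy saturating density diagnostic, analogous in shape to [S II] 6731/6716."""
    return (R_LO + R_HI * n / N_C) / (1.0 + n / N_C)

def density_from_ratio(r):
    """Invert the toy diagnostic, as one would for a homogeneous nebula."""
    return N_C * (r - R_LO) / (R_HI - r)

# Two-phase gas: diffuse component plus dense clumps (assumed weights)
n = np.array([100.0, 3000.0])        # component densities [cm^-3]
w = np.array([0.7, 0.3])             # fractional emission weights

r_obs = np.sum(w * ratio(n))         # emission-weighted observed line ratio
n_inferred = density_from_ratio(r_obs)
n_mean = np.sum(w * n)               # emission-weighted mean density
print(round(n_inferred, 1), round(n_mean, 1))
```

By Jensen's inequality the concave diagnostic always gives \(n_{\rm inferred}<n_{\rm mean}\) for inhomogeneous gas; in this toy configuration the inferred value falls several hundred cm\(^{-3}\) below the emission-weighted mean, qualitatively matching the bias discussed for the [S ii] and [O ii] doublets.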
With the advent of optical spectroscopic surveys using large Integral Field Units (IFU), data for myriads of H ii regions in large samples of external spiral galaxies have become available (e.g. Sanchez et al., 2012; Bryant et al., 2015; Bundy et al., 2015; Emsellem et al., 2022). However, it is common that most of the spectra of extragalactic H ii regions in these surveys are not deep1 enough to detect the faint auroral CELs necessary to determine \(T_{\rm e}\) or the even fainter recombination lines (RLs) of heavy-element ions. When the gas temperature is not available one has to rely on the so-called strong-line methods to estimate the gas-phase metallicity, which are based on calibrations of the O/H ratio --the proxy for metallicity when analyzing nebular spectra-- built with observed intensity ratios of bright nebular CELs (e.g. Pagel et al., 1979; Pilyugin et al., 2010, 2012; Marino et al., 2013; Pilyugin & Grebel, 2016) or on photoionization models (e.g. McGaugh, 1991; Kewley & Dopita, 2002; Kobulnicky & Kewley, 2004; Tremonti et al., 2004). Comparing the different calibrations available in the literature, one can find very large differences between the O/H ratios for the same set of observations, differences that can amount to 0.2-0.7 dex (e.g. Kewley & Ellison, 2008; Lopez-Sanchez et al., 2012; Groves et al., 2023). From the available strong-line methods, only those of Pena-Guerrero et al. (2012) take into account the presence of temperature inhomogeneities.
Footnote 1: With the concept of “deep spectrum” we mean a long-exposure time spectrum with a high signal-to-noise ratio where the main purpose is the detection of weak emission lines, such as auroral CELs or RLs.
The large amount of data generated by big surveys that one can gather from the literature permits us to explore, constrain and minimize the effects of statistical errors in the estimate of metallicities of H ii regions in a given galaxy or a group of similar galaxies (e.g. Sanchez et al., 2015; Ho, 2019; Kreckel et al., 2019; Metha et al., 2021). However, only detailed studies of deep spectra of H ii regions allow us to adequately explore and constrain the effects of systematic errors in the determination of physical conditions and ionic and total abundances. On this matter, there are previous works dedicated to collecting auroral CELs from the most commonly studied ions ([O ii], [O iii], [S ii], [S iii], [N ii]) (Pilyugin et al., 2012; Croxall et al., 2016; Berg et al., 2020; Rogers et al., 2021, 2022; Zurita et al., 2021). However, with some notable exceptions where recombination lines were considered (Peimbert et al., 2005; Guseva et al., 2011; Pena-Guerrero et al., 2012; Valerdi et al., 2019; Skillman et al., 2020), most previous studies are limited to the CELs of a few ions, which do not provide the complete picture of the physics of the ionized gas.
Since the beginning of this century, our group has gathered a large number of intermediate spectral resolution longslit or high spectral resolution echelle spectra for a large number of Galactic and extragalactic H ii regions as well as Galactic planetary nebulae (PNe) and ring nebulae (RNe) around massive Wolf-Rayet and Of stars. This collection of data is what we call DESIRED (DEep Spectra of Ionized Regions Database, see Section 2 for references and a description of the data). The vast majority of the data have been obtained with large-aperture (8-10m) telescopes and the observations were designed to detect very faint emission lines. As a result of the remarkable signal-to-noise ratio of our collection of nebular spectra, each individual object features tens or even hundreds of emission lines, with good measurements of all or some of these: (a) one or several faint \(T_{\rm e}\)-sensitive auroral CELs, (b) several density indicators based on the intensity ratios of CELs, (c) RLs of one or some heavy-element ions and (d) sets of rare faint lines, such as those of [Fe ii] and/or [Fe iii] or fluorescence lines, useful for detailed studies of the internal physics of the ionized gas.
The DESIRED papers seek to analyze global properties of the ionized gas in unprecedented detail, detecting and describing, on solid observational grounds, phenomena that have, or might have, an impact on the interpretation of large-scale studies. The present work is dedicated to the study of the physical conditions (\(T_{\rm e},n_{\rm e}\)) of the ionized gas, including information about their internal structures and the temperature relations. The prescriptions, warnings and relations of this study are intended to cover different types of ionized regions and can be used both in studies of individual objects and in large-scale studies.
## 2 Description of desired
DESIRED comprises a set of 190 spectra, 72 of them correspond to 68 extragalactic H ii regions, 56 spectra of 41 Galactic H ii regions, 34 Galactic PNe, 21 spectra of 7 Galactic RNe as well as 6 spectra of 5 photoionized Herbig-Haro objects (HHs) and 1 protoplanetary disk (proplyd) of the Orion Nebula. References to the spectra are shown in Tables A1, A2, A3, A4 and A5. All the spectra have been observed by our group except those of the Galactic PNe IC 418, IC 2501, IC 4191 and NGC 7027 (Sharpee et al., 2003, 2007). We decided to include these data in DESIRED as they show an analogous level of depth and quality as the rest of the objects included in Table A4 (see the comparative analysis performed by Rodriguez, 2020). The database contains 29380 emission line detections, associated with 2486 transitions of 148 ionic species\({}^{2}\). Of that total number of detections, 8715 are forbidden lines, while 18986 are permitted ones and 1679 remain unidentified or with doubtful identifications. From the detected permitted lines, 7836 are associated with metals. Of the forbidden lines, 851 correspond to the \(T_{\rm e}\)-sensitive auroral transitions [O ii] \(\lambda\lambda\)7319 + 20 + 30 + 31, [S ii] \(\lambda\lambda\)4069 + 76, [N ii] \(\lambda\)5755, [S iii] \(\lambda\)6312, [Ar iii] \(\lambda\)5192 and [O iii] \(\lambda\)4363, which can be used for \(T_{\rm e}\) determinations.
Footnote 2: In this context, permitted and forbidden transitions are considered independently. For instance, [O iii] and O ii are counted as different ionic species.
The remarkably high signal-to-noise ratio of the DESIRED spectra can be verified in any of the published reference articles. We can highlight fig. 1 of Esteban et al. (2014) or fig. 7 of Mendez-Delgado et al. (2021) in the case of the Orion Nebula; fig. 3 of Dominguez-Guzman et al. (2022) for extragalactic H ii regions in the Magellanic Clouds; fig. 3 of Esteban et al. (2016) for the RN NGC 6888 and fig. 4 of Garcia-Rojas et al. (2018) for a group of PNe.
The observations have been taken from 2002 to date with the spectrographs and telescopes shown in Table A6\({}^{3}\). The spectra were reduced and calibrated manually following a consistent procedure, using IRAF routines (Tody, 1993), Python codes and some tasks from the ESO UVES pipeline (Ballester et al., 2000). The flux, wavelength and FWHM of the lines were measured manually using the IRAF task SPLOT, estimating the continuum individually.
Footnote 3: The spectra of Sharpee et al. (2003, 2007) were taken between 2001 and 2003.
Echelle spectra were not corrected for telluric emission, since the slit does not usually cover sky areas. However, the high spectral resolution permits us to separate the Doppler-shifted nebular emission from the sky contamination. Sky-blended lines are identified and their use has been ruled out in this work. In most of the spectra the telluric absorption bands were not corrected. This potentially affects several wavelength ranges, such as \(\lambda\lambda 7600-7700\)Å and \(\lambda\lambda 9000-10000\)Å, where the atmospheric O\({}_{2}\) and H\({}_{2}\)O bands are strong and dense (Stevenson, 1994). UVES spectra may show optical reflections within the second dichroic of the blue arm (\(\lambda\lambda 3750-4995\)Å). The wavelength position of these spurious "ghosts" can be determined directly from the echellograms as they cross the different observed orders. The use of these lines is also discarded, along with that of lines with individually detected spurious effects.
Intermediate spectral resolution spectra (\(R\sim 3000\)-\(4000\)) were mostly taken with long-slit two-arm spectrographs. We verified the accuracy of the relative flux calibration between the bluest and reddest wavelength ranges. The sky emission was removed in the case of the nebulae of smaller angular size (most of the Galactic H ii regions observed with OSIRIS at the \(10.4\)m GTC telescope, the extragalactic ones and the PNe); this was not possible in the case of IC 5146 and M43, extended Galactic H ii regions observed with ISIS at the \(4.2\)m WHT telescope.
The spectra were corrected for interstellar extinction and underlying stellar absorption following the iterative process described by Lopez-Sanchez et al. (2006), which is based on the results of Mazzarella & Boroson (1993) and on the observed H i Balmer and Paschen decrements, when available. No corrections for underlying stellar absorption were applied to the He i lines. In any case, the Galactic objects did not require such corrections (Mendez-Delgado et al., 2020). The detailed procedure for each object is described in the reference articles.
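The iterative extinction-correction scheme of Lopez-Sanchez et al. (2006) involves several coupled steps; the sketch below shows only its core operation, assuming a theoretical Balmer decrement of 2.86 and a representative reddening-curve value \(f(\lambda)\approx-0.32\) at H\(\alpha\) (both are typical textbook assumptions, not values taken from this paper).

```python
import math

THEO_HA_HB = 2.86     # case B H-alpha/H-beta at Te ~ 10^4 K, ne ~ 10^2 cm^-3
F_HALPHA = -0.32      # assumed reddening-curve value f(lambda) at H-alpha; f(H-beta) = 0

def c_hbeta(obs_ha_hb):
    """Logarithmic extinction at H-beta from the observed Balmer decrement."""
    return math.log10(THEO_HA_HB / obs_ha_hb) / F_HALPHA

def deredden(flux_ratio, f_lambda, c_hb):
    """Correct a line ratio relative to H-beta: I/I(Hb) = F/F(Hb) * 10^(c * f)."""
    return flux_ratio * 10 ** (c_hb * f_lambda)

c = c_hbeta(3.50)                            # observed decrement steeper than 2.86
print(f"c(H-beta) = {c:.2f}")                # ~0.27
print(f"{deredden(3.50, F_HALPHA, c):.2f}")  # recovers 2.86 by construction
```

In the full procedure the underlying stellar absorption of the H i lines is estimated simultaneously, so \(c({\rm H}\beta)\) and the absorption equivalent width are iterated until both converge.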
In Fig. 1, we show a BPT diagram (Baldwin et al., 1981) of all DESIRED spectra, distinguishing their corresponding types of nebulae. The dashed line indicates the separation between H ii regions and active galactic nuclei (AGNs) as defined by the empirical equation (1) of Kauffmann et al. (2003). All Galactic and extragalactic H ii regions, as well as the photoionized HH objects and the proplyd, are located in the zone of star-forming regions. This is consistent with gas photoionized by O or early B type stars. PNe and RNe are present both in the star-forming zone and in the area usually associated with AGNs. RNe associated with Wolf-Rayet stars are located within the AGN zone, whereas those associated with Of stars lie together with the H ii regions. This is due both to the harder ionizing spectrum of Wolf-Rayet stars and to a larger contribution from shocks associated with stellar feedback (Esteban et al., 2016). Most PNe are located well above the H ii regions line (e.g. Kniazev et al., 2008), as expected from their harder ionizing sources. However, Abell 46, Abell 63 and Ou5 (Corradi et al., 2015) fall within the area of H ii regions. This interesting result seems linked to the fact that these three PNe have the largest abundance discrepancy factors (ADFs) between the O\({}^{2+}\)/H\({}^{+}\) abundances derived with RLs and CELs of the whole sample. This is in agreement with the scenario in which these PNe contain metal-rich cold inclusions within the ionized gas, enhancing the emission of the H i RLs, as proposed by several authors (Corradi et al., 2015; Garcia-Rojas et al., 2022).

Figure 1: BPT diagram of the DESIRED spectra. The dashed line represents the boundary between star-forming regions (to the left of and below the line) and regions with harder ionizing sources (generally associated with Active Galactic Nuclei) (Kauffmann et al., 2003).
The metallicity range covered by the sample objects, expressed as 12+log(O/H) and determined from CELs assuming no temperature fluctuations, goes from 7.72 to 8.70 in the case of H ii regions (both Galactic and extragalactic) and from 7.76 to 8.80 in the case of PNe. It should be noted that, due to the requirements of the DESIRED observations (relatively bright objects and a high probability of detecting RLs of heavy-element ions), the number of H ii regions with 12+log(O/H) below 8.0 is rather limited. This is a drawback that could be corrected with observations using the future very large aperture telescopes.
## 3 Physical conditions
The determination of the chemical composition of photoionized nebulae requires, as a first step, accurate calculations of \(n_{\rm e}\) and \(T_{\rm e}\). DESIRED objects potentially comprise a wide range of densities, from \(n_{\rm e}\sim 10^{2}\) cm\({}^{-3}\) for some extragalactic H ii regions to \(n_{\rm e}>10^{5}\) cm\({}^{-3}\) for HHs and the photoevaporating proplyd 170-337 of the Orion Nebula. Therefore, it is possible to explore relations between several density diagnostics. To derive \(n_{\rm e}\), we test the [S ii] \(\lambda 6731/\lambda 6716\), [O ii] \(\lambda 3726/\lambda 3729\), [Cl iii] \(\lambda 5538/\lambda 5518\), [Fe iii] \(\lambda 4658/\lambda 4702\) and [Ar iv] \(\lambda 4740/\lambda 4711\) line intensity ratios. To solve the statistical equilibrium equations, we use PyNeb 1.1.13 (Luridiana et al., 2015) and the transition probabilities and collision strengths given in Table 7. We use the _getCrossTemDen_ task of PyNeb to simultaneously derive \(T_{\rm e}\) and \(n_{\rm e}\), cross-matching the aforementioned density diagnostics with the \(T_{\rm e}\)-sensitive [N ii] \(\lambda 5755/\lambda 6584\), [O iii] \(\lambda 4363/\lambda 5007\), [Ar iii] \(\lambda 5192/\lambda 7135\) and [S iii] \(\lambda 6312/\lambda 9069\) line intensity ratios. Finally, we average the density values obtained with each cross-match to obtain a representative value of \(n_{\rm e}\) for each tested density diagnostic. For the objects with reliable detections of density diagnostics but without the aforementioned temperature diagnostics, we derive the density by assuming \(T_{\rm e}=10000\pm 1000\) K. There are only a few objects in this last case: three slit positions of M 43 (observed by Simon-Diaz et al., 2011), two H ii regions of M 33 and another two of NGC 300 (observed by Toribio San Cipriano et al., 2016). The temperature dependence of the density diagnostics is negligible in these cases. All these objects show \(n_{\rm e}<1000\) cm\({}^{-3}\).
We analyze the \(n_{\rm e}\) determinations in Section 5, defining clear criteria to adopt a final representative value for each object. Finally, once \(n_{\rm e}\) is fixed, \(T_{\rm e}\) is calculated using the _getTemDen_ task of PyNeb.
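PyNeb's _getCrossTemDen_ iterates a \(T_{\rm e}\)-sensitive and an \(n_{\rm e}\)-sensitive diagnostic until both converge. A minimal fixed-point sketch of that idea, with purely illustrative toy inversion functions standing in for the real line-ratio inversions, is:

```python
import math

def cross_tem_den(te_from_ne, ne_from_te, te0=10_000.0, ne0=1_000.0,
                  tol=1e-3, max_iter=100):
    """Fixed-point iteration: alternately invert a Te-sensitive and an
    ne-sensitive line ratio until both values stop changing (relative tol)."""
    te, ne = te0, ne0
    for _ in range(max_iter):
        te_new = te_from_ne(ne)
        ne_new = ne_from_te(te_new)
        if abs(te_new - te) < tol * te and abs(ne_new - ne) < tol * ne:
            return te_new, ne_new
        te, ne = te_new, ne_new
    raise RuntimeError("no convergence")

# Toy, mildly coupled diagnostics (illustrative only, not real atomic physics):
te, ne = cross_tem_den(
    te_from_ne=lambda n: 9_000.0 + 100.0 * math.log10(n),
    ne_from_te=lambda t: 500.0 * t / 10_000.0,
)
print(f"Te = {te:.0f} K, ne = {ne:.0f} cm^-3")
```

In PyNeb the two callables correspond to `Diagnostics` objects evaluated on the observed line ratios; the toy functions here merely demonstrate why the mild mutual coupling makes the iteration converge in a few steps.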
The near infrared lines [S iii] \(\lambda 9069,9531\) can be affected by the telluric absorption bands (Stevenson, 1994; Noll et al., 2012), potentially introducing spurious results in \(T_{\rm e}\)([S iii]) if there is no strict control over this issue. Usually, the most affected line is [S iii] \(\lambda 9531\), which lies in a wavelength zone more contaminated by telluric absorption bands, although this effect may vary depending on the internal gas velocities; in the Orion Nebula, for example, [S iii] \(\lambda 9069\) is usually the most contaminated one (Baldwin et al., 1991; Mendez-Delgado et al., 2021). We have tried to keep strict control over the telluric absorptions, discarding the affected lines, in order to avoid spurious \(T_{\rm e}\)([S iii]) determinations. As a second check, in those objects where both lines were detected, we test the [S iii] \(\lambda 9531/\lambda 9069\) line intensity ratio. Both lines arise from the same \({}^{1}D_{2}\) upper atomic level; therefore, their intensity ratio must be equal to 2.47 (Froese Fischer et al., 2006), regardless of the physical conditions of the gas. We discard those objects where [S iii] \(I(\lambda 9531)/I(\lambda 9069)>2.47\) beyond the observational uncertainties, as this indicates possible absorption of [S iii] \(\lambda 9069\). However, since no telluric corrections of any kind were made, except in the Mendez-Delgado et al. (2021, 2022) spectra, we cannot guarantee that _all_ DESIRED spectra are free of telluric absorption effects on their [S iii] \(\lambda 9069,9531\) lines.
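The [S iii] \(\lambda 9531/\lambda 9069\) consistency check described above can be automated as a small flagging routine (a sketch; the theoretical ratio of 2.47 is the value quoted in the text, and the tolerance argument stands in for the observational uncertainty of the measured ratio):

```python
THEO_SIII_RATIO = 2.47  # fixed by the transition probabilities (same upper level)

def siii_telluric_check(f9531, f9069, err_ratio):
    """Flag telluric absorption using [S iii] I(9531)/I(9069).
    A ratio above (below) 2.47 beyond the uncertainty suggests that
    lambda9069 (lambda9531) is partially absorbed."""
    ratio = f9531 / f9069
    if ratio > THEO_SIII_RATIO + err_ratio:
        return "lambda9069 suspect"
    if ratio < THEO_SIII_RATIO - err_ratio:
        return "lambda9531 suspect"
    return "ok"

print(siii_telluric_check(3.10, 1.00, 0.15))  # 3.10 >> 2.47: 9069 likely absorbed
print(siii_telluric_check(2.40, 1.00, 0.15))  # consistent with 2.47
```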
Although the [O ii] \(\lambda\lambda 7319+20+30+31/\lambda\lambda 3726+29\) and/or [S ii] \(\lambda\lambda 4069+76/\lambda\lambda 6716+31\) line ratios were measured in many objects, we prefer not to use them in the determination of the final adopted \(T_{\rm e}\) of each object. As we discuss in Section 6.1, those line ratios are very sensitive to \(n_{\rm e}\), and the inferred \(T_{\rm e}\)([O ii]) and \(T_{\rm e}\)([S ii]) are affected by the presence of high-density clumps within the ionized nebulae.
## 4 Photoionization models
To explore the theoretical temperature relations in the absence of temperature fluctuations (\(t^{2}=0\)), we select the photoionization models of giant H ii regions from the Mexican Million Models database\({}^{4}\) (Morisset et al., 2015), built for the BOND project (Vale Asari et al., 2016) using Cloudy v17.01 (Ferland et al., 2017). We adopt the same selection criteria as Amayo et al. (2021), which consider starburst ages lower than 6 Myr, ionization-bounded and density-bounded models selected by a cut of 70 per cent of the H\(\beta\) flux, and a selection of realistic N/O, \(U\) and O/H values (Vale Asari et al., 2016). We also adopt the BPT cut defined by Amayo et al. (2021) in their equation (3). Since we do not intend to study the temperature relations in PNe or RNe beyond analyzing their differences with respect to the results for H ii regions, we do not adopt any additional set of models.
Footnote 4: [https://sites.google.com/site/mexicanmillionmodels/](https://sites.google.com/site/mexicanmillionmodels/)
## 5 The density structure of ionized nebulae
Several line intensity ratios emitted from atomic levels close in energy are sensitive to \(n_{\rm e}\) due to their different collisional excitation and deexcitation rates. As shown in the left panel of Fig. 2, the density dependence of several optical and infrared line ratios is not linear, and the ratios have different ranges of validity. The [S ii] \(\lambda 6731/\lambda 6716\) line intensity ratio is one of the most used density diagnostics in the literature due to its observational accessibility. Therefore, it will be used in this work as the main reference in the comparisons with other density diagnostics. In order to estimate the utility of a density diagnostic, it is convenient to study its sensitivity. We define this quantity as the variation of the line intensity ratio with \(n_{\rm e}\), mathematically represented by the derivative of the diagnostic with respect to the density.
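As a toy illustration of this definition, the sketch below evaluates the numerical derivative of a schematic doublet ratio whose low- and high-density limits and critical density are loosely inspired by [S ii] \(\lambda 6731/\lambda 6716\) (the numbers are illustrative assumptions, not the actual atomic-data values):

```python
def toy_doublet_ratio(ne, r_low=0.68, r_high=2.3, n_crit=1.5e3):
    """Schematic density dependence of a doublet ratio: it moves from its
    low-density to its high-density limit around a critical density."""
    return (r_low + r_high * ne / n_crit) / (1.0 + ne / n_crit)

def sensitivity(diag, ne, frac=0.01):
    """Numerical derivative d(ratio)/d(ne): the sensitivity defined above."""
    dn = frac * ne
    return (diag(ne + dn) - diag(ne - dn)) / (2.0 * dn)

# Sensitive near the critical density, saturated well above it:
s_near = sensitivity(toy_doublet_ratio, 1.0e3)
s_sat = sensitivity(toy_doublet_ratio, 1.0e6)
print(f"{s_near:.2e}  {s_sat:.2e}")
```

The strong contrast between the two values is the quantitative content of the right panel of Fig. 2: each diagnostic is useful only in the density range where its derivative is appreciably non-zero.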
The sensitivity of the \(n_{\rm e}\)-diagnostics and, in general, the relationship between the inferred physical conditions and the observed line intensity ratios depend on the atomic transition probabilities and collision strengths. Several studies have analyzed the behavior of these parameters with optical spectra (Stasinska et al., 2013; Juan de Dios and Rodriguez, 2017; Morisset et al., 2020; Juan de Dios and Rodriguez, 2021; Mendoza et al., 2023). After detecting and discarding discrepant data sets, Morisset et al. (2020) and Mendoza et al. (2023) estimate uncertainties of \(\sim 10\) per cent in the radiative atomic rates for ions like [O ii], [S ii], [Fe iii], [Cl iii] and [Ar iv]. We minimize the presence of errors in the atomic data by considering the results of the aforementioned studies and avoiding the use of discrepant atomic data sets. However, the impact of potential errors cannot be completely neglected, since the available calculations are few in number for some ions.
A comparison of the relative sensitivity of the different density diagnostics with respect to the widely used [S ii] \(\lambda 6731/\lambda 6716\) is shown in the right panel of Fig. 2. The first notable result is that [O ii] \(\lambda 3726/\lambda 3729\) and [S ii] \(\lambda 6731/\lambda 6716\) are equivalent diagnostics in terms of sensitivity, without significant differences. This figure also shows that [Cl iii] \(\lambda 5538/\lambda 5518\), [Fe iii] \(\lambda 4658/\lambda 4702\) and [Ar iv] \(\lambda 4740/\lambda 4711\) are not sensitive diagnostics when \(n_{\rm e}<10^{3}\) cm\({}^{-3}\). However, beyond this threshold, these diagnostics become comparatively more and more sensitive, since [S ii] \(\lambda 6731/\lambda 6716\) decreases its sensitivity. In contrast, [O iii] \(\lambda 88\mu\)m/\(\lambda 52\mu\)m shows a higher sensitivity when \(n_{\rm e}<10^{3}\) cm\({}^{-3}\), but beyond this value its sensitivity decreases to a greater extent than that of [S ii] \(\lambda 6731/\lambda 6716\). When \(n_{\rm e}\approx 10^{5.3}\) cm\({}^{-3}\), \(\Delta\left(I_{\lambda 6731}/I_{\lambda 6716}\right)/\Delta n_{\rm e}\approx 0\). On the other hand, at fixed temperature, [O ii] \(\lambda\lambda 7319+20+30+31/\lambda\lambda 3726+29\) and [S ii] \(\lambda\lambda 4069+76/\lambda\lambda 6716+31\) have a very high density sensitivity over the entire range \(10^{2}\) cm\({}^{-3}<n_{\rm e}<10^{6}\) cm\({}^{-3}\). These line intensity ratios will be discussed in detail in Section 6.1.
If the nebulae had a homogeneous density, the different diagnostics should converge to the same value as long as they are in their density-sensitive range. However, the emissions of the different ions can come from different volumes of ionized gas, and the nebulae may contain density inhomogeneities. In fact, the presence of high-density clumps has been revealed by high-resolution images of several nearby photoionized nebulae (see e.g. Borkowski et al., 1993; O'Dell and Wong, 1996; O'Dell et al., 2002). Besides filamentary structures, jets of matter and gas flows due to photoionization are capable of compressing the gas, increasing the local density. Within H ii regions, ongoing star formation can give rise to HHs (Herbig, 1950; Haro, 1952) and proplyds (O'Dell et al., 1993), which are associated with clumps of ionized gas that can reach density values of up to \(\sim 10^{6}\) cm\({}^{-3}\) (Henney and O'Dell, 1999). Although the high-density inclusions may represent a small fraction of the gas volume, the different collisional deexcitation rates of the diagnostics can bias them towards higher or lower values depending on their particular density-sensitivity regime. Moreover, since refractory elements such as Fe are mostly depleted onto dust grains within ionized environments, the [Fe iii] \(\lambda 4702/\lambda 4658\) ratio may be more easily detected in shock-compressed higher-density areas where dust destruction is taking place, such as HH objects (Mendez-Delgado et al., 2021).
Fig. 3 compares \(n_{\rm e}\)([S ii]) and the \(n_{\rm e}\) values obtained using the rest of the diagnostics for all the DESIRED nebulae. The [S ii] \(\lambda 6731/\lambda 6716\) and [O ii] \(\lambda 3726/\lambda 3729\) diagnostics show an excellent agreement for the whole sample. This is not surprising, since the O\({}^{+}\) and S\({}^{+}\) ions coexist in the volume of low degree of ionization and both diagnostics show essentially the same sensitivity (see the right panel of Fig. 2). In some PNe, due to the possible existence of cold clumps of high metallicity (Liu et al., 2000, 2006; Garcia-Rojas et al., 2016, 2022; Richer et al., 2022), we may expect an important recombination contribution to the observed [O ii] lines (Barlow et al., 2003; Wesson et al., 2018). This is especially important in the cases of Ou5 and Abell 46 (Corradi et al., 2015), the PNe with the largest ADF of the whole sample and the only ones with ADF\(>\)5, where the density obtained from [O ii] \(\lambda 3726/\lambda 3729\) is clearly higher than that obtained from [S ii] \(\lambda 6731/\lambda 6716\). In the rest of the photoionized nebulae in the database, this phenomenon, if present, has a negligible impact on these density diagnostics.
The comparison of the [S ii] \(\lambda 6731/\lambda 6716\) values and those of [Cl iii] \(\lambda 5538/\lambda 5518\), [Fe iii] \(\lambda 4702/\lambda 4658\) and [Ar iv] \(\lambda 4740/\lambda 4711\) reveals significant deviations from a 1:1 relation in those objects where the first diagnostic gives \(n_{\rm e}<1000\) cm\({}^{-3}\). As expected from Fig. 2, this is because the aforementioned diagnostics are at their low density limit, where their sensitivity is practically negligible. In the low density limit, the line intensity ratios should converge to constant values mainly fixed by the atomic collision strengths. From the DESIRED data we obtain [Cl iii] \(\lambda 5538/\lambda 5518=0.74\pm 0.05\), [Fe iii] \(\lambda 4702/\lambda 4658=0.26\pm 0.04\) and [Ar iv] \(\lambda 4740/\lambda 4711=0.79\pm 0.07\), consistent with the predictions of the selected atomic data (see Table 7), which rules out significant errors in them.
The [Cl iii] \(\lambda 5538/\lambda 5518\), [Fe iii] \(\lambda 4702/\lambda 4658\) and [Ar iv] \(\lambda 4740/\lambda 4711\) line ratios become good density indicators for \(n_{\rm e}>10^{3}\) cm\({}^{-3}\), showing a higher sensitivity than [S ii] \(\lambda 6731/\lambda 6716\) (see Fig. 2). For this range of \(n_{\rm e}\), Fig. 3 shows a general offset between [S ii] \(\lambda 6731/\lambda 6716\) and the rest of the aforementioned diagnostics. This is due to the combination of two phenomena: [S ii] \(\lambda 6731/\lambda 6716\) is more sensitive to the lower-density areas within the nebulae, while the rest of the indicators behave inversely. Furthermore, as density increases beyond \(n_{\rm e}>10^{4}\) cm\({}^{-3}\), the accuracy of [S ii] \(\lambda 6731/\lambda 6716\) decreases, enlarging the error bars. It is noticeable that the [Fe iii], [Cl iii] and [Ar iv] density diagnostics show rather consistent trends, despite arising from the low, intermediate and very high ionization volumes, respectively. This shows that the different \(n_{\rm e}\)-sensitivity ranges of the diagnostics dominate over the possible density stratification in the nebulae, except for a few scattered objects of the sample, as also shown in Fig. 4.
Considering the previous discussion and in agreement with Mendez-Delgado et al. (2023), we propose the following criteria to adopt a representative density for chemical abundance determinations using optical spectra:
1. If \(n_{\rm e}\)([S ii]) \(<\) 100 cm\({}^{-3}\), we adopt the low density limit (\(n_{\rm e}<100\) cm\({}^{-3}\)).
2. If 100 cm\({}^{-3}\) \(<n_{\rm e}\)([S ii]) \(<\) 1000 cm\({}^{-3}\), we adopt the average value of \(n_{\rm e}\)([S ii]) and \(n_{\rm e}\)([O ii]).
3. If \(n_{\rm e}\)([S ii]) \(>\) 1000 cm\({}^{-3}\), we take the average values of \(n_{\rm e}\)([S ii]), \(n_{\rm e}\)([O ii]), \(n_{\rm e}\)([Cl iii]), \(n_{\rm e}\)([Fe iii]) and \(n_{\rm e}\)([Ar iv]) when available.
4. For the HH objects we adopt \(n_{\rm e}\)([Fe iii]), while in the case of the proplyd 170-337, we adopt the reference value derived from the [S ii] \(\lambda 4069/\lambda 4076\) line intensity ratio.
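Criteria (i)-(iii) can be expressed as a simple selection function (a sketch; criterion (iv) is object-specific and omitted, and the value returned under criterion (i) is an upper limit rather than a measurement):

```python
def adopt_density(ne_sii, ne_oii=None, ne_cliii=None, ne_feiii=None,
                  ne_ariv=None):
    """Representative ne (cm^-3) following criteria (i)-(iii); criterion (iv)
    (HH objects and the proplyd) is handled case by case and omitted here."""
    if ne_sii < 100.0:                                   # (i) low-density limit
        return 100.0  # upper limit: ne < 100 cm^-3
    higher = [n for n in (ne_cliii, ne_feiii, ne_ariv) if n is not None]
    if ne_sii < 1000.0 or not higher:                    # (ii) average [S ii], [O ii]
        pair = [ne_sii] + ([ne_oii] if ne_oii is not None else [])
        return sum(pair) / len(pair)
    values = [ne_sii] + ([ne_oii] if ne_oii is not None else []) + higher
    return sum(values) / len(values)                     # (iii) average all available

print(adopt_density(50.0))                  # low-density limit (upper limit)
print(adopt_density(500.0, ne_oii=600.0))   # mean of [S ii] and [O ii]
print(adopt_density(5000.0, ne_oii=5200.0, ne_cliii=6000.0, ne_ariv=5800.0))
```

An unweighted mean is used here for simplicity; weighting each diagnostic by its uncertainty would be a natural refinement.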
The resulting representative density values are shown in the bottom panel of Fig. 3. As discussed in Section 6.1, these criteria are far from perfect, but they are accurate enough to determine chemical abundances based on optical spectra. However, we discourage their use for the determination of feedback-related pressure terms or of abundances based on infrared fine-structure lines without further analysis.
In criterion (i) we consider the fact that all the density diagnostics analyzed in this work are insensitive at such low densities. If the average electron density is actually in this range of values, its impact on the determination of temperature and chemical abundances is negligible (Osterbrock and Ferland, 2006). However, another method is recommended for those who require precise determinations of the gas pressure (dependent on density) in low-density H ii regions, relevant to phenomena such as stellar feedback (e.g. McLeod et al., 2020; Barnes et al., 2021). As a suggestion, considering the radiative and collisional atomic transitions from Bautista et al. (2015), the [Fe ii] \(\lambda 8617/\lambda 9267\) line intensity ratio should vary from \(\sim 110\) at \(n_{\rm e}=1\) cm\({}^{-3}\) to \(\sim 54\) at \(n_{\rm e}=100\) cm\({}^{-3}\) when assuming \(T_{\rm e}=10000\) K. These lines arise from the lower quartet levels of Fe\({}^{+}\) and should not have significant fluorescence contributions (Baldwin et al., 1996; Verner et al., 2000). Mendez-Delgado et al. (2021, 2022) have checked the adequacy of this density diagnostic in higher density regions. However, it is highly dependent on the atomic data used (Mendoza et al., 2023).
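Using only the two anchor values quoted above (\(\sim\)110 at \(n_{\rm e}=1\) cm\({}^{-3}\) and \(\sim\)54 at \(n_{\rm e}=100\) cm\({}^{-3}\)), the [Fe ii] ratio can be inverted with a rough log-linear interpolation; a real determination should, of course, solve the statistical equilibrium with the Bautista et al. (2015) atomic data rather than rely on this two-point sketch:

```python
# Anchor values quoted in the text for Te = 10^4 K:
R_AT_1, R_AT_100 = 110.0, 54.0   # [Fe ii] I(8617)/I(9267) at ne = 1, 100 cm^-3

def ne_from_feii_ratio(ratio):
    """Invert the ratio assuming it varies linearly with log10(ne)
    between the two anchors (a rough interpolation, not an atomic model)."""
    slope = (R_AT_100 - R_AT_1) / 2.0   # change per dex over log10(ne) = 0..2
    log_ne = (ratio - R_AT_1) / slope
    return 10 ** log_ne

print(f"{ne_from_feii_ratio(82.0):.0f} cm^-3")  # midpoint ratio -> ~10 cm^-3
```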
Criterion (ii) is based on the fact that [Cl iii] \(\lambda 5538/\lambda 5518\), [Fe iii] \(\lambda 4702/\lambda 4658\) and [Ar iv] \(\lambda 4740/\lambda 4711\) are quite insensitive to densities smaller than 1000 cm\({}^{-3}\). In the presence of high-density inclusions within the nebulae, the densities adopted under this criterion are underestimated, as are those of criterion (i). This will be demonstrated in Section 6.1. The impact of such an underestimate is rather limited in optical studies, being constrained to \(\sim 0.1\) dex when using [O ii] \(I(\lambda\lambda 7319+20+30+31)\) to estimate the O\({}^{+}\)/H\({}^{+}\) abundance. However, it can introduce large systematic errors when using IR fine-structure CELs, where a density underestimate of \(\sim 300\) cm\({}^{-3}\) can affect \(T_{\rm e}\) determinations by several thousand Kelvin (see fig. 3 from Lamarche et al., 2022).
Criterion (iii) allows us to obtain more precise values of the electron density. Although using \(n_{\rm e}\)([S ii] \(\lambda 6731/\lambda 6716\)) or \(n_{\rm e}\)([O ii] \(\lambda 3726/\lambda 3729\)) as a single diagnostic is consistent, within the error bars, with the adopted value in most of the denser nebulae -given the high quality of the DESIRED spectra- the uncertainty of these diagnostics becomes larger as the density increases. As shown in the bottom panel of Fig. 3, a systematic underestimate of the median density values is noticeable when \(n_{\rm e}\)([S ii] \(\lambda 6731/\lambda 6716\)) approaches values \(\sim 10^{4}\) cm\({}^{-3}\), especially concerning PNe. It is difficult to establish whether this behaviour is linked to a density stratification in these objects, as some works suggest (see e.g. Rauber et al., 2014), or whether it is just a consequence of the different sensitivities of the compared diagnostics. Since this also affects some HHs, the different sensitivity of the diagnostics seems to dominate the observed trend. In this range of densities, \(T_{\rm e}\)([N ii] \(\lambda 5755/\lambda 6584\)) depends appreciably on \(n_{\rm e}\). Therefore, large error bars in \(n_{\rm e}\) lead to inaccurate values of \(T_{\rm e}\)([N ii] \(\lambda 5755/\lambda 6584\)) and, finally, of the ionic abundances.
Criterion (iv) is applied to photoionized HHs because indicators based on [Fe iii] lines are sensitive to very high densities, but also because the destruction of Fe-bearing dust particles by shocks enhances the emission of [Fe iii] lines. In these cases, we adopt the values obtained with a maximum-likelihood procedure using several [Fe iii] lines. This method provides values fully consistent with \(n_{\rm e}\)([Fe iii] \(\lambda 4702/\lambda 4658\)). In Fig. 3, we can see that density determinations based on [Fe iii] lines -although showing larger error bars- are marginally consistent with \(n_{\rm e}\)([S ii] \(\lambda 6731/\lambda 6716\)) in most cases, except for HH 514, the proplyd 170-337 (Mendez-Delgado et al., 2022) and NGC 7027 (Sharpee et al., 2007). In these objects, the electron density is so high that a large fraction of the CEL emission is produced through the much weaker auroral lines instead of the nebular ones. Unfortunately, because of the large dust depletion and low ionization degree of proplyd 170-337, \(n_{\rm e}\)([Cl iii] \(\lambda 5538/\lambda 5518\)), \(n_{\rm e}\)([Fe iii] \(\lambda 4702/\lambda 4658\)) and \(n_{\rm e}\)([Ar iv] \(\lambda 4740/\lambda 4711\)) cannot be derived for this object. However, in this case the density can be determined from the [S ii] \(\lambda 4069/\lambda 4076\) ratio.
## 6 Temperature structure
In this section, we analyze the temperature relations for the different ionization zones in extragalactic H ii regions. Firstly, we investigate the dependence of the low ionization temperature diagnostics \(T_{\rm e}\)([O ii]), \(T_{\rm e}\)([S ii]) and \(T_{\rm e}\)([N ii]) on the electron density. Secondly, we study the temperature relations obtained directly from the observations. In all figures of this section, we use the parameter \(P=\frac{[{\rm O\,III}]\,I(5007+4959)}{[{\rm O\,III}]\,I(5007+4959)+[{\rm O\,II}]\,I(3726+3729)}\) defined by Pilyugin (2001) as a proxy of the ionization degree of the gas.
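The excitation parameter \(P\) is a simple combination of the nebular [O iii] and [O ii] intensities; a direct implementation (taking the four line intensities in any common normalization, e.g. relative to H\(\beta\)) is:

```python
def pilyugin_p(i_4959, i_5007, i_3726, i_3729):
    """Excitation parameter P of Pilyugin (2001):
    R3 / (R3 + R2), with R3 = [O iii] and R2 = [O ii] nebular intensities."""
    r3 = i_4959 + i_5007
    r2 = i_3726 + i_3729
    return r3 / (r3 + r2)

# Equal [O iii] and [O ii] strengths give an intermediate ionization degree:
print(pilyugin_p(1.0, 3.0, 2.0, 2.0))  # 0.5
```

\(P\) runs from 0 (no [O iii] emission, low ionization) to 1 (no [O ii] emission, high ionization), and it is independent of the adopted flux normalization.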
Figure 2: Left panel: dependence of different line intensity ratios on the electron density \(n_{\rm e}\), considering \(T_{\rm e}=10000\) K and the atomic data from Table 7. The line intensity ratios have been normalized to the expected values at \(n_{\rm e}=1\) cm\({}^{-3}\). Right panel: comparison between the density sensitivity of the different line intensity ratios and that of [S ii] \(\lambda 6731/\lambda 6716\), considering \(T_{\rm e}=10000\) K. The density sensitivity is defined as \(\Delta\left(I_{\lambda_{1}}/I_{\lambda_{2}}\right)/\Delta n_{\rm e}\). When \(n_{\rm e}\sim 10^{5.3}\) cm\({}^{-3}\), \(\Delta\left(I_{\lambda 6731}/I_{\lambda 6716}\right)/\Delta n_{\rm e}\approx 0\), inducing an asymptote.
### \(T_{\rm e}\)([O ii]), \(T_{\rm e}\)([S ii]), and \(T_{\rm e}\)([N ii])
Based on the results of photoionization models (Campbell et al., 1986; Pilyugin et al., 2006), it is generally assumed that \(T_{\rm e}\)([O ii]) \(\sim T_{\rm e}\)([S ii]) \(\approx T_{\rm e}\)([N ii]). This is also predicted by the BOND models (Sec. 4). However, this is rarely satisfied observationally in extragalactic H ii regions (Perez-Montero & Diaz, 2003; Kennicutt et al., 2003; Hagele et al., 2006, 2008; Esteban et al., 2009; Bresolin et al., 2009; Berg et al., 2015). \(T_{\rm e}\)([O ii]) and \(T_{\rm e}\)([S ii]) are estimated from the [O ii] \(\lambda\lambda\)7319+20+30+31/\(\lambda\lambda\)3726+29 and [S ii] \(\lambda\lambda\)4069+76/\(\lambda\lambda\)6716+31 line intensity ratios, which can be affected by several observational effects. The first line intensity ratio is highly dependent on the reddening correction as well as on the quality of the flux calibration of the spectrum, given the wide wavelength separation between the nebular and auroral lines. Moreover, \(\lambda\lambda\)7319+20+30+31 can be contaminated by telluric emissions. In the case of the latter line intensity ratio, the [S ii] auroral lines can be blended with the O ii \(\lambda\lambda\)4069.62, 4069.88, 4072.15, 4075.86 lines, which can represent more than 10 per cent of the total flux in some nebulae.

Figure 3: Comparison between the density derived from the [S ii] \(\lambda\)6731/\(\lambda\)6716 line intensity ratio and the densities derived from the rest of the diagnostics, including the average density estimated from the adopted criteria (bottom panel). The solid line represents a 1:1 relation. Down arrows indicate upper limits when the value is at the low density limit (\(n_{\rm e}<100\) cm\({}^{-3}\)).
In addition to the possible observational effects commented on above, some other physical phenomena have been invoked to explain the discrepancies between \(T_{\rm e}(\rm[O\,ii])\), \(T_{\rm e}(\rm[S\,ii])\) and \(T_{\rm e}(\rm[N\,ii])\), such as:
* Mismatch between the temperature of the volumes of O\({}^{+}\), N\({}^{+}\) and S\({}^{+}\).
* Recombination contribution to the CELs.
* Temperature fluctuations.
* Density variations.
The high quality of the DESIRED spectra permits us to minimize the effect of observational errors on \(T_{\rm e}\) determinations and to explore other physical phenomena that may cause the discrepancies. We derive \(T_{\rm e}(\rm[O\,ii])\), \(T_{\rm e}(\rm[S\,ii])\) and \(T_{\rm e}(\rm[N\,ii])\) adopting the density criteria mentioned in Section 5, in practice criteria (i) or (ii) in most cases. The adoption of \(n_{\rm e}(\rm[S\,ii]\,\lambda 6731/\lambda 6716)\) or \(n_{\rm e}(\rm[O\,ii]\,\lambda 3726/\lambda 3729)\) is the standard procedure for the analysis of extragalactic H ii regions and therefore our results can be directly compared with other works.
Fig. 5 shows the comparison between \(T_{\rm e}(\rm[O\,ii])\), \(T_{\rm e}(\rm[S\,ii])\) and \(T_{\rm e}(\rm[N\,ii])\) derived with the standard procedure of adopting \(n_{\rm e}(\rm[S\,ii]\,\lambda 6731/\lambda 6716)\) and \(n_{\rm e}(\rm[O\,ii]\,\lambda 3726/\lambda 3729)\) as the representative electron density. The blue lines correspond to the best linear fits, which are presented in Table 1. Despite the quality of the data, there are a few outlier regions: NGC 5471, H 37 (Esteban et al., 2020), N 66A (Dominguez-Guzman et al., 2022) and H II-2 (Lopez-Sanchez et al., 2007), with very high values of \(T_{\rm e}(\rm[S\,ii])\), and NGC 2363 (Esteban et al., 2009), with an extremely high value of \(T_{\rm e}(\rm[O\,ii])\). These regions may be affected by particular physical phenomena or by some unidentified contamination of the auroral lines, such as those described previously. Although they are included in Fig. 5, they are not considered in the linear fits shown in Table 1. We will focus on the global trends.
As can be seen in Fig. 5, \(T_{\rm e}(\rm[O\,ii])\) and \(T_{\rm e}(\rm[S\,ii])\) are higher than \(T_{\rm e}(\rm[N\,ii])\) over most of the range of this last parameter, in agreement with previous findings (Esteban et al., 2009; Bresolin et al., 2009; Rogers et al., 2021). It should be noted that \(T_{\rm e}(\rm[O\,ii])-T_{\rm e}(\rm[N\,ii])\) and \(T_{\rm e}(\rm[S\,ii])-T_{\rm e}(\rm[N\,ii])\) increase as a function of temperature, as reflected in the fit parameters given in Table 1. On the other hand, \(T_{\rm e}(\rm[S\,ii])\) versus \(T_{\rm e}(\rm[O\,ii])\) follows an almost 1:1 relation, with a slight offset towards higher \(T_{\rm e}(\rm[O\,ii])\) values.
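Linear fits of this kind, with the outlier regions excluded, can be sketched with standard tools; here with a single sigma-clipping pass on synthetic data (all numbers below are invented for illustration, not DESIRED measurements):

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic Te([N ii]) vs Te([S ii]) sample in kelvin (values invented),
# with two artificial outliers mimicking regions with very high Te([S ii])
t_nii = rng.uniform(7000.0, 14000.0, 40)
t_sii = 1.1 * t_nii - 500.0 + rng.normal(0.0, 300.0, 40)
t_sii[:2] += 8000.0

def clipped_linfit(x, y, nsigma=3.0):
    """Least-squares line y = a*x + b, dropping points whose residual
    exceeds nsigma residual standard deviations (single clipping pass)."""
    a, b = np.polyfit(x, y, 1)
    res = y - (a * x + b)
    keep = np.abs(res) < nsigma * res.std()
    return np.polyfit(x[keep], y[keep], 1)

a, b = clipped_linfit(t_nii, t_sii)
print(f"slope = {a:.2f}, intercept = {b:.0f} K")
```

The clipping step recovers a slope close to the input 1.1 despite the two injected outliers, which would otherwise bias the fit.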
#### 6.1.1 Are these temperatures different?
The differences between the temperatures determined from CELs of different ions are usually explained because they are representative of zones with different ionization conditions. This may result from differences in the ionization potentials, in the spectral distribution of the ionizing radiation, and sometimes from absorption edges in the ionizing radiation or from charge-exchange and dielectronic-recombination contributions (Stasinska, 1980; Garnett, 1992). Although there are small differences in the ionization energy ranges of S\({}^{+}\), O\({}^{+}\) and N\({}^{+}\) and in some other properties, photoionization models predict that this should not have relevant effects on the difference between \(T_{\rm e}(\rm[S\,ii])\), \(T_{\rm e}(\rm[O\,ii])\) and \(T_{\rm e}(\rm[N\,ii])\). The exceptions may be very high metallicity regions, where the internal temperature gradients can be very marked (Stasinska, 2005). However, as can be inferred from Fig. 5, the differences in the top panels are higher for the regions of higher degree of ionization, which is the most typical case at lower metallicities. Furthermore, although the coexisting volumes of S\({}^{+}\) and O\({}^{+}\) usually differ much more than those of N\({}^{+}\) and O\({}^{+}\) (e.g. see fig. 2 from Levesque et al., 2010), the
Figure 4: Comparison between the density derived from the [Cl iii] \(\lambda\)5538/\(\lambda\)5518 line intensity ratio and those from [Ar iv] \(\lambda\)4740/\(\lambda\)4711 and [Fe iii] \(\lambda\)4702/\(\lambda\)4658. The symbol code is the same as in Fig. 1. Note the good agreement when considering regions with \(n_{\rm e}>1000\) cm\({}^{-3}\) (which leaves out most extragalactic H ii regions and RNe, blue dots and magenta crosses, respectively), as they are in their optimal sensitivity range, regardless of the degree of ionization of the ion.
first pair of ions shows better consistency between their respective \(T_{\mathrm{e}}\) values, as shown in the bottom panel of Fig. 5. This result suggests that differences in the ionization structure alone do not explain the differences between \(T_{\mathrm{e}}(\mathrm{[S\,II]})\), \(T_{\mathrm{e}}(\mathrm{[O\,II]})\) and \(T_{\mathrm{e}}(\mathrm{[N\,II]})\).
It is sometimes argued in the literature that part of the optical [S ii] emission can originate in the photodissociation region (PDR), where H and He are mostly neutral, a reason invoked to discard \(T_{\mathrm{e}}(\mathrm{[S\,II]})\). This argument has also been found together with the adoption of \(n_{\mathrm{e}}(\mathrm{[S\,II]}\,\lambda 6731/\lambda 6716)\) as a valid density estimator, even for the entire nebula (e.g. Esteban et al., 2020). It is clear that this cannot explain the differences between \(T_{\mathrm{e}}(\mathrm{[S\,II]})\) and \(T_{\mathrm{e}}(\mathrm{[N\,II]})\) since, although there may be some S\({}^{+}\) in the volume where H and He are neutral, the emission of [S ii] lines requires numerous collisions with free electrons, which can only be supplied in sufficient quantities by the ionization of H and He (O'Dell et al., 2023). Therefore, [S ii] emission should arise from the ionized volume and the surrounding areas of the ionization front. Exceptions may appear when there are shocks in the ionization front.
#### 6.1.2 Recombination contributions?
Rubin (1986) pointed out the possibility of significant recombination contributions to the atomic levels that produce optical CELs of N and O ions. To first order, one would expect recombinations to be more important in regions of higher metallicity (Stasinska, 2005), where the temperature is lower. This is the opposite of the behavior of our observations shown in Fig. 5. A complex question is to know in what proportion recombinations affect the inferred \(T_{\mathrm{e}}(\mathrm{[S\,II]})\), \(T_{\mathrm{e}}(\mathrm{[O\,II]})\) and \(T_{\mathrm{e}}(\mathrm{[N\,II]})\) in the analyzed extragalactic H ii regions, as recombination contributions can affect both the auroral and nebular [O ii] lines, whereas they are expected to affect only the auroral [N ii] line. To clarify this, we use the photoionization models described in Section 4. These models consider the recombination contributions to the [O ii] and [N ii] lines using the recombination coefficients calculated by Pequignot et al. (1991), Fang et al. (2011, 2013) and Storey et al. (2017).
In Fig. 6 we show that when the recombination contribution is relevant, the measured [O ii] \(\lambda\lambda 7319+20+30+31/\lambda\lambda 3726+29\) line intensity ratio tends to be comparatively more enhanced than [N ii] \(\lambda 5755/\lambda\lambda 6548+84\). This implies that, if there are recombination contributions (dielectronic plus radiative) to the [O ii] and [N ii] CELs, we would expect \(T_{\mathrm{e}}(\mathrm{[O\,II]})>T_{\mathrm{e}}(\mathrm{[N\,II]})\) in most cases.
To date, there is no evidence of relevant recombination contributions to the [S ii] CELs. Furthermore, there is also a lack of calculations of effective recombination coefficients for this ion. However, potential recombination contributions to the [S ii] CELs
Figure 5: Relations between \(T_{\mathrm{e}}(\mathrm{[O\,II]})\), \(T_{\mathrm{e}}(\mathrm{[S\,II]})\) and \(T_{\mathrm{e}}(\mathrm{[N\,II]})\) derived by adopting \(n_{\mathrm{e}}(\mathrm{[S\,II]}\,\lambda 6731/\lambda 6716)\) and \(n_{\mathrm{e}}(\mathrm{[O\,II]}\,\lambda 3726/\lambda 3729)\) in the extragalactic H ii regions of the sample. The color of the points represents the value of their \(P\) parameter (Pilyugin, 2001, see text), which can be used as a proxy of the ionization degree of the nebulae. The blue solid line represents the linear fit to the data. The black solid line represents a 1:1 linear relation.
can be tested by assuming that this ion has the same electronic configuration as [O ii]. Therefore, as a first approximation, the recombination contribution to the \({}^{2}P\) and \({}^{2}D\) levels of [S ii] would be similar to that of [O ii] weighted by the S\({}^{2+}\)/O\({}^{2+}\) abundance ratio. In Fig. 7 we show that the potential recombinations in this case are negligible when \(T_{\rm e}([\rm S\,ii])>7000\) K, which is the case for all our data (see Fig. 5).
Considering the above, if the difference between \(T_{\rm e}([\rm O\,ii])\) and \(T_{\rm e}([\rm N\,ii])\) were actually produced by recombinations, it would increase as a function of the intensity of the O ii RLs. However, Fig. 8 demonstrates that \(T_{\rm e}([\rm O\,ii])-T_{\rm e}([\rm N\,ii])\) and the intensity of the O ii V1 RLs (adopted from Mendez-Delgado et al., 2023) do not correlate, ruling out significant recombination contributions. Most importantly, if \(T_{\rm e}([\rm O\,ii])\) were affected by recombinations, the close 1:1 relation between \(T_{\rm e}([\rm O\,ii])\) and \(T_{\rm e}([\rm S\,ii])\) shown in Fig. 5 would be difficult to explain, given the predictions of Fig. 7. Therefore, recombination effects on the [S ii], [O ii] and [N ii] CELs do not seem to explain the observed differences in their corresponding temperatures.
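The absence of a correlation of this kind can be quantified with a plain Pearson coefficient; a sketch on synthetic, uncorrelated mock data (all values invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(7)

# Mock data mimicking the comparison: Te([O ii]) - Te([N ii]) differences (K)
# against O ii V1 RL intensities (arbitrary units), drawn independently
dT = rng.normal(500.0, 800.0, 35)
i_oii_v1 = rng.uniform(0.5, 3.0, 35)

r = np.corrcoef(dT, i_oii_v1)[0, 1]
print(f"Pearson r = {r:+.2f}")  # |r| near 0: no significant correlation
```
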
#### 6.1.3 Temperature inhomogeneities?
Peimbert (1967) introduced the formalism of internal temperature inhomogeneities in ionized nebulae, quantified by the root mean square temperature fluctuations parameter (\(t^{2}\)). In the presence of such fluctuations in the volume where S\({}^{+}\), O\({}^{+}\) and N\({}^{+}\) coexist, we would expect \(T_{\rm e}(\,\)[O ii]\()\)\(\geq\)\(T_{\rm e}(\,\)[N ii]\()\)\(\geq\)\(T_{\rm e}(\,\)[S ii]\()\), as a consequence of the different excitation energies of the atomic levels involved (see equation 15 of Peimbert, 1967). However, this is not the case in the observed trends shown in Fig. 5, in agreement with the recent results by Mendez-Delgado et al. (2023). Those authors find that although the effects of \(t^{2}\) are evident in the high-ionization volume of nebulae, they seem to be absent in the low-ionization one.
#### 6.1.4 Density inhomogeneities
The [O ii] \(\lambda\lambda 7319+20+30+31/\lambda\lambda 3726+29\) and [S ii] \(\lambda\lambda 4069+76/\lambda\lambda 6716+31\) line intensity ratios are highly dependent on density, as shown in Fig. 2. When \(T_{\rm e}\) is fixed, the \(n_{\rm e}\)-sensitivity of the aforementioned line intensity ratios is larger than that of [S ii] \(\lambda 6731/\lambda 6716\), [O ii] \(\lambda 3726/\lambda 3729\), [Cl iii] \(\lambda 5538/\lambda 5518\), [Fe iii] \(\lambda 4702/\lambda 4658\) and [Ar iv] \(\lambda 4740/\lambda 4711\) in practically the entire range \(10^{2}\) cm\({}^{-3}<n_{\rm e}<10^{6}\) cm\({}^{-3}\). If there are density inhomogeneities in the nebulae, these line ratios will give higher densities than those derived from [S ii] \(\lambda 6731/\lambda 6716\) or [O ii] \(\lambda 3726/\lambda 3729\) (Peimbert, 1971; Rubin, 1989). The presence of high-density clumps biases [S ii] \(\lambda 6731/\lambda 6716\) and [O ii] \(\lambda 3726/\lambda 3729\) towards lower values of \(n_{\rm e}\), and this would impact the \(T_{\rm e}([\rm N\,ii])\) determination to a smaller extent than \(T_{\rm e}([\rm O\,ii])\) and \(T_{\rm e}([\rm S\,ii])\). This behavior is illustrated in Fig. 9, where it can be seen that \(T_{\rm e}([\rm N\,ii])\) is insensitive to density up to \(\sim 10^{4}\) cm\({}^{-3}\), two orders of magnitude beyond the case of \(T_{\rm e}([\rm O\,ii])\) or
Figure 8: \(T_{\rm e}([\rm O\,ii])-T_{\rm e}([\rm N\,ii])\) difference as a function of the intensity of the O ii recombination multiplet V1. The color of the symbols represents the value of their \(P\) parameter (as in Fig 5).
Figure 6: Comparison of the impact of recombination contributions on \(T_{\rm e}([\rm O\,ii])\) and \(T_{\rm e}([\rm N\,ii])\). These predictions are based on the photoionization models described in Section 4. The color of the symbols represents the value of their \(P\) parameter (as in Fig 5). The black solid line represents a 1:1 linear relation. In the case of recombination contributions, \(T_{\rm e}([\rm O\,ii])>T_{\rm e}([\rm N\,ii])\) in most cases.
Figure 7: Comparison of the derived \(T_{\rm e}([\rm S\,ii])\) without recombination contributions with the hypothetical case of recombination contributions in proportion to those of [O ii]. These predictions are based on the photoionization models described in Section 4. The color of the symbols represents the value of their \(P\) parameter (as in Fig 5). The black solid line represents a 1:1 linear relation. The recombination contributions are negligible for \(T_{\rm e}([\rm S\,ii])>7000\) K.
\(T_{\rm e}\)([S ii]). If the high-density gas is \(10^{2}\) cm\({}^{-3}<n_{\rm e}<10^{4}\) cm\({}^{-3}\) or if the density is higher but occupies a small fraction of the total ionized volume, \(T_{\rm e}\)([N ii]) may remain unaffected in contrast to what happens with \(T_{\rm e}\)([O ii]) and \(T_{\rm e}\)([S ii]).
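The contrast between the two kinds of ratios can be illustrated with a toy two-level, critical-density model; the \(n_{\rm crit}\) values below are assumed round numbers at \(T_{\rm e}\sim 10^{4}\) K and the model neglects the temperature dependence, so it is a sketch, not a substitute for a full multilevel calculation:

```python
def aur_to_neb(ne, nc_neb, nc_aur, r0=1.0):
    """Toy auroral-to-nebular ratio: each line is suppressed by collisional
    de-excitation above its critical density, so the ratio scales as
    (1 + ne/nc_neb) / (1 + ne/nc_aur)."""
    return r0 * (1.0 + ne / nc_neb) / (1.0 + ne / nc_aur)

# Assumed round critical densities (cm^-3) at Te ~ 1e4 K:
# [O ii]: nebular ~4e3, auroral ~5e6; [N ii]: nebular (6584) ~9e4, auroral ~1e7
boost_oii = aur_to_neb(1e4, 4e3, 5e6) / aur_to_neb(1e2, 4e3, 5e6)
boost_nii = aur_to_neb(1e4, 9e4, 1e7) / aur_to_neb(1e2, 9e4, 1e7)

print(f"[O ii] auroral/nebular boost, 1e2 -> 1e4 cm^-3: {boost_oii:.1f}x")
print(f"[N ii] auroral/nebular boost, 1e2 -> 1e4 cm^-3: {boost_nii:.2f}x")
```

In this toy picture the [O ii] auroral-to-nebular ratio rises by a factor of several between \(10^{2}\) and \(10^{4}\) cm\({}^{-3}\) while the [N ii] diagnostic barely moves, which is why high-density clumps inflate \(T_{\rm e}([\rm O\,ii])\) but leave \(T_{\rm e}([\rm N\,ii])\) essentially unchanged.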
As a conclusion, we propose that the presence of high-density inclusions within the volume observed in the spectra of extragalactic H ii regions naturally explains the behavior seen in Fig. 5, including the bottom panel, as [O ii] \(\lambda\lambda 7319+20+30+31/\lambda\lambda 3726+29\) and [S ii] \(\lambda\lambda 4069+76/\lambda\lambda 6716+31\) have a similar dependency on density\({}^{5}\). If we use the [O ii] and [S ii] \({}^{2}P^{0}/{}^{2}D^{0}\) (auroral-to-nebular) line intensity ratios as density diagnostics instead of temperature ones, by cross-matching them with \(T_{\rm e}([\rm N\,ii])\) (the _getCrossTemDen_ routine of PyNeb can be used), we obtain densities that are consistent with each other and systematically larger than \(n_{\rm e}([\rm S\,ii]\,\lambda 6731/\lambda 6716)\) (or \(n_{\rm e}([\rm O\,ii]\,\lambda 3726/\lambda 3729)\)), as shown in Fig. 10. \(n_{\rm e}([\rm S\,ii]\,\lambda 6731/\lambda 6716)\) underestimates the density by \(\sim 300\) cm\({}^{-3}\) on average, even when \(n_{\rm e}([\rm S\,ii]\,\lambda 6731/\lambda 6716)<10^{2}\) cm\({}^{-3}\). If \(T_{\rm e}([\rm N\,ii])\) is adopted, this underestimate of \(n_{\rm e}\) has a small impact on the calculation of chemical abundances based on optical CELs, except when the [S ii] and [O ii] auroral lines are used. However, the underestimate of the density is relevant for ionized-gas pressure determinations and for the correct interpretation of the properties that depend on this quantity. We remark that the presence of high-density inclusions and the underestimate of the density by \(n_{\rm e}([\rm S\,ii]\,\lambda 6731/\lambda 6716)\) and \(n_{\rm e}([\rm O\,ii]\,\lambda 3726/\lambda 3729)\) (see Fig. 3) are extremely relevant when using infrared fine-structure CELs (Lamarche et al., 2022). In analogy to what happens with temperature inhomogeneities in the optical CELs, density inhomogeneities may introduce systematic biases in the chemical abundances derived from infrared CELs. If \(n_{\rm e}\) is underestimated, ionic abundances are also underestimated.
Footnote 5: In fact, the \(n_{\rm e}\) dependency of [O ii] \(\lambda\lambda 7319+20\)+30+31/\(\lambda\lambda 3726+29\) is slightly higher, and this may explain the larger number of points below the 1:1 line.
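Cross-matching a temperature diagnostic with a density diagnostic, as PyNeb's _getCrossTemDen_ does with full atomic models, reduces to a fixed-point iteration. A schematic with toy diagnostic functions (the functional forms and all constants are illustrative, not real emissivities):

```python
import math

# Toy diagnostics of the form ratio = COEF * exp(-DE/Te) * (1 + ne/NC)
DE_T, A, NC_T = 25000.0, 0.10, 9.0e4   # temperature-sensitive ([N ii]-like)
DE_D, B, NC_D = 18000.0, 0.50, 4.5e3   # density-sensitive ([S ii] aur/neb-like)

def cross_tem_den(r_tem, r_den, ne0=100.0, tol=1.0, itmax=50):
    """Alternate the Te and ne inversions until ne converges,
    mimicking the logic of PyNeb's getCrossTemDen."""
    ne = ne0
    for _ in range(itmax):
        te = DE_T / math.log(A * (1.0 + ne / NC_T) / r_tem)
        ne_new = NC_D * (r_den / (B * math.exp(-DE_D / te)) - 1.0)
        if abs(ne_new - ne) < tol:
            return te, ne_new
        ne = ne_new
    return te, ne

# Synthetic "observed" ratios generated at Te = 9000 K, ne = 1500 cm^-3
te_true, ne_true = 9000.0, 1500.0
r_tem = A * math.exp(-DE_T / te_true) * (1.0 + ne_true / NC_T)
r_den = B * math.exp(-DE_D / te_true) * (1.0 + ne_true / NC_D)

te, ne = cross_tem_den(r_tem, r_den)
print(f"recovered Te = {te:.0f} K, ne = {ne:.0f} cm^-3")
```

Starting from a deliberately low density guess, the iteration converges back to the input conditions in a handful of steps because the temperature diagnostic depends only weakly on \(n_{\rm e}\).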
In the case of the high-density nebulae of the DESIRED sample, where \(n_{\rm e}([\rm S\,ii]\,\lambda 6731/\lambda 6716)>1000\) cm\({}^{-3}\), we find good consistency between the adopted density and the values derived from the [O ii] and [S ii] \({}^{2}P^{0}/{}^{2}D^{0}\) line intensity ratios, as shown in Fig. 11. It should be noted that in these cases the adopted densities are mainly weighted by the [Cl iii], [Fe iii] and [Ar iv] density diagnostics. This suggests that although high-density clumps (or density gradients) may be present in all ionized nebulae, the systematic effects on the derived properties are reduced in those objects showing higher mean densities. In contrast, in low-density nebulae, the presence of high-density clumps can go unnoticed when using \(n_{\rm e}([\rm S\,ii]\,\lambda 6731/\lambda 6716)\) or \(n_{\rm e}([\rm O\,ii]\,\lambda 3726/\lambda 3729)\), thereby affecting the reliability of further calculations involving these parameters. A possible solution would be the use of [O ii] \(\lambda\lambda 7319+20+30+31/\lambda\lambda 3726+29\) and [S ii] \(\lambda\lambda 4069+76/\lambda\lambda 6716+31\) as density indicators together with [N ii] \(\lambda 5755/\lambda 6584\) to determine the temperature. Another conclusion of the discussion carried out
Figure 11: Comparison between the average \(n_{\rm e}\) obtained from the [O ii] \(\lambda\lambda 7319+20+30+31/\lambda\lambda 3726+29\) and [S ii] \(\lambda\lambda 4069+76/\lambda\lambda 6716+31\) line intensity ratios and the adopted density for high-density nebulae (\(n_{\rm e}([\rm S\,ii]\,\lambda 6731/\lambda 6716)>1000\) cm\({}^{-3}\)). The black solid line represents a 1:1 linear relation.
Figure 10: Comparison between the average \(n_{\rm e}\) obtained from the [O ii] \(\lambda\lambda 7319+20+30+31/\lambda\lambda 3726+29\) and [S ii] \(\lambda\lambda 4069+76/\lambda\lambda 6716+31\) line intensity ratios and \(n_{\rm e}([\rm S\,ii]\,\lambda 6731/\lambda 6716)\) for extragalactic H ii regions. The color of the symbols represents the value of their \(P\) parameter (as in Fig 5). The black solid line represents a 1:1 linear relation.
so far is that the use of \(T_{\rm e}(\rm[O\,\textsc{ii}])\) and \(T_{\rm e}(\rm[S\,\textsc{ii}])\) should be avoided when \(T_{\rm e}(\rm[N\,\textsc{ii}])\) is available.
The different sensitivities of the auroral and nebular [O ii] lines to density cause the systematic difference between the O\({}^{+}\) abundances determined with the [O ii] auroral and nebular lines, a fact that has been described by several authors, especially in the case of PNe (Stasinska et al., 1998; Escudero et al., 2004; Rodriguez, 2020). In the case of PNe, there may be other phenomena playing a role. In the upper panel of Fig. 12 we compare the O\({}^{+}\) abundances derived from the [O ii] auroral and nebular lines using \(T_{\rm e}(\rm[N\,ii])\) and the average of \(n_{\rm e}(\rm[S\,ii]\,\lambda 6731/\lambda 6716)\) and \(n_{\rm e}(\rm[O\,ii]\,\lambda 3726/\lambda 3729)\). In the figure, we can see that the O\({}^{+}\)/H\({}^{+}\) ratio derived with the [O ii] auroral lines is up to \(\sim\)0.1 dex higher on average. In the bottom panel of Fig. 12 we show the same comparison but using [O ii] \(\lambda\lambda 7319+20+30+31/\lambda\lambda 3726+29\) and [S ii] \(\lambda\lambda 4069+76/\lambda\lambda 6716+31\) as density indicators. As we can see, with this approach the systematic difference is removed.
### The DESIRED temperature relationships for extragalactic H ii regions
The temperature can be stratified within ionized nebulae, which is reflected in differences between the representative values for different ionic species. The most common procedure to account for the temperature stratification when deriving chemical abundances is to adopt \(T_{\rm e}(\rm[O\,iii])\) and \(T_{\rm e}(\rm[N\,ii])\) for the high- and low-ionization volumes, respectively. Other temperature-sensitive line ratios, such as [S iii] \(\lambda 6312/\lambda 9069\), arise from zones of intermediate ionization (e.g. Berg et al., 2015).
Fig. 13 shows the DESIRED temperature relationships derived from different diagnostics associated with different ionization volumes of the gas. In each plot, we include the best fit to the data, the predicted linear fit from the BOND models (see Section 4) and the model-derived relations of Garnett (1992). In Table 2 we present the DESIRED temperature relations (column 4) and the scatter and number of objects considered in each case (columns 5-6).
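For reference, the model-derived relations of Garnett (1992) included in these plots are simple linear scalings of \(T_{\rm e}([\rm O\,iii])\); a sketch using the coefficients as commonly quoted from Garnett (1992), stated there to be valid for roughly 2000-18000 K:

```python
def te_low_from_oiii(te_oiii):
    """Garnett (1992): Te of the low-ionization zone (O+, N+, S+),
    in kelvin, from Te([O iii])."""
    return 0.70 * te_oiii + 3000.0

def te_siii_from_oiii(te_oiii):
    """Garnett (1992): Te([S iii]) in kelvin from Te([O iii])."""
    return 0.83 * te_oiii + 1700.0

print(te_low_from_oiii(12000.0))   # about 11400 K
print(te_siii_from_oiii(12000.0))  # about 11660 K
```
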
The upper left panel of Fig. 13 shows the \(T_{\rm e}(\rm[O\,iii])\) _vs._ \(T_{\rm e}(\rm[N\,ii])\) relationship defined by the DESIRED extragalactic H ii regions and the linear fit to the data. There is a wealth of works devoted to the study of this relation in the literature (e.g., Campbell et al., 1986; Garnett, 1992; Pagel et al., 1992; Pilyugin, 2007; Esteban et al., 2009; Arellano-Cordova and Rodriguez, 2020; Berg et al., 2020; Rogers et al., 2021, 2022), finding a relatively high scatter. Arellano-Cordova and Rodriguez (2020) showed that part of the dispersion is due to the effects of metallicity and the degree of ionization and is therefore related to nebular properties. With the DESIRED extragalactic H ii regions, we minimize the spurious scatter that can arise from low signal-to-noise spectra and confirm a departure from a linear relationship. This departure becomes larger with the degree of ionization (and lower metallicities) and becomes noticeable when \(T_{\rm e}(\rm[O\,iii])>10000\) K. Such a deviation from a linear relationship has been reported by several authors previously (e.g. Pilyugin, 2007; Arellano-Cordova and Rodriguez, 2020). In Fig. 13 we also include a quadratic fit to the data, which is only valid within \(7000\,\rm K<T_{\rm e}(\rm[O\,iii])<16,500\) K. However, its shape at \(T_{\rm e}(\rm[O\,iii])>13,000\) K is determined by the position of only two objects in the diagram, NGC 5408 (Esteban et al., 2014) and NGC 2363 (Esteban et al., 2009). As shown in Table 2, the photoionization models described in Section 4 are not able to reproduce the curvature observed between \(T_{\rm e}(\rm[O\,iii])\) and \(T_{\rm e}(\rm[N\,ii])\) in the DESIRED extragalactic H ii regions. This curvature is not reproduced either if the models are weighted with the methodology proposed by Amayo et al. (2021) in their equation (4), considering the observational sample compiled by Zurita et al. (2021) and Izotov et al. (2007).
Mendez-Delgado et al. (2023, see their fig. 2 and equation (4)) derive a tight linear relation between \(T_{\rm e}(\rm[N\,ii])\) and the average temperature of the high-ionization volume, \(T_{0}(\rm O^{2+})\), a parameter that can be used to estimate the O/H ratio without the bias induced by temperature inhomogeneities, which does affect the abundances determined using \(T_{\rm e}(\rm[O\,iii])\). Nevertheless, \(T_{\rm e}(\rm[N\,ii])\) is usually very difficult to determine in faint low-metallicity H ii regions, and the only available temperature diagnostic is often \(T_{\rm e}(\rm[O\,iii])\). In such cases, it is possible to use the relations presented in Table 2 to estimate \(T_{\rm e}(\rm[N\,ii])\) and consequently \(T_{0}(\rm O^{2+})\), using equation (4) from Mendez-Delgado et al. (2023).
Fig. 13 also includes temperature relations involving the uncommon \(T_{\rm e}(\rm[Ar\,iii])\), derived from the [Ar iii] \(\lambda 5192/\lambda 7135\) intensity ratio. DESIRED contains the largest collection of \(T_{\rm e}(\rm[Ar\,iii])\) determinations for H ii regions.
Figure 12: Comparison between the O\({}^{+}\) abundances derived with the [O ii] auroral and nebular lines. Upper panel: the physical conditions adopted are \(T_{\rm e}(\rm[N\,ii])\) and the average of \(n_{\rm e}(\rm[S\,ii]\,\lambda 6731/\lambda 6716)\) and \(n_{\rm e}(\rm[O\,ii]\,\lambda 3726/\lambda 3729)\). Bottom panel: the physical conditions adopted are \(T_{\rm e}(\rm[N\,ii])\) and the average of \(n_{\rm e}(\rm[S\,ii]\,\lambda\lambda 4069+76/\lambda\lambda 6716+31)\) and \(n_{\rm e}(\rm[O\,ii]\,\lambda\lambda 7319+20+30+31/\lambda\lambda 3726+29)\). The color of the symbols represents the value of their \(P\) parameter (as in Fig 5). The black solid line represents a 1:1 linear relation. The relation between both quantities is tighter in the bottom panel.
Considering that the ionization conditions of Ar\({}^{2+}\) are different from those of S\({}^{2+}\) or O\({}^{2+}\), we cannot strictly say that \(T_{\rm e}(\rm[Ar\,iii])\) is representative of the same ionization volume where S\({}^{2+}\) or O\({}^{2+}\) lie. In the middle left panel of Fig. 13, we can see that \(T_{\rm e}(\rm[Ar\,iii])\) follows a rather linear relationship with \(T_{\rm e}(\rm[O\,iii])\) for the spectra of extragalactic H ii regions. The lower left panel of Fig. 13 shows that the behavior of the \(T_{\rm e}(\rm[Ar\,iii])\) _vs._ \(T_{\rm e}(\rm[N\,ii])\) relationship has a certain similarity to that of \(T_{\rm e}(\rm[O\,iii])\) _vs._ \(T_{\rm e}(\rm[N\,ii])\). The two objects with the highest \(T_{\rm e}\) show some deviation towards larger \(T_{\rm e}(\rm[Ar\,iii])\) values. This agrees with the results obtained by Mendez-Delgado et al. (2023), who find that \(T_{\rm e}(\rm[Ar\,iii])\) and \(T_{\rm e}(\rm[O\,iii])\) seem to be affected by \(t^{2}\) in a similar way.
The upper right panel of Fig. 13 shows the \(T_{\rm e}(\rm[S\,iii])\) _vs._ \(T_{\rm e}(\rm[O\,iii])\) relationship defined by the DESIRED spectra of extragalactic H ii regions. The slope of the linear fit to the data is very similar to that obtained from the model predictions of Garnett (1992) and Vale Asari et al. (2016). However, the dispersion around the fit is larger for the observational points with higher \(T_{\rm e}\) and \(P\) parameter values. Those points correspond mainly to spectra of H ii regions of the Magellanic Clouds (Dominguez-Guzman et al., 2022). As mentioned in Section 3, some of our estimates of \(T_{\rm e}(\rm[S\,iii])\) might not be completely free of telluric absorptions in the [S iii] \(\lambda\lambda 9069, 9531\) lines, and this fact may enhance the derived \(T_{\rm e}(\rm[S\,iii])\). In the middle right panel of Fig. 13 we present the \(T_{\rm e}(\rm[S\,iii])\) _vs._ \(T_{\rm e}(\rm[N\,ii])\) relationship, which follows a linear relation with a remarkably small dispersion, except for the spectra with the lowest \(T_{\rm e}\) values. Our linear fit has a steeper slope compared to the relations found by Berg et al. (2020) or Rogers et al. (2021). This might be due to the larger proportion of DESIRED spectra with \(T_{\rm e}(\rm[S\,iii])>12000\) K compared to the samples of Berg et al. (2020) or Rogers et al. (2021), where the vast majority of objects are below that \(T_{\rm e}\) value. The steeper slope defined by the DESIRED spectra may be related to the fact that, as was also noted in the \(T_{\rm e}(\rm[O\,iii])\) _vs._ \(T_{\rm e}(\rm[N\,ii])\) and \(T_{\rm e}(\rm[Ar\,iii])\) _vs._ \(T_{\rm e}(\rm[N\,ii])\) relationships, \(T_{\rm e}(\rm[S\,iii])\) tends to be higher than \(T_{\rm e}(\rm[N\,ii])\) in spectra with larger \(T_{\rm e}\) values. As has been said before, and following the results of Mendez-Delgado et al. (2023), this indicates that \(T_{\rm e}(\rm[S\,iii])\) may also be affected by \(t^{2}\). This possibility, however, requires verification, since the telluric absorptions in the [S iii] \(\lambda\lambda 9069, 9531\) lines act in the same direction as \(t^{2}\).
## 7 Discussion and Conclusions
In this paper we present a first study based on the DEep Spectra of Ionized REgions Database (DESIRED), a collection of high-quality deep optical spectra of ionized nebulae from the literature. The data were mostly obtained with 8-10m telescopes over more than 20 years by our research group and have been carefully reduced in a homogeneous way. DESIRED contains \(\sim\) 29380 emission lines from 190 spectra of Galactic and extragalactic H ii regions, PNe and RNe, as well as photoionized HH objects and one proplyd of the Orion Nebula. The main aim of studying the DESIRED sample as a whole is to draw attention to and quantify systematic effects that may bias the determination of the physical conditions and chemical abundances of ionized gas in the Universe, as well as to better understand the physics of the formation of certain faint emission lines. The philosophy of DESIRED has been to prioritize the quality and depth of the spectra over their quantity in the design of the observations. However, owing to the continuity of the project over the years, the number of objects has increased substantially, reaching a level comparable to that of a small survey, with the possibility of further growth in the future, especially with observations of low-metallicity (12+log(O/H) \(<\) 8.0) objects with very large aperture telescopes. Although formally this is the first paper based on the exploitation of DESIRED, it was also used by Mendez-Delgado et al. (2023), who analyzed the systematic bias introduced by temperature fluctuations in the determination of ionic abundances in H ii regions, a task impossible to perform with any other sample.
In this paper, we explore the density structure of the DESIRED objects as well as the \(T_{\rm e}\)-\(T_{\rm e}\) relations for extragalactic H ii regions. Regarding the density structure, we show that [Cl iii] \(\lambda 5538/\lambda 5518\), [Fe iii] \(\lambda 4702/\lambda 4658\) and [Ar iv] \(\lambda 4740/\lambda 4711\) are good density indicators when \(10^{3}\) cm\({}^{-3}<n_{\rm e}<10^{6}\) cm\({}^{-3}\), whereas [S ii] \(\lambda 6731/\lambda 6716\) and [O ii] \(\lambda 3726/\lambda 3729\) are density sensitive when \(10^{2}\) cm\({}^{-3}<n_{\rm e}<10^{4}\) cm\({}^{-3}\). We find good consistency between diagnostics associated with different ionization volumes when their sensitivity ranges are similar. This implies that the sensitivity range of the diagnostics used is a more relevant parameter for obtaining good density determinations than their selection according to the ionization volume in which the abundance is determined. Based on these findings, in Section 5 we present simple and consistent criteria to derive the representative density for chemical abundance studies in the optical range.
We demonstrate that \(n_{\rm e}([\rm S\,ii]\,\lambda 6731/\lambda 6716)\) and \(n_{\rm e}([\rm O\,ii]\,\lambda 3726/\lambda 3729)\) are biased towards lower densities in extragalactic H ii regions due to the presence of density inhomogeneities and the non-linear sensitivity of these indicators. This is inferred from the behavior of the [O ii] \(\lambda\lambda 7319+20+30+31/\lambda\lambda 3726+29\) and [S ii] \(\lambda\lambda 4069+76/\lambda\lambda 6716+31\) intensity ratios, commonly used to compute \(T_{\rm e}([\rm O\,ii])\) and \(T_{\rm e}([\rm S\,ii])\), respectively. When \(T_{\rm e}([\rm O\,ii])\) and \(T_{\rm e}([\rm S\,ii])\), derived adopting \(n_{\rm e}([\rm S\,ii]\,\lambda 6731/\lambda 6716)\) and \(n_{\rm e}([\rm O\,ii]\,\lambda 3726/\lambda 3729)\), are compared with \(T_{\rm e}([\rm N\,ii]\,\lambda 5755/\lambda 6584)\), they show systematic trends that cannot be explained by observational errors, mismatches between the ionization volumes, recombination contributions or temperature fluctuations, but are explained by the presence of an inhomogeneous density structure. The sensitivity of [O ii] \(\lambda\lambda 7319+20+30+31/\lambda\lambda 3726+29\) and [S ii] \(\lambda\lambda 4069+76/\lambda\lambda 6716+31\) at higher densities (\(10^{2}\) cm\({}^{-3}<n_{\rm e}<10^{6}\) cm\({}^{-3}\)) makes them better diagnostics than \(n_{\rm e}([\rm S\,ii]\,\lambda 6731/\lambda 6716)\) or \(n_{\rm e}([\rm O\,ii]\,\lambda 3726/\lambda 3729)\) when they are cross-correlated with \(T_{\rm e}([\rm N\,ii])\), since they are sensitive to the presence of high-density clumps.
In the analysis of extragalactic H ii regions, the density underestimate of \(n_{\rm e}\)([S ii] \(\lambda 6731/\lambda 6716\)) or \(n_{\rm e}\)([O ii] \(\lambda 3726/\lambda 3729\)) is \(\sim 300\) cm\({}^{-3}\) on average, even when the aforementioned diagnostics give values consistent with the low-density limit (\(<100\) cm\({}^{-3}\)). The implications of this underestimate for the calculation of chemical abundances from optical spectra are rather small, remaining below \(\sim 0.1\) dex when O\({}^{+}\) abundances are estimated with the [O ii] \(\lambda\lambda 7319+20+30+31\) CELs. However, the density underestimate is critical for studies based on infrared fine-structure CELs. For instance, the [O iii] \(88\mu\)m emissivity decreases by \(\sim\)40 per cent when \(n_{\rm e}\) changes from 200 cm\({}^{-3}\) to 500 cm\({}^{-3}\), implying an increase of the derived chemical abundances of \(\sim 70\) per cent. Density diagnostics in the infrared such as [O iii] \(88\mu\)m/\(52\mu\)m are likely to suffer a bias towards lower densities to an even greater extent than \(n_{\rm e}\)([S ii] \(\lambda 6731/\lambda 6716\)) or \(n_{\rm e}\)([O ii] \(\lambda 3726/\lambda 3729\)) due to their different sensitivity ranges (see Fig. 2).
Finally, we present the temperature relations for DESIRED extragalactic H ii regions considering the \(T_{\rm e}\)-sensitive [N ii] \(\lambda 5755/\lambda 6584\), [O iii] \(\lambda 4363/\lambda 5007\), [Ar iii] \(\lambda 5192/\lambda 7135\) and [S iii] \(\lambda 6312/\lambda 9069\) intensity ratios. The availability of such a number of different \(T_{\rm e}\) diagnostics permits us to calculate chemical abundances considering the stratification of temperature at different
ionization volumes. We confirm a departure from a linear fit in the \(T_{\rm e}\)([O iii]) _vs._\(T_{\rm e}\)([N ii]) relationship, which is more prominent in regions of lower metallicity. This is consistent with the presence of larger temperature inhomogeneities in the high ionization volume of these systems, as Mendez-Delgado et al. (2023) propose in a recent study. A similar departure from a linear fit seems also to be present in the \(T_{\rm e}\)([Ar iii]) _vs._\(T_{\rm e}\)([N ii]) and \(T_{\rm e}\)([S iii]) _vs._\(T_{\rm e}\)([N ii]) relationships of the DESIRED spectra of extragalactic H ii regions.
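The departure from a linear fit can be quantified by comparing linear and quadratic fits of a \(T_{\rm e}\)-\(T_{\rm e}\) relation and their residuals. The sketch below uses synthetic temperature pairs with an assumed mild curvature at the hot end (illustrative values only, not the DESIRED data):

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic Te([O III]) vs Te([N II]) pairs (K) with mild curvature at the
# high-temperature end, mimicking a departure from linearity (illustrative only)
t_o3 = np.linspace(7000, 16000, 60)
t_n2 = 2000 + 0.95 * t_o3 - 3e-5 * (t_o3 - 7000) ** 2 + rng.normal(0, 150, t_o3.size)

lin = np.polyfit(t_o3, t_n2, 1)   # first-degree polynomial fit
quad = np.polyfit(t_o3, t_n2, 2)  # second-degree polynomial fit

rms_lin = np.sqrt(np.mean((np.polyval(lin, t_o3) - t_n2) ** 2))
rms_quad = np.sqrt(np.mean((np.polyval(quad, t_o3) - t_n2) ** 2))
print(f"RMS residual: linear {rms_lin:.0f} K, quadratic {rms_quad:.0f} K")
```

When the underlying relation is curved, the quadratic fit leaves markedly smaller residuals, which is the kind of evidence behind the second-degree polynomial fit shown in the upper-left panel of Fig. 13.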
Figure 13: Temperature relations of the DESIRED extragalactic H ii regions. _Top panels:_ \(T_{\rm e}\)([N ii]) (left) and \(T_{\rm e}\)([S iii]) (right) as a function of \(T_{\rm e}\)([O iii]). _Middle panels:_ the \(T_{\rm e}\)([O iii]) - \(T_{\rm e}\)([Ar iii]) relation (left) and the \(T_{\rm e}\)([N ii]) - \(T_{\rm e}\)([S iii]) relation (right). _Bottom panels:_ the \(T_{\rm e}\)([N ii]) - \(T_{\rm e}\)([Ar iii]) relation (left) and the \(T_{\rm e}\)([S iii]) - \(T_{\rm e}\)([Ar iii]) relation (right). The solid blue lines represent the linear fits of the data. The dashed and dotted lines indicate the model predictions of Garnett (1992) and the BOND models (Vale Asari et al., 2016), respectively. The red solid line in the upper left panel represents a second-degree polynomial fit.
## Acknowledgements
We thank the referee, Grazyna Stasinska, for her careful revision of the manuscript and useful comments that have helped to improve the quality of the paper. JEM-D thanks A. Amayo for her help regarding the handling of the BOND photoionization models. JEM-D, OE and KK gratefully acknowledge funding from the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) in the form of an Emmy Noether Research Group (grant number KR4598/2-1, PI Kreckel). CE and JG-R acknowledge support from the Agencia Estatal de Investigación del Ministerio de Ciencia e Innovación (AEI-MCINN) under grant _Espectroscopía de campo integral de regiones H II locales. Modelos para el estudio de regiones H II extragalácticas_ with reference 10.13039/501100011033. JG-R acknowledges support from an Advanced Fellowship under the Severo Ochoa excellence program CEX2019-000920-B. JG-R and VG-LL acknowledge financial support from the Canarian Agency for Research, Innovation and Information Society (ACIISI), of the Canary Islands Government, and the European Regional Development Fund (ERDF), under grant with reference ProID2021010074. CE, JG-R and VG-LL acknowledge support under grant P/308614 financed by funds transferred from the Spanish Ministry of Science, Innovation and Universities, charged to the General State Budgets and with funds transferred from the General Budgets of the Autonomous Community of the Canary Islands by the MCIU.
## Data Availability
The original data is public and available in the references cited in Tables 1-5. All our calculations are present in the files of the online material. DESIRED files, although already public, can be shared upon reasonable request.
# The 17 April 2021 widespread solar energetic particle event

N. Dresing, L. Rodríguez-García, I. C. Jebaraj, A. Warmuth, S. Wallace, L. Balmaceda, T. Podladchikova, R. D. Strauss, A. Kouloumvakos, C. Palmroos, V. Krupar, J. Gieseler, Z. Xu, J. G. Mitchell, C. M. S. Cohen, G. A. de Nolfo, E. Palmerio, F. Carcaboso, E. K. J. Kilpua, D. Trotta, U. Auster, E. Asvestari, D. da Silva, W. Dröge, T. Getachew, R. Gómez-Herrero, M. Grande, D. Heyner, M. Holmström, J. Huovelin, Y. Kartavykh, M. Laurenza, C. O. Lee, G. Mason, M. Maksimovic, J. Mieth, G. Murakami, P. Oleynik, M. Pinto, M. Pulupa, I. Richter, J. Rodríguez-Pacheco, B. Sánchez-Cano, F. Schuller, H. Ueno, R. Vainio, A. Vecchio, A. M. Veronig, N. Wijsen

Published 2023-03-20, arXiv:2303.10969v1 (http://arxiv.org/abs/2303.10969v1)
###### Abstract
Context:A complex and long-lasting solar eruption on 17 April 2021 produced a widespread Solar Energetic Particle (SEP) event that was observed by five longitudinally well-separated observers in the inner heliosphere covering distances to the Sun from 0.42 to 1 au: BepiColombo, Parker Solar Probe, Solar Orbiter, STEREO A, and near-Earth spacecraft. The event was the second widespread SEP event detected in solar cycle 25 and produced relativistic electrons and protons. It was associated with a long-lasting solar hard X-ray flare showing multiple hard X-ray peaks over a duration of one hour. The event was further accompanied by a medium fast Coronal Mass Ejection (CME) with a speed of 880 km s\({}^{-1}\) driving a shock, an EUV wave as well as long-lasting and complex radio burst activity showing four distinct type III burst groups over a period of 40 minutes.
Aims:We aim at understanding the reason for the wide spread of elevated SEP intensities in the inner heliosphere as well as identifying the underlying source regions of the observed energetic electrons and protons.
Methods:A comprehensive multi-spacecraft analysis of remote-sensing observations and in-situ measurements of the energetic particles and interplanetary context is applied to attribute the SEP observations at the different locations to the various potential source regions at the Sun. An ENLIL simulation is used to characterize the complex interplanetary state and its role for the energetic particle transport. The magnetic connection between each spacecraft and the Sun is determined using ballistic backmapping in combination with potential field source surface extrapolations in the lower corona. In combination with a reconstruction of the coronal shock front we then determine the times when the shock establishes magnetic connections with the different observers. Radio observations are used to characterize the directivity of the four main injection episodes, which are then employed in a 2D SEP transport simulation to test the importance of these different injection episodes.
Results:A comprehensive timing analysis of the inferred solar injection times of the SEPs observed at each spacecraft suggests that different source processes were important for the electron and the proton events. Comparison among the characteristics and timing of the potential particle sources, such as the CME-driven shock or the flare, suggests a stronger shock contribution for the proton event and a more likely flare-related source of the electron event.
Conclusions:In contrast to earlier studies on widespread SEP events, we find that in this event an important ingredient for the wide SEP spread was the wide longitudinal range of about 110\({}^{\circ}\) covered by distinct SEP injections, which is also supported by our SEP transport modeling.
## 1 Introduction
Solar energetic particle (SEP) events are characterized by a rich and complex set of physical processes responsible for the acceleration and propagation of the particles. Since the early observations of Forbush (1946), an enormous amount of knowledge has been built around SEPs, highlighting their importance for understanding the behavior of the outer layers of the Sun's atmosphere, as well as addressing fundamental questions related to energetic particle propagation in astrophysical environments (e.g., Reames, 2021). Multi-point observations of SEP events at different heliospheric locations provide an invaluable opportunity to study the production and transport of energetic particles, with several recent studies addressing the problem from a variety of perspectives (e.g., Dresing et al., 2014; Gomez-Herrero et al., 2015; Klein & Dalla, 2017; Rodriguez-Garcia et al., 2021; Frassati et al., 2022).
On 2021 April 17 a SEP event was observed by multiple spacecraft at well-separated locations in the inner heliosphere (within 1 au) but also by spacecraft in orbit about Mars (at 1.63 au from the Sun). The solar origin of the SEP event was temporally associated with a solar flare from behind the southeastern limb of the Earth-facing Sun. This event can be considered the second widespread SEP event of solar cycle 25, as it was detected over a longitude span of 210\({}^{\circ}\) (the first widespread SEP event of solar cycle 25 occurred on 2020 Nov 29 and was analyzed by, e.g., Kollhoff et al., 2021; Kouloumvakos et al., 2022; Palmerio et al., 2022). It is the first SEP event observed at five well-separated locations in the inner heliosphere (within 1 au) and also constrained by observations at Mars. Figure 1 (left) illustrates the observer
locations in the heliographic equatorial plane together with nominal Parker field lines connecting each observer with the Sun depicted in the center of the plot. The black arrow marks the longitude of the associated flare (identified using Solar Orbiter STIX measurements as described in Sect. 4.1), and the dashed black spiral denotes the nominal magnetic field line connecting to this location. BepiColombo (yellow) was the spacecraft with the best nominal connection to the flare site. Parker Solar Probe (purple) and Solar Orbiter (blue) were approximately equally separated in longitude from the flaring region, but on different sides. STEREO A (red) and Earth (black) were further separated to the west of the flare. Despite the large angular separation between all spacecraft, the SEP event was observed at these five locations as shown in Fig. 1 (right), which makes it a widespread event (e.g., Dresing et al., 2014). The top panel shows \(\sim\)1 MeV electron intensities and the bottom panel \(\sim\)25 MeV proton intensities. As expected due to its closest magnetic connection, BepiColombo observed the highest intensities. Parker Solar Probe, being situated closest to the Sun at 0.42 au, observed significantly higher intensities than Solar Orbiter, although their total separation angles are comparable. STEREO A observed a weak proton event but no significant increase of MeV electron intensities. At Earth/L1 (SOHO), the location with the poorest nominal magnetic connection to the flare site, the electron event seems more intense and distinct when compared with STEREO A. While Parker Solar Probe and Solar Orbiter were situated close to the ecliptic plane at slightly northern latitudes, BepiColombo, STEREO A and Earth were situated at southern latitudes (see Table 1, which summarizes the observer locations and their magnetic connections to the Sun) with a maximum of \(-7.2^{\circ}\) in the case of BepiColombo.
The 17 April 2021 event, therefore, shows not only a spatial asymmetry with respect to the flare longitude but also clear differences between the electron and proton distributions.
We investigate here the drivers for this wide SEP spread as well as the reasons for the observed asymmetries. The most common explanations for widespread events have been a large acceleration region, e.g. an extended coronal shock (e.g., Rouillard et al., 2012; Gomez-Herrero et al., 2015; Lario et al., 2016; Rodriguez-Garcia et al., 2021; Kouloumvakos et al., 2022), and/or efficient perpendicular transport in the corona or interplanetary medium (e.g., Dresing et al., 2012; Droge et al., 2016).
The paper is organized as follows. After discussing the instrumentation used in this study in Sect. 2, we begin with a detailed analysis of the magnetic connectivity between the different observer locations and the Sun (Sect. 3). Section 4.1 discusses the complex and long-lasting flare of the event, Sect. 4.2 describes the analysis of the associated coronal mass ejection (CME) and CME-driven shock. Sect. 4.3 presents observation and analysis of the associated extreme ultra-violet (EUV) wave. Observations and a reconstruction of the coronal CME-driven shock are presented in Sect. 4.4. Sect. 4.5 provides an analysis of the various type II and III radio bursts observed during the event. In Sect. 5, we study the interplanetary context, involving also simulations of the state of the interplanetary medium using 3D magnetohydrodynamic (MHD) simulations, and present an overview of the multi-spacecraft SEP observations. A more detailed analysis of SEP onset times, velocity dispersion, and pitch-angle distributions is presented in Sect. 5.3. In Sect. 6, we relate the timing of SEP arrivals with solar counterpart observations to infer the parent source regions of the SEPs. In Sect. 7, interplanetary transport modeling results are presented assuming two different scenarios: the first being the standard scenario of a single SEP injection into interplanetary space from a single source region, while a second scenario assumes multiple SEP injections from different particle sources at different times. Sections 8 and 9 provide the discussion and conclusions of the study presented here, respectively.
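The onset-time and velocity dispersion analysis of Sect. 5.3 follows the standard technique in which the onset time at kinetic energy \(E\) is modeled as \(t_{\rm onset}=t_{\rm inj}+L/v(E)\), so that a linear fit of onset time against inverse particle speed yields the path length \(L\) (slope) and the solar injection time \(t_{\rm inj}\) (intercept). A minimal sketch with synthetic, noise-free proton onsets (the energies, path length, and injection time are assumed values for illustration):

```python
import numpy as np

C = 299_792.458    # speed of light [km/s]
E0 = 938.272       # proton rest energy [MeV]
AU = 1.495978707e8 # astronomical unit [km]

def beta(E_kin):
    """Relativistic v/c for protons of kinetic energy E_kin [MeV]."""
    gamma = 1.0 + E_kin / E0
    return np.sqrt(1.0 - gamma ** -2)

# Synthetic onset times for a path length L = 1.3 au and injection at t = 0 s
energies = np.array([5.0, 10.0, 25.0, 50.0, 100.0])  # MeV
L_true, t_inj_true = 1.3 * AU, 0.0
t_onset = t_inj_true + L_true / (beta(energies) * C)  # seconds

# VDA: linear fit of onset time vs inverse speed; slope = path length
inv_v = 1.0 / (beta(energies) * C)
L_fit, t_inj_fit = np.polyfit(inv_v, t_onset, 1)
print(f"path length = {L_fit / AU:.2f} au, injection time = {t_inj_fit:.1f} s")
```

With real data, onset-time uncertainties and energy-dependent transport effects broaden the fit, which is why the timing analysis in this paper combines VDA with the other diagnostics described above.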
## 2 Instrumentation
### BepiColombo
Several data sets from the cruise phase of BepiColombo (Benkhoff et al., 2021) en route to Mercury are used in this study, such as from the Solar Intensity X-Ray and Particle Spectrometer (SIXS; Huovelin et al., 2020) on board the Mercury Planetary Orbiter (MPO, the European spacecraft involved in the BepiColombo mission). SIXS provides measurements of high-energy electrons and protons with the SIXS-P particle detector. This instrument consists of five orthogonal 150 \(\mu\)m thick Si PIN detectors, also called 'Sides', and a \(5\times 5\times 6.3\) mm\({}^{3}\) CsI(Tl) scintillator with photodiode read-out. It detects electrons in the range 50 keV to 3 MeV and protons in the range 1 to 30 MeV with a total nominal geometric factor of about 0.19 cm\({}^{2}\) sr. We note that Sides 0 and 4 are partially and totally obstructed by the spacecraft cruise shield, respectively.
We also use data from the BepiColombo Environment Radiation Monitor (BERM; Pinto et al., 2022) on board MPO, which is a particle detector that consists of a single silicon stack telescope with a small particle entrance of 0.5 mm\({}^{2}\) and a 50 \(\mu\)m beryllium cutoff window. Particles are arranged into 5 channels for electrons (\(\sim\)0.15-10 MeV), 8 channels for protons (1.5-100 MeV), and 5 channels for heavy ions (1-50 MeV\(\cdot\)mg\({}^{1}\cdot\)cm\({}^{-2}\)). BERM is mounted behind the radiator panel and faces the anti-sunward direction.
In addition, data from the Solar Particle Monitor (SPM) on board the Japanese spacecraft Mercury Magnetospheric Orbiter (MMO, also known as Mio; Murakami et al., 2020) are employed. SPM is a particle detector that forms part of the housekeeping suite, and it consists of two silicon photodiodes (SPM1 and SPM2), each with an effective area of 10 mm \(\times\) 10 mm and a depletion layer thickness of 0.3 mm. Each sensor has four deposited-energy channels, which cover the energy ranges 70-1170 keV and 50-200 MeV, respectively. A calibration of the sensors is currently being performed with Monte Carlo simulations based on Geant4 (Agostinelli et al., 2003).
Finally, we also use data from the BepiColombo MPO magnetometer (MPO-MAG; Heyner et al., 2021), which is composed of two tri-axial fluxgate magnetometers placed on a 2.9 m boom. MPO-MAG measures magnetic field up to 128 Hz in a \(\pm\)2048 nT range.
### Parker Solar Probe
The low-energy particles are measured with the Energetic Particle Instrument-Low (EPI-Lo; Hill et al., 2017), consisting of 80 time-of-flight apertures. The high-energy particles are measured with the Energetic Particle Instrument-High (EPI-Hi; Wiedenbeck et al., 2017), consisting of three telescopes of stacked solid-state detectors that use the standard \(dE/dx\) versus residual energy technique to measure ions from \(\sim\)1 to \(>\)100 MeV/nuc and electrons in the range \(\sim\)0.5-6 MeV. The first two Low Energy Telescopes (LETs) of EPI-Hi comprise a double-ended detector, providing oppositely viewing apertures (LETA and LETB), and one single-ended detector (LETC) with a viewing axis perpendicular to that of LETA. The third telescope (High Energy Telescope; HET) covers the highest energies and is double-ended with two apertures (HETA and HETB) providing roughly sunward and anti-sunward viewing directions along the nominal Parker spiral.
Observations of the magnetic field are obtained from the fluxgate magnetometer part of the FIELDS (Bale et al., 2016) suite, and solar wind measurements are provided by the Solar Probe Cup (SPC) instrument part of the Solar Wind Electrons Alphas and Protons (SWEAP; Kasper et al., 2016) investigation.
Radio observations are provided by the Radio Frequency Spectrometer (RFS; Pulupa et al., 2017) part of FIELDS, which is a dual-channel digital spectrometer designed for both remote-sensing of radio waves and in-situ measurements of electrostatic fluctuations between 10 kHz and 19.2 MHz. Here, we use the RFS data when input channels were set to the two pairs of crossed dipoles. Besides the radio flux density, it also allows us to retrieve the degree of circular polarization (Pulupa et al., 2020).
### Solar Orbiter
Data from several instruments on board Solar Orbiter (Muller et al., 2020) are used. The Spectrometer/Telescope for Imaging X-rays (STIX; Krucker et al., 2020) provides imaging spectroscopy in the X-ray range (4-150 keV). It has a full-disk field of view (FOV) and sub-second time resolution. The Radio and Plasma Waves (RPW; Maksimovic et al., 2020, 2021; Vecchio et al., 2021) instrument on Solar Orbiter consists of several subsystems including the Thermal Noise and High Frequency Receiver (TNR-HFR or THR) with a dual channel sweeping receiver in the range from 4 kHz up to 16 MHz. In particular, THR provides measurements of the plasma quasi-thermal noise (QTN) in the range 4 kHz - 1 MHz. When the QTN signal is sufficiently strong, the spectral peak at the electron plasma frequency can be identified, from which the in-situ absolute electron density can be derived (Meyer-Vernet et al., 2017; Khotyaintsev et al., 2021). The properties of energetic particles as measured by Solar Orbiter are studied using the Electron Proton Telescope (EPT) and the High Energy Telescope (HET) of the Energetic Particle Detector (EPD; Rodriguez-Pacheco et al., 2020) instrument suite. Both sensors consist of two double-ended telescopes. EPT measures ions and electrons in the energy ranges 20 keV - 15 MeV and 20-400 keV, respectively, and HET measures relativistic electrons between 0.3 and 30 MeV and protons between 7 and 107 MeV.
The Solar Orbiter Magnetometer (MAG; Horbury et al., 2020) is a fluxgate vector magnetometer, yielding in-situ measurements of the interplanetary magnetic field with 16 vectors/s (normal mode) and up to 128 vectors/sec (burst mode).
The lower-energy, thermal, and suprathermal particles are measured by the Solar Wind Analyzer (SWA; Owen et al., 2020) suite. In this work, measurements from the SWA Proton and Alphas Sensor (PAS), sampling 3D velocity distribution functions of protons and alpha particles in the 0.2-20 keV energy range with a 4 s time cadence, are used to address the in-situ plasma moments, such as the solar wind's bulk flow speed and density.
### STEREO A
Observations from several instruments on board the Solar Terrestrial Relations Observatory (STEREO; Kaiser et al., 2008) are used in this study. As the STEREO B spacecraft is inactive since October 2014 due to multiple hardware anomalies, only data from instruments on board
Figure 1: Longitudinal spacecraft constellation and magnetic connectivity at 16:00 UT on 17 April 2021 (left) together with multi-spacecraft SEP observations (right). The upper panel shows \(\sim\)1 MeV electron intensities and the lower panel \(\sim\)25 MeV proton intensities (respectively ions) observed by the spacecraft indicated by the same colors in the left figure. The observers’ configuration plot (left panel) has been produced using the Solar MAgnetic Connection HAUS (Solar-MACH; Gieseler et al., 2022) tool.
STEREO A are available for the period under consideration.
The STEREO/WAVES (S/WAVES; Bougeret et al. 2008) instrument provides comprehensive measurements of all components of the electric field fluctuations between 2.5 kHz and 16 MHz. It allows us to locate sources, and calculate the polarization state (including apparent source sizes) of radio emissions in a heliocentric distance range from 4 \(R_{\odot}\) to 1 au, while the flux density can be measured even down to 2 \(R_{\odot}\)(Krupar et al. 2014). Unfortunately, direction-finding data were not available for this event.
Interplanetary magnetic field measurements are provided by the Magnetic Field Experiment (MFE; Acuna et al. 2008), part of the In situ Measurements of Particles And CME Transients (IMPACT; Luhmann et al. 2008) instrument suite. MFE is a triaxial fluxgate magnetometer mounted on a telescopic boom at a distance of \(\sim\)3 m from the spacecraft body, reaching a maximum cadence of 32 vectors/s.
Energetic particle observations with 1-minute cadence are provided by several instruments, part of the IMPACT investigation. The Solar Electron Proton Telescope (SEPT; Muller-Mellin et al. 2008) consists of dual double-ended magnet/foil particle telescopes measuring 30-400 keV electrons and 60-7000 keV ions. Two separate units provide anisotropy information in four different looking directions: Sun, Asun (pointing sunward and anti-sunward along the nominal Parker spiral, respectively), North, and South (pointing towards the North and South ecliptic poles, respectively). Since July 2015, after the solar conjunction, the spacecraft was rolled 180\({}^{\circ}\) about the spacecraft-Sun line and these nominal pointing directions changed, with the Sun and Asun telescopes pointing perpendicular to the nominal Parker spiral direction, North pointing southward and South pointing northward. The Low Energy Telescope (LET; Mewaldt et al. 2008) measures protons from \(\sim\)2 to \(\sim\)13 MeV and heavier ions from \(\sim\)2 to \(>\)40 MeV/nuc (species-dependent energy range). The field of view is divided into 16 different sectors, providing directional information. The High Energy Telescope (HET; von Rosenvinge et al. 2008) provides the highest energy measurements, including 0.7-4 MeV electrons and 13-100 MeV protons.
Physical properties of the solar wind plasma are obtained by the Plasma and Suprathermal Ion Composition (PLASTIC; Galvin et al. 2008) instrument, in particular by the Solar Wind Sector (SWS), sampling solar wind proton bulk parameters.
Remote-sensing observations from STEREO A are provided by several instruments that are part of the Sun Earth Connection Coronal and Heliospheric Investigation (SECCHI; Howard et al. 2008). This instrument suite includes an Extreme UltraViolet Imager (EUVI; Wuelser et al. 2004), two coronagraphs (COR1 and COR2) imaging the corona from 1.4 up to 15 \(R_{\odot}\), and two Heliospheric Imager (HI; Eyles et al. 2009) cameras (HI1 and HI2).
### Near-Earth spacecraft
From the Wind spacecraft (Ogilvie & Desch 1997), we use the Magnetic Field Investigation (MFI; Lepping et al. 1995) instrument, measuring at a cadence of 11 vectors/s. The Solar Wind Experiment (SWE; Ogilvie et al. 1995) and Three-Dimensional Plasma and Energetic Particle Investigation (3DP; Lin et al. 1995) instruments provide energetic particle measurements from which we use electron observations in the range of \(\sim\)40-600 keV. Finally, the Wind/WAVES (WAVES; Bougeret et al. 1995) instrument measures the electric field from 0.3 Hz up to 13 MHz using three dipolar antennas.
From the Solar and Heliospheric Observatory (SOHO; Domingo et al. 1995), we use energetic proton measurements of the Energetic and Relativistic Nuclei and Electron (ERNE; Torsti et al. 1995) covering energies of a few to a hundred MeV, and energetic electron measurements in the MeV range by the Electron Proton Helium Instrument (EPHIN), part of the Comprehensive Suprathermal and Energetic Particle Analyser (COSTEP; Muller-Mellin et al. 1995) suite, and coronagraph observations by the Large Angle and Spectrometric Coronagraph (LASCO; Brueckner et al. 1995).
Extreme ultraviolet (EUV) solar images were obtained by the Atmospheric Imaging Assembly (AIA; Lemen et al. 2012) on board the Solar Dynamics Observatory (SDO; Pesnell et al. 2012).
### Mars
Observations at Mars have been obtained from two spacecraft in orbit around the planet. Energetic particle data come from the Solar Energetic Particle (SEP; Larson et al. 2015) instrument on board the Mars Atmosphere and Volatile Evolution (MAVEN; Jakosky et al. 2015) mission. MAVEN/SEP is a solid-state telescopic detector with two identical sensors (SEP1 and SEP2), each containing two oppositely arranged double-ended telescopes (A and B) and measuring ions in the energy range \(\sim\)20-6000 keV and electrons in the range of \(\sim\)20-200 keV. SEP1 and SEP2 are body-mounted onto the MAVEN spacecraft to provide orthogonal look directions, with each telescope providing a 42\({}^{\circ}\)\(\times\) 31\({}^{\circ}\) FOV coverage.
Solar wind density and velocity come from the Analyzer of Space Plasmas and Energetic Atoms (ASPERA-3; Barabash et al. 2006) on board Mars Express (MEX; Chicarro 2004), and in particular from the Ion Mass Analyzer (IMA) sensor that measures ions in the energy range 10 eV/q - 30 keV/q.
## 3 Spacecraft constellation and magnetic connectivity
For all locations at which this event was observed, with the exception of Mars, we derive the instantaneous magnetic connectivity to the solar surface at 16:00 UT on 17 April 2021 using two different coronal models. The first model is a standard potential-field source surface (PFSS; Schatten et al. 1969; Altschuler & Newkirk 1969; Wang & Sheeley 1992) model out to 2.5 \(R_{\odot}\), and the second is the Wang-Sheeley-Arge (WSA; Arge & Pizzo 2000; Arge et al. 2003, 2004; McGregor et al. 2008) model. The latter makes use of the Schatten Current Sheet (SCS) model to extend the PFSS solution in this work to 5 \(R_{\odot}\), providing a more realistic magnetic field topology of the upper corona. This height is appropriate for deriving spacecraft connectivity, since WSA is magnetostatic and is designed to be most accurate in low-beta regimes; when WSA is instead coupled to an MHD model as its inner boundary condition, one would derive the coronal field out to 20-30 \(R_{\odot}\) to ensure that the solar wind is supersonic and super-Alfvénic (Arge et al. 2004). In both the WSA and
traditional PFSS approach, the observed solar wind speed at the time of the event is used to backmap the event to the model-derived coronal field, assuming a Parker spiral. In the case of BepiColombo, which did not measure the solar wind speed during cruise phase, a nominal value of 400 km s\({}^{-1}\) is used, which coincides with the simulated solar wind given by the ENLIL model (Odstrcil et al., 2004), as discussed in Sect. 5. In this section, we present the results from both models, discuss any differences between the solutions, and elaborate on any uncertainties associated with magnetic connectivity.
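The ballistic backmapping step described above can be sketched compactly: during the solar wind travel time from the source-surface height to the spacecraft, the Sun rotates by \(\Omega_{\odot}\,\Delta r/v_{\rm sw}\), so the footpoint sits at a correspondingly larger Carrington longitude than the observer. The function below is a minimal sketch of that calculation; the example inputs (100\({}^{\circ}\) longitude, 0.64 au) are illustrative values, not actual spacecraft coordinates.

```python
import math

OMEGA_SUN = 2.662e-6  # sidereal solar rotation rate [rad/s] (~25.38-day period)
R_SUN = 6.957e5       # solar radius [km]
AU = 1.495978707e8    # astronomical unit [km]

def backmap_longitude(lon_sc_deg, r_sc_au, v_sw_kms, r_inner_rs=2.5):
    """Ballistic Parker-spiral backmapping: Carrington longitude of the
    field-line footpoint at the source-surface height (default 2.5 R_sun)."""
    dr = r_sc_au * AU - r_inner_rs * R_SUN     # radial distance traversed [km]
    dlon = math.degrees(OMEGA_SUN * dr / v_sw_kms)  # rotation during travel time
    return (lon_sc_deg + dlon) % 360.0

# Illustrative case: nominal 400 km/s solar wind, observer at 0.64 au
print(f"{backmap_longitude(100.0, 0.64, 400.0):.1f} deg")
```

Below the source surface, this nominal spiral connection is continued to the photosphere by tracing the PFSS (or WSA/SCS) field lines, as done in Figs. 2 and 3.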
Both the PFSS and WSA coronal solutions are obtained using an identical Air Force Data Assimilative Photospheric Flux Transport (ADAPT; Arge et al., 2010, 2011, 2013; Hickmann et al., 2015) time-dependent photospheric field map, derived using data from Global Oscillation Network Group (GONG; Harvey et al., 1996) magnetograms. ADAPT uses flux-transport modeling (Worden & Harvey, 2000) to account for solar time-dependent phenomena (e.g., differential rotation, meridional, and supergranulation flows) when observational data are not available. This is especially useful for studying events that originate on the solar far-side (i.e., the solar hemisphere not visible from Earth). Since ADAPT is an ensemble model, it provides 12 possible states (i.e., realizations) of the solar surface magnetic field, ideally representing the best estimate of the range of possible global photospheric flux distribution solutions at any given moment in time. When coupled with the WSA model, ADAPT-WSA derives an ensemble of 12 realizations representing the global coronal field and spacecraft magnetic connectivity to 1 \(R_{\odot}\) (i.e., the solar surface) for a given moment in time, providing an estimate of the uncertainty in the ADAPT-WSA solution. The best realization is then determined by comparing the model-derived and the in-situ observed radial interplanetary magnetic field (IMF) and solar wind speed.
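The selection of the best ADAPT-WSA realization by comparison with in-situ observations can be sketched as a simple misfit minimization over the ensemble. The numbers below are toy values, and the actual comparison in this work uses both the radial IMF and the solar wind speed time series; the sketch only illustrates the ranking logic.

```python
import numpy as np

def best_realization(observed, modeled):
    """Pick the ensemble member that minimizes the RMS misfit to the
    in-situ observation (here: a solar wind speed time series)."""
    observed = np.asarray(observed, dtype=float)
    rms = [np.sqrt(np.mean((np.asarray(m) - observed) ** 2)) for m in modeled]
    i = int(np.argmin(rms))
    return i, rms[i]

# Toy ensemble of 3 "realizations" vs an observed series (km/s)
obs = [400.0, 420.0, 410.0, 430.0]
ens = [[380.0, 390.0, 385.0, 395.0],  # systematically slow
       [405.0, 415.0, 412.0, 428.0],  # close match
       [450.0, 470.0, 460.0, 480.0]]  # systematically fast
idx, err = best_realization(obs, ens)
print(idx, round(err, 1))
```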
Figure 2 shows the instantaneous connectivity derived with the PFSS coronal field solution, with the corresponding footpoint connectivity listed in Table 1 columns (6)-(7). The plot shows the Sun in the center, the source surface (dashed circle), which is the outer boundary of potential-field models, and the spacecraft constellation in the heliospheric Carrington coordinate system, where the unit of distance is the solar radius. The magnetic connectivity to different spacecraft is estimated as a nominal Parker spiral connecting to the source surface, from which magnetic field lines are tracked downwards to the photosphere using a PFSS extrapolation. Table 1 shows the magnetic connection points from the various spacecraft to the photosphere and the observed solar wind speed that is used to calculate the Parker spiral. Note that the scale in the plot is logarithmic above the source surface and linear below.
Figure 3 shows the WSA-derived instantaneous magnetic connectivity and thus the model-estimated magnetic footpoint for each of the five spacecraft on 17 April 2021 at 16:00 UT for all 12 realizations of the ADAPT-WSA output. The 12 realizations often produce similar results causing overlapping footpoints, which are overlaid onto the WSA-derived coronal holes shaded in red (positive/outwardly-directed field) and blue (negative/inwardly-directed field). Since this plot is a summary of the connectivity for all 12 realizations, the coronal hole shading represents any grid cell derived as open by any of the 12 realizations. The heliospheric current sheet (HCS) is overplotted in yellow and is nearly parallel with the solar equatorial plane. The average and standard deviation over all realizations of footpoint connectivity are calculated and shown in Table 1 columns (9)-(10), with the exception of STEREO A. For this spacecraft, the values in Table 1 represent eight of the 12 ADAPT-WSA realizations which derived the magnetic footpoint at the southern polar coronal hole boundary. The other four realizations derive the source region of this event at the northern polar coronal hole boundary, with an average and standard deviation of 308.8\({}^{\circ}\)\(\pm\)1.1\({}^{\circ}\) Carrington longitude, 63.1\({}^{\circ}\)\(\pm\)0.3\({}^{\circ}\) heliographic latitude. This is common when the spacecraft is near the HCS (discussed in more detail below). It is important to note that in several instances on this table, the standard deviation of the footpoint connectivity falls within the 2.0\({}^{\circ}\) resolution limit of the WSA model. The standard deviation is only included to show the precision and range of variance among the 12 realizations.
For all spacecraft except STEREO A, the 12 ADAPT-WSA realizations produce very similar results for the model-determined magnetic footpoints. The largest standard deviation that was calculated was 8.4\({}^{\circ}\) for Parker Solar Probe's longitudinal footpoint connectivity (purple). This is likely because Parker Solar Probe observed this event on the solar far-side as seen from Earth, where there are no observations of the photospheric field to update our solution. All other spacecraft have nominal standard deviations in footpoint latitude and Carrington longitude, giving us more confidence in our results.
When comparing the results from the PFSS model vs. WSA, both the model-derived polarities (Table 1 column
Figure 2: Semi-logarithmic representation of the spacecraft constellation in the Carrington coordinate system. The plot is linear inside the dashed black circle, which marks the distance of the potential field source surface (at 2.5 \(R_{\odot}\) in this case), and the orange circle marks the Sun. Above 2.5 \(R_{\odot}\), the plot is logarithmic in distance. Color-coded solid circles mark the various spacecraft of the constellation, and the lines connected to them represent the nominal Parker spiral solutions computed considering their heliocentric distances and the observed solar wind speeds. Inside the dashed black line, the magnetic connection is extrapolated with a PFSS solution, where the color of the lines corresponds to heliospheric latitude. The black arrow corresponds to the flare location.
(8)) and the footpoint connectivity agree overall, with the exception of the magnetic footpoint derived for Parker Solar Probe. The PFSS model derives the source region of the Parker Solar Probe-observed event on the boundary of the northern polar coronal hole extension (at 28.6\({}^{\circ}\) latitude, 146.7\({}^{\circ}\) Carrington longitude), whereas the ADAPT-WSA derived source region is at the boundary of the northern polar coronal hole (at 65.3\({}^{\circ}\) latitude, 131.1\({}^{\circ}\) Carrington longitude). Differences in the two model solutions could arise for a few reasons, a primary one being that Parker Solar Probe observed this event on the solar far-side where we do not have recent observations of the photospheric field to drive coronal models. Nevertheless, both models derived the footpoint locations of this event for all five spacecraft as originating from the boundaries of coronal holes, with each spacecraft situated within 5\({}^{\circ}\) of the HCS. When the solar wind originates from the HCS at locations where it is nearly parallel to the solar equatorial plane, there is increased uncertainty in the backmapped locations of the magnetic footpoints for observers at 1 \(R_{\odot}\) when using any coronal model. This is because a difference of a few degrees from the HCS (i.e., 1 - 2 model grid cells) could connect the spacecraft to either side of the streamer belt. It is also common in this scenario for the spacecraft-measured polarity to fluctuate between inward and outward connectivity as the spacecraft never becomes sufficiently separated from the HCS. However, for this event, both models accurately derive the solar wind magnetic field polarity, which is measured by each of the five spacecraft in situ (Table 1 column (8)), giving us more confidence in our results.
Selecting the best ADAPT input map to drive both models is particularly challenging for this event because the spacecraft were widely separated in longitude, whereas this type of modeling produces the most accurate results for spacecraft connected to the most recently added photospheric field observations (i.e., in this case Earth and STEREO A). Additionally, Parker Solar Probe and BepiColombo observed this event on the solar far-side. Solar Orbiter was also located on the far-side; however, the spacecraft was connected to the near-side (i.e., the solar hemisphere visible from Earth) at 1 R\({}_{\odot}\). Lastly, there were two far-side active regions (ARs) that rotated onto the near-side on 19 and 22 April. Although they are not visible in the ADAPT map from 17 April, the locations of these ARs are labeled with an "\(\times\)" in Fig. 3. New far-side AR emergence is problematic for all coronal models (Wallace et al., 2022). To account for this evolution, we test various input maps from 17 to 23 April as input to both the PFSS model and WSA. We find that the connectivity for each spacecraft does not change drastically with any particular map in this date range. Therefore, we select a map from the time closest to the SEP event, 17
Figure 3: ADAPT-WSA derived instantaneous magnetic connectivity on 17 April 2021 at 16:00:00 UT for five of the spacecraft observing the SEP event, overlain onto the corresponding ADAPT-GONG map used to derive the coronal field, and the WSA-derived coronal holes (red/positive, blue/negative). The footpoint connectivity for each spacecraft is labeled by 12 colored points, one for each ADAPT-WSA realization, and the WSA-derived HCS is overplotted in yellow. The locations of two ARs that emerge on the solar far-side are labeled with an “\(\times\)”, yet are not incorporated into the ADAPT map until several days after this event. AR 12818 associated with a solar flare (discussed in Sect. 4) is labeled with an orange “\(\times\)”. Orange shading marks the portion of the Sun not observed by remote imagers on board STEREO A, Solar Orbiter, or spacecraft near Earth.
April 2021 at 16:00:00 UT. It is important to note that the two far-side ARs fall inside the visible hemispheres observed by Solar Orbiter and STEREO A during the time of the SEP event, making it possible to identify if any of these ARs are associated with a solar flare. One of the far-side ARs located at \(-19.09^{\circ}\) latitude, \(204.73^{\circ}\) Carrington longitude was associated with a solar flare (discussed in detail in Sect. 4). Additionally, we can be confident that none of these far-side ARs produced flares within the longitudinal sector in which no remote observations of the solar corona are available (i.e., from \(\sim\)45 - \(125^{\circ}\) Carrington longitude, shaded in orange in Fig. 3) comprising the location of Parker Solar Probe, situated at \(104^{\circ}\) Carrington longitude.
## 4 Remote-sensing observations of the solar corona
### Observations of the associated flare
The SEP event was associated with a solar flare occurring in the active region that was assigned the NOAA AR number 12818 when it rotated onto the Earth-facing hemisphere three days later. While the flare was clearly visible in the field of view of the STEREO A EUVI instrument, it was initially occulted as seen from Earth, but later phases of the eruption could be seen above the limb. Starting around 15:45 UT, the eruption of a flux rope was observed in EUV with SDO/AIA above the northeastern limb. Note that because we use observational assets at different locations, we have shifted all times pertaining to flare observations to UT at the Sun. From 16:03 UT on, the cusp-shaped top of a flaring arcade became visible in the 131 Å channel. At wavelengths corresponding to lower temperatures, flaring loops appeared only after 18:15 UT, consistent with a considerable occultation angle. The GOES soft X-ray flux started to increase at 16:15 UT and peaked at 17:10 UT at GOES class B9.7 (see top panel in Fig. 4).
In contrast to Earth-based assets, the whole flare was visible from Solar Orbiter, and observed in hard X-rays (HXR) with STIX. We show STIX count rates integrated over two energy bins in the bottom panel of Fig. 4. The counts are background-subtracted and normalized to the peak count rate in the two ranges. The thermal HXR emission at 4-10 keV (generated by the hot plasma) increased from 15:55 UT onward, peaked at 16:22 UT, and decayed to pre-event background levels only at around 19:00 UT, thus indicating a long-duration event. Again, these times refer to when the events happened on the Sun. At Solar Orbiter, they were observed 7 min later. Based on a statistical comparison of STIX and GOES/SXR X-ray fluxes for flares that were fully visible for both instruments, the true GOES class can be estimated at \(\sim\)C5\({}^{1}\). This is also shown by the fact that the GOES fluxes peak more than half an hour after the STIX thermal count rate, since GOES sees significant emission only when larger loops due to magnetic reconnection become filled by hot plasma later in the event.
Footnote 1: See the STIX website for a description of the method: [https://datacenter.stix.i4ds.net/wiki/index.php?title=GOES_Flux_vs_STIX_counts](https://datacenter.stix.i4ds.net/wiki/index.php?title=GOES_Flux_vs_STIX_counts). The discrepancy between the B9.7 obtained from actual GOES observations and the class estimate from STIX is mainly due to occultation of the majority of the hot flare plasma as seen from Earth.
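The shift of observation times to UT at the Sun used above is a simple light-travel-time correction. A minimal sketch (constants and function names are ours, not part of any instrument pipeline):

```python
AU_M = 1.495978707e11    # astronomical unit in meters
C_M_S = 2.99792458e8     # speed of light in m/s

def light_travel_time_min(r_au):
    """Light travel time from the Sun to an observer at r_au, in minutes."""
    return r_au * AU_M / C_M_S / 60.0

def to_sun_time_s(t_obs_s, r_au):
    """Shift an observation timestamp (seconds) to UT at the Sun by
    removing the Sun-to-observer light travel time."""
    return t_obs_s - r_au * AU_M / C_M_S
```

For Solar Orbiter at 0.84 au this gives about 7 min, matching the delay quoted in the text.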
Above 15 keV, the STIX light curves show the spiky behavior typical for the non-thermal HXR emission generated by accelerated electrons that precipitate into the chromosphere. At least 13 non-thermal spikes are identified. While the non-thermal emission phase is usually restricted to a few minutes in typical C-class flares (cf. Veronig et al., 2002), this flare shows non-thermal emission over 50 min. In particular, there are two major peaks in the late phase that, in contrast to the others, show emission above 25 keV, indicative of a comparatively harder spectrum. In Appendix A, we provide a full spectral analysis of the event using the STIX data.
HXR images can be reconstructed from pixelated STIX science data. Figure 5 shows the HXR sources for the eight HXR peaks that had the largest number of non-thermal counts overplotted on STEREO A EUVI 304 A images that have been rotated so that they correspond to the viewpoint of Solar Orbiter. In the EUVI frames, we mainly see the chromospheric flare ribbons, thus such a reprojection that assumes that all features are lying on the solar surface does not introduce significant projection artefacts. Red contours show the coronal thermal source (6-10 keV), and the blue contours show the chromospheric non-thermal footpoints (15-25 keV). All images are reconstructed with the Expectation Maximization algorithm (Massa et al., 2019). Normally, the precise source locations are provided by the STIX Aspect System (Warmuth et al., 2020). However, Solar Orbiter was at a heliocentric distance of 0.84 au, which is too far from the Sun to provide a reliable pointing solution. We therefore apply the average image displacement obtained from other events from the cruise phase where aspect information was available as implemented in the STIX imaging software. This method yields a mean position uncertainty of about \(\pm 10^{\prime\prime}\). The flare position (plotted in heliocentric Cartesian coordinates in Fig. 5) is at the heliographic co
Figure 4: X-ray observations of the associated flare. Top: Soft X-ray fluxes as recorded by GOES-16. Bottom: Normalized background-subtracted STIX count rates integrated over two different energy bands. Note the gradual evolution of the thermal emission at 4-10 keV (blue) as opposed to the multiple non-thermal spikes seen at 15-25 keV (green) and at 25-50 keV (red; multiplied by 0.3 for clarity). For both GOES and STIX, times have been shifted so that they refer to UT at the Sun.
ordinates of E111S18 (203\({}^{\circ}\) Carrington longitude). As seen from Earth, this corresponds to an occultation angle of 20\({}^{\circ}\).
The coronal thermal source undergoes little evolution throughout the flare. One might expect to observe a pair of non-thermal sources consistent with the footpoints of the magnetic loops containing the hot plasma (cf. Fletcher et al., 2011). However, most HXR peaks show only a single footpoint at the eastern edge of the thermal source. The issue here is that while the individual non-thermal peaks are all very clearly defined, the total number of counts above 15 keV per peak is quite low, on the order of 1 000-2 000 counts. This is marginal for imaging, particularly if more than one source is present. Nevertheless, we find that all non-thermal peaks originate from the same active region, and there is no evidence of a second remote source. While the presence of such a secondary source cannot be ruled out, it would have to be weaker than the main source by a factor of 5-10. We conclude that the footpoint brightness was very asymmetric in this event, with the eastern footpoint clearly dominat
\begin{table}
\begin{tabular}{l c c c c|c c c c} \hline (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) & (10) \\ & & & & & & \multicolumn{2}{c}{PFSS(\({}^{a}\))} & \multicolumn{2}{c}{ADAPT-WSA(\({}^{a}\))} \\ Spacecraft & r & Long.\({}^{(b)}\) & Lat.\({}^{(b)}\) & V\({}_{obs}\) & Long.\({}^{(b)}\) & Lat.\({}^{(b)}\) & Polarity & Long.\({}^{(b,c)}\) & Lat.\({}^{(b,c)}\) \\ & (au) & (\({}^{\circ}\)) & (\({}^{\circ}\)) & (km s\({}^{-1}\)) & (\({}^{\circ}\)) & (\({}^{\circ}\)) & (O, M) & (\({}^{\circ}\)) & (\({}^{\circ}\)) \\ \hline Flare & — & 203 & \(-\)17 & — & — & — & — & — & — \\ & & & & & & & & \\ & & & & & & & & \\ & & & & & & & & \\ \end{tabular}
\end{table}
Table 1: Magnetic connectivity between spacecraft and the Sun. Columns (1)–(4) present the respective observer and its location in Carrington coordinates (with the first row providing the flare location). Column (5) lists the measured solar wind speed, (6)–(7) and (9)–(10) provide the backmapped magnetic footpoints of the observer at the solar surface using the simple PFSS and the ADAPT-WSA models, respectively. Column (8) presents the magnetic field polarity observed (O) in situ and modeled (M) by ADAPT-WSA and PFSS.
Figure 5: Flare evolution as seen in a series of STIX HXR images overlaid on STEREO A EUVI 304 Å images that have been rotated so that they correspond to the viewpoint of Solar Orbiter. Depicted are the coronal thermal source (red contours) and the chromospheric non-thermal footpoints (blue contours) reconstructed with the Expectation Maximization algorithm. The integration times (UT at the Sun) correspond to the eight non-thermal HXR peaks with the highest number of counts above 15 keV. Additionally, the observation times of the EUVI images are shown. For reference, a longitude-latitude grid (in Stonyhurst coordinates) with a spacing of 5\({}^{\circ}\) is overplotted.
ing. This is consistent with the flare ribbons seen at 304 Å, where the southwestern ribbon is likewise the dominant one. The different non-thermal peaks are not associated with changing footpoint locations.
### CME observations
A CME erupted from the same active region as the associated flare, NOAA AR 12818, located in the southern solar hemisphere at heliographic coordinates E111S18 (203\({}^{\circ}\) Carrington longitude) on the day of the event. The active region entered Earth's field of view on 20 April. The evolution of the CME was observed \(\sim\)30\({}^{\circ}\) from the eastern limb towards the central meridian as seen from STEREO A. The bottom panels of Fig. 6 show EUV images taken by SDO/AIA (left) and STEREO A/EUVI (right) at \(\sim\)16:10 UT (all times refer to observation times at the spacecraft). At this time, we observe the first clear indication of the eruption, when the CME exhibits prominent signatures of over-expansion (e.g. Patsourakos & Vourlidas, 2009), evidenced by the bubble-like appearance in EUV images. At the same time, the flare ribbons are activated along an arched path. Between 16:12 UT and 16:25 UT, the CME continued to expand as it reached the edge of the field-of-view of the EUV instruments. The left panel of Fig. 7 shows the time (\(\sim\)16:50 UT) at which the CME was clearly visible in coronagraph imagery: COR1 (second column) and LASCO/C2 data (first column). As shown in the images, the CME morphology in white light images at these heights is consistent with classic flux-rope characteristics, namely a bright outer rim followed by a cavity (e.g. Vourlidas et al., 2013).
The angular separation between STEREO A and Earth was \(\sim\)53\({}^{\circ}\), which still enables a reliable 3D reconstruction of the CME (e.g. Balmaceda et al., 2018; Verbeke et al., 2022). For this purpose, we used the graduated cylindrical shell (GCS; Thernisien et al., 2006) model to reproduce the CME appearance by fitting pairs of EUV (at distances below 1.5 \(R_{\odot}\)) and white-light (from \(\sim\)2.5 to 22 \(R_{\odot}\)) images. The model consists of a croissant-like structure fully described by six free parameters: three for location and orientation (latitude and longitude of the CME leading-edge, and tilt or inclination of the main axis of the CME with respect to the solar equator), and three for the geometry (height; aspect ratio, which sets the rate of expansion versus the height of the CME; and angular separation of the legs or half-angle). The sensitivity (deviations) in the parameters of the GCS analysis is given in Table 2 of Thernisien et al. (2009). It is worth noting that these parameters are sensitive to image quality and human interpretation (Verbeke et al., 2022). The routine used for the reconstruction is _rtcloudwidet.pro_, available as part of the _scraytrace_ package in the SolarSoft IDL library2.
Footnote 2: [http://www.lmsal.com/solarsoft/](http://www.lmsal.com/solarsoft/)
The bottom panels of Fig. 6 and Fig. 7 show the GCS fit analysis, where the green mesh represents the flux rope structure. The 3D reconstruction shows that the CME follows a non-radial path towards the solar equator in the early evolution, with the latitude varying from \(-\)14\({}^{\circ}\) to \(-\)9\({}^{\circ}\) from 16:12 to 17:23 UT. The longitude and the tilt angle, meanwhile, do not show deviations, staying at fixed values of \(-\)116\({}^{\circ}\) and \(-\)70\({}^{\circ}\), respectively. The GCS parameters were chosen to best describe the portion of the CME oriented towards Solar Orbiter, as the croissant-like shape used for the fitting could not fully represent the CME due to its non-radial propagation and curved axis. The latter term was introduced by Rodriguez-Garcia et al. (2022) to refer to flux ropes that may deviate from the nominal semi-circular (croissant-like) shape and have instead an undulating axis. The CME speed at the leading edge estimated from the linear fit to the height-time measurements is 880 km s\({}^{-1}\). The width of the CME is estimated following Dumbovic et al. (2019), where the semi-angular extent in the equatorial plane is expressed by \(R_{\rm maj}-(R_{\rm maj}-R_{\rm min})\times|\mathrm{tilt}|/90\). Then, the total angular extent of the CME is 46\({}^{\circ}\). The value of \(R_{\rm maj}\) (face-on CME half-width) is calculated by adding \(R_{\rm min}\) (edge-on CME half-width) to the half-angle, and \(R_{\rm min}\) is calculated as \(\arcsin(\mathrm{aspect\ ratio})\). The CME width deviation was derived from the mean half-angle error, estimated by Thernisien et al. (2009) as \(+\)13\({}^{\circ}\)/\(-\)7\({}^{\circ}\). Thus, at the latest time of the 3D reconstruction at 19:23 UT, corresponding to a CME height of 15.5 \(R_{\odot}\), the narrow CME (\(\sim\)46\({}^{\circ}\)) is propagating in the direction E116S09 with a moderate speed (\(\sim\)880 km s\({}^{-1}\)).
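The width arithmetic above can be collected into a short helper. This is an illustrative sketch of the quoted Dumbovic et al. (2019) expression, not code from the GCS fitting routine; the function name is ours, and any input values other than the fitted tilt of \(-\)70\({}^{\circ}\) would be hypothetical.

```python
import math

def gcs_equatorial_width(half_angle_deg, aspect_ratio, tilt_deg):
    """Total angular extent of a GCS-fitted CME in the equatorial plane,
    using the quoted semi-extent expression
    R_maj - (R_maj - R_min) * |tilt| / 90 (all angles in degrees)."""
    r_min = math.degrees(math.asin(aspect_ratio))  # edge-on half-width
    r_maj = half_angle_deg + r_min                 # face-on half-width
    semi_extent = r_maj - (r_maj - r_min) * abs(tilt_deg) / 90.0
    return 2.0 * semi_extent
```

The limiting cases behave as expected: a tilt of 0\({}^{\circ}\) returns the full face-on width \(2R_{\rm maj}\), and a tilt of 90\({}^{\circ}\) returns the edge-on width \(2R_{\rm min}\).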
\begin{table}
\begin{tabular}{l r r r} \hline Spacecraft & Lon. sep. & Lat. sep. & Total sep. \\ / Location & (\({}^{\circ}\)) & (\({}^{\circ}\)) & (\({}^{\circ}\)) \\ \hline BepiColombo & 1.1 & 79.2 & 79.2 \\ PSP & -71.9 & 82.3 & 98.1 \\ Solar Orbiter & 64.9 & 4.8 & 61.0 \\ STA & 126.3 & 15.5 & 108.7 \\ L1 & 144.4 & 5.5 & 127.3 \\ \hline \end{tabular}
\end{table}
Table 2: Separation angles between location of the flare and spacecraft magnetic footpoints based on ADAPT-WSA values
Figure 6: EUV observations by SDO/AIA (_left_) and STEREO A/EUVI (_right_) at the same instant of time. The green mesh corresponding to the 3D reconstruction of the CME is overlaid on the base-difference images shown in the _upper panels_.
### EUV wave observations
Figure 8 together with the accompanying movie shows an overview of the EUV wave evolution in STEREO A/EUVI 195 Å running-difference images created with a lag of 150 s. The prominent signatures of the EUV wave, which exhibits a quasi-circular propagation away from the eruptive center over the solar disk, are already clearly seen around 16:10 UT and can be followed for about 40 min in STEREO A. As follows from the derivation of the EUV wave kinematics and perturbation characteristics in Appendix C, the EUV wave on the solar disk extends to about 680 Mm from the source region with a mean velocity of 223-327 km s\({}^{-1}\). Above the solar limb, in the northern direction, the EUV wave can be followed to a distance of about 740 Mm, propagating with speeds of 260-450 km s\({}^{-1}\) (for heights increasing from 1.05 to 1.15 \(R_{\odot}\)). At the same time, in the southern direction, the wave is seen only to about 350 Mm, propagating with speeds of 220-300 km s\({}^{-1}\). As seen from the movie accompanying Fig. 8, the EUV wave reaches the backmapped magnetic footpoints of BepiColombo (yellow) at around 16:55 UT (point 3) and at around 17:00 UT (points 1 and 2). Points 1, 2, and 3 correspond to the spacecraft's magnetic field footpoints obtained using ADAPT-WSA (point 1), the PFSS model at 1 R\({}_{\odot}\) (point 2), or PFSS at a height of 100 Mm above the photosphere (point 3), respectively. The footpoints of other spacecraft, which lie on the visible hemisphere as seen by STEREO A, are displayed in other colors as described in the figure legend.
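Mean wave speeds like those quoted above come from a linear fit of front distance against time. The sketch below uses synthetic, illustrative front positions (NOT the measured points from Appendix C) and a plain least-squares slope:

```python
def fit_speed_km_s(t_s, d_mm):
    """Least-squares slope of front distance (Mm) vs. time (s),
    converted to km/s (1 Mm = 1000 km)."""
    n = len(t_s)
    t_mean = sum(t_s) / n
    d_mean = sum(d_mm) / n
    num = sum((t - t_mean) * (d - d_mean) for t, d in zip(t_s, d_mm))
    den = sum((t - t_mean) ** 2 for t in t_s)
    return num / den * 1000.0

# Hypothetical wave-front measurements: seconds after first detection
# and on-disk distance of the front in Mm.
t_s = [0, 300, 600, 900, 1200, 1500]
d_mm = [60, 150, 230, 310, 400, 480]
speed = fit_speed_km_s(t_s, d_mm)  # mean propagation speed in km/s
```

For these synthetic points the fitted speed is close to 280 km s\({}^{-1}\), i.e. within the on-disk range quoted in the text.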
### CME-driven shock observations
In white-light images, the signatures of a shock wave formed in front of the expanding flux rope are faint. We use calibrated, excess-mass images (i.e. with the pre-event image subtracted) and display them in Fig. 7. By 16:30 UT, when the CME front is visible in the COR1 FOV, the EUV wave is still visible on the surface. The CME exhibits a diffuse front ahead of the brighter rim, more clearly seen at the north flank in both COR1 and LASCO-C2 images (marked with white arrows in the left panels of Fig. 7). This typical "two-front" morphology is generally interpreted as evidence of a CME-driven shock in white-light images (Ontiveros & Vourlidas
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Spacecraft & r & V\({}_{obs}\) & Estimated & Shock \\ & (au) & (km s\({}^{-1}\)) & Intersection & Height\({}^{\rm a}\) \\ & & & [UT] & [R\({}_{\odot}\)] \\ \hline BepiColombo & 0.63 & 400 & 16:30\(\pm\)3 min & 1.58 \\ Parker Solar Probe & 0.42 & 328 & 17:19\(\pm\)3 min & 1.45 \\ Solar Orbiter & 0.84 & 375 & 16:55\(\pm\)3 min & 1.07 \\ STEREO A & 0.97 & 385 & 17:24\(\pm\)3 min & 1.04 \\ L1 & 1.00 & 601 & 17:30\(\pm\)3 min & 1.01 \\ \hline \end{tabular} 1
\end{table}
Table 3: First intersection between the coronal shock and magnetic field lines connecting to the spacecraft as determined with the ADAPT-WSA model (Sect. 3). Times refer to observation times at 1 au.
Figure 7: Base-difference images of the coronagraph observations by SOHO/LASCO/C2 and STEREO A/COR1 (_left_) and LASCO/C3 and STEREO A/COR2 (_right_) at different times. The green (red) mesh corresponding to the 3D reconstruction of the CME (CME-driven shock) is shown in the _lower panels_. The white arrows indicate the signatures of the CME-driven shock.
2009; Vourlidas et al. 2013). The EUV wave is visible on the disk until 16:50 UT. By this time, the CME reaches the edge of the COR1 FOV. At larger distances, namely in the COR2 FOV, a diffuse arch-shaped feature (white arrow in the right panel of Fig. 7) is also seen propagating in the northwest quadrant. This feature is best visible between 18:23 UT and 20:23 UT in the COR2 FOV and may result from the compression of a relatively weak shock wave against the underlying coronal structures. We use these features to estimate the angular extension of the shock. For this, a spherical surface (Kwon & Vourlidas 2017) is used to model the 3D appearance of the shock (represented by the red mesh in Fig. 7).
From the 3D reconstruction, we estimate that the shock reaches a speed of \(\sim\)1500 km s\({}^{-1}\) below 5 R\({}_{\odot}\) and is propagating in the direction between Solar Orbiter and BepiColombo, consistent with the direction estimated from the CME 3D modeling in Sect. 4.2. Following Kwon & Vourlidas (2017), shown in their figure 2, we determine the angular width of the shock to be \(\sim\)180\({}^{\circ}\) at 19:23 UT, corresponding to a height of the shock nose of \(\sim\)16.3 R\({}_{\odot}\). Table 3 shows the timing of the first intersection between the coronal shock reconstruction and the magnetic field lines obtained with the ADAPT-WSA model connecting to the different spacecraft analyzed in this study. All times refer to observation times at 1 au.
### Radio observations
The radio emission associated with the eruption on 17 April 2021 was observed by ground-based and space-borne instruments and includes both type II and type III radio bursts. Type II (TII hereon) bursts are related to the acceleration of energetic electrons at shock waves (Krasnoselskikh et al. 1985; Benz & Thejappa 1988, and references therein), while type III (TIII hereon) radio bursts are signatures of fast electron beams propagating via open (or quasi-open) magnetic field lines from the corona to interplanetary space (Zheleznyakov 1965; Jebaraj et al. 2023b). In Fig. 9, we present a dynamic radio spectrum using measurements from the ground-based
Figure 8: EUV wave overview as observed in STEREO A/EUVI 195 Å running-difference images from 16:15 to 16:40 UT. We follow the EUV wave in four angular sectors 1–4. A movie accompanying the figure is available online (movie1). Markers show magnetic footpoints derived for STEREO A (red), BepiColombo (yellow), Solar Orbiter (blue), and ACE (green) spacecraft. The magnetic footpoints are determined using a combination of ballistic backmapping in the heliosphere and backmapping below the source surface using ADAPT-WSA to 1 Rs (points 1), a standard PFSS model to 1 \(R_{\odot}\) (points 2), and to a height of 100 Mm above the photosphere (points 3). As seen in the accompanying video, the EUV wave reaches the BepiColombo footpoints at around 16:55 UT (point 3) and at around 17:00 UT (points 1 and 2). Times refer to the observation time at STEREO A.
e-Callisto instrument located in Landschlacht, Switzerland, providing observations in the 80 MHz to 10 MHz range. This spectrum shows a poorly observed decametric TII radio burst starting at 16:26 UT and exhibiting both fundamental and second harmonic emission lanes. The harmonic emission is brighter than the fundamental due to the large angle between the source and Earth (the directivity of harmonic emission has a wider angle than that of the fundamental; Zheleznyakov and Zaitsev 1970; Tkachenko et al. 2021).
Figure 10 presents a combined dynamic radio spectrum of all available hecto-kilometer observations from all the observing spacecraft, namely, Parker Solar Probe, Solar Orbiter, STEREO A, and Wind. The spectrum shows a number of different radio emissions including groups of TIII bursts and distinctly patchy TII emission. A list of the starting times of each radio burst as observed by different spacecraft is provided in Table 4. An interesting aspect is that most of the TIII radio burst groups were best or exclusively observed by PSP/FIELDS/RFS partly due to its enhanced resolution and sensitivity (Pulupa et al. 2017). But the radial distance of the spacecraft from the Sun and also the directivity of the emission at the source (Thejappa et al. 2012) also play a key role. Jebaraj et al. (2020) have suggested that the intensity of a radio burst is higher in the direction of the source propagation. Therefore, the intensity of the radio emission at different observers depends on both the position of the observing spacecraft and the intrinsic directivity of the radio source. This explains the emission intensity at Parker Solar Probe, which was the closest spacecraft to the Sun during the flare-CME event. As we show in Fig. 1 (left), the spacecraft were located at different longitudes and radial distances (Table 1). In the following, we use the Parker Solar Probe spectra (Fig. 10 panel 1) to describe the spectral morphology of the TIII and TII bursts. The identification of the different TIII and TII bursts in Parker Solar Probe observations provide the foundation for the multi-spacecraft directivity analysis presented in Appendix B, where we combine the identification of the bursts with the cross-calibrated data from other spacecraft to locate the source in interplanetary space (Fig. B.1).
The different groups of TIII radio bursts and multiple components of the TII bursts exhibit interesting characteristics as far as their spectral morphology is concerned. The first and third TIII groups (TIII(1) and TIII(3) hereon) are rather faint at the short-hectometer wavelengths and appear to be intense across all spacecraft observations. The second and fourth type III groups (TIII(2) and TIII(4) hereon) were observed almost only by Parker Solar Probe and consisted of a large number of individual TIII bursts that were better distinguishable in the short-hectometer wavelengths. This indicates that during the time when TIII(2) and TIII(4) were observed, there were multiple smaller episodes of electron acceleration and subsequent release into the open magnetic field lines in the direction of Parker Solar Probe.
This is further corroborated by the polarization measurements made by PSP/FIELDS/RFS, which indicate that the energetic electron beams (the sources of type III bursts) were strongly directed towards Parker Solar Probe. Appendix B discusses the details of the polarization measurements extensively. The results indicate that TIII(2) and TIII(4) originated from a region of negative magnetic field polarity. The relatively high degree of polarization (Fig. 10 panel 2) of TIII(4) at its origin also hints at a region with high magnetic field strength (e.g., ramp of a quasi-perpendicular shock wave). As for the magnetic connectivity, Parker Solar Probe also observed Langmuir waves (see Ginzburg and Zhelezniakov 1958; Melrose 1985) on multiple occasions, close to the local plasma frequency. This indicates that the electron beams generating TIII(2) and TIII(4) were directly sampled by Parker Solar Probe.
The TII bursts associated with the event are distinctly patchy and complex in the hectometer wavelengths (see Fig. 10). It is likely that the different TII components are associated with the same shock wave but at different regions. All TII bursts also appear bursty in terms of intensity variations (marked in Fig. 10), suggesting an on-and-off emission process at the shock front (Mann and Classen 1995). On-and-off TII bursts are believed to be emitted from locations on the shock wave where the upstream plasma conditions induce rapid changes to its obliquity and other characteristics (e.g., Schmidt and Cairns 2014; Jebaraj et al. 2021; Kouloumvakos et al. 2021). Due to their patchy nature, it is somewhat difficult to distinguish between them, but we identify two main TII radio components (TII(1) and TII(2), which are marked in Fig. 10) observed in the short hectometer wavelengths (16-13 MHz) together with TIII(1) and TIII(3), and TIII(4), respectively. Overall, it seems that these patchy TII bursts were observed from the start of the event and the beginning of TIII(1) and continued even after TIII(4).
Furthermore, we note another interesting temporal and spectral phenomenon observed together with TIII(4), namely, the presence of TII herringbone-like features (TII(HB) hereon). The observation of such a feature may either indicate interaction between the shock wave and TIII(4), or that some of the electron beams generating TIII(4) may just be herringbones accelerated at the near-perpendicular shock front. Herringbone features with no clear backbone emission are often clear indicators of shock fronts with near-perpendicular geometry (\(\theta_{Bn}=87^{\circ}-89.9^{\circ}\)), which are able to accelerate electrons along either side of the magnetic field lines interacting with said shock front (Mann and Klassen 2005). We also note that observations beyond TIII(4) of patchy TII bursts may be associated with both TII(1) and TII(2).
Figure 9: Decametric type II radio burst observed by the Swiss-Landschlacht e-Callisto receiver. Fundamental and second harmonic lanes are marked by F and H, respectively.
A similar mechanism may also contribute to TIII(2), which was also observed uniquely by Parker Solar Probe. The polarization analysis of TIII(2) presented in Appendix B suggests that, had there been a herringbone-like feature at the origin of these type III bursts, it would have been observed in the decameter wavelengths. However, due to the lack of meter-decameter observations, it is not possible to make such a conclusion.
Solar Orbiter was the second-closest radio observer radially and also the second-closest spacecraft to the flaring active region in terms of the magnetic connectivity (see Fig. 1 and Fig. 2). The spacecraft observed mainly TIII(1) and TIII(3) and also TIII(4) at lower frequencies. Due to the limited survey-mode observations during the initial phase of the mission (Maksimovic et al., 2020), the low resolution HFR observations from the Solar Orbiter/RPW instrument make it difficult to recognize the strongly patchy type II burst. However, the likely intensity variations (in the frequency range 16-5 MHz) of the type II bursts can be seen in Fig. 10 panel 3.
At the time of the event, STEREO A was located almost diametrically opposite (\(\sim\)180\({}^{\circ}\)) from Parker Solar Probe. It observed TIII(1) and TIII(3) well, and also partially observed TIII(2). TIII(4) was observed faintly at lower frequencies at this location. This indicates that for STEREO A the source regions of TIII(2) and TIII(4) may have been partially and fully occulted, respectively. A number of type II patches corresponding to the ones observed by Parker Solar Probe were also observed. TII(1) and TII(2), which are indicated by the red and yellow rectangles in Fig. 10, were observed to be nearly as intense as in Parker Solar Probe. However, TII(HB) was considerably weaker (marked by the orange rectangle). Such variations in intensity may indicate that the source directivity was in the direction of the spacecraft, which observed the brighter emission. In this case, the faintness of TII(HB) in STEREO A observations further supports that the source of the herringbones was likely located close to the line-of-sight of Parker Solar Probe and at the periphery of STEREO A.
Wind was the furthest spacecraft from the flare location and therefore only observed the low frequency parts of TIII(1) and TIII(3). Most other bursts were either too faint or not observed at all by the spacecraft. An interesting feature here is that Wind observed very faint signatures of both TII(1) and TII(2) as indicated by the rectangles in Fig. 10. Their fluxes however were an order of magnitude smaller than the ones observed by STEREO A. Considering that TII(1) was observed by ground based instrumentation, it is likely that the source of the emission was visible from Earth and therefore for Wind as well.
The multi-vantage point observations also introduce the phenomena of time delay (light travel time to spacecraft). By combining the time delay and the intensity variations between different spacecraft, it is possible to locate the spatial position of the source at a given frequency. We present a detailed analysis of the radio source propagation in Appendix B. Figure B.1 shows the radio source locations of TIII(1) and TIII(3) estimated using a directivity model. The results of the analysis suggest that TIII(1) propagated in the longitude \(-121.0^{\circ}\pm 3.2^{\circ}\) (slightly east of the flare longitude), and the electron beam generating TIII(3) propagated in the longitude \(-98.3^{\circ}\pm 4.1^{\circ}\) (slightly west of the flare longitude). This fits the by-eye analysis of the radio bursts based on their visibility to each observer.
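The light-travel-time argument above can be made concrete with a small sketch: given assumed heliocentric positions of a radio source and the observers, the expected arrival delay at each spacecraft follows directly from geometry. All positions and names below are illustrative and are not the fitted source locations of the directivity model in Appendix B.

```python
import numpy as np

AU_KM = 1.495978707e8   # astronomical unit in km
C_KM_S = 2.99792458e5   # speed of light in km/s

def light_travel_delay(src_au, observers_au):
    """Light travel time (s) from a radio source to each observer.

    Positions are heliocentric Cartesian coordinates in au."""
    src = np.asarray(src_au, dtype=float)
    obs = np.atleast_2d(np.asarray(observers_au, dtype=float))
    dist_km = np.linalg.norm(obs - src, axis=1) * AU_KM
    return dist_km / C_KM_S

# Illustrative source at 0.1 au and two hypothetical observer positions:
delays = light_travel_delay([0.1, 0.0, 0.0],
                            [[1.0, 0.0, 0.0],    # an observer near 1 au
                             [0.0, 0.42, 0.0]])  # an observer near 0.42 au
```

Differencing such delays between spacecraft pairs, combined with the measured intensity ratios, is the basic ingredient of the source-localization analysis referred to above.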
## 5 Interplanetary context and SEP observations
The heliospheric conditions through which particles and CME-driven shocks propagate at the time of their release can significantly affect the SEP timing and intensity profiles (e.g. Laitinen et al., 2013; Dalla et al., 2020; Lario et al., 2022). We use both multi-point solar wind and IMF observations and the WSA-ENLIL+Cone model (Odstrcil et al., 2004) to provide a comprehensive understanding of the geometry, not only of the interplanetary structures and their possible influence on the propagation of the SEPs, but also of the shocks and their role in forming the observed intensity-time profiles. In this section, we first describe the ENLIL simulation and then discuss in-situ plasma, magnetic field, and multi-spacecraft SEP observations.
### The state of the interplanetary medium as derived with the ENLIL model
ENLIL is a global 3D MHD model3 that provides a time-dependent background characterization of the heliosphere outside 21.5 R\({}_{\odot}\). ENLIL uses time-dependent magnetograms as a background, into which spheroidal-shaped high-pressure structures without any internal magnetic field can be inserted to mimic observed CME-associated solar wind disturbances. ENLIL-modelled CMEs have an artificially higher thermal pressure to compensate for the lack of a strong magnetic field (Odstrcil et al., 2004, and references therein). To improve the characterization of the heliosphere, multi-point coronagraph observations are used to infer CME parameters, using the GCS model described in Sect. 4.2. The inner boundary condition is given by the WSA V5.2 model, using inputs from the standard quick-reduce zero-point corrected magnetograms from GONG (GONGZ), available on the National Solar Observatory website4. In this case, the GONGZ magnetograms fit the in-situ solar wind speed and magnetic field polarity better (not shown). The reliability of the CME arrival predictions depends strongly on the initial CME input parameters, such as speed, direction, and width (Lee et al., 2013; Mays et al., 2015; Kay et al., 2020), but also on the errors that can arise in the ambient model parameters and on the accuracy of the solar wind background derived from the magnetograms (Lee et al., 2013). Based on Wold et al. (2018), the mean absolute arrival-time prediction error is 10.4 \(\pm\) 0.9 hours, with a tendency to an early prediction of \(-4.0\) hours.
Footnote 3: https://ccmc.gsfc.nasa.gov/models/modelinfo.php?model=ENLIL%20with%20Cone%20Model
Footnote 4: ftp://gong2.nso.edu/QR/zqs/
The magnetic connectivity at the onset time of the SEP event is relevant to the understanding of the SEP observations, and considering the ENLIL-modelled varying solar wind conditions to calculate the IMF lines is an alternative to using the nominal Parker spirals. The preconditioning of the heliosphere and the interaction of the IP structures with the ambient solar wind that might be present at the SEP onset time can actively influence this connectivity (Masson et al., 2012; Palmerio et al., 2021; Lario et al., 2022). Therefore, we choose an ENLIL simulation time from 10 to 24 April 2021 (i.e. from seven days before to seven days after the SEP event onset). This interval encompasses possible previous CMEs as well as subsequent CMEs propagating through the structured solar wind streams up to a distance of 2.1 au. All these structures may influence the propagation of particles and CME-driven shocks arriving at the different spacecraft. For this purpose, the GCS 3D reconstruction process presented in Sect. 4.2 is also used for the other nine relevant CMEs erupting in the time range of 10-24 April. The CME details, model set-up parameters, and simulation results are available on the Community Coordinated Modeling Center (CCMC) website5.
Footnote 5: https://ccmc.gsfc.nasa.gov/database_SH/Laura_Rodriguez-Garcia_04132_SH_1.php
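For comparison with the ENLIL field-line tracing, the nominal Parker-spiral footpoint shift mentioned above can be estimated with the standard ballistic approximation. This is a sketch under the usual textbook assumptions (constant solar wind speed and sidereal rotation period), not the model actually used here.

```python
import math

AU_KM = 1.495978707e8                          # astronomical unit in km
OMEGA_SUN = 2.0 * math.pi / (25.38 * 86400.0)  # sidereal solar rotation, rad/s

def parker_footpoint_shift(r_au, v_sw_km_s):
    """Westward longitudinal shift (deg) of the nominal Parker-spiral
    footpoint relative to the observer's heliographic longitude."""
    return math.degrees(OMEGA_SUN * r_au * AU_KM / v_sw_km_s)

# For a 1 au observer in a 400 km/s wind this gives the familiar ~60 deg shift.
shift_earth = parker_footpoint_shift(1.0, 400.0)
```

The shift grows with heliocentric distance and shrinks with wind speed, which is why observers at very different radii (e.g., Parker Solar Probe vs. Earth) can share nominal footpoints despite large longitudinal separations.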
The left panel of Fig. 11 shows a snapshot of the solar wind radial speed in the ENLIL simulation around the SEP onset time on 17 April 2021 at 16:00 UT. The black contours track the ICME ejecta. They are manifested in the simulation as coherent and outward propagating high-density regions. The pattern of slower (\(\sim\)300 km s\({}^{-1}\)) and slightly faster (\(\sim\)500 km s\({}^{-1}\)) solar wind streams is visible in the plot. The black and white dashed lines represent the IMF lines connecting the Sun with the various observer positions. The simulation shows several transient and corotating structures present near Solar Orbiter, Earth, STEREO A, and Mars at the time of the onset of the particles that might modify the magnetic connectivity and SEP propagation conditions. There is a relatively small ICME reaching Solar Orbiter during the ongoing SEP event. According to ENLIL, this ICME does not extend to any other investigated spacecraft. Ahead of this ICME there is a clearly wider eruption covering about 140\({}^{\circ}\) in longitude. Its western edge encloses STEREO A at the time of the SEP injection from the Sun and it is between the Sun and Mars. None of the CMEs inserted into ENLIL impacts Earth, but the leading edge of a stream interaction region (SIR) is reaching the planet at the time of the onset of the particle intensity increase seen at Earth.

Figure 10: Radio spectrograms from all available space-borne observatories. Panels 1 & 2 show Stokes I and the net polarization (Stokes V/I) from Parker Solar Probe. Panels 3 – 5 show the Stokes I measurements from Solar Orbiter, STEREO A, and Wind, respectively. The different bursts are indicated in panel 1. The TII bursts are marked in other panels by rectangular boxes of red (TII(1)), yellow (TII(2)), and orange (TII(HB)).
The ENLIL simulation also shows that at the time of the initial SEP injection from the Sun (left panel of Fig. 11) the IP medium is relatively undisturbed between the Sun and BepiColombo as well as Parker Solar Probe. We note that the wide ICME discussed previously crossed BepiColombo and Solar Orbiter, but this was before the SEPs were injected at the Sun. Nevertheless, this ICME may still have an effect on the propagation conditions of SEPs. The simulated status of the heliosphere around the SEP onset time agrees overall with the in-situ plasma and magnetic field measurements as discussed further below. The right panel of Fig. 11 shows the heliosphere two days later, on 19 April 2021 at 21:00 UT. The ICME that was associated to the SEP event has then reached BepiColombo and Solar Orbiter. The simulation suggests that the ICME nose propagates between these two spacecraft and both of them cross the structure near the flanks.
The five bottom panels of Figs. 12, 13, and 14 present the in-situ plasma and magnetic field data over-plotted with the pink line showing the result of the ENLIL simulation from 17 April to mid 23 April. The whole set of panels in these figures present, from top to bottom, energetic electron intensities at different energies (1), proton/ion intensities at different energies (2), the magnetic field magnitude (3), the magnetic field latitudinal (4) and azimuthal (5) angles in spacecraft-centered radial-tangential-normal (RTN) coordinates, namely \(\theta_{\rm B\_RTN}\) and \(\phi_{\rm B\_RTN}\), the solar wind proton speed (6), and the solar wind proton density (7). As specified in the following section, ENLIL follows the general trend of the measured solar wind speed at the locations of Solar Orbiter and Mars, which were only separated by 9\({}^{\circ}\) in longitude during the SEP event, while at the remaining locations there are some differences with in-situ measurements. Although ENLIL reproduced the overall features of the high-speed streams present in the heliosphere during the period of study, the differences between the modeled and measured solar wind profiles could be explained by complex coronal holes, which render the comparison between measurements and ENLIL results at Earth difficult and prevent accurately resolving the glancing encounter of the SIR at STEREO A, as discussed below.
ENLIL successfully predicts the arrival of the several ICMEs observed in situ within the uncertainty of the model, as shown in the increase of the speed, density, or magnetic field in the pink profiles in Figs. 12, 13, and 14. Due to the absence of the internal magnetic field in the simulated CMEs, the magnetic field magnitude increase is, however, lower than what was measured in situ. In particular, based on ENLIL simulations and in-situ measurements discussed in Sect. 5.2, the ICME related to the SEP event is intercepted by BepiColombo and Solar Orbiter, while Mars might be only observing the associated IP shock, not the ICME ejecta. We relate the better simulation of the arrival time of this ICME at Solar Orbiter location in comparison with BepiColombo to the fact that we chose as ENLIL input the CME parameters which better reproduced the portion of the CME oriented towards Solar Orbiter, as discussed in Sect. 4.2. The minimum longitudinal extent of the ICME related to the SEP event is \(\sim\)45\({}^{\circ}\), as shown in the right panel of Fig. 11. This value is in agreement with the angular extent of the CME along the equatorial plane (\(\sim\)46\({}^{\circ}\)) estimated from the GCS reconstruction presented in Sect. 4.2.
### Multi-spacecraft in-situ plasma, magnetic field, and SEP observations in context with the ENLIL simulation
As discussed in Sect. 5.1, during the period of study there are several IP structures impacting the spacecraft under consideration, which may in turn affect the SEP particle profiles. In the following, we discuss the energetic particle observations and their relation with the interplanetary context.
#### 5.2.1 BepiColombo
The two top panels of Fig. 12 (left) show the SEP event observed by BepiColombo over a broad energy range by the MPO/SIXS, MPO/BERM, and Mio/SPM instruments, with the time of the flare onset marked by the arrows at the top. Panel (1) shows the impulsive energetic electron event that reaches energies of at least 2 MeV. The proton intensity-time profile (panel 2) shows a more gradual increase. The Mio/SPM observations show that the event was observed even at proton energies \(>\) 200 MeV. There is no plasma information available, but at the time when particle intensities started to increase, the solar wind speed given by ENLIL is \(\sim\)400 km s\({}^{-1}\), as shown by the pink line in panel (6).
Commonly, the in-situ identification of the passage of ICMEs is based on a set of signatures typically observed in magnetic field and plasma data as well as some other proxies, such as bi-directional suprathermal electron (BDE) profiles (e.g., Zurbuchen & Richardson, 2006; Kilpua et al., 2017). BepiColombo lacked plasma data, but the in-situ MPO-MAG magnetic field observations in panels (3)-(5) do not show any evidence of typical ICME signatures (e.g., enhanced field, low field variance, coherent field rotation) during the onset and rising phase of the SEP event. This agrees with the previously discussed ENLIL simulation results and confirms that there was no large-scale solar wind structure at BepiColombo that could have directly influenced the SEP time profiles.
The increase in the magnitude of the magnetic field observed by BepiColombo on 19 April marks the arrival of the ICME related to the SEP event. The IP shock arrives at 11:40 UT (vertical solid line), while ENLIL simulates the ICME arrival time \(\sim\)6 hours earlier. Unfortunately, SIXS has a data gap at that time so that a potential low-energy particle intensity response to the shock passage (i.e., an energetic storm particle event) could not be studied in detail. However, BERM fluxes at \(\sim\)1.5-5.9 MeV do not show a significant increase at the shock or a response to the passage of the ejecta, which follows the shock (gray shaded area).
The leading edge of the ejecta was observed at 13:57 UT on 19 April identified by a change in the magnetic field polarity along with the presence of coherent and organized magnetic field. Specifically, we observe a smooth and monotonic change of the magnetic field latitudinal and azimuthal angles shown in panels (4) and (5) that lasted until 20 April 00:04 UT. No other structures are observed until the end of the period shown in Fig. 12.
#### 5.2.2 Parker Solar Probe
Panel (6) of Fig. 12 (right) shows that at the time of the SEP event onset the solar wind speed at Parker Solar Probe is \(\sim\)320 km s\({}^{-1}\). The SEP event has a very impulsive time profile both in the electrons shown in panel (1) and in the protons shown in panel (2). Compared with BepiColombo, the event has a shorter duration, namely a faster decay. PSP/EPI-Hi/HET observes intensity increases at electron energies above 2 MeV and proton energies above 50 MeV.
Based on the plasma and magnetic field data given by the SWEAP and FIELDS instruments, no IP structures can be identified at the location of Parker Solar Probe during the whole period shown in the right column of Fig. 12. This is in agreement with the ENLIL simulation results.
#### 5.2.3 Solar Orbiter
Panels (1) and (2) of Fig. 13 (left) show the SEP event observed by Solar Orbiter. While the electron event is observed to reach energies up to \(\sim\)1 MeV, it is not as impulsive as the event observed by BepiColombo and Parker Solar Probe but shows more of a plateau-like profile. The intervening structures present at the time of the SEP onset, as suggested by ENLIL (Sect. 5.1), might be associated with this behavior as they might hinder the SEP transport. This may also be the reason for the low anisotropy observed at the onset of the event as described in Sect. 5.3.3. The energetic ion observations by Solar Orbiter/EPD/EPT allow us to discern the initial phase of the event only at energies \(\gtrsim\)400 keV, reaching energies up to \(\sim\)60 MeV as observed by EPD/HET.
The solar wind speed at the time of the electron event onset is \(\sim\)380 km s\({}^{-1}\), as shown in panel (6), measured by the SWA instrument, which is well reproduced by ENLIL (pink line). In a later phase of the SEP event, Solar Orbiter observes several IP structures identified using the MAG, SWA, and RPW instruments on board Solar Orbiter. A first ejecta (first gray shaded area in the left column of Fig. 13) arrives at 05:24 UT on 18 April. While it does not affect the energetic electron intensity time profiles or the high-energy (\(\gtrsim\)2 MeV) ion intensity time profiles, it seems to have acted as a particle barrier for low-energy ions. Only after its passage, at 14:18 UT on the same day, an increase in the \(\lesssim\)400 keV energy ions is observed. These particles are likely associated with the solar eruption on 17 April. The ICME-driven shock associated with this eruption arrives at Solar Orbiter at 20:20 UT on 19 April. The shock simulated by ENLIL arrives \(\sim\)30 minutes later than the measured shock, as shown by the pink line in panels (3) and (6)-(7) in the left column of Fig. 13.
Figure 11: Radial velocity contour plot from the ENLIL simulation in the ecliptic plane. The black and white dashed lines represent the IMF lines, and the black contours track the ICME ejecta. The white lines correspond to the HCS, which separates the regions with opposite magnetic polarity, shown in blue (negative) or red (positive) on the outer edge of the simulation region. _Left panel:_ magnetic connectivity of the different spacecraft around the particle solar release time. _Right panel:_ SEP event-related ICME arrival to Solar Orbiter.
The shock obliquity (\(\theta_{Bn}\), namely the angle between the shock normal and the upstream magnetic field) is estimated at Solar Orbiter using the magnetic coplanarity method (e.g., Paschmann & Schwartz, 2000). A value of \(\theta_{Bn}\sim(21\pm 5)^{\circ}\) is computed, employing a systematic variation of the upstream and downstream averaging window lengths between 3 and 13 minutes, with the method described in Trotta et al. (2022). The lack of plasma data around the shock crossing limits further analyses. However, using the novel method introduced by Gedalin et al. (2021), an estimation for the Alfvenic Mach number using magnetic field only data yields \(M_{A}\sim 1.8\), consistent with the fact that the shock passage has no significant influence on the energetic particle population at higher energies.
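As a minimal illustration of the magnetic coplanarity estimate quoted above, the sketch below computes \(\theta_{Bn}\) from single upstream/downstream field vectors. In the actual analysis, averages over systematically varied windows (Trotta et al. 2022) replace these synthetic inputs.

```python
import numpy as np

def theta_bn_coplanarity(b_up, b_down):
    """Shock obliquity from the magnetic coplanarity normal.

    The coplanarity normal is n ~ (B_d - B_u) x (B_d x B_u); theta_Bn is
    the angle between n and the upstream field B_u, folded into [0, 90] deg."""
    bu = np.asarray(b_up, dtype=float)
    bd = np.asarray(b_down, dtype=float)
    n = np.cross(bd - bu, np.cross(bd, bu))
    n /= np.linalg.norm(n)
    cos_t = abs(np.dot(n, bu)) / np.linalg.norm(bu)
    return np.degrees(np.arccos(np.clip(cos_t, 0.0, 1.0)))
```

With real data, the spread of \(\theta_{Bn}\) obtained when the upstream and downstream averaging windows are varied provides the quoted uncertainty.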
The low-energy ions keep rising until a peak is observed shortly before the shock passage, after which the intensities decrease. Right after the shock passage, another solar energetic electron event is observed on 20 April, which is not related to the event under study but originated from an M1.1 flare in AR 12816 (at S24E25 as seen from Earth), peaking at 23:42 UT on 19 April. The \(\sim\)1 MeV/nucleon ions associated with this new injection showed a large enrichment of \({}^{3}\)He, with a \({}^{3}\)He/\({}^{4}\)He ratio of \(\sim\)5% (not shown). The second ejecta, which arrives at 06:51 UT on 20 April, corresponds to the ICME associated with the 17 April SEP event. It is marked by a smooth magnetic field and monotonic and coherent rotations in the magnetic field angles. While the second energetic electron event shows a depression of fluxes during the ejecta passage, low-energy ions show an enhancement inside the ejecta during its first half and a decrease during the second half.
#### 5.2.4 Stereo A
Observations of the SEP event at STEREO A are shown in Fig. 13 (right). STEREO A observes a clear electron event, however only at near-relativistic (\(\lesssim\)400 keV) energies. Similarly to Solar Orbiter, the proton event is only well observed at higher energies, namely \(\gtrsim\)2 MeV. The maximum energy of the proton event seems to be lower than at the other spacecraft, barely reaching \(\sim\)25 MeV. At lower ion energies, SEPT observes an enhanced pre-event intensity background (most likely due to the SIR, as described below), which might mask the SEP event.

Figure 12: In-situ SEP time profiles as well as plasma and magnetic field observations by BepiColombo (left) and Parker Solar Probe (right). _Top_: Energetic electron and proton temporal profiles observed from several energy channels and instruments. For SIXS, we use fluxes detected in side 2 of the detector. The flare eruption time is represented by the arrow on the upper x-axes. The vertical solid line and gray shaded area, respectively, indicate IP shock and ejecta transit observed by BepiColombo. _Bottom_: In-situ plasma and magnetic field observations. The panels present, from top to bottom, the magnetic field magnitude, the magnetic field latitudinal and azimuthal angles, \(\theta_{\text{B-RTN}}\) and \(\phi_{\text{B-RTN}}\), the solar wind speed, and the proton density, where RTN stands for radial-tangential-normal coordinates (IP structures as described in top panel). The pink lines represent the ENLIL simulation results.
As shown by the magnetic field and plasma data in panels (3)-(7), the SEP event onset takes place during the passage of an SIR at STEREO A, which is indicated by the salmon-shaded vertical bar at the beginning of the time interval displayed in the right column of Fig. 13. The signatures indicate a glancing crossing of the SIR structure, with only a very modest increase of the solar wind speed, from \(\sim\)400 to \(\sim\)450 km s\({}^{-1}\). Sudden changes of the magnetic field polarity close to the stream interface (dashed vertical line), together with drops in the magnetic field strength, temperature increases (not shown), and proton density enhancements, suggest that local reconnection is occurring. The ENLIL simulation also suggests an SIR arrival (not shown), but several hours earlier than observed, and infers a clearer intersection of the high-speed stream with the spacecraft. At STEREO A, no signatures of ICMEs are detected, in agreement with the ENLIL results.
The lowest-energy ion channels of SEPT show a clear variation in their intensities right after the stream interface. At the same time, the thermal proton density drops. The energetic electron increases observed after the data gap are associated with later SEP events that are not related to the event under study.
#### 5.2.5 Earth
Figure 14 (left) shows the SEP event observed at near-Earth spacecraft. Similar to STEREO A, Earth is embedded in the trailing portion of an SIR, after the stream interface passage, as measured by the MFI and SWE instruments on board Wind. The high-speed stream simulated by ENLIL arrives at Earth a few hours later than actually measured. The rear boundary of the SIR is difficult to define, as there is no clear step-like speed increase and the dynamic pressure does not show any clear peak. The reason behind this behavior might be that the solar wind arriving at Earth originates from multiple and complex coronal holes: there is a large southern coronal hole extending to the equator and some large, patchy low-latitude holes (not shown). The solar wind speed at the onset of the particle event is \(\sim\)600 km s\({}^{-1}\), as shown in panel (6). Panel (1) shows that only SOHO/EPHIN, which has a very low instrumental background, observes a clear but very gradual electron event at 0.25-0.7 MeV. The lower energies covered by Wind/3DP show an enhanced background that possibly masks the SEP event and may also contain ion contamination. This enhanced background, likely caused by the SIR, also dominates the low-energy ion observations by Wind/3DP. However, SOHO/EPHIN and ERNE show a proton event extending into the deca-MeV range, which is small, gradual, and clearly delayed with respect to the time of the flare.

Figure 13: In-situ SEP time profiles as well as plasma and magnetic field observations by Solar Orbiter (left) and STEREO A (right). _Top_: Energetic electron and proton temporal profiles observed from several energy channels. We use the sunward looking sectors of Solar Orbiter's EPD/EPT and HET. For STEREO A, as not all instruments provide sectored measurements, we use omni-directional data. The salmon shaded area indicates an SIR observed by STEREO A, while the stream interface is shown as a dashed line. Flare time and rest of IP structures indicated as in Fig. 12. _Bottom_: In-situ plasma and magnetic field observations. Solar wind densities for Solar Orbiter are obtained from RPW/QTN measurements.
#### 5.2.6 Mars
On 17 April 2021 Mars was located at a heliocentric distance of 1.6 au, 22\({}^{\circ}\) west of the flaring active region at 225\({}^{\circ}\) Carrington longitude. The top two panels of the right column of Figure 14 show \(\sim\)60-210 keV electron and \(\sim\)70-7000 keV proton intensities as measured in different energy channels of MAVEN/SEP. Panels (3)-(5) show only the ENLIL simulations of the magnetic field, as no measurements are available. The solar wind speed (6) and density (7) measurements by MEX/ASPERA-3/IMA are rather sparse; however, they show an overall good agreement with the ENLIL simulation for the solar wind speed. In this case, we also show the dashed pink lines corresponding to the background solar wind simulation, without including any CME. The separation of the solid and dashed lines indicates the effects produced by the passage of the interplanetary structures, based only on ENLIL results. A first interplanetary shock is modeled to arrive at 09:00 UT on 18 April.
Figure 14: In-situ SEP time profiles as well as plasma and magnetic field observations by Earth (left) and Mars (right). _Top_: Energetic electron and proton time profiles observed from several energy channels. Flare time and IP structures as in Fig. 12. _Bottom_: In-situ plasma and magnetic field observations. Panels as in Fig. 12. The pink dashed lines are the ENLIL background solar wind with no CMEs included in the simulation.
Two pre-SEP-event ICMEs arrive at 04:00 UT on 19 April and 10:00 UT on 21 April, as simulated by ENLIL, which might be the same ICMEs measured earlier by Solar Orbiter. Lastly, the interplanetary shock related to the SEP event under study impacts Mars at 10:00 UT on 22 April, based both on the simulation and on the increase in solar wind speed and density measured in situ. According to ENLIL, the shock is, however, not followed by an ejecta. The ICME flank might therefore have missed Mars.
It is difficult to associate the energetic particle increases observed by MAVEN with the 17 April SEP event. Nevertheless, an electron increase is observed in the higher-energy channels right after the flare (marked by an arrow). Although the onset times are very hard to determine and might suggest an onset too early to account for the expected travel time of these electrons, a potential SEP contribution cannot be excluded. More likely, however, is that the CME-driven shock associated with the event contributed to the electron and proton increases observed on 22 April, because the peaks of the SEP increases agree well with the shock arrival time simulated by ENLIL. However, another possible source of this increase could be the same new SEP event also observed by STEREO A on 22 April (at S24E25 as seen from Earth), which is magnetically well connected with Mars during the period under study (see Fig. 1).
### SEP pitch-angle distributions and first arriving particles
Figure 1 (right) combines the SEP observations as measured by the five inner-heliospheric spacecraft and shows how strongly the event characteristics such as intensity-time profiles, onset times, and peak intensities vary from observer to observer. Given the well-separated positions of these spacecraft shown in Fig. 1 (left) and their varying separations with respect to the parent flare location, this is not unexpected. Earlier studies found that the longitudinal distribution of peak intensities usually decreases with increasing longitudinal separation angle from the associated flare longitude (e.g., Lario et al., 2013; Dresing et al., 2014; Richardson et al., 2014). These authors described the longitudinal peak-intensity distributions with Gaussian functions; however, being limited to only three well-separated observers, these analyses suffered from large uncertainties. The new, larger spacecraft fleet will allow us to analyze these longitudinal distributions in a better way; however, instrument inter-calibrations, especially of the new missions' payloads, are still pending. Looking at the \(\sim\)20-25 MeV proton peak intensities observed by each spacecraft (Fig. 1), we find, however, a deviation from the expected ordering of peak intensities with absolute longitudinal separation angle. Parker Solar Probe, which is slightly less well connected (\(|\Delta\Phi|=72^{\circ}\)) than Solar Orbiter (\(|\Delta\Phi|=65^{\circ}\)), observes not only a significantly higher-intensity event but also a clearly more impulsive time profile. While the higher peak intensities at Parker Solar Probe could be explained by its smaller radial distance from the Sun as compared to Solar Orbiter (0.42 au vs. 0.84 au), the significantly different time profiles rather suggest a different connectivity to the SEP injection region.
In this section, we analyze the SEP observations in more detail to determine the timing of the first arriving particles, which allows us to relate the SEPs with their solar counterpart observations (see Sect. 6). Furthermore, we analyze pitch-angle distributions (PADs) to characterize the degree of pitch-angle diffusion, namely the importance of transport effects.
#### 5.3.1 BepiColombo
BepiColombo detects the most intense event out of all observers. This is expected based both on its closer radial distance from the Sun (0.63 au) and on its fairly good connection to the associated flaring active region, with \(-1^{\circ}\) (79\({}^{\circ}\)) longitudinal (total) separation angle between the flare site and the spacecraft magnetic footpoint (cf. Table 1). BepiColombo also observes the earliest SEP onsets, e.g., 16:30 UT for 71 keV electrons, and the corresponding inferred injection times are the earliest out of all observers (see Sect. 6). Surprisingly, BepiColombo/SIXS detects a 5-minute earlier onset time for 71 keV electrons than for 960 keV electrons (see Table 4). Although these onset times almost agree within the error bars, the difference between the inferred injection times is significant: the much longer travel time of the lower-energy electrons yields an 11-minute earlier injection time compared to the \(\sim\)1 MeV electrons. The first 25 MeV protons are detected at 17:00 UT\(\pm\)4 min, which corresponds to an inferred injection time situated between that of the low-energy electrons and that of the high-energy electrons. However, given the error bars, it would agree with both of the inferred electron injection times. Figure 15 shows sectored energetic particle measurements by SIXS (middle panel) as well as the pitch angles covered by the center of the four different viewing directions (top panel) and the intensity-PAD in the bottom panel. The left-hand figure shows \(\sim\)100 keV electrons and the right-hand figure shows 8 MeV protons. Although the event is anisotropic, as can be seen from the different intensity levels observed by the different sides of the SIXS instrument, a velocity dispersion analysis did not yield meaningful results for either electrons or protons. Therefore, we can only apply the time-shift analysis to infer the particle injection times at the Sun at specific energy bands.
The proton anisotropy is stronger, and the anisotropic phase is clearly longer, lasting about two hours. Unfortunately, the electron onset falls into a period of poor pitch-angle coverage of the sector into which particles streaming from the Sun along the outward magnetic field would enter (pitch angle 0). The electron anisotropy could therefore be underestimated during the onset phase, and this could also lead to the determination of too-late electron onset times.
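The time-shift analysis used throughout this section reduces to subtracting the scatter-free travel time over the assumed field-line length from the observed onset time, \(t_{\rm inj} = t_{\rm onset} - L/v\). A minimal sketch (illustrative only; the 71 keV electron onset at 16:30 UT and the \(L=0.669\) au path length are the values quoted above):

```python
from datetime import datetime, timedelta

C = 299_792.458        # speed of light [km/s]
AU = 1.495978707e8     # astronomical unit [km]
M_E = 511.0            # electron rest energy [keV]

def beta(kinetic_kev, rest_kev=M_E):
    """Relativistic speed v/c for a particle of given kinetic energy."""
    gamma = 1.0 + kinetic_kev / rest_kev
    return (1.0 - gamma**-2) ** 0.5

def tsa_injection(onset, path_au, kinetic_kev, rest_kev=M_E):
    """Time-shift analysis: t_inj = t_onset - L / v (scatter-free travel)."""
    travel_s = path_au * AU / (beta(kinetic_kev, rest_kev) * C)
    return onset - timedelta(seconds=travel_s)

# 71 keV electrons at BepiColombo: onset 16:30 UT, Parker spiral L = 0.669 au
onset = datetime(2021, 4, 17, 16, 30)
print(tsa_injection(onset, 0.669, 71.0))   # -> ~16:18 UT, as in Table 4
```

The \(\sim\)12 min travel time of 71 keV electrons (\(\beta\approx 0.48\)) over 0.669 au reproduces the 16:18 UT injection time listed in Table 4.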
#### 5.3.2 Parker Solar Probe
Based on the \(\sim\)25 MeV proton observations (see Fig. 1, right), Parker Solar Probe observes the second most intense event after BepiColombo. Because Parker Solar Probe's electron observations are not yet available in units of intensity, it is not possible to compare their intensity level with that of other spacecraft.
Figure 16 (left) shows the energetic electron observations at 920 keV (top panel) and 90 keV (third panel) in the different viewing directions as provided by EPI-Hi/LET and in two wedges of EPI-Lo, respectively. The determined onset times using the sunward-looking sectors and 5-min averaged data are marked by the red dashed lines.
Figure 16: PSP/IS\(\odot\)IS observations of the onset of the energetic electron (left) and proton enhancement (right). Top left: time profile of \(\sim\)920 keV electrons observed by the three orthogonal EPI-Hi/LET telescope apertures. Left second panel: pitch angle of each of the EPI-Hi/LET apertures. Left middle: time profile of \(\sim\)90 keV electrons observed by EPI-Lo wedges 3 and 7 (sunward and anti-sunward facing, respectively). Left fourth panel: pitch angle of the boresight of EPI-Lo wedges 3 and 7. Left bottom: magnetic field magnitude and vector in RTN coordinates as measured by the PSP/FIELDS magnetometer. Top right: time profile of \(\sim\)10 MeV protons observed by the three orthogonal EPI-Hi/LET telescope apertures. Right middle: time profile of \(\sim\)25 MeV protons observed by the three orthogonal EPI-Hi/LET telescope apertures. Right bottom: pitch angle of each of the EPI-Hi/LET apertures.
Figure 15: Pitch-angle distribution of 106 keV electrons (left) and 8.02 MeV protons (right) measured by BepiColombo/SIXS. Top: pitch-angle coverage of sides 0–3, middle: intensities measured in sides 0–3, bottom: pitch-angle distribution with color-coded intensities normalized to the median of each time step. Gray pitch-angle bins mark no pitch-angle coverage, while white bins are zero-count periods.
While EPI-Hi does not provide the necessary time resolution to discern velocity dispersion in these relativistic electrons, the time resolution of EPI-Lo would be sufficient to discern velocity dispersion in the near-relativistic electrons. However, the limited statistics at these energies make it challenging to conclude whether EPI-Lo observed electron velocity dispersion or not. Nevertheless, a small but significant anisotropy is present in the 90 keV electron observations, as denoted by the higher intensity of the sunward-looking wedge W3 (black) compared to the anti-sunward-viewing wedge W7 (red). Still during the rising phase of the electron event, around 17:00 UT, we observe a second step, marked by the blue dashed line in the third panel of Fig. 16 (left), which is observed by both EPI-Lo and EPI-Hi. It does not correlate with any changes in the magnetic field and therefore does not seem to be caused by a local effect. Later, around 17:30-18:00 UT, we observe a phase of stronger anisotropy, consistent between EPI-Lo and EPI-Hi, that appears to be tied to changes in the magnetic field.
Figure 16 (right) shows proton observations at 10 MeV (top panel) and 25 MeV (middle panel) in different viewing directions as provided by EPI-Hi/LET. As for BepiColombo, the proton observations show a stronger anisotropy than the electrons, which also lasts longer (\(>\)6 hours). In both the 10 MeV and 25 MeV time-intensity profiles, the sunward-facing aperture (LETA) shows the fastest onset and highest intensity.
In contrast to the electron observations, the protons show a clear velocity dispersion. Figure 17 shows a velocity dispersion analysis (VDA) that results in a path length of \(L=0.63\) au traveled by the protons and an inferred proton injection time at 16:46 UT\(\pm\)10 min. Even considering the uncertainties, this injection time is significantly later than those determined for the electrons through a time-shift analysis (TSA) using the same path length, which results in 16:26 UT (16:30 UT) for 920 keV (90 keV) electrons.
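A velocity dispersion analysis like the one in Fig. 17 is a straight-line fit of onset time against inverse particle speed, \(t_{\rm onset}=t_{\rm inj}+(L/c)(1/\beta)\): the slope gives the path length, the intercept the injection time. A minimal sketch on synthetic onsets (the \(L=0.63\) au path length is taken from the text; the energies and injection time are illustrative):

```python
import numpy as np

C_AU_PER_S = 1.0 / 499.005   # speed of light in au/s (1 au ~ 499 light-seconds)
M_P = 938.272                # proton rest energy [MeV]

def inv_beta(kinetic_mev, rest_mev=M_P):
    """Inverse relativistic speed c/v for a given kinetic energy."""
    gamma = 1.0 + np.asarray(kinetic_mev) / rest_mev
    return 1.0 / np.sqrt(1.0 - gamma**-2)

# Synthetic onsets for an assumed L = 0.63 au and injection at t = 600 s,
# mimicking dispersive 1-30 MeV proton arrivals
energies = np.array([1.0, 3.0, 5.0, 10.0, 20.0, 30.0])   # MeV
x = inv_beta(energies)                                   # 1/beta
onsets = 600.0 + (0.63 / C_AU_PER_S) * x                 # onset times [s]

# VDA: linear fit of onset time vs 1/beta recovers L (slope) and t_inj (intercept)
slope, intercept = np.polyfit(x, onsets, 1)
print(f"L = {slope * C_AU_PER_S:.2f} au, t_inj = {intercept:.0f} s")
# -> L = 0.63 au, t_inj = 600 s
```

In practice the fit is performed on measured onset times per energy channel, whose uncertainties propagate into the quoted errors on \(L\) and \(t_{\rm inj}\).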
#### 5.3.3 Solar Orbiter
Solar Orbiter's magnetic footpoint at the Sun is separated in longitude from the flare location by a similar amount to that of Parker Solar Probe, but it is situated on the other side, namely west of the flare. Solar Orbiter observes significantly lower proton intensities than Parker Solar Probe (see Fig. 1, right). Furthermore, in contrast to BepiColombo and Parker Solar Probe, which observe an impulsive proton time profile, Solar Orbiter observes a gradual profile both in electrons and protons. While in the case of CME-driven shock acceleration a more gradual time profile is expected for an observer situated to the east of the source region as compared to an observer situated to the west, due to their different connections to the CME-driven shock front (e.g., Cane et al., 1988), the difference in their peak intensities is not expected to show such a strong asymmetry (e.g., Richardson et al., 2014). However, Solar Orbiter's distance to the Sun, which is twice that of Parker Solar Probe, is expected to contribute to this intensity difference.
Energetic electron observations do not show any significant anisotropy, neither at lower energies as illustrated by the \(\sim\)100 keV electron PADs (Fig. 18, left) observed by Solar Orbiter/EPT, nor at MeV energies (not shown). The right-hand part of Fig. 18 shows the PAD of \(\sim\)8 MeV protons as detected by Solar Orbiter/HET, which shows that the early phase of the MeV proton event is anisotropic for about seven hours, showing higher fluxes in the sunward-looking telescope that corresponds to pitch angles near 180\({}^{\circ}\), consistent with the inward magnetic polarity (see also column 8 in Table 1).
To perform a VDA, we determine the onset times based on the proton time profiles in the HET sunward telescope. To this end, we use the energy channels between 7 MeV and 45 MeV and reconstruct the energy bins by combining every three proton channels. We then apply the Poisson-CUSUM method (Huttunen-Heikinmaa et al., 2005) and derive the onset time in those new channels. The VDA for protons is based on these onset times and results in an inferred injection time at 17:14\(\pm\)12 min and a path length of \(L\) = 1.24\(\pm\)0.18 au. We display the results in Fig. 19, where we overplot the resulting VDA fit on the dynamic proton spectrogram.
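The Poisson-CUSUM onset determination accumulates the excess of the measured counts over the pre-event background and flags the first sustained rise. The sketch below captures this idea in simplified form; it is not the exact recipe of Huttunen-Heikinmaa et al. (2005), and all thresholds are illustrative:

```python
import numpy as np

def cusum_onset(counts, bg_end, k_sigma=2.0, h=2.0, run=3):
    """Simplified Poisson-CUSUM onset finder (a sketch, not the exact
    Huttunen-Heikinmaa et al. 2005 recipe): return the first index where
    the normalized cumulative excess over background stays above h for
    `run` consecutive bins."""
    bg = np.asarray(counts[:bg_end], dtype=float)
    mu, sigma = bg.mean(), max(bg.std(), 1.0)
    k = mu + k_sigma * sigma       # level below which no excess accumulates
    s, above = 0.0, 0
    for i, x in enumerate(counts):
        s = max(0.0, s + (x - k) / sigma)   # one-sided cumulative sum
        above = above + 1 if s > h else 0
        if above >= run:
            return i - run + 1     # first bin of the sustained rise
    return None

rng = np.random.default_rng(42)
counts = rng.poisson(5.0, 200).astype(float)      # quiet background
counts[120:] += np.linspace(0.0, 60.0, 80)        # SEP rise starting at bin 120
print(cusum_onset(counts, bg_end=100))            # onset index shortly after 120
```

The one-sided cumulative sum keeps statistical fluctuations from triggering a false onset, while a genuine event accumulates excess over consecutive bins until the threshold is crossed.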
For electrons, no VDA was possible because the onset times in many different energy channels were basically the same. A possible reason could be the rather poor pitch-angle coverage during the event onset, which did not cover the direction along the magnetic field. We therefore determine inferred injection times for selected energy channels based on TSA, assuming the same path length as derived from the proton VDA (see Table 4).
The earliest arriving particles are MeV electrons with an onset time at 16:52\(\pm\)15 min (1.1-2.4 MeV), followed by near-relativistic (106 keV) electrons at 17:13\(\pm\)2 min. Both the low- and high-energy electron onset times lead to earlier solar injection times (16:41\(\pm\)15 min and 16:55\(\pm\)2 min, respectively) than that obtained from the proton VDA (see Sect. 6).
Figure 17: VDA of protons from PSP EPI-Hi/LETA from 1 to 30 MeV. Top panel: the red line is a ‘by-eye’ fit to the onset of the observed intensities as a function of 1/v and time. Bottom panel: the same data are plotted, but with the velocity dispersion removed. The legend provides the derived path length and injection times corresponding to the fit line.
#### 5.3.4 STEREO A
STEREO A is a far-separated observer with 129\({}^{\circ}\) (109\({}^{\circ}\)) of longitudinal (total angular) separation between the flare location and the spacecraft's magnetic footpoint at the Sun computed with ADAPT-WSA. It is therefore not surprising that the SEP event at STEREO A is less intense than those explored so far, and that the intensity-time profiles are more gradual and isotropic (e.g., Dresing et al., 2014). Figure 20 (left) shows the electron PAD observed by STEREO A/SEPT at \(\sim\)100 keV, which shows no anisotropy except during the time of the maximum, where the intensity in the anti-sunward sector is slightly higher. We note that, since the spacecraft was put upside-down after the superior solar conjunction in 2015, the Sun and anti-Sun sectors no longer point along the nominal Parker spiral but perpendicular to it.
Figure 20 (right, third panel from top) shows the 4-6 MeV proton intensities observed in the 16 sectors of STEREO A/LET. The top panel shows the color-coded intensity-PAD and the second panel shows the pitch-angles of the sector centers. The statistics in the single sectors are poor, which is why the bottom panel shows averaged intensities of the eight A and B sectors, respectively, and in black an average of all sectors. Interestingly, LET shows a double-peak time profile with the first peak, starting shortly after 18:00 UT on 17 April, being much more anisotropic than the second peak as almost no intensity is yet observed in the B-side sectors of LET. The depletion between the peaks at \(\sim\)6 UT on 18 April is not caused by poor pitch-angle coverage. Indeed, the pitch-angle coverage is better during this phase than during neighboring periods. As shown in Fig. 13 (right), there is no clear interplanetary structure that can be associated with this dip. We therefore argue that it is either caused by a change of the magnetic connection to the parent source region or a distinct new particle injection, which is also supported by the differently strong anisotropies during both peaks.
Due to the gradual nature of the event and rather poor statistics, it was not possible to apply a VDA, and in order to determine onset times we had to average the data, leading to significant uncertainties. We obtain an onset at 18:25\(\pm\)10 min for 85-125 keV electrons and at 19:30\(\pm\)1 h for 13.6-23.8 MeV protons. Assuming a scatter-free propagation along a nominal Parker spiral with a length of 1.16 au, this translates to inferred injection times of 18:08\(\pm\)10 min for the electrons and 18:20\(\pm\)1 h for the protons. The event is also less energetic at STEREO A. Unlike all other inner-heliospheric spacecraft, STEREO A does not detect electrons in the MeV range, and the event in 25 MeV protons is very weak (see Fig. 1, right). However, this could also be due to instrumental differences, with STEREO A/HET being less sensitive.
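The nominal Parker-spiral lengths used for the TSA follow from the arc length of an Archimedean spiral set by the solar wind speed and the solar rotation rate. A minimal sketch (the solar wind speed and radial distance below are illustrative, not the exact values used here):

```python
import math

OMEGA = 2.67e-6         # solar rotation rate [rad/s] (synodic, ~27.3 d)
AU = 1.495978707e8      # astronomical unit [km]

def parker_spiral_length(r_au, v_sw_kms):
    """Arc length of a nominal Parker spiral from the Sun out to radius r:
    L = (a/2) * [x*sqrt(1+x^2) + asinh(x)], with x = r/a and a = V_sw/Omega."""
    a = v_sw_kms / OMEGA / AU    # spiral parameter in au
    x = r_au / a
    return 0.5 * a * (x * math.sqrt(1.0 + x * x) + math.asinh(x))

# Illustrative: an observer near 1 au with a 350 km/s solar wind
print(round(parker_spiral_length(0.96, 350.0), 2))   # -> 1.13 (au)
```

Slower solar wind winds the spiral more tightly and lengthens the path, which is why the quoted path lengths exceed the radial distances.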
#### 5.3.5 Near-Earth Spacecraft
As already discussed in Sect. 5.2.5, the SEP event at the Sun-Earth L1 point is only observed at high energies, both in electrons and protons. However, as the event is weak and very gradual, no VDA was possible and the determined onset times (see Table 4) suffer large uncertainties. Altogether, the observations of a gradual, delayed, and small event at Earth suggest that the event was only observed due to perpendicular particle diffusion (e.g., Dresing et al., 2012) since there was probably no direct magnetic connection to a source region.
Figure 19: VDA of protons measured by Solar Orbiter/HET sun (red points) and EPT sun (blue points, not included in the VDA fit). The vertical red line and shade represent the derived injection time and uncertainty.
Figure 18: Pitch-angle distribution of 86-130 keV electrons (left) and 7.4-9.2 MeV protons (right) observed by Solar Orbiter/EPD-EPT and EPD-HET, respectively. Top: Pitch-angle coverage of the four different sensor apertures. Middle: Intensities observed by each field of view. Bottom: Pitch-angle distribution with color-coded intensities.
## 6 Combined timing analysis and implications on the sources of the SEP event
Table 4 presents a timeline of the main features of the 17 April 2021 SEP event, with all observation times at the different spacecraft (column 2) being shifted to the Sun (column 1). This means that remote-sensing observations are corrected for the varying light travel times based on the different spacecraft distances, and energetic particle onset times are used to infer the corresponding injection times at the Sun. Studying the SEP event as observed by the multiple spacecraft implies the use of a multitude of instruments that provide different energy ranges and channel widths, different instrumental backgrounds, varying signal-to-noise ratios based on their locations with respect to the SEP source region, as well as different local interplanetary conditions that can influence the SEP observations (see Sect. 5). All these factors make a comparison of the SEP observations at the different spacecraft challenging. For example, for many spacecraft locations or species, a VDA was not possible (see Sect. 5.3). In these cases, we apply a simple TSA to infer the SEP injection times, using an energy channel showing a clear onset. This implies that we sometimes have to use different energy ranges to infer SEP injection times.
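Shifting remote-sensing times to the Sun, as done for column 1 of Table 4, only requires subtracting the light travel time for the observer's heliocentric distance. A short worked example (times rounded):

```python
AU_LIGHT_S = 499.005   # light travel time over 1 au [s]

def shift_to_sun(t_obs_min, r_au):
    """Shift an observation time (in minutes of day) back to the Sun."""
    return t_obs_min - r_au * AU_LIGHT_S / 60.0

# HXR peak seen by Solar Orbiter (r = 0.84 au) at 16:51.5 UT:
t_sun = shift_to_sun(16 * 60 + 51.5, 0.84)
print(f"{int(t_sun // 60):02d}:{t_sun % 60:04.1f} UT")   # -> 16:44.5 UT
```

The \(\sim\)7 min shift for Solar Orbiter's 0.84 au distance recovers the 16:44:30 UT at-Sun time of the strongest HXR peak.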
Table 4 (see also Fig. 21) shows that the first feature of the event was the start of the HXR flare, which happened already at 16:00 UT. The flare then continued for almost an hour, showing multiple HXR peaks with the last and strongest one at 16:44:30 UT (see Sect. 4.1). The first radio type III burst onset (TIII(1)) was only observed 16 minutes after the start of the flare, followed by the first type II burst (TII(1)) observation at 16:18 UT (see also Sect. 4.5). Around this time (16:18\(\pm\)4 min), the first SEPs were inferred to be injected towards BepiColombo, as derived from the 71 keV electron onset. The protons of about 25 MeV were injected later towards BepiColombo, at 16:25\(\pm\)4 min, temporally situated between TIII(1) and TIII(2), at the end of TII(1). BepiColombo's SPM instrument on board Mio was even able to detect \(>\)200 MeV protons, which arrived, however, significantly delayed, with an inferred injection time at 16:43\(\pm\)5 min. Surprisingly, we determine a significantly later injection time for \(\sim\)1 MeV electrons, at 16:29\(\pm\)1 min, as compared to the 71 keV electrons, which happened during TIII(2). Due to the impulsive, high-intensity event observed by BepiColombo/SIXS, the onset times are well-defined, carrying only small uncertainties, which are assumed to be the same for the inferred injection times. This strongly suggests not only that the electrons and protons observed at BepiColombo are related to different injection episodes, but also that the near-relativistic and relativistic particles had different injection times, a feature that was also observed during the 9 October 2021 SEP event (Jebaraj et al., 2023a).
Figure 21 illustrates the inferred SEP injection times (vertical lines) in comparison with the HXR flare observations taken by Solar Orbiter/STIX and the radio observations by PSP/RFS. In contrast to Table 4, Fig. 21 only displays those injection times that were inferred to happen during the early phase of the event, namely during the radio active phase. Therefore, only times corresponding to BepiColombo, Parker Solar Probe, and Solar Orbiter, the three best-connected spacecraft, are included.
In the case of Parker Solar Probe, we find later injection times compared to BepiColombo and a significantly earlier injection of electrons compared to protons. Both relativistic and near-relativistic electrons are inferred to be injected during TIII(2), at 16:26\(\pm\)5 min (\(\sim\)1 MeV) and 16:30\(\pm\)5 min (\(\sim\)90 keV). Because TIII(2) was found to be strongly directed towards Parker Solar Probe, this association is not surprising. We do not find evidence of SEPs related with TIII(1) reaching Parker Solar Probe's location. However, the inferred injection time of a step-like feature in the rising phase of Parker Solar Probe's electron event (see Sect. 5.3.2) shows a temporal correlation with TIII(4), the second type III burst showing a strong directivity towards Parker Solar Probe (cf. Sect. 4.5). The clear velocity dispersion observed by Parker Solar Probe for deka-MeV protons yields an injection time at 16:46\(\pm\)10 min, which is temporally situated between TIII(3) and TIII(4).
Figure 20: Pitch-angle distribution of 85-125 keV electrons (left) and 4-6 MeV protons (right) observed by STEREO A/SEPT and LET, respectively. Left plot shows from top to bottom: Pitch-angle coverage of the different sensor apertures, intensities observed by each field of view, and pitch-angle distribution with color-coded intensities. Right plot shows from top to bottom: Pitch-angle distribution with color-coded intensities, pitch-angle pointing of the different LET sectors, intensities observed by each sector, and average intensities measured by the eight sectors on each side of the instrument.
This suggests, similar to the BepiColombo observations, that electrons and protons were injected during different episodes and could possibly be related to different acceleration mechanisms and locations.
As discussed in Sect. 5.2.3, the interplanetary conditions between Solar Orbiter and the Sun were disturbed by minor transient events, likely affecting the SEP transport and leading to the comparatively lower peak intensities and less well-defined onsets. This could also lead to delayed SEP onsets and consequently yield too-late inferred injection times, especially when using TSA, which we do for the electrons observed by Solar Orbiter. Nevertheless, using the same path length as derived from the proton VDA, we find the same pattern: electrons are inferred to be injected earlier, at 16:41\(\pm\)15 min (\(\sim\)1.6 MeV) and 16:55\(\pm\)2 min (\(\sim\)100 keV), than the protons, for which the VDA yields an injection time at 17:14\(\pm\)12 min for the 7-45 MeV range. Only the 106 keV electron injection time could potentially be related to a radio feature, that is, TIII(4). The inferred proton injection time is about 20 minutes later than the last type III burst (TIII(4)).
The onset times of SEPs at STEREO A and Earth are so delayed and uncertain that we cannot infer a direct connection with any of the early activity phenomena of the event, as shown in Fig. 21. Furthermore, the injection times at the three best-connected spacecraft (BepiColombo, Parker Solar Probe, and Solar Orbiter) already spread over the whole period of radio activity of about 40 min. This implies that all four type III radio bursts could mark distinct SEP injections that contributed to the global multi-spacecraft SEP event. The different directions of these radio bursts (see Sect. 4.5) furthermore open the possibility that the multiple injection episodes were of different importance for the different observer locations.
The vertical shaded regions in Fig. 21 denote the times (including uncertainties) at which a magnetic connection with the CME-driven shock was established with each of the five inner-helisphere spacecraft according to the analysis reported in Sect. 4.4 and summarized in Table 3. Although we find the shock to potentially connect already early and at low heights with all the five spacecraft locations, several inferred injection times happened already before, suggesting that the shock was not the main accelerator of these
Figure 21: Inferred SEP injection times (vertical lines with temporal error bars on top) overplotted on the radio spectrogram as observed by PSP/RFS and the 15-25 keV X-ray observations by Solar Orbiter/STIX. All times have been shifted to the Sun by assuming the propagation time of the emission to the respective spacecraft. The shaded ranges mark the times when the spacecraft establish a magnetic connection with the CME-driven shock including an uncertainty of \(\pm\)3 min.
first arriving particles. For BepiColombo, the 71 keV electron injection time and that of the 25 MeV protons (taking into account the uncertainty ranges) agree with the shock connection time. Relativistic electrons are found to be injected later, making a shock-related source still possible. For Parker Solar Probe, which has a comparatively late shock connection time at 17:11\(\pm\)3, all SEP injection times are inferred to happen significantly earlier. In contrast, for Solar Orbiter a sole shock source could be justified as the shock connection time happens during the first inferred injection time (taking into account the large error bar), which is the one of the MeV electrons, and well before the inferred
\begin{table}
\begin{tabular}{l l l l l} \hline \hline \multicolumn{2}{c}{Date / Time} & Observer / Instr. & Feature & Comment \\ At the Sun & At observer & & & \\ \hline
**17 April** & & & & \\
16:00 & 16:07 & Solar Orbiter/STIX & 1st nonthermal HXR peak & Major peak, very impulsive, \(<\)1 min duration \\
16:16 & 16:24 & STA/SWAVES & type III burst \#1 onset & Also seen at PSP, Solar Orbiter, and Wind \\
16:18\(\pm\)4 min & 16:30\(\pm\)4 min & BepiColombo/SIXS & t\({}_{\rm inj}\) of 71 keV el. & TSA, path length L=0.669 au \\
16:18 & 16:26 & Ground based/e-CALLISTO & onset of decameter type II burst & \\
16:22\(\pm\)3 min & 16:30\(\pm\)3 min & BepiColombo & shock connection & time based on Earth \& STA obs. \\
16:25\(\pm\)4 min & 17:00\(\pm\)4 min & BepiColombo/SIXS & t\({}_{\rm inj}\) of 25 MeV p. & TSA, path length L=0.669 au \\
16:26 & 16:31 & PSP/RFS & type III burst group \#2 start & Series of five type III bursts; all seen by PSP, highly polarized, one also by SolO and STA \\
16:26\(\pm\)5 min & 16:32\(\pm\)5 min & PSP/EPI-Hi & t\({}_{\rm inj}\) of 920 keV el. & TSA, path length L=0.63 au \\
16:28 & 16:36 & SOHO/LASCO & CME 1st appearance at \(\sim\)16 \(R_{\odot}\) & E116S09; speed: v=880 km s\({}^{-1}\) \\
16:29\(\pm\)1 min & 16:35\(\pm\)1 min & BepiColombo/SIXS & t\({}_{\rm inj}\) of 960 keV el. & TSA, path length L=0.669 au \\
16:30\(\pm\)5 min & 16:37\(\pm\)5 min & PSP/EPI-Lo & t\({}_{\rm inj}\) of \(\sim\)90 keV el. & TSA, path length L=0.63 au \\
16:35 & 16:40 & PSP/RFS & type III burst group \#3 start & type II and III bursts, also seen at SolO, Wind, STA \\
16:35:30 & 16:42 & Solar Orbiter/STIX & nonthermal HXR peak \#12 & major late peak \\
16:41\(\pm\)15 min & 16:52\(\pm\)15 min & Solar Orbiter/HET & t\({}_{\rm inj}\) of 1.1-2.4 MeV el. & TSA, path length L=1.24 au \\
16:43\(\pm\)5 min & 16:53\(\pm\)5 min & BepiColombo/SPM & t\({}_{\rm inj}\) of \(>\)200 MeV p. & Earliest onset seen in all SPM channels; TSA, path length L=0.67 au \\
16:44:30 & 16:51:30 & Solar Orbiter/STIX & nonthermal HXR peak \#13 & strongest peak at 25-50 keV; \(\sim\)10 min duration \\
16:46\(\pm\)10 min & & PSP/EPI-Hi & t\({}_{\rm inj}\) of 1-30 MeV p. & VDA, resulting path length L=0.63\(\pm\)0.05 au \\
16:47\(\pm\)3 min & 16:55\(\pm\)3 min & Solar Orbiter & shock connection & time based on Earth \& STA obs. \\
16:49 & 16:54 & PSP/RFS & type III burst \#4 start & type II and III bursts, highly polarized \\
16:54\(\pm\)5 min & 16:58\(\pm\)5 min & PSP/EPI-Hi & time of potential 2nd inj. of 920 keV el. & TSA, path length L=0.43 au \\
16:55\(\pm\)2 min & 17:13\(\pm\)2 min & Solar Orbiter/EPT-North & t\({}_{\rm inj}\) of 106 keV el. & TSA, path length L=1.24 au \\
17:11\(\pm\)3 min & 17:19\(\pm\)3 min & PSP & shock connection & time based on Earth \& STA obs. \\
17:14\(\pm\)12 min & & Solar Orbiter/EPT+HET & t\({}_{\rm inj}\) of 7-45 MeV p. & VDA, resulting path length L=1.24\(\pm\)0.18 au \\
17:16\(\pm\)3 min & 17:24\(\pm\)3 min & STEREO A & shock connection & time based on Earth \& STA obs. \\
17:30\(\pm\)3 min & 17:38\(\pm\)3 min & Earth & shock connection & time based on Earth \& STA obs. \\
18:08\(\pm\)10 min & 18:25\(\pm\)10 min & STA/SEPT-North & t\({}_{\rm inj}\) of 85-125 keV el. & TSA, path length L=1.16 au \\
18:20\(\pm\)1 h & 19:30\(\pm\)1 h & STA/HET & t\({}_{\rm inj}\) of 13.6-23.8 MeV p. & TSA, path length L=1.16 au \\
22:03\(\pm\)2.5 h & 22:15\(\pm\)2.5 h & SOHO/EPHIN & t\({}_{\rm inj}\) of 0.25-0.7 MeV el. & TSA, path length L=1.23 au \\
**18 April** & & & & \\
4:07\(\pm\)2 h & 5:00\(\pm\)2 h & SOHO/ERNE & t\({}_{\rm inj}\) of 13-25 MeV p. & TSA, path length L=1.23 au \\ \hline \hline \end{tabular}
\end{table}
Table 4: Timing of solar phenomena and inferred SEP injection times t\({}_{\rm inj}\). All times shifted to the Sun.
injections of the \(\sim\)100 keV electrons and that of the protons. For STEREO A and Earth, the shock connection times happened earlier than any inferred injection times, but because of the strongly delayed and uncertain onset times, it is not possible to pin down a clear role of the shock against the potentially involved transport effects.
## 7 Interplanetary transport modeling
In this section, we present simulations of the interplanetary transport of SEPs using the spatially 2D model of Strauss and Fichtner (2015). Simulations are performed for 150 keV electrons to qualitatively illustrate the transport concepts discussed in this work, such as the role of perpendicular diffusion versus a direct magnetic connection to the source region. As input to the model, we implement the pitch-angle and perpendicular diffusion coefficients used by Strauss et al. (2017, 2020), which are based on fundamental turbulence quantities and optimized to reproduce the catalog of widespread events from Dresing et al. (2014). The top left panel of Fig. 22 shows a contour plot of the omni-directional intensity of 150 keV electrons, calculated from this model when assuming a single SEP source, at five hours after particle injection. Here, the positions of the different spacecraft are shown, along with their magnetic connectivity to the inner model boundary, assumed to be the Alfvén surface approximately located at \(r\sim 10\)\(R_{\odot}\) (0.05 au). The dashed magnetic field line connects to the maximum of the injected SEP distribution, which was assumed to be a Gaussian spatial distribution with a width of 5\({}^{\circ}\). Below this, we show the temporal profile of the omni-directional intensity at the different spacecraft, the assumed profile of the SEP source, and lastly the corresponding particle anisotropies. As in previous work, we assume a Reid-Axford profile for the SEP source with an acceleration time of 0.1 hours and an escape time of one hour. For the temporal profile of the differential intensity, two sets of solutions are shown: the first, in dashed lines, are model solutions using the default set-up of Strauss et al. (2017), while the solid curves are solutions where the parallel mean free path is reduced, in an admittedly ad hoc fashion, by a factor of 5.
This is done to account for the possibly more disturbed nature of the inner heliosphere during this event, as discussed in Sect. 5, in contrast to the basic model that assumes quiet solar-minimum conditions. Of course, a smaller parallel mean free path, namely more pitch-angle scattering, leads to a slower rise to maximum, a slower decay phase, and a smaller level of anisotropy. The exact levels of the transport parameters appropriate to reproduce this specific event will be the topic of a future, more detailed modeling endeavor. Interestingly, and this is the main conclusion from this basic set-up, the model, while reproducing the general trends observed by most of the spacecraft, consistently underestimates the SEP intensity at the position of Parker Solar Probe for a range of transport parameters.
As a possible remedy for this discrepancy, the right panels of Fig. 22 show the modeling scenario of multiple SEP sources, releasing particles into the inner heliosphere at different positions. The dotted and dashed lines in the top panel show the positions of these four injection sources, chosen to approximately correspond to the inferred positions of the four observed radio bursts from Sect. 4.5. The magnitudes of the four injections are, however, not well constrained and are chosen here rather arbitrarily to roughly correspond to the measurements. The normalization of these injections is chosen such that the total fluence of the electrons introduced into the heliosphere is the same for the left and right panels. For the default model set-up (i.e., the dashed curves), the different injections are visible in the calculated temporal profiles of the magnetically well-connected spacecraft, while any such prominent peaks disappear for the case of more pitch-angle scattering. More importantly, the level of the simulated profile at Parker Solar Probe is also now more consistent with the observations.
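The equal-fluence normalization of the multi-source run can be sketched as follows; the four source longitudes and relative weights below are illustrative placeholders, not the values used in the actual model.

```python
import numpy as np

phi = np.linspace(-90.0, 130.0, 2201)       # heliolongitude grid (deg)
sigma = 5.0                                  # Gaussian width of each source (deg)

lons = np.array([-20.0, 10.0, 40.0, 70.0])   # hypothetical injection longitudes
weights = np.array([1.0, 0.5, 0.8, 0.3])     # hypothetical relative magnitudes

# single source centered at 0 deg vs. the sum of four displaced sources
single = np.exp(-0.5 * (phi / sigma) ** 2)
multi = sum(w * np.exp(-0.5 * ((phi - l) / sigma) ** 2)
            for w, l in zip(weights, lons))

# rescale so the total injected fluence matches the single-source run
multi *= single.sum() / multi.sum()
```

Only the relative magnitudes of the four sources then remain free parameters; the total number of injected electrons is fixed.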
## 8 Discussion
The SEP event associated with the flare-CME on 17 April 2021 was observed by five well-separated spacecraft in the inner heliosphere, with additional constraints provided by observations at Mars. The multi-vantage point observations portray a complex picture of the event, which involves significantly different characteristics of both the energetic electron and the proton/ion event and an asymmetry in the longitudinal distribution of their intensities. We find evidence that the reason for the wide SEP spread involves a number of different mechanisms with varying importance for different vantage points, which we will discuss in the following.
While the associated CME was relatively slow and narrow (speed: \(\sim\)880 km s\({}^{-1}\) and width: \(\sim\)46\({}^{\circ}\), see Sect. 4.2) as compared to other widespread SEP events with high-energy particles (e.g., Lario et al., 2017; Kouloumvakos et al., 2019), the solar flare emission in HXR was exceptionally long-lasting (one hour) and complex (cf. Sect. 4.1). The radio event was equally long-lasting (Sect. 4.5) and showed four distinct type III burst groups indicating particle injection episodes over a period of about 40 min. Several type II bursts were also observed, which suggests particle acceleration at the different flank regions of the associated shock. Although the event was observed by a fleet of five well-separated spacecraft, no full optical coverage of the solar surface was available, leaving some sectors of potential flare locations unobserved. However, our comprehensive analysis of the available X-ray, EUV, white-light, and radio observations suggests that the event was caused by activity related to a single source active region at the Sun (cf. Sect. 4).
The earliest SEP onsets and inferred injection times are found for BepiColombo, which was the best-connected spacecraft to the flaring active region. Parker Solar Probe (east of the flare) and Solar Orbiter (west of the flare) were significantly farther, but similarly far, separated from the active region in heliolongitude. However, both spacecraft observed dramatically different SEP characteristics, suggesting a longitudinal asymmetry: Parker Solar Probe observed a more intense and impulsive event while Solar Orbiter observed a more gradual, less intense and delayed event. While the intensity difference could also be explained by the different radial distances of the spacecraft, the strongly different intensity-time profiles rather suggest a different magnetic connection to the source region, which could be different portions of the CME-driven shock front (e.g., Cane et al., 1988) and/or a combination of differently directed SEP injections.
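Inferred injection times of the kind used above are typically obtained via velocity dispersion analysis (VDA), fitting the observed onset time against inverse particle speed. The sketch below is ours, with synthetic illustrative numbers rather than the event's actual channels, and simply demonstrates the fit.

```python
import numpy as np

C = 8.33  # light travel time over 1 au, in minutes

# Synthetic example: a single injection at t_inj (minutes after some
# reference time) along a path length L (au); both values are hypothetical.
t_inj_true, L_true = 10.0, 1.2
beta = np.array([0.55, 0.70, 0.85, 0.94])   # v/c of the energy channels
onset = t_inj_true + L_true * C / beta       # onset(E) = t_inj + L * 8.33 / beta

# A linear fit of onset time vs. 1/beta recovers injection time (intercept)
# and apparent path length (slope / 8.33).
slope, intercept = np.polyfit(1.0 / beta, onset, 1)
L_fit, t_inj_fit = slope / C, intercept
```

In practice, onset-time uncertainties and energy-dependent transport limit the accuracy of VDA, which is why the inferred injection times carry error bars.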
Figure 22: Transport modeling results for 150 keV electrons. The left panels represent the standard case of a single SEP injection into interplanetary space, while the right panels are for multiple injections. The top panels are normalized contour plots of the SEP intensity at five hours after the initial injection, while the bottom panels show the resulting particle intensities, as a function of time, at a number of spacecraft positions, the different SEP injections, and the resulting particle anisotropies. More details are given in the main text.

Our detailed radio analysis reveals that two of the four observed TIII radio burst episodes were directed significantly more towards Parker Solar Probe as compared to the other radio bursts. These two injection episodes are likely the main source of the electron event at Parker Solar Probe, which is also supported by the inferred injection times (cf. Sect. 6) and the results of the transport modeling (Sect. 7). These injections may also have contributed to the Parker Solar Probe proton event as we infer an earlier injection of protons than the shock connection time (cf. Sect. 6). This would also explain the comparatively high intensities observed at Parker Solar Probe when compared to Solar Orbiter. However, the long-lasting anisotropy of the proton event at Parker Solar Probe of about nine hours (see Fig. 16), which is not observed in the case of the electrons, suggests a long-lasting proton injection, most likely related to the CME-driven shock. A similar picture is presented by Solar Orbiter, at which a clear anisotropy, lasting about seven hours, was observed for protons but not for electrons (Fig. 18), suggesting that the shock played an important role in creating the proton event at Solar Orbiter. Also, BepiColombo observes a long-lasting anisotropy for protons (about seven hours), while electrons only show significant anisotropies during the rise phase of the event (Fig. 15), which is in agreement with a short, likely flare-related, injection. While an initial flare contribution to the proton event cannot be excluded for BepiColombo and Parker Solar Probe, a later shock contribution is, therefore, likely.
An even less intense and more delayed SEP event was observed by STEREO A and near-Earth spacecraft, which were far separated in heliolongitude (126\({}^{\circ}\) for STEREO A and 144\({}^{\circ}\) for Earth, see Table 2). At both locations, no significant anisotropies were observed for electrons, and only at STEREO A was a short anisotropic period during the early phase of the event visible in LET proton measurements (Fig. 20). Although we determine potential shock connection times for both positions already before any inferred injection time of SEPs reaching STEREO A or Earth, the significantly lower observed intensities and missing anisotropies suggest that neither a direct connection with the shock nor a flare-related injection was established, but rather that perpendicular diffusion was involved in distributing the SEPs. However, the short anisotropic period observed by STEREO/LET for 4-6 MeV protons could be the trace of a shorter-lasting connection to the shock. The presence of interplanetary structures such as pre-event ICMEs and the SIRs at the spacecraft locations could further have modified the magnetic field topology and enhanced scattering conditions, which likely contributed to the unclear and delayed SEP onset times. However, a wider SEP injection region, provided either by the extended shock front of 180\({}^{\circ}\) or by the four type III burst-related injection episodes, which covered a longitudinal angle of about 110\({}^{\circ}\), may have been a key ingredient in producing the widespread SEP event reaching also STEREO A and Earth. Especially TIII(2) and TIII(4), marking injections close to the longitudinal location of Parker Solar Probe (see Sect. 4.5), would provide another injection region significantly closer to Earth's location, connected over the western limb, as compared to the location of the flare. This could explain the comparatively early electron onset time detected by SOHO/EPHIN around 22:00 UT on 17 April.
At Mars, which was magnetically well-connected with STEREO A during the onset of the event (see Fig. 1, left), we did not observe a clear SEP increase associated with the early phase of the event. However, the associated CME-driven shock could have reached Mars on 22 April and an energetic particle increase was observed, which could have been related to the shock.
## 9 Summary and conclusions
The 17 April 2021 SEP event is the second widespread event of solar cycle 25 and the first one that was ever observed by five well-separated space missions in the inner heliosphere (within 1 au) constrained also by observations at Mars. It is an energetic event showing electrons up to the MeV range and 25 MeV protons reaching all inner heliospheric spacecraft positions, which span a longitudinal range of 210\({}^{\circ}\). BepiColombo observations by Mio/SPM even show the presence of \(>\)200 MeV protons. The closest observer to the Sun was Parker Solar Probe (\(r=0.42\) au) followed by BepiColombo (\(r=0.63\) au) and Solar Orbiter (\(r=0.84\) au). As outlined in Sect. 8, the interplanetary SEP event was likely formed by a combination of different processes with varying importance at different spacecraft positions. For instance, the observations suggest a different origin of the electron and proton SEP event. This is most clear in the case of the three best-connected observers, BepiColombo, Parker Solar Probe, and Solar Orbiter: at Parker Solar Probe and Solar Orbiter we find significantly earlier inferred injection times for electrons (at all energies) than for \(\sim\)25 MeV protons. At BepiColombo only the near-relativistic electron injection is found to be earlier than that of the protons. Also the much longer-lasting anisotropies observed in the proton event compared to electrons suggest an extended injection for protons only. Furthermore, different spacecraft were likely fed by different injections related to the various radio features with different injection directions as suggested by the radio directivity analyses (see Sect. 4.5). The timing analysis (see Sect. 4) shows that BepiColombo detected electrons injected already during the first episode (TIII(1)), while Parker Solar Probe likely detected only electrons from the later episodes, mainly from TIII(2) and TIII(4), which were directed towards Parker Solar Probe.
A possible alternative source of the two type III groups, namely TIII(2) and TIII(4) observed by Parker Solar Probe, could be the shock wave. In Sect. 4.5, we show that TIII(2) and TIII(4) were strongly polarized. This high degree of polarization indicates that the source is a region with strong magnetic fields. A highly compressive shock wave may provide such conditions, where the electron beams are accelerated via a shock drift acceleration mechanism (SDA; Ball & Melrose 2001; Mann et al. 2018). The energy gained by the electrons in these cases may also depend on other factors such as the upstream electron distribution. If we were to assume that the thermal electrons (\(\sim\)1% of the speed of light, Halekas et al. 2020) are being accelerated, then the maximum speed gain in a short period through SDA can be a factor of 14, which leads to 14% of the speed of light or \(\sim\)10 keV. However, a small portion of the tail electrons may be accelerated to higher energies. Our multi-spacecraft analysis further emphasizes that the location where TIII(2) and TIII(4) originated was in the direction of Parker Solar Probe. There is a strong possibility that some of the type III bursts within TIII(4) were accelerated by the shock wave since they are observed to be emanating from TII(HB) (cf. Sect. 4.5). However, the lack of meter-decameter wave measurements prevents us from
corroborating the generation of TIII(2) by the shock wave. In order to grasp the mechanisms of electron acceleration in the corona, full meter-decameter measurement would be necessary (cf. Jebaraj et al., 2023a).
Because Parker Solar Probe electron measurements are not yet available in units of intensity, we cannot compare the electron intensity levels at Parker Solar Probe with those at other spacecraft. However, because the time profiles at Parker Solar Probe are much more impulsive and peaked for both electrons and protons compared to Solar Orbiter, situated at a comparable absolute longitudinal separation angle, it is plausible that the overall ordering of intensities observed at the different spacecraft is similar for electrons and protons. Based on this assumption we performed initial interplanetary transport modeling of this event for electrons (see Sect. 7), supporting the idea that SEPs were released from several different source longitudes. These simulations show that without a SEP source near the magnetic footpoint of Parker Solar Probe, the measured intensity at that spacecraft cannot be reproduced by the model, independently of the adopted transport coefficients. While these simulations show promise, a more detailed modeling study is required, taking the disturbed nature of the interplanetary medium into consideration while sufficiently optimizing the transport coefficients used in the model. The detailed 3D modeling of the particle transport will be based on the treatment developed by Dröge et al. (2010), which employs focused particle transport along the large-scale heliospheric magnetic field as well as diffusion perpendicular to the field. We will also take into account the disturbances in the large-scale magnetic field caused by preceding CMEs, as indicated by the observations and by the predictions of the ENLIL model used in the current version of the paper. Such an extensive study is currently ongoing.
An important feature, which is not observed for the electron event, is the presence of long-lasting periods of proton anisotropies as observed by BepiColombo, Parker Solar Probe, and Solar Orbiter. This requires an extended, likely shock-associated proton injection. However, a flare contribution to the early phase of the proton event at these spacecraft cannot be excluded and is especially likely for Parker Solar Probe, for which the shock-connection time is determined to be established only after the inferred proton injection time. When the scattering of protons, in particular through 90\({}^{\circ}\) pitch angle, is reduced, a long-lasting anisotropy can arise as well.
In the case of the two farthest separated observers, STEREO A at a longitudinal separation angle of 126\({}^{\circ}\), and Earth at 144\({}^{\circ}\), SEP intensities were significantly lower, showing a more gradual profile and significantly delayed onsets, which suggests that these observers did not establish a direct magnetic connection with any of the potential SEP source regions. Missing anisotropies together with the aforementioned characteristics suggest that perpendicular diffusion was involved in distributing the SEPs to these far separated longitudes. The presence of interplanetary structures such as the pre-event ICMEs and the SIRs may have contributed to modifying the magnetic field topology and enhancing scattering conditions, leaving also room for a potential direct magnetic connection that was masked by a strongly disturbed parallel transport. However, even in the case of perpendicular transport being involved, we consider it likely that the widespread SEP observations were supported by an extended injection region. This could either have been provided by the extended shock front (\(\sim 180^{\circ}\)) or by the different injection directions marked by the four radio type III burst episodes covering in total a longitudinal range of about 110\({}^{\circ}\). Likely evidence for an extended shock front is the presence of multiple type II radio bursts, namely TII(1), TII(2), and TII(HB) (see Sect. 4.5), which are emitted at different locations on the expanding shock front. Our analysis of the radio intensity and directivity suggests that the sources of TII(1) and TII(2) were directed towards Solar Orbiter and STEREO A, while that of TII(HB) was clearly directed towards Parker Solar Probe.
The study of the 17 April 2021 widespread SEP event allowed us to perform a comprehensive multi-spacecraft analysis combining remote-sensing and in-situ observations of six well-separated observer positions and taking full advantage of the various complementary data sets. The advanced spacecraft fleet enabled us to characterize signatures of a very complex SEP event, which would not have been possible with fewer observers. We were able to identify significant differences between the electron and proton SEP event, as observed by the different spacecraft, with a more likely flare association of the electron event and a more likely shock source for the proton event. However, a mixing of both cannot be excluded. Thanks to the position of Parker Solar Probe, we were able to observe otherwise hidden SEP features that highlight the role of significantly different injection directions of the four different injection episodes, which we consider a new scenario that has to be taken into account as a potential contributor to widespread events.
Future case studies of additional widespread events with the currently available spacecraft fleet will hopefully allow us to further characterize the necessary ingredients of widespread events and the different scenarios that are able to produce these rather rare events.
###### Acknowledgements.
Solar Orbiter is a space mission of international collaboration between ESA and NASA, operated by ESA. The STIX instrument is an international collaboration between Switzerland, Poland, France, Czech Republic, Germany, Austria, Ireland, and Italy. We acknowledge funding by the European Union's Horizon 2020 research and innovation program under grant agreements No. 1010001459 (SERPENTINE) and No. 870405 (EUHFORIA 2.0). BepiColombo is a joint ESA - JAXA science mission with instruments and contributions directly funded by ESA Member States and JAXA. Parker Solar Probe was designed, built, and is now operated by the Johns Hopkins Applied Physics Laboratory as part of NASA's Living with a Star (LWS) program (contract NNNOSA6A01C). Support from the LWS management and technical team has played a critical role in the success of the Parker Solar Probe mission. Work in the University of Turku was performed under the umbrella of Finnish Centre of Excellence in Research of Sustainable Space (Academy of Finland Grant No. 336809). N.D. is grateful for support by the Turku Collegium for Science, Medicine and Technology of the University of Turku, Finland. N.D. and I.C.J. are grateful for support by the Academy of Finland (SHOCKSEE, grant No. 346902). L.R.G. thanks Toni Galvin for her assistance in the use of STEREO/PLASTIC data and Leila Mays, Dusan Odstrcil, Nick Arge, and Shaela Jones-Mecholsky regarding the use of the WSA-ENLIL model. The UAH team acknowledges the financial support by the Spanish Ministerio de Ciencia, Innovación y Universidades FEDER/MCIU/AEI Projects ESP2017-88436-R and PID2019-104863RB-I00/AEI/10.13039/501100011033. I.C.J. acknowledges funding by the BRAIN-be project SWiM (Solar wind Modelling with EUHFORIA for the new heliospheric missions). A.K. acknowledges support from NASA's NNN06AA01C (SO-SIS Phase-E) contract. V.K. acknowledges the support by NASA under grants No. 18-2H5W0218-2-0010 and 19-HSR-19 2-0143. E.P.
acknowledges support from NASA's PSP-GI (grant No. 80NNSC22RX0349), O2R (grant No. 80NNSC200K0285), and LNS-SC (grant No. 80NNSC22K0893) programmes. E.A. acknowledges support from the Academy of Finland (Postdoctoral Researcher Grant 322455). W.D. and Y.K. acknowledge ISSI for the possibility to discuss
the questions related to particle propagation in interplanetary space during the meeting of the team No. 459 (led by G. Li and L. Wang). B.S.-C. acknowledges support through UK-STFC Ernest Rutherford Fellowship ST/V0004115/1 and STFC grant N7/V00029.7-1. The work of F.S. was supported by DLR grant No. 50 OT 1904. N.W. acknowledges support from NASA program NNH17ZDA001N-LWS and from the Research Foundation - Flanders (FWO-Vlaanderen, fellowship No. 1184319N). C.O.L. acknowledges support from NASA's LWS program (Grant No. 80NSSC21K1325) and the MAVEN project funded through the NASA Mars Exploration Program. C.O.L. and C.M.S. acknowledge support from the IMPACT Investigation by the NASA Heliophysics Division through the STEREO Project Office at NASA GSFC (Grant No. 80NSSC1K1446). M.L. acknowledges support from the Italian Space Agency and the National Institute of Astrophysics through the ASI-INAF - 2020-35-H10 agreement for the development of the ASPIS prototype of scientific data centre for Space Weather. ENLIL simulation results have been provided by the CCMC at NASA Goddard Space Flight Center (GSFC) through their public Runs on Request system ([http://ccmc.gsfc.nasa.gov](http://ccmc.gsfc.nasa.gov); run ID 1Laura_Rodriguez-Garcia_041322_SM. L.). The WSA model was developed by N. Arge, currently at GSFC, and the ENLIL model was developed by D. Odstrcil, currently at George Mason University.
# Deviations of the intersection of Brownian motions in dimension four with general kernel

Arka Adhikari, Izumi Okada (2023-04-24). [arXiv:2304.12101](http://arxiv.org/abs/2304.12101)
###### Abstract.
In this paper, we find a natural four dimensional analog of the moderate deviation results of Chen [5] for the mutual intersection of two independent Brownian motions \(B\) and \(B^{\prime}\). In this work, we focus on understanding the following quantity, for a specific family of kernels \(H\),
\[\int_{0}^{1}\int_{0}^{1}H(B_{s}-B_{t}^{\prime})\mathrm{d}t\mathrm{d}s.\]
Given \(H(z)\propto\frac{1}{|z|^{\gamma}}\) with \(0<\gamma\leq 2\), we find that the deviation statistics of the above quantity can be related to the following family of inequalities from analysis,
\[\inf_{f:\|\nabla f\|_{L^{2}}<\infty}\frac{\|f\|_{L^{2}}^{1-\gamma/4}\|\nabla f\|_{L^{2}}^{\gamma/4}}{\left[\int_{(\mathbb{R}^{4})^{2}}f^{2}(x)H(x-y)f^{2}(y)\mathrm{d}x\mathrm{d}y\right]^{1/4}}. \tag{0.1}\]
Furthermore, in the case that \(H\) is the Green's function, the equation (0.1) will correspond to the generalized Gagliardo-Nirenberg inequality; this is used to analyze the Hartree equation in the field of partial differential equations. Thus, in this paper, we find a new and deep link between the statistics of the Brownian motion and a family of relevant inequalities in analysis.
Key words and phrases: moderate deviation, Green's function, Brownian motion, Gagliardo-Nirenberg inequality. 2010 Mathematics Subject Classification: 60F15, 60G50. Research supported by NSF grant DMS 2102842 (A.A.) and JSPS KAKENHI Grant-in-Aid for Early-Career Scientists (No. JP20K14329) (I.O.).
## 1 Introduction
### Motivation and Related Background
In this paper, we find a four dimensional analog of the moderate deviation results of Chen [5] for the mutual intersection of two independent Brownian motions; other related papers include [2, 3, 5, 6, 8, 11]. Let \(\tau_{1}\) and \(\tau_{2}\) be two independent exponential random variables with rate \(1\) and \(B,B^{\prime}\) be two independent Brownian motions starting at \(0\). Consider a kernel \(H\) of the form \(K*K\) with \(K\) a positive function, such that \(|H(z)|\leq\frac{C}{|z|^{\gamma}}\) for some constant \(C>0\) and \(0<\gamma\leq 2\). We study the moderate deviation for the following quantity,
\[\mathcal{G}_{H}:=\int_{0}^{\tau_{1}}\int_{0}^{\tau_{2}}H(B_{t}-B_{s}^{\prime} )\mathrm{d}t\mathrm{d}s.\]
Then, the constant \(\alpha_{H}\) that determines the large deviation behavior of \(\mathcal{G}_{H}\) can be expressed as
\[2\log\alpha_{H}:=\lim_{n\to\infty}\frac{1}{n}\log\frac{1}{(n!)^{2}}\mathbb{E} [(\mathcal{G}_{H})^{n}].\]
This is related to the following optimization problem. For a given parameter \(\theta\), denote,
\[M(\theta):=\sup_{\begin{subarray}{c}\|g\|_{L^{2}}=1\\ \|\nabla g\|_{L^{2}}<\infty\end{subarray}}\theta\left[\int_{(\mathbb{R}^{4})^ {2}}g^{2}(x)H(x-y)g^{2}(y)\mathrm{d}x\mathrm{d}y\right]^{1/2}-\frac{1}{2}\int _{\mathbb{R}^{4}}|\nabla g(x)|^{2}\mathrm{d}x.\]
\(\alpha_{H}\) is exactly the constant for which \(M(\alpha_{H}^{-1})=1\).
When the kernel \(H\) has nice scaling properties, namely, \(H(z)\propto\frac{1}{|z|^{\gamma}}\) for \(0<\gamma\leq 2\), one can check that \(\alpha_{H}\) will be related to the optimal constant \(\kappa_{H}\) in the following generalized Gagliardo-Nirenberg inequality (ref. [10, Theorem 2.3]),
\[\left[\int_{(\mathbb{R}^{4})^{2}}g^{2}(x)H(x-y)g^{2}(y)\mathrm{d}x\mathrm{d}y \right]^{1/4}\leq\kappa_{H}\|g\|_{L^{2}}^{1-\gamma/4}\|\nabla g\|_{L^{2}}^{ \gamma/4}. \tag{1.1}\]
Indeed, in such a case, \(\alpha_{H}=\kappa_{H}^{2}\left(\frac{\gamma}{2}\right)\left(\frac{2\gamma}{4 -\gamma}\right)^{\frac{\gamma-4}{4}}\). Eventually, we will obtain the following moderate deviation result:
\[\lim_{T\to\infty}T^{-\frac{2}{\gamma}}\log P\bigg{(}\int_{0}^{1}\int_{0}^{1}H (B_{t}-B_{s}^{\prime})\mathrm{d}t\mathrm{d}s\geq T\bigg{)}=-\frac{\gamma}{2}( \frac{4-\gamma}{4})^{\frac{4-\gamma}{\gamma}}\alpha_{H}^{-\frac{4}{\gamma}}=- \kappa_{H}^{-\frac{8}{\gamma}}.\]
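As a consistency check (ours, assuming the supremum defining \(M(\theta)\) is saturated by extremizers of (1.1)), one can see how these two closed forms of the rate agree. Writing \(u=\|\nabla g\|_{L^{2}}\) and using (1.1) with \(\|g\|_{L^{2}}=1\), the variational problem becomes

\[M(\theta)=\sup_{u\geq 0}\Big(\theta\kappa_{H}^{2}u^{\gamma/2}-\frac{1}{2}u^{2}\Big).\]

The maximizer satisfies \(u^{2-\gamma/2}=\frac{\gamma}{2}\theta\kappa_{H}^{2}\), so that \(M(\theta)=\frac{4-\gamma}{2\gamma}u^{2}\). Imposing \(M(\alpha_{H}^{-1})=1\) forces \(u^{2}=\frac{2\gamma}{4-\gamma}\), which recovers \(\alpha_{H}=\kappa_{H}^{2}\left(\frac{\gamma}{2}\right)\left(\frac{2\gamma}{4-\gamma}\right)^{\frac{\gamma-4}{4}}\). Substituting this into the rate and using \(\left(\frac{4-\gamma}{4}\cdot\frac{2\gamma}{4-\gamma}\right)^{\frac{4-\gamma}{\gamma}}=\left(\frac{\gamma}{2}\right)^{\frac{4-\gamma}{\gamma}}\),

\[\frac{\gamma}{2}\Big(\frac{4-\gamma}{4}\Big)^{\frac{4-\gamma}{\gamma}}\alpha_{H}^{-\frac{4}{\gamma}}=\Big(\frac{\gamma}{2}\Big)^{1-\frac{4}{\gamma}+\frac{4-\gamma}{\gamma}}\kappa_{H}^{-\frac{8}{\gamma}}=\kappa_{H}^{-\frac{8}{\gamma}},\]

since the exponent of \(\gamma/2\) vanishes.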
Furthermore, we remark that the only kernels \(H\) that can satisfy an inequality of the form
\[\left[\int_{(\mathbb{R}^{4})^{2}}g^{2}(x)H(x-y)g^{2}(y)\mathrm{d}x\mathrm{d}y \right]^{1/4}\leq\kappa_{H}\|g\|_{L^{2}}^{1-c}\|\nabla g\|_{L^{2}}^{c}\]
must satisfy \(|H(z)|=\mathrm{O}\left(\frac{1}{|z|^{4c}}\right)\). In addition, one can check directly that if \(H(z)=\frac{1}{|z|^{4-\gamma}}\), then it has a convolutional square root of the form \(\frac{1}{|z|^{4-\gamma/2}}\). In this sense,
we can relate the large deviation statistics of a generalized intersection of Brownian motions with the most important inequalities of the form (1.1).
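The convolutional square root claim above is an instance of the classical composition rule for Riesz kernels (a standard fact from potential theory, not proven here): in \(\mathbb{R}^{d}\), \(|x|^{\alpha-d}*|x|^{\beta-d}=c_{\alpha,\beta,d}\,|x|^{\alpha+\beta-d}\) for \(\alpha,\beta>0\) with \(\alpha+\beta<d\). Taking \(d=4\) and \(\alpha=\beta=\gamma/2\) gives

\[\frac{1}{|z|^{4-\gamma/2}}*\frac{1}{|z|^{4-\gamma/2}}=c_{\gamma}\,\frac{1}{|z|^{4-\gamma}},\qquad 0<\gamma<4,\]

so \(|z|^{-(4-\gamma/2)}\) is, up to a constant, the convolutional square root of \(|z|^{-(4-\gamma)}\).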
In what follows, we will restrict to the case that our kernel is the Green's function of the Brownian motion in \(d=4\), that is, \(G(x)\propto|x|^{-2}\), since the proof is the same if \(H(z)\propto\frac{1}{|z|^{\gamma}}\) with \(0<\gamma\leq 2\). In this case, we will use \(\mathcal{G}\) to denote our central quantity of interest. Before we proceed with discussing the details of the proof and the consequences of our results, we give some comments on why this is the natural interaction between two Brownian motions in \(d=4\). Although one can make sense of the notion of mutual intersection in \(d=4\), the answer is not interesting, since two Brownian motions will, with exceedingly high probability, intersect only finitely many times before never intersecting again. Thus, the mutual intersection does not have good scaling properties in \(d=4\).
By contrast, one would expect that the intersection moderated by the Green's function kernel in \(d=4\) has the same scaling behavior as the usual mutual intersection in \(d=2\). If we apply the self-similar scaling \(B_{\cdot}\to\frac{1}{\sqrt{T}}B_{T\cdot}\), we see that \(\int_{0}^{T}\int_{0}^{T}G(B_{t}-B_{s}^{\prime})\mathrm{d}t\mathrm{d}s\) has the same distribution as \(T\int_{0}^{1}\int_{0}^{1}G(B_{t}-B_{s}^{\prime})\mathrm{d}t\mathrm{d}s\). This is exactly the same critical scaling behavior as the self-intersection in \(d=2\).
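In more detail, the scaling identity is a routine check: since \((B_{t})_{0\leq t\leq T}\) has the same law as \((\sqrt{T}W_{t/T})_{0\leq t\leq T}\) for a Brownian motion \(W\), and \(G(\lambda x)=\lambda^{-2}G(x)\) in \(d=4\),

\[\int_{0}^{T}\int_{0}^{T}G(B_{t}-B_{s}^{\prime})\mathrm{d}t\mathrm{d}s\overset{d}{=}\frac{1}{T}\int_{0}^{T}\int_{0}^{T}G(W_{t/T}-W_{s/T}^{\prime})\mathrm{d}t\mathrm{d}s=T\int_{0}^{1}\int_{0}^{1}G(W_{u}-W_{v}^{\prime})\mathrm{d}u\mathrm{d}v,\]

where the last equality substitutes \(u=t/T\), \(v=s/T\), contributing a factor \(T^{2}\).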
Beyond just giving us a generalization of the moderate deviation results of Chen [5] to \(d=4\), our results and proofs reveal connections between the properties of Brownian motions and central quantities in the analysis of differential equations. In \(d=2\), Chen [5] revealed the connection between the constant that appears in the moderate deviation analysis of the intersection and the optimal constant in an appropriate Gagliardo-Nirenberg inequality in \(d=2\) i.e.,
\[\inf_{f}\frac{\|f\|_{L_{2}}^{1/2}\|\nabla f\|_{L_{2}}^{1/2}}{\|f\|_{L_{4}}}\cdot\]
Here, we find a relationship between the exact constant in the moderate deviation study of \(\mathcal{G}\) and what is known as the generalized Gagliardo-Nirenberg inequality in the study of partial differential equations (ref. [10, 13]) i.e.,
\[\inf_{f}\frac{\|f\|_{L_{2}}^{1/2}\|\nabla f\|_{L_{2}}^{1/2}}{\left[\int_{( \mathbb{R}^{4})^{2}}f^{2}(x)|x-y|^{-2}f^{2}(y)\mathrm{d}x\mathrm{d}y\right]^{ 1/4}}.\]
If we look at [10, Theorem 2.3], this inequality is derived from the Hardy-Littlewood-Sobolev inequality and is used to study the Hartree equation. Hence, we find a new relationship between the intersection of Brownian motions and the field of analysis.
The final application of our result lies in the study of the capacity of the range of a random walk. The second author was originally motivated to study this question to extend the results of the paper [9]. The original goal was to derive a similar result for moderate deviations of a random walk and to further explore the link between the capacity in general dimension \(d\) and the self-intersection in dimension \(d-2\). As seen in [9], the asymptotics of the capacity of a random walk is controlled by the self-intersection moderated by the Green's function. In a forthcoming paper, we can prove a moderate deviation principle for the capacity.
### Strategy and Mathematical Description
As is well-known in the study of large deviations, the moderate deviation behavior of a positive random variable can be determined via the exact asymptotics of large moments of the random variable.
For example, in our case, to understand the moderate deviation behavior of \(\mathcal{G}\), one would need to compute quantities such as,
\[\lim_{n\to\infty}\frac{1}{n}\log\frac{1}{(n!)^{2}}\mathbb{E}[\mathcal{G}^{n}].\]
Chen [5] has performed such a computation in the context of the intersection. Indeed, the main tool that he applied is a nice expression for general \(n\)-th moments of the mutual intersection: Le Gall's formula. On a formal level, if one considers just two Brownian motions, we have that,
\[\int_{0}^{n}\int_{0}^{n}\delta(B_{t}-B_{s}^{\prime})\mathrm{d}t\mathrm{d}s= \int_{x\in\mathbb{R}^{d}}\mathrm{d}x\int_{0}^{n}\delta(B_{t}-x)\mathrm{d}t\int _{0}^{n}\delta(B_{s}^{\prime}-x)\mathrm{d}s,\]
where \(\delta\) is the usual Dirac delta function. Thus,
\[\mathbb{E}\left[\left(\int_{0}^{n}\int_{0}^{n}\delta(B_{t}-B_{s}^{\prime})\mathrm{d}t\mathrm{d}s\right)^{m}\right]=\int_{(\mathbb{R}^{d})^{m}}\mathrm{d}x_{1}\ldots\mathrm{d}x_{m}\left(\mathbb{E}\left[\sum_{\rho}\int_{0\leq t_{1}\leq t_{2}\leq\ldots\leq t_{m}\leq n}\prod_{j=1}^{m}\delta(B_{t_{j}}-x_{\rho(j)})\,\mathrm{d}t_{j}\right]\right)^{2},\]

where \(\rho\) ranges over the permutations of \(\{1,\ldots,m\}\).
The main benefit of this formula is that, through the introduction of the points \(x_{1},\ldots,x_{m}\), we see that we can separate \(B\) and \(B^{\prime}\) and treat the expectations separately. Furthermore, one can explicitly compute the expectations above and write it in terms of the transition probability of the Brownian motion. Under the right setup, one can observe that the computation above resembles a Markov transition probability. Thus, after careful manipulation, one can eventually relate the quantity above to finding the eigenvalues of an appropriate symmetric operator; finding this maximum eigenvalue can now be readily phrased as an optimization problem over an appropriate subspace. For example, one now has access to formulas resembling the Feynman-Kac formula, which allow one to compute functions of the form \(\mathbb{E}[\int_{0}^{1}f(B_{t})\mathrm{d}t]\).
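As a toy illustration of evaluating such functionals (our sketch, not part of the proof): for \(f(x)=|x|^{2}\) in \(d=4\) one has \(\mathbb{E}[|B_{t}|^{2}]=4t\), so \(\mathbb{E}[\int_{0}^{1}f(B_{t})\mathrm{d}t]=2\), which a direct Monte Carlo estimate reproduces.

```python
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_steps, d = 4000, 200, 4
dt = 1.0 / n_steps

# Simulate 4D Brownian paths on a uniform grid of [0, 1].
increments = rng.normal(scale=np.sqrt(dt), size=(n_paths, n_steps, d))
paths = np.cumsum(increments, axis=1)          # B_{t_i} with t_i = i * dt

# Riemann-sum estimate of E[ int_0^1 |B_t|^2 dt ]  (exact value: 2)
estimate = (np.sum(paths**2, axis=2) * dt).sum(axis=1).mean()
```

The Riemann-sum discretization introduces a small O(1/n_steps) bias on top of the Monte Carlo error.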
However, none of these heuristics can possibly work if one does not separate \(B\) and \(B^{\prime}\) from each other; if one essentially has to deal with two time parameters simultaneously, then there is no way to relate this quantity to a Markov transition probability. When considering the intersection of two Brownian motions moderated by the Green's function, there is not an obvious way to split the two Brownian motions. Namely, the function \(G(B_{s}-B_{t}^{\prime})\propto\frac{1}{|B_{s}-B_{t}^{\prime}|^{2}}\) does not naturally lend itself to a splitting of \(B\) and \(B^{\prime}\) and it appears that a computation of the moments fundamentally has to deal with some correlation between \(B\) and \(B^{\prime}\).
However, in this paper, we are able to find a means of circumventing this difficulty. We first express \(G(B_{s}-B_{t}^{\prime})\) as \(\int_{z\in\mathbb{R}^{d}}\tilde{G}(B_{s}-z)\tilde{G}(B_{t}^{\prime}-z)\mathrm{d}z\). Here \(\tilde{G}\) is the convolutional square root of \(G\); on a formal level, this allows one to separate out \(B\) and \(B^{\prime}\) from each other in the formula. If we perform the splitting, we get access to multiple computational tools, such as the Feynman-Kac formula. Indeed, one can obtain a lower bound on asymptotic moments relatively straightforwardly via an appropriate application of the Feynman-Kac formula. However, there are still multiple challenges to get an appropriate upper bound.
The main tool to derive an upper bound is to approximate the moment computation by a Markov transition kernel. If one has a Markov transition kernel representation of the upper bound, then one can represent the upper bound in
terms of finding the largest eigenvalue of an appropriate linear operator. However, there are multiple difficulties to deal with in order to derive a Markov transition kernel approximation. In the context of the computation for the Brownian motion, a computation of the \(n\)-th moment naturally expresses itself as a sum over permutations of \(n\) points \(x_{1},\ldots,x_{n}\) (see equation (2.3)). A natural Markov kernel approximation would replace the sum over configurations \(y_{1},\ldots,y_{n}\) in which \(y_{1},\ldots,y_{n}\) is required to be a permutation of \(x_{1},\ldots,x_{n}\) by a sum over configurations \(y_{1},\ldots,y_{n}\) in which each \(y_{i}\) is allowed to be any one of \(x_{1},\ldots,x_{n}\), independently of the others. However, such an approximation will lose exponential factors unless the size of \(|\{x_{1},\ldots,x_{n}\}|\) is far less than \(n\); this is only possible if the total state space is finite. To make this justification rigorous, we had to appropriately discretize \(\mathbb{R}^{4}\) and argue that there was little loss in making such manipulations. In addition, such a justification involved regularizing the singularity of \(G\) near the origin.
Furthermore, the natural Markov transition kernel representation that can be derived from the moments is a rather cumbersome expression involving multiple functions with awkward normalization conditions phrased in terms of the convolutional square root \(\tilde{G}\) (see equation (2.2)). In order to relate this upper bound to the lower bound, one must find a way to transform the optimization problem (2.2) into the constant coming from the modified Gagliardo-Nirenberg inequality. The particular form of the modified Gagliardo-Nirenberg inequality is not merely incidental to the proof; it was a necessity in order to bridge the different ways of obtaining the lower and upper bounds for the asymptotic moments.
### Main Results
Let \(G(z)=\int_{0}^{\infty}p_{t}(z)\mathrm{d}t\), where \(p_{t}(z)\) is the transition density for a Brownian motion to reach the point \(z\) at time \(t\). Our main result proves a moderate deviation principle for \(\mathcal{G}([0,1]):=\int_{0}^{1}\int_{0}^{1}G(B_{t}-B_{s}^{\prime})\mathrm{d}t\mathrm{d}s\).
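For the reader's convenience, we record the standard closed form of \(G\) in \(d=4\): with \(p_{t}(z)=(2\pi t)^{-2}e^{-|z|^{2}/2t}\) and the substitution \(u=|z|^{2}/2t\),

\[G(z)=\int_{0}^{\infty}\frac{e^{-|z|^{2}/2t}}{(2\pi t)^{2}}\mathrm{d}t=\frac{1}{2\pi^{2}|z|^{2}}\int_{0}^{\infty}e^{-u}\mathrm{d}u=\frac{1}{2\pi^{2}|z|^{2}},\]

consistent with the asymptotics \(G(z)\propto|z|^{-2}\) used in the introduction.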
As we have stated, the Green's function for the Brownian motion is an inverse polynomial; we can relate \(\mathcal{G}([0,T])=_{d}T\mathcal{G}([0,1])\). Thus, once we obtain a large deviation principle for \(\mathcal{G}([0,1])\), we can extend it to that of \(\mathcal{G}([0,T])\). We find that \(\mathcal{G}([0,1])\) is related to the best constant of the modified Gagliardo-Nirenberg inequality. Namely, it is the smallest constant \(\tilde{\kappa}(4,2)\) such that the following inequality holds:
\[\left[\int_{(\mathbb{R}^{4})^{2}}g^{2}(x)G(x-y)g^{2}(y)\mathrm{d}x\mathrm{d}y \right]^{1/4}\leq\tilde{\kappa}(4,2)\|g\|_{L^{2}}^{1/2}\|\nabla g\|_{L^{2}}^{ 1/2}. \tag{1.2}\]
**Theorem 1.1**.: _We have the following large deviation behavior on \(\mathcal{G}([0,1])\). For \(d=4\),_
\[\lim_{t\to\infty}\frac{1}{t}\log P(\mathcal{G}([0,1])\geq t)=-\tilde{\kappa}^ {-4}(4,2).\]
We also state the result for general kernels.
**Corollary 1.2**.: _Assume \(H(z)\propto\frac{1}{|z|^{\gamma}}\) with \(0<\gamma\leq 2\). For \(d=4\),_
\[\lim_{T\to\infty}T^{-\frac{2}{\gamma}}\log P\bigg{(}\int_{0}^{1}\int_{0}^{1}H(B_{t}-B_{s}^{\prime})\mathrm{d}t\,\mathrm{d}s\geq T\bigg{)}=-\kappa_{H}^{-\frac{8}{\gamma}},\]
_where \(\kappa_{H}\) is the optimal constant in (1.1)._
Next, we also consider the following self-intersection local time of the Brownian motion in \(d=4\) moderated by the Green's function:
\[\beta_{t}:=\int_{0}^{t}\int_{0}^{s}G(B_{l}-B_{s})\mathrm{d}l\mathrm{d}s-E\bigg{[} \int_{0}^{t}\int_{0}^{s}G(B_{l}-B_{s})\mathrm{d}l\mathrm{d}s\bigg{]}.\]
Note that \(\int_{0}^{t}\int_{0}^{s}G(B_{l}-B_{s})\mathrm{d}l\mathrm{d}s\) does not exist, but as in [1], we can define \(\beta_{t}\) by renormalization. By [1, 9], we find that there exists a value \(\gamma_{\beta}\) such that
\[Ee^{\gamma\beta_{1}}\begin{cases}<\infty&\text{ if }\gamma<\gamma_{\beta},\\ =\infty&\text{ if }\gamma>\gamma_{\beta}.\end{cases}\]
The ordinary self-intersection local time of the Brownian motion in \(d=2\) was estimated in [2]. From our moderate deviation results on \(\mathcal{G}\), we can show moderate deviation results on \(\beta_{t}\); these correspond to [2, Theorems 1.1 and 1.2]. We can also obtain results corresponding to [2, Theorems 1.3-1.5] as corollaries by using very similar methods; as such, we omit the proof.
**Theorem 1.3**.: _We have_
\[\lim_{t\to\infty}\frac{1}{t}\log P(\beta_{1}\geq t)=-\tilde{\kappa}^{-4}(4,2). \tag{1.3}\]
_In particular, \(\gamma_{\beta}=\tilde{\kappa}^{-4}(4,2)\)._
Finally, we introduce the result of our forthcoming paper regarding the moderate deviation of the capacity of the range of a simple random walk, which is one of the motivations for this paper. We have the following moderate deviation behavior for \(\mathfrak{G}_{n}:=\sum_{i=1}^{n}\sum_{l=1}^{n}G_{d}(S_{i}-\tilde{S}_{l})\), where \(S\) and \(\tilde{S}\) are independent simple random walks on \(\mathbb{Z}^{4}\) and \(G_{d}\) is the discrete Green's function. Let \(b_{n}=o(n)\) and \(\lim_{n\to\infty}b_{n}=\infty\). Then, we have, for \(\lambda>0\),
\[\lim_{n\to\infty}\frac{1}{b_{n}}\log P(\mathfrak{G}_{n}\geq\lambda nb_{n})=- \tilde{\kappa}^{-4}(4,2)\lambda.\]
As we have mentioned earlier in the introduction, we can use our main result to obtain a moderate deviation principle for the capacity of a range of a random walk. Let \(\tau_{A}\) denote the first positive hitting time of a finite set \(A\) by a simple random walk \(S\). Define
\[\mathrm{Cap}(A):=\sum_{x\in A}P^{x}(\tau_{A}=\infty)\]
and \(\mathcal{R}[a,b]:=\{S_{a},\dots,S_{b}\}\). If \(a=0\), we simply write it as \(\mathcal{R}_{b}\). As observed in the papers [1, 9], the capacity of the range of the random walk can be carefully decomposed as the sum of the capacities of the first and second halves of the random walk as well as a term representing the 'mutual capacity' of interaction between the first and second halves. Namely, one has that,
\[\mathrm{Cap}(\mathcal{R}_{2n})=\mathrm{Cap}(\mathcal{R}_{n})+\mathrm{Cap}( \mathcal{R}[n+1,2n])-\chi_{c}(\{S_{i}\}_{i=0}^{n},\{S_{i}\}_{i=n+1}^{2n})\]
for some function \(\chi_{c}\). Once one does this, one will observe that the main contribution to the large deviation behavior will come from the term \(\chi_{c}\). As investigated in [9], the term \(\chi_{c}\) can be marginally simplified to be of the form \(\frac{\pi^{2}}{64(\log n)^{2}}\mathfrak{G}_{n}\). Thus, we estimate the following limit: for some \(b_{n}\to\infty\),
\[\lim_{n\to\infty}\frac{1}{b_{n}}\log P\bigg{(}\mathrm{Cap}(\mathcal{R}_{n})-E \mathrm{Cap}(\mathcal{R}_{n})\leq-\frac{\lambda n}{(\log n)^{2}}b_{n}\bigg{)}.\]
Finally, we explain the contents of this paper. The proof of the central Theorem 1.1 is divided between Sections 2 and 3; these sections relate the large deviations of Theorem 1.1 to an optimization problem defined by the constant \(\rho\) as in equation (2.2). To relate this constant to a more fitting form, we have an intermediate Section 4, which relates the quantity \(\rho\) to the modified Gagliardo-Nirenberg inequality. In Section 5, we estimate the self-intersection of the Brownian motion moderated by the Green's function, which corresponds to the proof of Theorem 1.3. Appendix A contains estimates that regularize the singularity of the Green's function around the origin. At the beginning of Section 3, we split \(G\) into a component supported near the origin and another away from the origin; the results of Appendix A show that the component supported near the origin does not contribute asymptotically to the large deviation statistics. Appendix B allows us to analyze the modified kernels obtained via the discretization and compactification procedure in Section 3; in particular, it appears in the proof of Lemma 3.4 to remove the effects of discretization.
## 2. Large Deviation for the intersection of Brownian Motions: The Proof of Theorem 1.1
In this section, we will consider the large deviation of the intersection moderated by the Green's function kernel of the Brownian motion. Recall that our basic quantity \(\mathcal{G}\) is given by
\[\mathcal{G}:=\int_{0}^{\tau_{1}}\int_{0}^{\tau_{2}}G(B_{t}-B_{s}^{\prime}) \mathrm{d}t\mathrm{d}s.\]
Here, as before, \(\tau_{1}\), \(\tau_{2}\) are exponential random variables. This is in contrast to \(\mathcal{G}([0,1])\), in which both Brownian motions vary from time \(0\) to \(1\). We remark that \(\mathcal{G}([0,1])\) has the following scaling property,
\[\mathcal{G}([0,t])=_{d}t\mathcal{G}([0,1]). \tag{2.1}\]
In the following result, which is the main theorem in this section, we compute the asymptotics of the moments. Our strategy will be to write \(G\) in terms of its convolutional square root \(G=\tilde{G}*\tilde{G}\). One can directly compute the convolutional square root as \(\tilde{G}(x)=\int_{0}^{\infty}\frac{1}{\sqrt{\pi t}}p_{t}(x)\mathrm{d}t\). Thus, we see that \(\tilde{G}\) is positive and has the asymptotics \(\tilde{G}(x)\propto\frac{1}{|x|^{3}}\). Also, we use \(P_{\tau}\) to denote the probability density that a Brownian motion killed by an exponential variable with rate \(1\) reaches the point \(x\) at some time. Namely, \(P_{\tau}(x)=\int_{0}^{\infty}e^{-t}p_{t}(x)\mathrm{d}t\).
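That \(\tilde{G}\) is indeed a convolutional square root of \(G\) can be checked on the Fourier side, using \(\hat{p}_{t}(\xi)=e^{-t|\xi|^{2}/2}\) and \(\int_{0}^{\infty}t^{-1/2}e^{-at}\mathrm{d}t=\sqrt{\pi/a}\):

\[\widehat{\tilde{G}}(\xi)=\int_{0}^{\infty}\frac{e^{-t|\xi|^{2}/2}}{\sqrt{\pi t}}\mathrm{d}t=\frac{\sqrt{2}}{|\xi|},\qquad\widehat{G}(\xi)=\int_{0}^{\infty}e^{-t|\xi|^{2}/2}\mathrm{d}t=\frac{2}{|\xi|^{2}},\]

so that \(\widehat{\tilde{G}}^{\,2}=\widehat{G}\), i.e. \(\tilde{G}*\tilde{G}=G\).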
**Theorem 2.1**.: _Consider \(\mathcal{G}\) as defined earlier. We have the following expression for the large moments:_
\[\lim_{n\to\infty}\frac{1}{n}\log\frac{1}{(n!)^{2}}\mathbb{E}[\mathcal{G}^{n}]= 2\log\rho.\]
_Here, \(\rho\) is the solution to the following optimization problem, that is,_
\[\rho:=\sup_{\begin{subarray}{c}f\in L_{G}^{2}\\ k:\int_{\mathbb{R}^{4}}k^{2}(z)dz=1\end{subarray}}\int_{(\mathbb{R}^{4})^{4}}f( \tilde{z},\tilde{e})\sqrt{k}(\tilde{z})\tilde{G}(\tilde{e})P_{\tau}(\tilde{z} +\tilde{e}-z-e)\tilde{G}(e)\sqrt{k}(z)f(z,e)d\tilde{z}\,d\tilde{e}dz\,de, \tag{2.2}\]
_and \(f\in L_{G}^{2}\) is the space of functions that satisfies \(\int_{(\mathbb{R}^{4})^{2}}f^{2}(z,e)\tilde{G}(e)dz\,de=1\)._
Note that we can write \(\mathcal{G}\) as,
\[\mathcal{G}:=\int_{\mathbb{R}^{4}}\mathrm{d}z\int_{0}^{\tau_{1}}\tilde{G}(B_{t}-z )\mathrm{d}t\int_{0}^{\tau_{2}}\tilde{G}(B_{s}^{\prime}-z)\mathrm{d}s.\]
At this point, we can try to take powers of the following expression and compute the resulting moments:
\[\mathbb{E}[\mathcal{G}^{n}]=\int_{(\mathbb{R}^{4})^{n}}\mathrm{d}z_{1}\dots \mathrm{d}z_{n}\left[\sum_{\rho}\int_{(\mathbb{R}^{4})^{n}}\prod_{i=1}^{n} \tilde{G}(x_{i}-z_{\rho(i)})P_{\tau}(x_{i}-x_{i-1})\mathrm{d}x_{1}\dots\mathrm{ d}x_{n}\right]^{2}. \tag{2.3}\]
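The killed transition density \(P_{\tau}\) enters through the telescoping identity \(e^{-t_{n}}=\prod_{i=1}^{n}e^{-(t_{i}-t_{i-1})}\) over ordered times: schematically, after the change of variables \(s_{i}=t_{i}-t_{i-1}\),

\[\int_{0\leq t_{1}\leq\cdots\leq t_{n}<\infty}e^{-t_{n}}\prod_{i=1}^{n}p_{t_{i}-t_{i-1}}(x_{i}-x_{i-1})\,\mathrm{d}t_{1}\cdots\mathrm{d}t_{n}=\prod_{i=1}^{n}\int_{0}^{\infty}e^{-s_{i}}p_{s_{i}}(x_{i}-x_{i-1})\,\mathrm{d}s_{i}=\prod_{i=1}^{n}P_{\tau}(x_{i}-x_{i-1}),\]

while the sum over permutations \(\rho\) records which of the points \(z_{1},\ldots,z_{n}\) is visited at each ordered time.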
Analysing this expression carefully will allow one to deduce Theorem 2.1. By scaling, we can almost relate this to the more standard expression \(\mathcal{G}([0,1])\). One issue here is that in order to apply the scaling argument, one needs the times \(\tau_{1}\) and \(\tau_{2}\) to match. This clearly cannot be true for random, independent \(\tau_{1}\) and \(\tau_{2}\). However, we have inequalities relating the expression \(\mathcal{G}\) with \(\mathcal{G}([0,1])\). If we consider the general expression \(\mathcal{G}_{t_{1},t_{2}}:=\int_{\mathbb{R}^{4}}\mathrm{d}z\int_{0}^{t_{1}}\tilde{G}(B_{t}-z)\mathrm{d}t\int_{0}^{t_{2}}\tilde{G}(B_{s}^{\prime}-z)\mathrm{d}s\), we have the following analog of Le Gall's moment formula,
\[\mathbb{E}[(\mathcal{G}_{t_{1},t_{2}})^{n}]\] \[=\int_{(\mathbb{R}^{4})^{n}}\mathrm{d}z_{1}\dots\mathrm{d}z_{n}\] \[\times\sum_{\rho_{x}}\int_{(\mathbb{R}^{4})^{n}}\mathrm{d}x_{1} \dots\mathrm{d}x_{n}\int_{[0,t_{1}]^{n}}\mathrm{d}s_{1}\dots\mathrm{d}s_{n} \prod_{i=1}^{n}\tilde{G}(x_{i}-z_{\rho_{x}(i)})p_{s_{i}-s_{i-1}}(x_{i}-x_{i-1})\] \[\times\sum_{\rho_{y}}\int_{(\mathbb{R}^{4})^{n}}\mathrm{d}y_{1} \dots\mathrm{d}y_{n}\int_{[0,t_{2}]^{n}}\mathrm{d}r_{1}\dots\mathrm{d}r_{n} \prod_{i=1}^{n}\tilde{G}(y_{i}-z_{\rho_{y}(i)})p_{r_{i}-r_{i-1}}(y_{i}-y_{i-1}).\]
By the Cauchy-Schwarz inequality, we can relate the moments over different times \(t_{1}\neq t_{2}\) to moments using the same time. Noting that \(\tilde{G}\) is a positive quantity, we have that
\[\mathbb{E}[(\mathcal{G}_{t_{1},t_{2}})^{n}]\leq\mathbb{E}[(\mathcal{G}_{t_{1 },t_{1}})^{n}]^{1/2}\mathbb{E}[(\mathcal{G}_{t_{2},t_{2}})^{n}]^{1/2}=t_{1}^{n /2}t_{2}^{n/2}\mathbb{E}[\mathcal{G}([0,1])^{n}]. \tag{2.4}\]
In the lower bound direction, it is additionally clear that,
\[\mathbb{E}[\mathcal{G}^{n}]\geq\mathbb{E}[(\mathcal{G}_{\min(\tau_{1},\tau_{2 }),\min(\tau_{1},\tau_{2})})^{n}]=\mathbb{E}[(\min(\tau_{1},\tau_{2}))^{n}] \mathbb{E}[\mathcal{G}([0,1])^{n}]. \tag{2.5}\]
Combining manipulations of the exponential function with equations (2.5) and (2.4) allows one to relate the moments of \(\mathcal{G}\) with those of \(\mathcal{G}([0,1])\). With equations (2.4) and (2.1) in hand, one can then perform standard manipulations on exponential functions to obtain the following limiting result on the moments of \(\mathcal{G}([0,1])\).
**Corollary 2.2**.: _Consider the quantity \(\mathcal{G}([0,1])\). We have the following moment estimates on \(\mathcal{G}([0,1])\):_
\[\lim_{n\to\infty}\frac{1}{n}\log\frac{1}{n!}\mathbb{E}[\mathcal{G}([0,1])^{n} ]=2\log\rho+\log 2, \tag{2.6}\]
_and \(\rho\), again, is the optimization problem from equation (2.2)._
Proof.: Given the scaling property in equation (2.1), the Cauchy-Schwarz inequality for the moments given in equation (2.4), and the lower bound in (2.5), this follows from the computation on the moments of exponential random variables in the proof of [7, Theorem 3.3.2].
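On a heuristic level, the passage from such moment asymptotics to tail estimates (the content of [7, Theorem 1.2.8]) is a Chebyshev optimization: if \(\mathbb{E}[X^{n}]\approx n!\,c^{n}\) with \(c=2\rho^{2}\), then for every \(n\),

\[P(X\geq t)\leq\frac{\mathbb{E}[X^{n}]}{t^{n}}\approx n!\left(\frac{c}{t}\right)^{n},\]

and by Stirling's formula the right-hand side is minimized near \(n\approx t/c\), where it equals \(e^{-t/c(1+o(1))}\). With \(\rho=\tilde{\kappa}^{2}(4,2)/\sqrt{2}\) this gives \(c=\tilde{\kappa}^{4}(4,2)\), matching the rate in Theorem 1.1; the matching lower bound requires the full strength of the moment asymptotics.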
The optimization problem \(\rho\) may not seem recognizable in this form, but in Proposition 4.1 we show that \(\rho=\frac{\tilde{\kappa}^{2}(4,2)}{\sqrt{2}}\), where \(\tilde{\kappa}(4,2)\) is the optimal constant in the modified Gagliardo-Nirenberg inequality (1.2). Using this information on the constant \(\rho\) along with standard large deviation estimates derived from moment estimates on positive quantities, we can derive the proof of the main theorem.
Proof of Theorem 1.1.: Once one substitutes the expression \(\rho=\frac{\tilde{\kappa}^{2}(4,2)}{\sqrt{2}}\) from Proposition 4.1 into the moment estimates in Corollary 2.2, this follows from [7, Theorem 1.2.8].
Proof of Corollary 1.2.: By the same proof as for the Green's function, we can obtain results corresponding to Proposition 4.1 and Theorem 2.1. Note that
\[\int_{0}^{T}\int_{0}^{T}H(B_{t}-B_{s}^{\prime})\mathrm{d}t\mathrm{d}s=_{d}T^{ \frac{4-\gamma}{2}}\int_{0}^{1}\int_{0}^{1}H(B_{t}-B_{s}^{\prime})\mathrm{d}t \mathrm{d}s.\]
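This scaling identity follows from Brownian scaling \((B_{Tt})_{t\geq 0}=_{d}(\sqrt{T}B_{t})_{t\geq 0}\) together with the homogeneity \(H(\sqrt{T}x)=T^{-\gamma/2}H(x)\): substituting \(t\mapsto Tt\) and \(s\mapsto Ts\),

\[\int_{0}^{T}\int_{0}^{T}H(B_{t}-B_{s}^{\prime})\mathrm{d}t\mathrm{d}s=T^{2}\int_{0}^{1}\int_{0}^{1}H(B_{Tt}-B_{Ts}^{\prime})\mathrm{d}t\mathrm{d}s=_{d}T^{2-\gamma/2}\int_{0}^{1}\int_{0}^{1}H(B_{t}-B_{s}^{\prime})\mathrm{d}t\mathrm{d}s,\]

and \(2-\gamma/2=\frac{4-\gamma}{2}\).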
Then, if we repeat the proof of [7, Theorem 3.3.2 and (2.2.20)], we have
\[\lim_{n\to\infty}\frac{1}{n}\log\frac{1}{(n!)^{\gamma/2}}\mathbb{E}\left[\left(\int_{0}^{1}\int_{0}^{1}H(B_{t}-B_{s}^{\prime})\mathrm{d}t\mathrm{d}s\right)^{n}\right]=2\log\alpha_{H}+(2-\gamma/2)\log 2-(2-\gamma/2)\log(2-\gamma/2)\]
and hence we have, again, from [7, Theorem 1.2.8],
\[\lim_{T\to\infty}T^{-\frac{2}{\gamma}}\log P\bigg{(}\int_{0}^{1}\int_{0}^{1}H( B_{t}-B_{s}^{\prime})\mathrm{d}t\mathrm{d}s\geq T\bigg{)}=-\frac{\gamma}{2}( \frac{4-\gamma}{4})^{\frac{4-\gamma}{\gamma}}\alpha_{H}^{-\frac{4}{\gamma}}.\]
Since \(\alpha_{H}=\kappa_{H}^{2}\left(\frac{\gamma}{2}\right)\left(\frac{2\gamma}{4- \gamma}\right)^{\frac{\gamma-4}{4}}\), we obtain the desired result.
### Lower Bound for the intersection of Brownian Motions
We will prove Theorem 2.1 by proving corresponding upper and lower bounds. In this subsection, we prove the following lower bound estimate on the moments of \(\mathcal{G}\).
**Theorem 2.3**.: _Consider \(\mathcal{G}\). We have the following lower bound for the large moments:_
\[\liminf_{n\to\infty}\frac{1}{n}\log\frac{1}{(n!)^{2}}\mathbb{E}[\mathcal{G}^{ n}]\geq 2\log\rho.\]
_Here, \(\rho\) is the optimization problem defined in (2.2)._
Proof.: Recall our moment expression (2.3). One fact about the convolutional square root term \(\tilde{G}(x_{i}-z_{\rho_{x}(i)})\) is that its value only depends on the difference \(e_{\rho(i)}:=x_{i}-z_{\rho_{x}(i)}\). Rewriting the expression in terms of these variables gives us that the moment is given by,
\[\mathbb{E}[\mathcal{G}^{n}]\] \[= \int_{(\mathbb{R}^{4})^{n}}\mathrm{d}z_{1}\ldots\mathrm{d}z_{n} \left[\sum_{\rho}\int_{(\mathbb{R}^{4})^{n}}\prod_{i=1}^{n}\tilde{G}(e_{\rho(i )})P_{\tau}(z_{\rho(i)}+e_{\rho(i)}-z_{\rho(i-1)}-e_{\rho(i-1)})\mathrm{d}e_{1 }\ldots\mathrm{d}e_{n}\right]^{2}.\]
In the above expression, one should consider \(\tilde{G}(e)\,\mathrm{d}e\) as a measure on the set of \(e\) variables. From direct computation, one can see that \(\tilde{G}\) is a non-negative function and can thus serve as a density. This is key to the strategy.
Now, we let \(k(z)\) be a normalized \(L^{2}\) function of \(z\); namely, \(\int k^{2}(z)\mathrm{d}z=1\). Then, by applying the Cauchy-Schwarz inequality, we see that,
\[\sqrt{\frac{1}{(n!)^{2}}\mathbb{E}[\mathcal{G}^{n}]}=\sqrt{\frac{ 1}{(n!)^{2}}\mathbb{E}[\mathcal{G}^{n}]\prod_{i=1}^{n}\int_{\mathbb{R}^{4}}k^{ 2}(z_{i})\mathrm{d}z_{i}}\\ \geq\frac{1}{n!}\int_{(\mathbb{R}^{4})^{2n}}\sum_{\rho}\prod_{i=1 }^{n}k(z_{i})\mathrm{d}z_{i}\prod_{i=1}^{n}\tilde{G}(e_{\rho(i)})P_{\tau}(z_{ \rho(i)}+e_{\rho(i)}-z_{\rho(i-1)}-e_{\rho(i-1)})\mathrm{d}e_{1}\ldots\mathrm{ d}e_{n}\\ =\int_{(\mathbb{R}^{4})^{2n}}\sqrt{k(z_{n})}\prod_{i=2}^{n}\sqrt{ k(z_{i})}P_{\tau}(z_{i}+e_{i}-z_{i-1}-e_{i-1})\sqrt{k(z_{i-1})}\tilde{G}(e_{i}) \\ \times\sqrt{k}(z_{1})P_{\tau}(z_{1}+e_{1})\tilde{G}(e_{1}) \mathrm{d}e_{1}\mathrm{d}z_{1}\ldots\mathrm{d}e_{n}\mathrm{d}z_{n}. \tag{2.7}\]
All the terms that appear above are positive. Thus, we can restrict \(\tilde{G}\) to a portion of its support and still derive a lower bound. Let \(\tilde{G}_{R,0}(z)\) denote the restriction of \(\tilde{G}(z)\) to the portion of its support where \(|z|\leq R\). Furthermore, we also assume that \(\sqrt{k}\) has finite support \(S\). These are all technical assumptions that we will remove later.
To complete our lower bound, we also need to introduce a new quantity:
\[\delta:=\min_{x\in S+B_{R}}P_{\tau}(x),\] where \(S+B_{R}\) is the sumset of \(S\) and the closed ball \(B_{R}\) of radius \(R\) around the origin.
With this quantity in hand, a lower bound on the last line of (2.7) will be
\[\delta\int_{(\mathbb{R}^{4})^{2n}}\sqrt{k(z_{n})}\prod_{i=2}^{n} \sqrt{k(z_{i})}P_{\tau}(z_{i}+e_{i}-z_{i-1}-e_{i-1})\sqrt{k(z_{i-1})} \tilde{G}_{R,0}(e_{i})\\ \times\sqrt{k}(z_{1})\mathrm{d}e_{1}\mathrm{d}z_{1}\ldots\mathrm{ d}e_{n}\mathrm{d}z_{n}. \tag{2.8}\]
Now, we consider the following space of functions with corresponding inner product:
\[L^{2}_{\tilde{G},R}:=\left\{f:\int_{(\mathbb{R}^{4})^{2}}f^{2}(z,e)\tilde{G}_{R,0}(e)\mathrm{d}z\mathrm{d}e=1\right\},\] \[\langle f_{1},f_{2}\rangle=\int_{(\mathbb{R}^{4})^{2}}f_{1}(z,e)f _{2}(z,e)\tilde{G}_{R,0}(e)\mathrm{d}z\mathrm{d}e.\]
We also define the following operator on this space,
\[T_{k}(f)(\tilde{z},\tilde{e}):=\sqrt{k}(\tilde{z})\int_{(\mathbb{R}^{4})^{2}} P_{\tau}(\tilde{z}+\tilde{e}-z-e)\sqrt{k}(z)\tilde{G}_{R,0}(e)f(z,e)\mathrm{d}z \mathrm{d}e.\]
We see that \(T_{k}\) is a symmetric operator on our space \(L^{2}_{\tilde{G},R}\). Namely, we have,
\[\langle f_{1},T_{k}f_{2}\rangle=\int_{(\mathbb{R}^{4})^{4}}f_{1}(\tilde{z}, \tilde{e})\tilde{G}_{R,0}(\tilde{e})\sqrt{k}(\tilde{z})P_{\tau}(\tilde{z}+ \tilde{e}-z-e)\sqrt{k}(z)\tilde{G}_{R,0}(e)f_{2}(z,e)\mathrm{d}z\mathrm{d}e \mathrm{d}\tilde{z}\mathrm{d}\tilde{e}. \tag{2.9}\]
Now that we have introduced the operator \(T_{k}\), we can rewrite the last line of (2.8) as,
\[\delta\langle\sqrt{k},T_{k}^{n-1}\sqrt{k}\rangle. \tag{2.10}\]
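The reduction from moment growth to a largest eigenvalue has a transparent finite-dimensional analogue (a toy illustration only, not part of the argument): for a symmetric positive semidefinite matrix \(T\) and a vector \(v\) with nonzero overlap with the top eigenvector, \(\langle v,T^{n-1}v\rangle\) grows like \(\lambda_{\max}^{n-1}\), which is exactly what power iteration exploits.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(6, 6))
T = A @ A.T  # a random symmetric positive semidefinite "toy operator"

def top_eigenvalue(T, n_iter=2000, seed=0):
    """Estimate the largest eigenvalue of a symmetric PSD matrix by power
    iteration: repeatedly apply T, renormalize, then take a Rayleigh quotient."""
    rng = np.random.default_rng(seed)
    v = rng.normal(size=T.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(n_iter):
        w = T @ v
        v = w / np.linalg.norm(w)
    return float(v @ T @ v)

est = top_eigenvalue(T)
exact = float(np.linalg.eigvalsh(T).max())
```

In the proof, the role of \(T\) is played by \(T_{k}\) acting on \(L^{2}_{\tilde{G},R}\), and the overlap \(\langle h,h_{max}\rangle\) plays the role of the initial vector's component along the top eigenvector.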
Let \(h_{max}(z,e)\) be the eigenfunction corresponding to the largest eigenvalue of \(T_{k}\). Let \(h(z,e)\) be an approximator of \(h_{max}(z,e)\) with the further property that it has a lower bound \(>0\) on the support of \(\sqrt{k}\). From the form of (2.9), we see that \(h(z,e)\) need not have support outside of \(\operatorname{supp}(k)\times B_{R}\), where \(B_{R}\) is the ball of radius \(R\) around \(0\). Also, let us define a new quantity as follows,
\[\epsilon:=\min_{(z,e)\in\operatorname{supp}(k)\times B_{R}}\frac{h(z,e)}{ \sqrt{k}(z)}.\]
Note that \(\epsilon\) exists due to our assumption that \(h\) has a lower bound greater than \(0\) on the set above and, furthermore, that the support of \(h\) cannot lie outside \(\operatorname{supp}(k)\times B_{R}\). We can thus replace (2.10) with the lower bound,
\[\delta\epsilon^{2}\langle h,T_{k}^{n-1}h\rangle\geq\delta\epsilon^{2}\langle h,h_{max}\rangle^{2}\langle h_{max},T_{k}^{n-1}h_{max}\rangle,\]
when \(n\) is odd. We can derive a similar lower bound when \(n\) is even. Thus, we see that,
\[\frac{1}{n}\log\sqrt{\frac{1}{(n!)^{2}}\mathbb{E}[\mathcal{G}^{n}]}\geq\frac{ 1}{n}\log(\delta\epsilon^{2}\langle h,h_{max}\rangle^{2})+\]
\[\log\sup_{f\in L_{G,R}^{2}}\int_{(\mathbb{R}^{4})^{4}}f(\tilde{z},\tilde{e}) \tilde{G}_{R,0}(\tilde{e})\sqrt{k}(\tilde{z})P_{\tau}(\tilde{z}+\tilde{e}-z-e )\sqrt{k}(z)\tilde{G}_{R,0}(e)f(z,e)\mathrm{d}\tilde{z}\mathrm{d}\tilde{e} \mathrm{d}z\mathrm{d}e.\]
Now, as one considers the limit \(n\to\infty\), the term \(\frac{1}{n}\log(\delta\epsilon^{2}\langle h,h_{max}\rangle^{2})\) makes no contribution. Thus,
\[\liminf_{n\to\infty}\frac{1}{n}\log\frac{1}{(n!)^{2}}\mathbb{E}[ \mathcal{G}^{n}]\] \[\geq 2\log\sup_{f\in L_{G,R}^{2}}\int_{(\mathbb{R}^{4})^{4}}f( \tilde{z},\tilde{e})\tilde{G}_{R,0}(\tilde{e})\sqrt{k}(\tilde{z})P_{\tau}( \tilde{z}+\tilde{e}-z-e)\sqrt{k}(z)\tilde{G}_{R,0}(e)f(z,e)\mathrm{d}\tilde{z }\mathrm{d}\tilde{e}\mathrm{d}z\mathrm{d}e.\]
Next, we observe that if a function is in \(L_{\tilde{G},R}^{2}\), then it is in \(L_{\tilde{G},\tilde{R}}^{2}\) for any \(\tilde{R}\geq R\). Thus, we may first replace the restricted supremum involving \(\tilde{G}_{R,0}\) with,
\[\sup_{f\in L_{G}^{2}}\int_{(\mathbb{R}^{4})^{4}}f(\tilde{z},\tilde{e})\tilde{ G}(\tilde{e})\sqrt{k}(\tilde{z})P_{\tau}(\tilde{z}+\tilde{e}-z-e)\sqrt{k}(z) \tilde{G}(e)f(z,e)\mathrm{d}\tilde{z}\mathrm{d}\tilde{e}\mathrm{d}z\mathrm{d }e,\]
where \(L_{G}^{2}\) is the following space:
\[L_{G}^{2}:=\{f:\int_{(\mathbb{R}^{4})^{2}}f^{2}(z,e)\tilde{G}(e)\mathrm{d}z \mathrm{d}e=1\}.\]
Finally, since the choice of \(k\) was arbitrary, we may take the supremum over all such \(k\). Thus, we ultimately derive,
\[\liminf_{n\to\infty}\frac{1}{n}\log\frac{1}{(n!)^{2}}\mathbb{E}[ \mathcal{G}^{n}]\] \[\geq 2\log\sup_{\begin{subarray}{c}f\in L_{G}^{2}\\ k:\int_{\mathbb{R}^{4}}k^{2}(z)\mathrm{d}z=1\end{subarray}}\int_{(\mathbb{R}^{4 })^{4}}f(\tilde{z},\tilde{e})\sqrt{k}(\tilde{z})\tilde{G}(\tilde{e})P_{\tau}( \tilde{z}+\tilde{e}-z-e)\tilde{G}(e)\sqrt{k}(z)f(z,e)\mathrm{d}\tilde{z} \mathrm{d}\tilde{e}\mathrm{d}z\mathrm{d}e\]
and we obtain the desired result.
## 3. Upper Bound for the intersection of Brownian Motions
In this section, we will establish the following result, which gives the corresponding upper bounds of the moments of \(\mathcal{G}\). The following theorem, combined with Theorem 2.3, will give us Theorem 2.1.
**Theorem 3.1**.: _Consider \(\mathcal{G}\). We have the following upper bound for the large moments:_
\[\limsup_{n\to\infty}\frac{1}{n}\log\frac{1}{(n!)^{2}}\mathbb{E}[\mathcal{G}^{ n}]\leq 2\log\rho, \tag{3.1}\]
_where \(\rho\) is the optimization problem defined in (2.2)._
Proof.: The derivation of the upper bound is far more technical. The singularity of \(G\) near the origin is an obstacle; it prevents one from bounding \(G\) from above by a constant in appropriate locations. However, at the scale with which we are concerned, the origin has a vanishingly small contribution to the asymptotic moments. Similarly, there are some issues due to the infinite support of \(G\). We first split \(G\) into a main term supported away from the origin and \(\infty\) and an error term around the origin and \(\infty\).
We first define the function \(\tilde{G}_{R,\delta}(z)\) as \(\tilde{G}(z)\) when \(\delta\leq|z|\leq R\); the value will be \(0\) when \(|z|>R\). Finally, \(\tilde{G}_{R,\delta}(z)\) takes the constant value \(\tilde{G}(\delta)\) when \(|z|\leq\delta\) (well-defined by radial symmetry, with the usual abuse of notation). Once we have introduced these cutoffs, we observe the following,
\[G(x-y)=\tilde{G}_{R,\delta}*\tilde{G}_{R,\delta}(x-y)+G^{\circ}(x-y).\]
The function \(G^{\circ}(x)\) can be bounded by \(G(x)\mathbb{1}[|x|\leq\delta]+[f(\delta)+g(R)]\), where \(f(\delta)\) and \(g(R)\) are some functions that go to \(0\) as \(\delta\) goes to \(0\) and \(R\) goes to \(\infty\) respectively.
Furthermore, we remark that for general non-negative random variables \(F\) and \(H\),
\[\frac{1}{n}\log\frac{1}{(n!)^{2}}\mathbb{E}[(F+H)^{n}]\leq\frac{1}{n}\log \left[\left(\frac{1}{(n!)^{2}}\mathbb{E}[F^{n}]\right)^{1/n}+\left(\frac{1}{( n!)^{2}}\mathbb{E}[H^{n}]\right)^{1/n}\right]^{n}. \tag{3.2}\]
If \(\rho_{F}\) is the limit \(\lim_{n\to\infty}\frac{1}{n}\log\left[\frac{1}{(n!)^{2}}\mathbb{E}[F^{n}]\right]\), then we see that \(\rho_{F+H}\leq\log(\exp[\rho_{F}]+\exp[\rho_{H}])\).
Now, it is clear that
\[\int_{0}^{\tau_{1}}\int_{0}^{\tau_{2}}[f(\delta)+g(R)]\mathrm{d}s\mathrm{d}t \leq\tau_{1}\tau_{2}[f(\delta)+g(R)].\]
Thus, we see that,
\[\lim_{\delta\to 0}\lim_{R\to\infty}\frac{1}{n}\log\frac{1}{(n!)^{2}}\mathbb{E} \left[\left(\int_{0}^{\tau_{1}}\int_{0}^{\tau_{2}}[f(\delta)+g(R)]\mathrm{d}s \mathrm{d}t\right)^{n}\right]=-\infty.\]
From the results in the Appendix, we have from Lemma A.3 that,
\[\lim_{\delta\to 0}\frac{1}{n}\log\frac{1}{(n!)^{2}}\mathbb{E}\left[\left(\int_{0}^{\tau_{1}}\int_{0}^{\tau_{2}}G(B_{t}-B_{s}^{\prime})\mathbb{1}[|B_{t}-B_{s}^{\prime}|\leq\delta]\mathrm{d}t\mathrm{d}s\right)^{n}\right]=-\infty. \tag{3.3}\]
Hence, we can use these facts as well as (3.2) to assert that
\[\lim_{\delta\to 0}\lim_{R\to\infty}\limsup_{n\to\infty}\frac{1}{n}\log\frac{1}{(n!)^{2}}\mathbb{E}\left[\left(\int_{0}^{\tau_{1}}\int_{0}^{\tau_{2}}G^{\circ}(B_{t}-B_{s}^{\prime})\mathrm{d}t\mathrm{d}s\right)^{n}\right]=-\infty.\]
Provided now that one can show the following lemma, we will be done.
**Lemma 3.2**.: _It holds that,_
\[\limsup_{n\to\infty}\frac{1}{n}\log\frac{1}{(n!)^{2}}\mathbb{E}\left[\left(\int_{0}^{\tau_{1}}\int_{0}^{\tau_{2}}(\tilde{G}_{R,\delta}*\tilde{G}_{R,\delta})(B_{t}-B_{s}^{\prime})\mathrm{d}t\,\mathrm{d}s\right)^{n}\right]\leq 2\log\rho. \tag{3.4}\]
_We denote the quantity inside the expectation above by \(\mathcal{G}_{R,\delta}\)._
### The proof of Lemma 3.2
In this subsection, we will prove the following intermediary result.
**Lemma 3.3**.: _Recall the notation \(\mathcal{G}_{R,\delta}\) from Lemma 3.2. For any choice of \(M\) and \(\epsilon\), we have that_
\[\limsup_{n\to\infty}\frac{1}{n}\log\frac{1}{(n!)^{2}}\mathbb{E}[(\mathcal{G} _{R,\delta})^{n}]\leq 2\log\rho_{M,R,\delta,\epsilon},\]
_where \(\rho_{M,R,\delta,\epsilon}\) is given by the following optimization problem:_
\[\sup_{\begin{subarray}{c}k:\sum_{q}k^{2}(q)=1\\ f:\sum_{q}\int_{\mathbb{R}^{4}}f^{2}(q,e)\tilde{G}_{R,\delta}^{\epsilon}(e)\mathrm{d}e=1\end{subarray}}\sum_{\tilde{q},q}\int_{(\mathbb{R}^{4})^{2}}f(\tilde{q},\tilde{e})\sqrt{k}(\tilde{q})\tilde{G}_{R,\delta}^{\epsilon}(\tilde{e})P_{\tau,M}(\tilde{q}+\tilde{e}-q-e)\tilde{G}_{R,\delta}^{\epsilon}(e)\sqrt{k}(q)f(q,e)\mathrm{d}e\,\mathrm{d}\tilde{e}.\]
_Here, \(P_{\tau,M}\) is a compactified version of the killed transition density \(P_{\tau}\) given by_
\[P_{\tau,M}(z)=\sqrt{\sum_{l\in\mathbb{Z}^{4}}P_{\tau}^{2}(Ml+z)},\]
_and \(\tilde{G}_{R,\delta}^{\epsilon}\) is a version of \(\tilde{G}_{R,\delta}\) given by_
\[\tilde{G}_{R,\delta}^{\epsilon}(e)=\sup_{|d|\leq\epsilon}\tilde{G}_{R,\delta}( e+d).\]
In the next section, we will show that \(\limsup_{M\to\infty}\limsup_{\epsilon\to 0}\rho_{M,R,\delta,\epsilon}\leq\rho\), which will complete the proof of Lemma 3.2.
Proof.: We will have to find an appropriate discretization in order to understand this term carefully. The first step is to write our moment as the norm of a vector in some appropriate vector space and then apply the triangle inequality. We consider a space of vectors whose entries are indexed by \((l_{1},\ldots,l_{n})\in(\mathbb{Z}^{4})^{n}\). The squared norm of such a vector will be given by \(\sum_{l_{1},\ldots,l_{n}}(X_{l_{1},\ldots,l_{n}})^{2}\).
Now, consider the vector \(X^{\rho,e_{1},\ldots,e_{n}}\) whose \((l_{1},\ldots,l_{n})\) entry is given by,
\[[X^{\rho,e_{1},\ldots,e_{n}}(z_{1},\ldots,z_{n})]_{l_{1},\ldots,l _{n}}\] \[= \prod_{i=1}^{n}\tilde{G}_{R,\delta}(e_{\rho(i)})P_{\tau}(Ml_{ \rho(i)}+z_{\rho(i)}+e_{\rho(i)}-Ml_{\rho(i-1)}-z_{\rho(i-1)}-e_{\rho(i-1)}).\]
Then, we see that we can write \(\frac{1}{(n!)^{2}}\mathbb{E}[(\mathcal{G}_{R,\delta})^{n}]\) as,
\[\frac{1}{(n!)^{2}}\mathbb{E}[(\mathcal{G}_{R,\delta})^{n}]=\int_{\{(-\frac{M} {2},\frac{M}{2})^{4}\}^{n}}\text{d}z_{1}\ldots\text{d}z_{n}\bigg{|}\bigg{|} \frac{1}{n!}\sum_{\rho}\int_{e_{1},\ldots,e_{n}}\text{d}e_{1}\ldots\text{d}e_{ n}X^{\rho,e_{1},\ldots,e_{n}}(z_{1},\ldots,z_{n})\bigg{|}\bigg{|}^{2}.\]
Then, we apply the triangle inequality to state that this is less than,
\[\leq\int_{\{(-\frac{M}{2},\frac{M}{2}]^{4}\}^{n}}\text{d}z_{1}\ldots\text{d}z _{n}\left[\frac{1}{n!}\sum_{\rho}\int_{e_{1},\ldots,e_{n}}||X^{\rho,e_{1}, \ldots,e_{n}}(z_{1},\ldots,z_{n})||\right]^{2}.\]
Recall the definition,
\[P_{\tau,M}(z)=\sqrt{\sum_{l\in\mathbb{Z}^{4}}P_{\tau}^{2}(Ml+z)}.\]
We see that,
\[||X^{\rho,e_{1},\ldots,e_{n}}(z_{1},\ldots,z_{n})||=\prod_{i=1}^{n}\tilde{G}_{R, \delta}(e_{\rho(i)})P_{\tau,M}(z_{\rho(i)}+e_{\rho(i)}-z_{\rho(i-1)}-e_{\rho(i- 1)}).\]
Thus, we see that,
\[\frac{1}{(n!)^{2}}\mathbb{E}[(\mathcal{G}_{R,\delta})^{n}]\leq\int_{\{(-\frac{M}{2},\frac{M}{2}]^{4}\}^{n}}\mathrm{d}z_{1}\ldots\mathrm{d}z_{n}\left[\frac{1}{n!}\sum_{\rho}\int_{(\mathbb{R}^{4})^{n}}\mathrm{d}e_{1}\ldots\mathrm{d}e_{n}\prod_{i=1}^{n}\tilde{G}_{R,\delta}(e_{\rho(i)})P_{\tau,M}(z_{\rho(i)}+e_{\rho(i)}-z_{\rho(i-1)}-e_{\rho(i-1)})\right]^{2}. \tag{3.5}\]
We still need to discretize the region \((-\frac{M}{2},\frac{M}{2}]^{4}\). Fix \(\epsilon\) of the form \(\frac{M}{2I}\) for some large integer \(I\). Let \(Q_{\epsilon}=(-\epsilon,\epsilon]^{4}\). Let \(P_{\epsilon}\) be a grid of points in \((-\frac{M}{2},\frac{M}{2}]^{4}\) such that the disjoint union \(\cup_{p\in P_{\epsilon}}(p+Q_{\epsilon})=(-\frac{M}{2},\frac{M}{2}]^{4}\). A quantity that will be useful in trying to understand the discretization is the following,
\[F^{\tilde{G}_{R,\delta}}(z_{1},\ldots,z_{n}):=\int_{(\mathbb{R}^{4})^{n}} \mathrm{d}e_{1}\ldots\mathrm{d}e_{n}\prod_{i=1}^{n}\tilde{G}_{R,\delta}(e_{i} )P_{\tau,M}(z_{i}+e_{i}-z_{i-1}-e_{i-1}).\]
Now, we discuss what happens to the function \(F^{\tilde{G}_{R,\delta}}(z_{1},\ldots,z_{n})\) under a small perturbation of each of its entries, \(F^{\tilde{G}_{R,\delta}}(z_{1}+d_{1},\ldots,z_{n}+d_{n})\), where the perturbations \(d_{i}\) are understood to be small, i.e., \(|d_{i}|\leq\mathfrak{d}\) for some fixed small constant \(\mathfrak{d}\). Namely, we see that if we change variables \(\hat{e}_{i}=e_{i}+d_{i}\), then an alternative way to write \(F^{\tilde{G}_{R,\delta}}(z_{1}+d_{1},\ldots,z_{n}+d_{n})\) would be,
\[\int_{(\mathbb{R}^{4})^{n}}\mathrm{d}\hat{e}_{1}\ldots\mathrm{d}\hat{e}_{n} \prod_{i=1}^{n}\tilde{G}_{R,\delta}(\hat{e}_{i}-d_{i})P_{\tau,M}(z_{i}+\hat{e} _{i}-z_{i-1}-\hat{e}_{i-1}).\]
Recall the definition \(\tilde{G}_{R,\delta}^{\mathfrak{d}}\) as,
\[\tilde{G}_{R,\delta}^{\mathfrak{d}}(e)=\sup_{|d|\leq\mathfrak{d}}\tilde{G}_{R,\delta}(e+d),\]
we thus see that
\[F^{\tilde{G}_{R,\delta}}(z_{1}+d_{1},\ldots,z_{n}+d_{n})\leq F^{\tilde{G}_{R, \delta}^{\mathfrak{d}}}(z_{1},\ldots,z_{n}),\]
provided that all \(|d_{i}|\leq\mathfrak{d}\). Consider the function space \(L^{2}((Q_{\epsilon})^{n})\) with norm given by,
\[||f||_{L^{2}(Q_{\epsilon})}^{2}=\int_{(Q_{\epsilon})^{n}}f^{2}(z_{1},\ldots,z_ {n})\mathrm{d}z_{1}\ldots\mathrm{d}z_{n}.\]
Thus, we can rewrite the right hand side of (3.5) as,
\[\sum_{p_{1},\ldots,p_{n}\in P_{\epsilon}}\left\|\frac{1}{n!}\sum_{\rho}Y_{p_{1},\ldots,p_{n}}^{\rho}\right\|^{2},\]
where \(Y_{p_{1},\ldots,p_{n}}^{\rho}\) is the function with values,
\[Y_{p_{1},\ldots,p_{n}}^{\rho}(z_{1},\ldots,z_{n})=F^{\tilde{G}_{R,\delta}}(z_{ \rho(1)}+p_{\rho(1)},\ldots,z_{\rho(n)}+p_{\rho(n)}).\]
As before, we apply the triangle inequality, now in the space \(L^{2}((Q_{\epsilon})^{n})\), to deduce that
\[\frac{1}{(n!)^{2}}\mathbb{E}[(\mathcal{G}_{R,\delta})^{n}]\leq\sum_ {p_{1},\ldots,p_{n}\in P_{\epsilon}}\left[\frac{1}{n!}\sum_{\rho}||Y_{p_{1}, \ldots,p_{n}}^{\rho}||\right]^{2}\] \[=(\epsilon^{4n})\sum_{p_{1},\ldots,p_{n}\in P_{\epsilon}}\left[ \frac{1}{n!}\sum_{\rho}\left(\frac{1}{(\epsilon)^{4n}}\int_{[-\epsilon/2, \epsilon/2]^{4}}\mathrm{d}d_{1}\ldots\mathrm{d}d_{n}F^{\tilde{G}_{R,\delta}}( p_{\rho(1)}+d_{1},\ldots,p_{\rho(n)}+d_{n})^{2}\right)^{1/2}\right]^{2}\] \[\leq\epsilon^{4n}\sum_{p_{1},\ldots,p_{n}\in P_{\epsilon}}\left[ \frac{1}{n!}\sum_{\rho}F^{\tilde{G}_{R,\delta}^{\epsilon}}(p_{\rho(1)},\ldots, p_{\rho(n)})\right]^{2}\] \[=\epsilon^{4n}\sum_{p_{1},\ldots,p_{n}\in P_{\epsilon}}\left[ \frac{1}{n!}\sum_{\rho}\int_{(\mathbb{R}^{4})^{n}}\mathrm{d}e_{1}\ldots \mathrm{d}e_{n}\prod_{i=1}^{n}\tilde{G}_{R,\delta}^{\epsilon}(e_{\rho(i)})P_{ \tau,M}(p_{\rho(i)}+e_{\rho(i)}-p_{\rho(i-1)}-e_{\rho(i-1)})\right]^{2}.\]
Let us consider the term inside the brackets. Consider the point measure \(\mu\) given by,
\[\mu_{p}=\frac{1}{n}\sum_{i=1}^{n}\delta_{p_{i}},\]
thus, we have a point measure supported at each point \(p_{i}\). Related to the measure \(\mu\), we can also define the following function on the points \(p\) of \(P_{\epsilon}\):
\[\phi_{\mu}(p)=\sqrt{\mu(p)}.\]
This function is normalized so that,
\[\sum_{p}(\phi_{\mu}(p))^{2}=1.\]
We thus have that,
\[\frac{1}{n!}\sum_{\rho}\int_{(\mathbb{R}^{4})^{n}}\mathrm{d}e_{1 }\ldots\mathrm{d}e_{n}\prod_{i=1}^{n}\tilde{G}_{R,\delta}^{\epsilon}(e_{\rho( i)})P_{\tau,M}(p_{\rho(i)}+e_{\rho(i)}-p_{\rho(i-1)}-e_{\rho(i-1)})\] \[=\frac{1}{n!}\sum_{\rho}\sum_{q_{1},\ldots,q_{n}}\mathbb{1}\left( p_{\rho(i)}=q_{i},\forall i\right)\int_{(\mathbb{R}^{4})^{n}}\mathrm{d}e_{1} \ldots\mathrm{d}e_{n}\prod_{i=1}^{n}\tilde{G}_{R,\delta}^{\epsilon}(e_{i})P_{ \tau,M}(q_{i}+e_{i}-q_{i-1}-e_{i-1})\] \[=\frac{1}{n!}\sum_{q_{1},\ldots,q_{n}}\mathbb{1}\left(\mu_{p}= \mu_{q}\right)\prod_{r\in P_{\epsilon}}(n\mu_{p}(r))!\int_{(\mathbb{R}^{4})^{n }}\mathrm{d}e_{1}\ldots\mathrm{d}e_{n}\prod_{i=1}^{n}\tilde{G}_{R,\delta}^{ \epsilon}(e_{i})P_{\tau,M}(q_{i}+e_{i}-q_{i-1}-e_{i-1})\] \[=\frac{1}{n!}\sum_{q_{1},\ldots,q_{n}}\mathbb{1}\left(\mu_{p}= \mu_{q}\right)\prod_{r\in P_{\epsilon}}\frac{(n\mu_{p}(r))!}{(\phi_{\mu_{p}}(r ))^{n\phi_{\mu_{p}}(r)}}\] \[\times\int_{(\mathbb{R}^{4})^{n}}\mathrm{d}e_{1}\ldots\mathrm{d} e_{n}\sqrt{\phi_{\mu_{p}}(q_{n})}\prod_{i=2}^{n}\sqrt{\phi_{\mu_{p}}(q_{i})} \tilde{G}_{R,\delta}^{\epsilon}(e_{i})P_{\tau,M}(q_{i}+e_{i}-q_{i-1}-e_{i-1}) \sqrt{\phi_{\mu_{p}}(q_{i-1})}\] \[\times\tilde{G}_{R,\delta}^{\epsilon}(e_{1})\sqrt{\phi_{\mu_{p}}(q _{1}+e_{1})}P_{\tau,M}(q_{1}+e_{1})\]
and it is bounded by
\[[\max_{z}P_{\tau,M}(z)]\frac{1}{n!}\prod_{r\in P_{e}}\frac{(n\mu_{p}( r))!}{(\phi_{\mu_{p}}(r))^{n\mu_{p}(r)}}\] \[\times\sum_{q_{1},\ldots,q_{n}}\int_{(\mathbb{R}^{4})^{n}} \mathrm{d}e_{1}\ldots\mathrm{d}e_{n}\sqrt{\phi_{\mu_{q}}(q_{n})}\prod_{i=2}^{n} \sqrt{\phi_{\mu_{p}}(q_{i})}\tilde{G}^{e}_{R,\delta}(e_{i})P_{\tau,M}(q_{i}+e_ {i}-q_{i-1}-e_{i-1})\sqrt{\phi_{\mu_{p}}(q_{i-1})}\] \[\times\tilde{G}^{e}_{R,\delta}(e_{1})\sqrt{\phi_{\mu_{p}}(q_{1}+e _{1})}. \tag{3.6}\]
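The combinatorial identity used above to pass from the sum over permutations \(\rho\) to the indicator \(\mathbb{1}(\mu_{p}=\mu_{q})\) is the following: for fixed \((q_{1},\ldots,q_{n})\),

\[\sum_{\rho}\mathbb{1}\left(p_{\rho(i)}=q_{i},\forall i\right)=\mathbb{1}\left(\mu_{p}=\mu_{q}\right)\prod_{r\in P_{\epsilon}}(n\mu_{p}(r))!,\]

since a permutation matching \(p\) to \(q\) exists if and only if the two empirical measures agree, and any such permutation can be composed with an arbitrary permutation of the indices carrying equal values.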
We can, again, represent the last line as an operator computation. Consider the following space of functions,
\[L^{2}_{G,R,\delta,\epsilon}:=\left\{f:\epsilon^{4}\sum_{q}\int_{\mathbb{R}^{4}}f^{2}(q,e)\tilde{G}_{R,\delta}^{\epsilon}(e)\mathrm{d}e=1\right\},\] \[\langle f_{1},f_{2}\rangle=\epsilon^{4}\sum_{q}\int_{\mathbb{R}^{4}}f_{1}(q,e)f_{2}(q,e)\tilde{G}_{R,\delta}^{\epsilon}(e)\mathrm{d}e.\]
On this space, we consider the following operator,
\[T_{k,R,\delta,\epsilon}(f)(\tilde{q},\tilde{e})=\sqrt{k}(\tilde{q})\sum_{q}\int_{\mathbb{R}^{4}}\mathrm{d}e\,P_{\tau,M}(\tilde{q}+\tilde{e}-q-e)\tilde{G}_{R,\delta}^{\epsilon}(e)\sqrt{k}(q)f(q,e).\]
This is a symmetric operator on our space \(L^{2}_{G,R,\delta,\epsilon}\). We can rewrite the last line of (3.6) as,
\[[\max_{z}P_{\tau,M}(z)]\frac{1}{n!}\prod_{r\in P_{\epsilon}}\frac{(n\mu_{p}(r))!}{(\phi_{\mu_{p}}(r))^{n\mu_{p}(r)}}\left[\int_{\mathbb{R}^{4}}\mathrm{d}e\,\tilde{G}_{R,\delta}^{\epsilon}(e)\right]\left\langle\sqrt{\frac{\phi_{\mu_{p}}}{\int_{\mathbb{R}^{4}}\mathrm{d}e\,\tilde{G}_{R,\delta}^{\epsilon}(e)}},T^{n-1}_{\phi_{\mu_{p}},R,\delta,\epsilon}\sqrt{\frac{\phi_{\mu_{p}}}{\int_{\mathbb{R}^{4}}\mathrm{d}e\,\tilde{G}_{R,\delta}^{\epsilon}(e)}}\right\rangle.\]
We needed to introduce the normalization factor \(\int_{\mathbb{R}^{4}}\mathrm{d}e\,\tilde{G}_{R,\delta}^{\epsilon}(e)\) so that the inner product of the function \(\sqrt{\frac{\phi_{\mu_{p}}}{\int_{\mathbb{R}^{4}}\mathrm{d}e\,\tilde{G}_{R,\delta}^{\epsilon}(e)}}\) with itself is at most \(1\). Observe that,
\[\sum_{q}\int_{\mathbb{R}^{4}}\left[\sqrt{\frac{\phi_{\mu_{p}}(q)}{\int_{\mathbb{R}^{4}}\mathrm{d}e\,\tilde{G}_{R,\delta}^{\epsilon}(e)}}\right]^{2}\tilde{G}_{R,\delta}^{\epsilon}(e)\mathrm{d}e=\sum_{q}\phi_{\mu_{p}}(q)=\sum_{q}\sqrt{\mu_{p}(q)}\leq\sum_{q}\mu_{p}(q)=1.\]
The restriction of the domain to \(\{|e|\leq R\}\) is needed in order to ensure that \(\int_{\mathbb{R}^{4}}\tilde{G}_{R,\delta}^{\epsilon}(e)\mathrm{d}e\) is finite. The inner product can be bounded as,
\[\left\langle\sqrt{\frac{\phi_{\mu_{p}}}{\int_{\mathbb{R}^{4}}\mathrm{d}e\,\tilde{G}_{R,\delta}^{\epsilon}(e)}},T^{n-1}_{\phi_{\mu_{p}},R,\delta,\epsilon}\sqrt{\frac{\phi_{\mu_{p}}}{\int_{\mathbb{R}^{4}}\mathrm{d}e\,\tilde{G}_{R,\delta}^{\epsilon}(e)}}\right\rangle\] \[\leq\left[\max_{\begin{subarray}{c}\sum_{q}k^{2}(q)=1\\ \sum_{q}\int_{\mathbb{R}^{4}}\mathrm{d}e\,f^{2}(q,e)\tilde{G}_{R,\delta}^{\epsilon}(e)=1\end{subarray}}\sum_{\tilde{q},q}\int_{(\mathbb{R}^{4})^{2}}f(\tilde{q},\tilde{e})\sqrt{k}(\tilde{q})\tilde{G}_{R,\delta}^{\epsilon}(\tilde{e})P_{\tau,M}(\tilde{q}+\tilde{e}-q-e)\tilde{G}_{R,\delta}^{\epsilon}(e)\sqrt{k}(q)f(q,e)\mathrm{d}e\mathrm{d}\tilde{e}\right]^{n-1}.\]
We denote the quantity in brackets above by \(\rho_{M,R,\delta,\epsilon}\).
Returning to bounding \(\mathbb{E}[(\mathcal{G}_{R,\delta})^{n}]\), we see that this is bounded by,
\[\frac{1}{(n!)^{2}}\mathbb{E}[(\mathcal{G}_{R,\delta})^{n}]\] \[\leq|\max_{z}P_{\tau,M}(z)|^{2}\left[\int_{\mathbb{R}^{4}}\tilde{G}_{R,\delta}^{\epsilon}(e)\mathrm{d}e\right]^{2}(\rho_{M,R,\delta,\epsilon})^{2n-2}\sum_{p_{1},\ldots,p_{n}}\left(\frac{1}{n!}\prod_{r\in P_{\epsilon}}\frac{(n\mu_{p}(r))!}{(\phi_{\mu_{p}}(r))^{n\mu_{p}(r)}}\right)^{2}.\]
Then, we see that,
\[\limsup_{n\to\infty}\frac{1}{n}\log\frac{1}{(n!)^{2}}\mathbb{E}[(\mathcal{G}_{ R,\delta})^{n}]\leq 2\log\rho_{M,R,\delta,\epsilon}+\frac{1}{n}\log\sum_{p_{1}, \ldots,p_{n}}\left(\frac{1}{n!}\prod_{r\in P_{\epsilon}}\frac{(n\mu_{p}(r))!} {(\phi_{\mu_{p}}(r))^{n\mu_{p}(r)}}\right)^{2}.\]
The latter term above can be shown to go to \(0\). Appealing to [7, (3.1.11)], we conclude that the logarithmically scaled moments of \(\mathcal{G}_{R,\delta}\) are bounded above by \(2\log\rho_{M,R,\delta,\epsilon}\).
### Analyzing \(\rho_{M,R,\delta,\epsilon}\)
The goal of this section is to remove the dependence on \(\epsilon\) and \(M\) in the definition of the optimization problem \(\rho_{M,R,\delta,\epsilon}\). We will prove the following two lemmas. The first removes the dependence on \(\epsilon\); the second removes the dependence on \(M\).
**Lemma 3.4**.: _Recall \(\rho_{M,R,\delta,\epsilon}\) from Lemma 3.3. As we remove the \(\epsilon\) regularization, we argue that_
\[\limsup_{\epsilon\to 0}\rho_{M,R,\delta,\epsilon}\leq\rho_{M,R,\delta}.\]
_Here,_
\[\rho_{M,R,\delta}:=\sup_{\begin{subarray}{c}\int_{(-\frac{M}{2},\frac{M}{2}]^{4}}k^{2}(q)\,\mathrm{d}q=1\\ \int_{(-\frac{M}{2},\frac{M}{2}]^{4}}\mathrm{d}q\int_{\mathbb{R}^{4}}\mathrm{d}e\,f^{2}(q,e)\tilde{G}_{R,\delta}(e)=1\end{subarray}}\int_{((-\frac{M}{2},\frac{M}{2}]^{4})^{2}}\mathrm{d}\tilde{q}\,\mathrm{d}q\int_{(\mathbb{R}^{4})^{2}}\mathrm{d}\tilde{e}\,\mathrm{d}e\,f(\tilde{q},\tilde{e})\sqrt{k}(\tilde{q})\tilde{G}_{R,\delta}(\tilde{e})\]
\[\times P_{\tau,M}(\tilde{q}+\tilde{e}-q-e)\tilde{G}_{R,\delta}(e)\sqrt{k}(q)f(q,e).\]
**Lemma 3.5**.: _Recall \(\rho_{M,R,\delta}\) from Lemma 3.4. As we remove the \(M\) compactification, we have,_
\[\limsup_{M\to\infty}\rho_{M,R,\delta}\leq\rho.\]
These two lemmas are now enough to prove Lemma 3.2.
Proof of Lemma 3.2.: From Lemma 3.3, the asymptotic moments of \(\mathcal{G}_{B,R}\) are bounded in terms of \(\rho_{M,R,\delta,\epsilon}\) for any arbitrary choice of \(M\) and \(\epsilon\). By using Lemmas 3.4 and 3.5, we can take the limits \(\epsilon\to 0\) and \(M\to\infty\) in order to deduce that these asymptotic moments can be bounded by \(\rho\), as desired.
Now we can turn to the proofs of Lemmas 3.4 and 3.5.
Proof of Lemma 3.4.: Note that \(\rho_{M,R,\delta,\epsilon}\) corresponds to the maximization problem,
\[\epsilon^{2d}\sum_{z_{1},z_{2}\in P_{\epsilon}}\int_{(\mathbb{R}^{4})^{2}}\mathrm{d}e_{1}\mathrm{d}e_{2}f(z_{1},e_{1})\sqrt{k(z_{1})}\tilde{G}_{R,\delta}^{\epsilon}(e_{1})P_{\tau,M}(z_{1}+e_{1}-z_{2}-e_{2})\tilde{G}_{R,\delta}^{\epsilon}(e_{2})\sqrt{k(z_{2})}f(z_{2},e_{2}).\]
Fix some \(\epsilon_{0}\); for \(\epsilon\leq\epsilon_{0}\), we can find some factor \(f(\epsilon_{0})\) such that \(f(\epsilon_{0})\to 1\) as \(\epsilon_{0}\to 0\) and \(\tilde{G}_{R,\delta}^{\epsilon}(z)\leq f(\epsilon_{0})\tilde{G}_{R,\delta}(z)+\mathbb{1}[R\leq|z|\leq R+\epsilon_{0}]\). Notice that once we fix \(\epsilon_{0}\), we can apply Theorem B.5 to the function \(f(\epsilon_{0})\tilde{G}_{R,\delta}(z)+\mathbb{1}[R\leq|z|\leq R+\epsilon_{0}]\) and show that,
\[\limsup_{\epsilon\to 0}\sup_{f,k}\epsilon^{2d}\sum_{z_{1},z_{2}\in P_{\epsilon}}\int_{(\mathbb{R}^{4})^{2}}\mathrm{d}e_{1}\mathrm{d}e_{2}f(z_{1},e_{1})\sqrt{k(z_{1})}\tilde{G}_{R,\delta}^{\epsilon}(e_{1})P_{\tau,M}(z_{1}+e_{1}-z_{2}-e_{2})\tilde{G}_{R,\delta}^{\epsilon}(e_{2})\sqrt{k(z_{2})}f(z_{2},e_{2})\] \[\leq\sup_{f,k}\int_{([-M,M]^{4})^{2}}\mathrm{d}z_{1}\mathrm{d}z_{2}\int_{(\mathbb{R}^{4})^{2}}\mathrm{d}e_{1}\mathrm{d}e_{2}f(z_{1},e_{1})\sqrt{k(z_{1})}[f(\epsilon_{0})\tilde{G}_{R,\delta}(e_{1})+\mathbb{1}[R\leq|e_{1}|\leq R+\epsilon_{0}]]\] \[\times P_{\tau,M}(z_{1}+e_{1}-z_{2}-e_{2})[f(\epsilon_{0})\tilde{G}_{R,\delta}(e_{2})+\mathbb{1}[R\leq|e_{2}|\leq R+\epsilon_{0}]]\sqrt{k(z_{2})}f(z_{2},e_{2}). \tag{3.7}\]
Now, we assert that, in general, for any \(L>0\) and positive functions \(k\), \(M_{1}\), and \(M_{2}\), we have
\[\int_{([-M,M]^{4})^{2}}\mathrm{d}z_{1}\mathrm{d}z_{2}\int_{( \mathbb{R}^{4})^{2}}\mathrm{d}e_{1}\mathrm{d}e_{2}k(z_{1},e_{1})M_{1}(e_{1})P_ {\tau,M}(z_{1}+e_{1}-z_{2}-e_{2})k(z_{2},e_{2})M_{2}(e_{2})\] \[\leq L\int_{([-M,M]^{4})^{2}}\mathrm{d}z_{1}\mathrm{d}z_{2}\int_{ (\mathbb{R}^{4})^{2}}\mathrm{d}e_{1}\mathrm{d}e_{2}k(z_{1},e_{1})M_{1}(e_{1})P _{\tau,M}(z_{1}+e_{1}-z_{2}-e_{2})k(z_{2},e_{2})M_{1}(e_{2})\] \[+\frac{1}{L}\int_{([-M,M]^{4})^{2}}\mathrm{d}z_{1}\mathrm{d}z_{2} \int_{(\mathbb{R}^{4})^{2}}\mathrm{d}e_{1}\mathrm{d}e_{2}k(z_{1},e_{1})M_{2}(e _{1})P_{\tau,M}(z_{1}+e_{1}-z_{2}-e_{2})k(z_{2},e_{2})M_{2}(e_{2}). \tag{3.8}\]
To see this, we introduce the convolutional square root \(\tilde{P}_{\tau,M}\) of \(P_{\tau,M}\), so that \(P_{\tau,M}(z_{1}+e_{1}-z_{2}-e_{2})=\int_{[-M,M]^{4}}\mathrm{d}k\,\tilde{P}_{\tau,M}(z_{1}+e_{1}-k)\tilde{P}_{\tau,M}(k-z_{2}-e_{2})\). Observe that \(\tilde{P}_{\tau,M}(y)=\tilde{P}_{\tau,M}(-y)\) by symmetry. Thus, we have that
\[\int_{([-M,M]^{4})^{2}}\mathrm{d}z_{1}\mathrm{d}z_{2}\int_{( \mathbb{R}^{4})^{2}}\mathrm{d}e_{1}\mathrm{d}e_{2}k(z_{1},e_{1})M_{1}(e_{1})P _{\tau,M}(z_{1}+e_{1}-z_{2}-e_{2})k(z_{2},e_{2})M_{2}(e_{2})\] \[=\int_{[-M,M]^{4}}\mathrm{d}k\left[\int_{[-M,M]^{4}}\mathrm{d}z_ {1}\int_{\mathbb{R}^{4}}\mathrm{d}e_{1}k(z_{1},e_{1})M_{1}(e_{1})\tilde{P}_{ \tau,M}(z_{1}+e_{1}-k)\right]\] \[\times\left[\int_{[-M,M]^{4}}\mathrm{d}z_{2}\int_{\mathbb{R}^{4}} \mathrm{d}e_{2}k(z_{2},e_{2})M_{2}(e_{2})\tilde{P}_{\tau,M}(z_{2}+e_{2}-k)\right]\]
and it is bounded by
\[L\int_{[-M,M]^{4}}\mathrm{d}k\left[\int_{[-M,M]^{4}}\mathrm{d}z_ {1}\int_{\mathbb{R}^{4}}\mathrm{d}e_{1}k(z_{1},e_{1})M_{1}(e_{1})\tilde{P}_{ \tau,M}(z_{1}+e_{1}-k)\right]^{2}\] \[+\frac{1}{L}\int_{[-M,M]^{4}}\mathrm{d}k\left[\int_{[-M,M]^{4}} \mathrm{d}z_{2}\int_{\mathbb{R}^{4}}\mathrm{d}e_{2}k(z_{2},e_{2})M_{2}(e_{2}) \tilde{P}_{\tau,M}(z_{2}+e_{2}-k)\right]^{2}\] \[=L\int_{([-M,M]^{4})^{2}}\mathrm{d}z_{1}\mathrm{d}z_{2}\int_{( \mathbb{R}^{4})^{2}}\mathrm{d}e_{1}\mathrm{d}e_{2}k(z_{1},e_{1})M_{1}(e_{1})P _{\tau,M}(z_{1}+e_{1}-z_{2}-e_{2})k(z_{2},e_{2})M_{1}(e_{2})\] \[+\frac{1}{L}\int_{([-M,M]^{4})^{2}}\mathrm{d}z_{1}\mathrm{d}z_{2} \int_{(\mathbb{R}^{4})^{2}}\mathrm{d}e_{1}\mathrm{d}e_{2}k(z_{1},e_{1})M_{2}(e _{1})P_{\tau,M}(z_{1}+e_{1}-z_{2}-e_{2})k(z_{2},e_{2})M_{2}(e_{2}).\]
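The inequality applied to the product of the two bracketed terms in the previous display is Young's inequality: for reals \(X,Y\) and any \(L>0\),

\[XY\leq\frac{L}{2}X^{2}+\frac{1}{2L}Y^{2}\leq LX^{2}+\frac{1}{L}Y^{2},\]

used pointwise in the variable \(k\) before integrating over \([-M,M]^{4}\).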
Applying equation (3.8) to the last line of (3.7), we can bound it by,
\[[f(\epsilon_{0})^{2}+L]\int_{(\mathbb{R}^{4})^{2}}\mathrm{d}z_{1}\mathrm{d}z_{2}\int_{(\mathbb{R}^{4})^{2}}\mathrm{d}e_{1}\mathrm{d}e_{2}f(z_{1},e_{1})\sqrt{k(z_{1})}\tilde{G}_{R,\delta}(e_{1})P_{\tau,M}(z_{1}+e_{1}-z_{2}-e_{2})\] \[\times\tilde{G}_{R,\delta}(e_{2})\sqrt{k(z_{2})}f(z_{2},e_{2})\] \[+[L^{-1}+1]\int_{(\mathbb{R}^{4})^{2}}\mathrm{d}z_{1}\mathrm{d}z_{2}\int_{(\mathbb{R}^{4})^{2}}\mathrm{d}e_{1}\mathrm{d}e_{2}f(z_{1},e_{1})\sqrt{k(z_{1})}\mathbb{1}[R\leq|e_{1}|\leq R+\epsilon_{0}]\] \[\times P_{\tau,M}(z_{1}+e_{1}-z_{2}-e_{2})\mathbb{1}[R\leq|e_{2}|\leq R+\epsilon_{0}]\sqrt{k(z_{2})}f(z_{2},e_{2}).\]
The final term on the last line above can be bounded from above using \(\sup_{z}\int_{\mathbb{R}^{4}}\mathbb{1}[R\leq|z-y|\leq R+\epsilon_{0}]\mathbb{1}[R\leq|y|\leq R+\epsilon_{0}]\mathrm{d}y\leq C([R+\epsilon_{0}]^{4}-R^{4})\). This is a consequence of the lower bound from Section 2.1. If we now first take \(\epsilon_{0}\to 0\) and then \(L\to 0\), this gives the desired conclusion of Lemma 3.4.
Now, we turn to a sketch of the proof of Lemma 3.5.
Proof of Lemma 3.5.: We omit the details, since the argument is very similar to that of [7, Lemma 3.2.4].
## 4. The relationship between \(\rho\) and the modified Gagliardo-Nirenberg constant
The goal of this section is to show that the constant \(\rho\), which was shown to determine the large deviation behavior of \(\mathcal{G}\), can be more simply represented as a constant that occurs more naturally in analysis: namely, the modified Gagliardo-Nirenberg constant as in [10, Equation (6)].
Before we present our main theorem, we discuss some notation. Recall that we let \(p_{t}(x)\) denote the transition density for a Brownian motion to reach the point \(x\) at time \(t\), \(G(x)=\int_{0}^{\infty}p_{t}(x)\mathrm{d}t\), \(\tilde{G}\) the convolutional square root of \(G\) (so that \(\tilde{G}*\tilde{G}=G\)), and \(P_{\tau}(x)=\int_{0}^{\infty}e^{-t}p_{t}(x)\mathrm{d}t\).
**Proposition 4.1**.: _Recall the optimization problem:_
\[\rho:=\sup_{\begin{subarray}{c}f\in L_{G}^{2}\\ k:\,\int_{\mathbb{R}^{4}}k^{2}(z)\,\mathrm{d}z=1\end{subarray}}\int_{(\mathbb{R}^{4})^{4}}f(\tilde{z},\tilde{e})\sqrt{k}(\tilde{z})\tilde{G}(\tilde{e})P_{\tau}(\tilde{z}+\tilde{e}-z-e)\tilde{G}(e)\sqrt{k}(z)f(z,e)\,\mathrm{d}\tilde{z}\,\mathrm{d}\tilde{e}\,\mathrm{d}z\,\mathrm{d}e.\]
_Let \(\tilde{\kappa}(4,2)\) be the optimal constant in the modified Gagliardo-Nirenberg inequality. Namely, the best constant such that,_
\[\left(\int_{(\mathbb{R}^{4})^{2}}g^{2}(x)G(x-y)g^{2}(y)\,dx\mathrm{d}y\right) ^{1/4}\leq\tilde{\kappa}(4,2)||g||_{L^{2}}^{1/2}||\nabla g||_{L^{2}}^{1/2}.\]
_Then,_
\[\rho=\frac{\tilde{\kappa}^{2}(4,2)}{\sqrt{2}}.\]
Proof.: _Part 1: Showing \(\rho\geq\frac{\tilde{\kappa}^{2}(4,2)}{\sqrt{2}}\)_
First, we show that \(\rho\) is greater than the value of a certain optimization problem, which can more readily be shown to be related to \(\tilde{\kappa}(4,2)\). Let \(h\) be a function such that,
\[\int_{(\mathbb{R}^{4})^{2}}h^{2}(x)G(x-y)h^{2}(y)\mathrm{d}x\mathrm{d}y=1.\]
Substitute \(k(x)=\int_{\mathbb{R}^{4}}h^{2}(y)\tilde{G}(x-y)\mathrm{d}y\) and set \(f(x,e)=h(x+e)\sqrt{\int_{\mathbb{R}^{4}}h^{2}(y)\tilde{G}(x-y)\mathrm{d}y}\). Indeed,
\[\int_{\mathbb{R}^{4}}k^{2}(x)\mathrm{d}x =\int_{(\mathbb{R}^{4})^{3}}h^{2}(z_{1})\tilde{G}(x-z_{1})\tilde{G}(z_{2}-x)h^{2}(z_{2})\mathrm{d}x\mathrm{d}z_{1}\mathrm{d}z_{2}\] \[=\int_{(\mathbb{R}^{4})^{2}}h^{2}(z_{1})G(z_{2}-z_{1})h^{2}(z_{2})\mathrm{d}z_{1}\mathrm{d}z_{2}=1.\]
In addition,
\[\int_{(\mathbb{R}^{4})^{2}}f^{2}(x,e)\tilde{G}(e)\mathrm{d}x \mathrm{d}e =\int_{(\mathbb{R}^{4})^{3}}h^{2}(x+e)\tilde{G}(e)\tilde{G}(x-y)h^ {2}(y)\mathrm{d}x\mathrm{d}y\mathrm{d}e\] \[=\int_{(\mathbb{R}^{4})^{2}}\mathrm{d}x\mathrm{d}yh^{2}(x)h^{2}( y)G(x-y)=1.\]
If we let \(J(x)=h(x)\int_{\mathbb{R}^{4}}h^{2}(x+\psi)G(\psi)\mathrm{d}\psi\), then
\[\rho\geq\sup\int_{(\mathbb{R}^{4})^{2}}J(x)P_{\tau}(x-y)J(y)\mathrm{d}x \mathrm{d}y, \tag{4.1}\]
where the supremum is taken over all functions satisfying
\[\int_{(\mathbb{R}^{4})^{2}}h(x)^{2}G(x-y)h(y)^{2}\mathrm{d}x\mathrm{d}y=1.\]
Let
\[M(\theta)=\sup_{\begin{subarray}{c}g:\int_{\mathbb{R}^{4}}g^{2}\mathrm{d}x=1 \\ \int_{\mathbb{R}^{4}}|\nabla g|^{2}\mathrm{d}x<\infty\end{subarray}}\theta\left( \int_{(\mathbb{R}^{4})^{2}}g^{2}(x)G(x-y)g^{2}(y)\mathrm{d}x\mathrm{d}y\right) ^{1/2}-\frac{1}{2}\int_{\mathbb{R}^{4}}|\nabla g|^{2}\mathrm{d}z.\]
Then,
\[M\left(\frac{1}{\rho}\right)=\sup_{\begin{subarray}{c}g:\int_{\mathbb{R}^{4}} g^{2}\mathrm{d}x=1\\ \int_{\mathbb{R}^{4}}|\nabla g|^{2}\mathrm{d}x<\infty\end{subarray}}\frac{1}{ \rho}\left(\int_{(\mathbb{R}^{4})^{2}}g^{2}(x)G(x-y)g^{2}(y)\mathrm{d}x\mathrm{ d}y\right)^{1/2}-\frac{1}{2}\int_{\mathbb{R}^{4}}|\nabla g|^{2}\mathrm{d}z.\]
Our two intermediate goals are to first show that \(M(\rho^{-1})=\frac{\tilde{\kappa}^{4}(4,2)}{2\rho^{2}}\) and secondly to show that \(M(\rho^{-1})\leq 1\). Together, these imply \(\rho\geq\frac{\tilde{\kappa}^{2}(4,2)}{\sqrt{2}}\). We can first check, by the modified Gagliardo-Nirenberg inequality, that for any function \(g\) with \(\int_{\mathbb{R}^{4}}g^{2}\mathrm{d}x=1\), we have
\[\begin{split}&\frac{1}{\rho}\left(\int_{(\mathbb{R}^{4})^{2}}g^{2}(x )G(x-y)g^{2}(y)\mathrm{d}x\mathrm{d}y\right)^{1/2}-\frac{1}{2}\int_{\mathbb{R }^{4}}|\nabla g|^{2}\mathrm{d}z\\ &\leq\frac{\tilde{\kappa}^{2}(4,2)}{\rho}\left[\int_{\mathbb{R}^{ 4}}|\nabla g|^{2}\mathrm{d}z\right]^{1/2}-\frac{1}{2}\int_{\mathbb{R}^{4}}| \nabla g|^{2}\mathrm{d}z\\ &\leq\frac{\tilde{\kappa}^{4}(4,2)}{2\rho^{2}}+\frac{1}{2}\int_{ \mathbb{R}^{4}}|\nabla g|^{2}\mathrm{d}z-\frac{1}{2}\int_{\mathbb{R}^{4}}| \nabla g|^{2}\mathrm{d}z=\frac{\tilde{\kappa}^{4}(4,2)}{2\rho^{2}}.\end{split} \tag{4.2}\]
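The two inequalities in (4.2) are, respectively, the modified Gagliardo-Nirenberg inequality (raised to the second power, using \(\|g\|_{L^{2}}=1\)) and Young's inequality \(ab\leq\frac{1}{2}a^{2}+\frac{1}{2}b^{2}\) with

\[a=\frac{\tilde{\kappa}^{2}(4,2)}{\rho},\qquad b=\left[\int_{\mathbb{R}^{4}}|\nabla g|^{2}\mathrm{d}z\right]^{1/2}.\]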
Now we show that there is a function \(f\) such that the supremum \(\frac{\tilde{\kappa}^{4}(4,2)}{2\rho^{2}}\) is actually attained. From [10, Theorem 2.2], we know that there is a function \(\tilde{f}\) that satisfies the equality conditions in the modified Gagliardo-Nirenberg inequality such that
its \(L^{2}\) norm is \(1\). Consider the rescaled version \(f^{\lambda}=\lambda^{2}\tilde{f}(\lambda x)\). This transformation preserves the \(L^{2}\) norm while \(||\nabla f^{\lambda}||_{L^{2}}=\lambda||\nabla\tilde{f}||_{L^{2}}\). One can also check that,
\[\left[\int_{(\mathbb{R}^{4})^{2}}(f^{\lambda}(x))^{2}G(x-y)(f^{\lambda}(y))^{2 }\mathrm{d}x\mathrm{d}y\right]^{1/4}=\lambda^{1/2}\left[\int_{(\mathbb{R}^{4}) ^{2}}(\tilde{f})^{2}G(x-y)(\tilde{f})^{2}\mathrm{d}x\mathrm{d}y\right]^{1/4}.\]
By appropriately tuning \(\lambda\), one can check that all inequalities in (4.2) become equalities and the maximum is attained. Let \(f\) denote the function at which this supremum is attained. This proves the equality \(M(\rho^{-1})=\frac{\tilde{\kappa}^{4}(4,2)}{2\rho^{2}}\).
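To make the tuning of \(\lambda\) explicit: since \(\tilde{f}\) achieves equality in the modified Gagliardo-Nirenberg inequality and \(\|\tilde{f}\|_{L^{2}}=1\), the quantity being maximized evaluates along the family \(f^{\lambda}\) to

\[V(\lambda)=\frac{\tilde{\kappa}^{2}(4,2)}{\rho}\lambda\|\nabla\tilde{f}\|_{L^{2}}-\frac{\lambda^{2}}{2}\|\nabla\tilde{f}\|_{L^{2}}^{2},\]

which is maximized at \(\lambda=\tilde{\kappa}^{2}(4,2)/(\rho\|\nabla\tilde{f}\|_{L^{2}})\), where it takes the value \(\frac{\tilde{\kappa}^{4}(4,2)}{2\rho^{2}}\).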
Now, we turn to showing that \(M(\rho^{-1})\leq 1\). This involves manipulating the function \(f\) at which the maximum is attained carefully through the use of Lagrange multipliers. By the Lagrange multiplier condition for the supremum defining \(M(\rho^{-1})\),
\[\frac{1}{\rho}\frac{f(x)\int_{\mathbb{R}^{4}}G(x-y)f(y)^{2}\mathrm{d}y}{[\int _{(\mathbb{R}^{4})^{2}}f^{2}(x)G(x-y)f^{2}(y)\mathrm{d}x\mathrm{d}y]^{1/2}}+ \frac{1}{2}\Delta f(x)=M(\rho^{-1})f(x).\]
Let
\[\overline{f}=\frac{f}{[\int_{(\mathbb{R}^{4})^{2}}f^{2}(x)G(x-y)f^{2}(y) \mathrm{d}x\mathrm{d}y]^{1/4}}.\]
Then, we obtain
\[\frac{1}{\rho}\overline{f}(x)\int_{\mathbb{R}^{4}}G(x-y)\overline{f}^{2}(y) \mathrm{d}y+\frac{1}{2}\Delta\overline{f}(x)=M(\rho^{-1})\overline{f}(x),\]
where the normalization is set by
\[\int_{(\mathbb{R}^{4})^{2}}\overline{f}^{2}(x)G(x-y)\overline{f}^{2}(y) \mathrm{d}x\mathrm{d}y=1.\]
Let \(W(x)=\overline{f}(x)\int_{\mathbb{R}^{4}}\overline{f}^{2}(y)G(x-y)\mathrm{d}y\). Then,
\[\int_{\mathbb{R}^{4}}\frac{1}{\rho}W(x)P_{\tau}W(x)\mathrm{d}x+\int_{\mathbb{ R}^{4}}\frac{1}{2}\Delta\overline{f}(x)P_{\tau}W(x)\mathrm{d}x=\int_{ \mathbb{R}^{4}}M(\rho^{-1})\overline{f}(x)P_{\tau}W(x)\mathrm{d}x,\]
where \(P_{\tau}W(x)=\int_{\mathbb{R}^{4}}P_{\tau}(y)W(x-y)\mathrm{d}y\). Since \(\int_{\mathbb{R}^{4}}W(x)P_{\tau}W(x)\mathrm{d}x\leq\rho\) by the optimization problem inequality (4.1),
\[1+\int_{\mathbb{R}^{4}}\frac{1}{2}\Delta\overline{f}(x)P_{\tau}W(x)\mathrm{d} x\geq M(\rho^{-1})\int_{\mathbb{R}^{4}}\overline{f}(x)P_{\tau}W(x)\mathrm{d}x.\]
Then, since \(P_{\tau}=I+2^{-1}\Delta\circ P_{\tau}\) and \(\int_{\mathbb{R}^{4}}\overline{f}(x)W(x)\mathrm{d}x=1\) by the normalization condition on \(\overline{f}\),
\[\frac{1}{2}\int_{\mathbb{R}^{4}}\Delta\overline{f}(x)P_{\tau}W(x )\mathrm{d}x= \frac{1}{2}\int_{\mathbb{R}^{4}}\overline{f}(x)\Delta P_{\tau}W(x )\mathrm{d}x\] \[= -\int_{\mathbb{R}^{4}}\overline{f}(x)W(x)dx+\int_{\mathbb{R}^{4} }\overline{f}(x)P_{\tau}W(x)\mathrm{d}x\] \[= -1+\int_{\mathbb{R}^{4}}\overline{f}(x)P_{\tau}W(x)\mathrm{d}x.\]
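Substituting this computation into the previous inequality gives

\[1-1+\int_{\mathbb{R}^{4}}\overline{f}(x)P_{\tau}W(x)\mathrm{d}x\geq M(\rho^{-1})\int_{\mathbb{R}^{4}}\overline{f}(x)P_{\tau}W(x)\mathrm{d}x,\]

and since \(\int_{\mathbb{R}^{4}}\overline{f}(x)P_{\tau}W(x)\mathrm{d}x>0\) (the maximizer \(\overline{f}\), and hence \(W\), may be taken nonnegative), we may divide through by this integral.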
Therefore, \(1\geq M(\rho^{-1})\) and hence \(\rho\geq\frac{\tilde{\kappa}^{2}(4,2)}{\sqrt{2}}\).
_Part 2: Showing \(\rho\leq\frac{\tilde{\kappa}^{2}(4,2)}{\sqrt{2}}\)_
If we recall the problem \(M(\rho^{-1})\), showing that \(\rho\leq\frac{\tilde{\kappa}^{2}(4,2)}{\sqrt{2}}\) is ultimately equivalent to showing \(M(\rho^{-1})\geq 1\). To do this, it suffices to find a good candidate function
for the optimization problem defining \(M(\rho^{-1})\). We find the proposed candidate function by considering the minimizer of the following auxiliary problem:
\[c_{0}:=\inf\{\int_{\mathbb{R}^{4}}f^{2}(x)dx+\frac{1}{2}\int_{\mathbb{R}^{4}}| \nabla f(x)|^{2}\mathrm{d}x\text{ s.t. }\int_{(\mathbb{R}^{4})^{2}}f^{2}(x)G(x-y)f^{2}(y)\mathrm{d}x\mathrm{d}y=1\}.\]
We will first argue that \(\rho\leq c_{0}^{-1}\). As we will discuss in more detail in equation (4.4), the intuition regarding the main relationship between \(c_{0}\) and \(M(\theta)\) is that \(M(\theta)\) will exactly be \(1\) when \(\theta=c_{0}\) (or more exactly that \(M(\theta)>1\) if \(\theta>c_{0}\)). Thus, we will be done if we show \(\rho\leq c_{0}^{-1}\). Given \(f\) and \(k\) such that,
\[\int_{(\mathbb{R}^{4})^{2}}f^{2}(z,e)\tilde{G}(e)\mathrm{d}z\mathrm{d}e=1,\quad\int_{\mathbb{R}^{4}}k^{2}(z)\mathrm{d}z=1,\]
consider
\[F(\lambda):=\int_{\mathbb{R}^{4}}f(\lambda-e,e)\sqrt{k}(\lambda-e)\tilde{G}(e )\mathrm{d}e.\]
It suffices to show
\[A:=\int_{(\mathbb{R}^{4})^{2}}P_{\tau}(x-y)F(x)F(y)\mathrm{d}x\mathrm{d}y\leq c _{0}^{-1}.\]
We remark here that the quantity on the right-hand side of the definition of \(A\) is exactly the term in the optimization problem defining \(\rho\) when we use the test functions \(f\) and \(k\) from earlier.
We first claim that for any \(h\), we have that,
\[\int_{\mathbb{R}^{4}}F(x)h(x)\mathrm{d}x\leq\left[\int_{(\mathbb{R}^{4})^{2}}h ^{2}(x)G(x-y)h^{2}(y)\mathrm{d}x\mathrm{d}y\right]^{1/4}.\]
To see this, we see that,
\[\int_{\mathbb{R}^{4}}F(x)h(x)\mathrm{d}x =\int_{(\mathbb{R}^{4})^{2}}f(x-e,e)\sqrt{k}(x-e)\tilde{G}(e)h(x)\mathrm{d}x\mathrm{d}e\] \[=\int_{(\mathbb{R}^{4})^{2}}f(x,e)\sqrt{k}(x)\tilde{G}(e)h(x+e)\mathrm{d}x\mathrm{d}e\] \[\leq\left[\int_{(\mathbb{R}^{4})^{2}}f(x,e)^{2}\tilde{G}(e)\mathrm{d}x\mathrm{d}e\right]^{1/2}\left[\int_{\mathbb{R}^{4}}k(x)\left(\int h^{2}(x+e)\tilde{G}(e)\mathrm{d}e\right)\mathrm{d}x\right]^{1/2}\] \[\leq\left[\int_{\mathbb{R}^{4}}k(x)^{2}\mathrm{d}x\right]^{1/4}\left[\int_{\mathbb{R}^{4}}\left(\int_{\mathbb{R}^{4}}h^{2}(x+e)\tilde{G}(e)\mathrm{d}e\right)^{2}\mathrm{d}x\right]^{1/4}\] \[=\left[\int_{(\mathbb{R}^{4})^{3}}h^{2}(x+e_{1})\tilde{G}(e_{1})\tilde{G}(e_{2})h^{2}(x+e_{2})\mathrm{d}x\mathrm{d}e_{1}\mathrm{d}e_{2}\right]^{1/4}\] \[=\left[\int_{(\mathbb{R}^{4})^{2}}h^{2}(x)G(x-y)h^{2}(y)\mathrm{d}x\mathrm{d}y\right]^{1/4}. \tag{4.3}\]
Then, inequality (4.3) shows that
\[A=\langle P_{\tau}F,F\rangle\leq\|P_{\tau}F\|_{G},\]
where \(\|P_{\tau}F\|_{G}:=[\int_{(\mathbb{R}^{4})^{2}}(P_{\tau}F)^{2}(x)G(x-y)(P_{\tau}F)^{2}(y)\mathrm{d}x\mathrm{d}y]^{1/4}\) and \(\langle\cdot,\cdot\rangle\) is the standard \(L^{2}\) inner product. Then,
\[A=\langle P_{\tau}F,F\rangle=\langle P_{\tau}F,(I-2^{-1}\Delta)P_ {\tau}F\rangle\] \[=\|P_{\tau}F\|_{G}^{2}\left\langle\frac{P_{\tau}F}{\|P_{\tau}F\|_ {G}},(I-2^{-1}\Delta)\frac{P_{\tau}F}{\|P_{\tau}F\|_{G}}\right\rangle\geq\|P_ {\tau}F\|_{G}^{2}c_{0}\geq A^{2}c_{0}.\]
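The step \(\langle u,(I-2^{-1}\Delta)u\rangle\geq c_{0}\) for \(u=P_{\tau}F/\|P_{\tau}F\|_{G}\) follows from integration by parts,

\[\left\langle u,(I-2^{-1}\Delta)u\right\rangle=\int_{\mathbb{R}^{4}}u^{2}(x)\mathrm{d}x+\frac{1}{2}\int_{\mathbb{R}^{4}}|\nabla u(x)|^{2}\mathrm{d}x,\]

together with the fact that \(u\) satisfies the normalization \(\int_{(\mathbb{R}^{4})^{2}}u^{2}(x)G(x-y)u^{2}(y)\mathrm{d}x\mathrm{d}y=1\) appearing in the constraint defining \(c_{0}\).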
By dividing by \(Ac_{0}\), we see that \(c_{0}^{-1}\geq A\). As this is true for arbitrary admissible functions \(f\) and \(k\), this implies that \(\rho\leq c_{0}^{-1}\). Hence, for any \(0<\epsilon<\rho\) there is \(f\) with \(\int_{(\mathbb{R}^{4})^{2}}f^{2}(x)G(x-y)f^{2}(y)\mathrm{d}x\mathrm{d}y=1\) such that
\[\frac{1}{\rho-\epsilon}>\int_{\mathbb{R}^{4}}f^{2}(x)dx+\frac{1}{2}\int_{ \mathbb{R}^{4}}|\nabla f(x)|^{2}\mathrm{d}x.\]
Now, we can return to proving \(M(\rho^{-1})\geq 1\). If we set \(g(x)=f(x)/\|f\|_{L^{2}}\), so that \(\int_{\mathbb{R}^{4}}g^{2}(x)\mathrm{d}x=1\) and \(\int_{(\mathbb{R}^{4})^{2}}g^{2}(x)G(x-y)g^{2}(y)\mathrm{d}x\mathrm{d}y=\|f\|_{L^{2}}^{-4}\), we have
\[\frac{1}{\rho-\epsilon}\left[\int_{(\mathbb{R}^{4})^{2}}g^{2}(x)G(x-y)g^{2}(y)\mathrm{d}x\mathrm{d}y\right]^{1/2}-\frac{1}{2}\int_{\mathbb{R}^{4}}|\nabla g(x)|^{2}\mathrm{d}x\] \[=\frac{1}{\rho-\epsilon}\|f\|_{L^{2}}^{-2}-\frac{1}{2}\int_{\mathbb{R}^{4}}|\nabla g(x)|^{2}\mathrm{d}x\] \[\geq\left\{\int_{\mathbb{R}^{4}}f^{2}(x)\mathrm{d}x+\frac{1}{2}\int_{\mathbb{R}^{4}}|\nabla f(x)|^{2}\mathrm{d}x\right\}\|f\|_{L^{2}}^{-2}-\frac{1}{2}\|f\|_{L^{2}}^{-2}\|\nabla f\|_{L^{2}}^{2}=1. \tag{4.4}\]
This shows for any \(\epsilon\) that \(1\leq M\left(\frac{1}{\rho-\epsilon}\right)\). Since \(\epsilon\) is arbitrary, this implies that \(1\leq M(\frac{1}{\rho})\) and hence \(\rho\leq\frac{\tilde{\kappa}^{2}(4,2)}{\sqrt{2}}\).
## 5. Self-intersection: The Proof of Theorem 1.3
In this section, we will provide the proof of Theorem 1.3.
Proof.: We set
\[B(I)=\beta_{t-s}\circ\theta_{s}\]
and
\[A(I,J)=\int_{I}\int_{J}G(B_{s}-B_{t})\mathrm{d}s\mathrm{d}t\]
as in [2, (2.3), (2.4)]. Note that \(B([1/2,1])=_{d}\beta_{1/2}=_{d}\beta_{1}/2\) and \(A([0,1/2];[1/2,1])=_{d}1/2\int_{0}^{1}\int_{0}^{1}G(B_{t}-B_{s}^{\prime})\mathrm{d}t\mathrm{d}s\). Moreover, \(B([1/2,1])\) is independent of \(\beta_{1/2}\). Then, to show the upper bound of (1.3), we only have to repeat the proof of the upper bound of [2, (3.3)].
Now we show the lower bound. Let
\[C_{n}=\sum_{k=1}^{n-1}A([0,k];[k,k+1])\]
for \(n=1,2,\ldots\). We prove
\[\liminf_{n\to\infty}\frac{1}{n}\log\mathbb{E}\exp(\lambda C_{n}^{1/2})\geq\frac{\lambda^{2}\tilde{\kappa}^{2}(4,2)}{4} \tag{5.1}\]
for \(\lambda>0\), which corresponds to [2, (3.9)]. Set \(L(t,x)=\int_{0}^{t}\tilde{G}(B_{s}-x)\mathrm{d}s\). Then, we have
\[\left(\iint_{0\leq s\leq t\leq n}G(B_{s}-B_{t})\mathrm{d}s\mathrm{d }t\right)^{1/2}= \frac{1}{\sqrt{2}}\left(\int_{\mathbb{R}^{4}}L^{2}(n,x)\mathrm{d}x \right)^{1/2}\] \[\geq \frac{1}{\sqrt{2}}\int_{\mathbb{R}^{4}}f(x)L(n,x)\mathrm{d}x\] \[= \frac{1}{\sqrt{2}}\int_{0}^{n}\tilde{G}*f(B_{t})\mathrm{d}t\]
for
\[\int_{\mathbb{R}^{4}}f^{2}(x)\mathrm{d}x=1. \tag{5.2}\]
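The first equality in the chain above can be checked from \(\tilde{G}*\tilde{G}=G\), using that \(\tilde{G}\) is symmetric:

\[\int_{\mathbb{R}^{4}}L^{2}(n,x)\mathrm{d}x=\int_{0}^{n}\int_{0}^{n}\int_{\mathbb{R}^{4}}\tilde{G}(B_{s}-x)\tilde{G}(B_{t}-x)\mathrm{d}x\mathrm{d}s\mathrm{d}t=\int_{0}^{n}\int_{0}^{n}G(B_{s}-B_{t})\mathrm{d}s\mathrm{d}t=2\iint_{0\leq s\leq t\leq n}G(B_{s}-B_{t})\mathrm{d}s\mathrm{d}t.\]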
Therefore, by Feynman-Kac formula,
\[\liminf_{n\to\infty} \frac{1}{n}\log\mathbb{E}\exp\left(\lambda\left(\iint_{0\leq s \leq t\leq n}G(B_{s}-B_{t})\mathrm{d}s\mathrm{d}t\right)^{1/2}\right)\] \[\geq \sup_{g}\bigg{\{}\frac{\lambda}{\sqrt{2}}\int_{\mathbb{R}^{4}} \tilde{G}*f(x)g^{2}(x)\mathrm{d}x-\frac{1}{2}\int_{\mathbb{R}^{4}}|\nabla g( x)|^{2}\mathrm{d}x\bigg{\}}.\]
Taking the supremum over \(f\) with (5.2), it is larger than or equal to
\[\sup_{g}\bigg{\{}\frac{\lambda}{\sqrt{2}}\left(\iint_{(\mathbb{R}^{4})^{2}}g^{2}(x)G(x-y)g^{2}(y)\mathrm{d}x\mathrm{d}y\right)^{1/2}-\frac{1}{2}\int_{\mathbb{R}^{4}}|\nabla g(x)|^{2}\mathrm{d}x\bigg{\}}.\]
Therefore, by the same proof as [2, (3.9)], we obtain (5.1).
## Appendix A Regularizing the singularity near the origin
There are difficulties with dealing with the singularity around the origin when deriving an upper bound for the high moments. In this section, we will consider the moments of the following quantity,
\[\int_{0}^{\tau_{1}}\int_{0}^{\tau_{2}}\frac{\mathbbm{1}(|B_{t}-B_{s}^{\prime} |\leq\epsilon)}{|B_{t}-B_{s}^{\prime}|^{2}}\mathrm{d}t\mathrm{d}s,\]
where \(B_{t}\) and \(B_{s}^{\prime}\) are independent Brownian motions and \(\tau_{1},\tau_{2}\) are independent exponential random variables of rate \(1\). The new factor here is the introduction of the cutoff \(\mathbb{1}(|B_{t}-B_{s}^{\prime}|\leq\epsilon)\). An expression for the \(n\)-th moment of this quantity is given by,
\[\mathbb{E}_{\tau}\left[\int_{[0,\tau_{1}]^{n}}\mathrm{d}t_{1}\ldots\mathrm{d}t_{n}\int_{[0,\tau_{2}]^{n}}\mathrm{d}s_{1}\ldots\mathrm{d}s_{n}\mathbb{E}_{B,B^{\prime}}\left(\prod_{i=1}^{n}\frac{\mathbb{1}(|B_{t_{i}}-B_{s_{i}}^{\prime}|\leq\epsilon)}{|B_{t_{i}}-B_{s_{i}}^{\prime}|^{2}}\right)\right].\]
The first expectation is with respect to the exponential random variables \(\tau_{1}\) and \(\tau_{2}\). The expectation inside is with respect to the Brownian motions \(B\) and \(B^{\prime}\).
Let us give some intuition on why this cutoff will give a subleading order term. If we are interested in computing the \(n\)-th moment of the term without the cutoff, then the contribution mostly comes when \(\tau_{1},\tau_{2}\) are \(\approx n\). At this scale, the ordered consecutive differences (assuming \(t_{1}\leq t_{2}\leq\ldots\leq t_{n}\), the consecutive differences would be \(t_{k}-t_{k-1}\)) would approximately be of \(O(1)\). The partial differences \(B_{t_{i}}-B_{t_{i-1}}\) would fluctuate to within \(O(1)\) as well. Thus, it becomes increasingly unlikely that they could be confined to a neighborhood of size \(O(\epsilon)\), as would be needed by the term \(\mathbb{1}(|B_{t_{i}}-B_{s_{i}}^{\prime}|\leq\epsilon)\). In the remainder of this section, we will try to formalize this intuition.
We start with a lemma that controls some of the expectations that we would see.
**Lemma A.1**.: _We have the following estimates. There is some universal constant \(C\) not dependent on \(\epsilon\) such that,_
(A.1) \[\begin{split}\mathbb{E}_{B}\left[\frac{\mathbb{1}(|B_{t}-x|\leq \epsilon)}{|B_{t}-x|^{2}}\right]&\leq C\min\left(\frac{1}{|x|^{2} },\frac{\epsilon^{2}}{t^{2}},\frac{1}{t}\right)\leq\frac{C}{|x|}\min\left( \frac{\epsilon}{t},\frac{1}{\sqrt{t}}\right),\\ \mathbb{E}_{B}\left[\frac{\mathbb{1}(|B_{t}-x|\leq\epsilon)}{|B_{ t}-y|^{2}}\right]&\leq C\min\left(\frac{1}{|y|^{2}},\frac{ \epsilon^{2}}{t^{2}},\frac{1}{t}\right).\end{split}\]
Proof.: In the course of the proof, \(C\) is a constant that is allowed to change from line to line. We start with considering the expectation of \(\frac{1(|B_{t}-x|\leq\epsilon)}{|B_{t}-x|^{2}}\).
There are a few cases to consider. The first case is when \(\epsilon\) is larger than \(\sqrt{t}\). In this case, we may drop the restriction \(\mathbb{1}(|B_{t}-x|\leq\epsilon)\) and use the estimate (4.8) from [9, Lemma 4.2]. In the case that \(\frac{|x|}{4}\geq\sqrt{t}\geq\epsilon\), we write the integral as,
\[\frac{C}{t^{2}}\int_{|z-x|\leq\epsilon}\frac{\exp[-|z|^{2}/t]}{|z-x|^{2}}\mathrm{d}z =\frac{C}{t^{2}}\int_{|z^{\prime}|\leq\epsilon}\frac{\exp[-|z^{\prime}|^{2}/t-2\langle z^{\prime},x\rangle/t-|x|^{2}/t]}{|z^{\prime}|^{2}}\mathrm{d}z^{\prime}\] \[\leq\frac{C\exp[-|x|^{2}/(2t)]}{t^{2}}\int_{|z^{\prime}|\leq\epsilon}\frac{1}{|z^{\prime}|^{2}}\mathrm{d}z^{\prime}=\frac{C\epsilon^{2}\exp[-|x|^{2}/(2t)]}{t^{2}}\leq\frac{C}{|x|^{2}}.\]
To get the last line, we used the fact that when \(\epsilon\leq\frac{|x|}{4}\) we have \(|2\langle z^{\prime},x\rangle|\leq|x|^{2}/2\), so the cross term is absorbed into half of the Gaussian factor. Furthermore, using that \(|x|^{2}\geq t\) and that \(\sqrt{t}\geq\epsilon\), we can say there is some constant \(C\) such that \(\frac{C}{|x|^{2}}\geq\frac{1}{t}\exp[-|x|^{2}/(2t)]\geq\frac{\epsilon^{2}}{t^{2}}\exp[-|x|^{2}/(2t)]\). When \(\sqrt{t}\geq\epsilon\geq\frac{|x|}{4}\) or \(\sqrt{t}\geq\frac{|x|}{4}\geq\epsilon\), we can bound the integral as follows,
\[\frac{C}{t^{2}}\int_{|z-x|\leq\epsilon}\frac{\exp[-|z|^{2}/t]}{|z-x|^{2}}\mathrm{d}z\leq\frac{C}{t^{2}}\int_{|z-x|\leq\epsilon}\frac{1}{|z-x|^{2}}\mathrm{d}z=\frac{C\epsilon^{2}}{t^{2}}\leq\frac{C}{t}\leq\frac{16C}{|x|^{2}}.\]
This gives the first part of the lemma. Now, we consider the integral of \(\frac{\mathbb{1}(|B_{t}-x|\leq\epsilon)}{|B_{t}-y|^{2}}\). When \(\epsilon\leq|x-y|/4\), the event \(|B_{t}-x|\leq\epsilon\) implies \(|B_{t}-y|\geq 3|x-y|/4\) and thus,
\[\mathbb{E}\left[\frac{\mathbb{1}\left(|B_{t}-x|\leq\epsilon\right)}{|B_{t}-y|^ {2}}\right]\leq\frac{4}{|x-y|^{2}}\mathbb{E}\left[\mathbb{1}\left(|B_{t}-x| \leq\epsilon\right)\right]\leq\frac{C}{|x-y|^{2}}\min\left[\frac{\epsilon^{4} }{t^{2}},1\right].\]
If instead \(\epsilon\geq|x-y|/4\), then we can say that \(\mathbb{1}\left(|B_{t}-x|\leq\epsilon\right)\leq\mathbb{1}\left(|B_{t}-y|\leq 5\epsilon\right)\), and we can then use the estimates from the first part on \(\mathbb{E}\left[\frac{\mathbb{1}(|B_{t}-y|\leq 5\epsilon)}{|B_{t}-y|^{2}}\right]\). Returning to the case \(\epsilon\leq|x-y|/4\), we see that
\[\frac{C}{|x-y|^{2}}\frac{\epsilon^{4}}{t^{2}}\leq\frac{C\epsilon^{2}}{16t^{2}}.\]
If we knew instead that \(\sqrt{t}\leq\epsilon\), then we have that,
\[\frac{C}{|x-y|^{2}}\leq\frac{16C}{\epsilon^{2}}\leq\frac{16C}{t}.\]
Now, we need to prove that \(\frac{1}{|x-y|^{2}}\mathbb{E}[\mathbb{1}\left(|B_{t}-x|\leq\epsilon\right)]\leq\frac{C}{|y|^{2}}\) for some constant \(C\). If \(|x-y|\geq|y|/4\), we would be done. If not, then we have that \(|x|\geq 3|y|/4\). Furthermore, \(\epsilon\leq|x-y|/4\leq|y|/16\leq|x|/12\). Thus, we can write,
\[\mathbb{E}[\mathbb{1}(|B_{t}-x|\leq\epsilon)]=\frac{C}{t^{2}}\int_{|z|\leq \epsilon}\exp[-|z|^{2}/t-2\langle z,x\rangle/t-|x|^{2}/t]\mathrm{d}z.\]
We have that \(-2\langle z,x\rangle/t-|x|^{2}/t\leq-|x|^{2}/(2t)\) since \(|z|\leq\epsilon\leq|x|/12\). We thus have,
\[\frac{C}{t^{2}}\int_{|z|\leq\epsilon}\exp[-|z|^{2}/t-2\langle z,x \rangle/t-|x|^{2}/t]\mathrm{d}z \leq\frac{C}{t^{2}}\exp[-|x|^{2}/(2t)]\int_{|z|\leq\epsilon}\exp[- |z|^{2}/t]\mathrm{d}z\] \[\leq C\min(1,\frac{\epsilon^{4}}{t^{2}})\exp[-|x|^{2}/(2t)].\]
If \(\epsilon^{2}\leq t\), we use the fact that \(\epsilon^{2}\leq|x-y|^{2}/16\) and derive that
\[\frac{\epsilon^{4}}{|x-y|^{2}t^{2}}\exp[-|x|^{2}/(2t)]\leq\frac{1}{16t}\exp[ -|x|^{2}/(2t)]\leq\frac{C}{|x|^{2}}\leq\frac{C}{9|y|^{2}},\]
in addition if \(\epsilon^{2}\geq t\), we instead get that,
\[\frac{1}{|x-y|^{2}}\exp[-|x|^{2}/(2t)]\leq\frac{16}{\epsilon^{2}}\exp[-|x|^{2} /(2t)]\leq\frac{16}{t}\exp[-|x|^{2}/(2t)]\leq\frac{16C}{|x|^{2}}\leq\frac{256C} {9|y|^{2}}.\]
Then, we obtain the desired result.
As a consequence of these estimates, we can derive the following further bounds.
**Lemma A.2**.: _There is a universal constant \(C\) not dependent on \(\epsilon\) such that the following estimates hold:_
(A.2) \[\begin{split}&\mathbb{E}_{B}\left[\frac{\mathbb{1}(|B_{t}-x|\leq \epsilon)}{|B_{t}-x||B_{t}-y|}\right]\leq C\min\left(\frac{1}{|x|},\frac{1}{|y |}\right)\min\left(\frac{\epsilon}{t},\frac{1}{\sqrt{t}}\right),\\ &\mathbb{E}_{B}\left[\frac{\mathbb{1}(|B_{t}-x|\leq\epsilon)}{|B_ {t}-x|^{2}|B_{t}-y|}\right]\leq C\frac{1}{|x||x-y|}\min\left(\frac{\epsilon}{ t},\frac{1}{\sqrt{t}}\right).\end{split}\]
Proof.: The first inequality can be derived via the Cauchy-Schwarz inequality. That is, we have,
\[\mathbb{E}\left[\frac{\mathbb{1}(|B_{t}-x|\leq\epsilon)}{|B_{t}-x||B_{t}-y|} \right]\leq\mathbb{E}\left[\frac{\mathbb{1}(|B_{t}-x|\leq\epsilon)}{|B_{t}-x| ^{2}}\right]^{1/2}\mathbb{E}\left[\frac{\mathbb{1}(|B_{t}-x|\leq\epsilon)}{|B_ {t}-y|^{2}}\right]^{1/2}.\]
If \(\frac{1}{|x|}\leq\frac{1}{|y|}\), we can bound \(\mathbb{E}\left[\frac{\mathbb{1}(|B_{t}-x|\leq\epsilon)}{|B_{t}-x|^{2}}\right]\) by \(\frac{C}{|x|^{2}}\), using the first inequality of (A.1). The integral \(\mathbb{E}\left[\frac{\mathbb{1}(|B_{t}-x|\leq\epsilon)}{|B_{t}-y|^{2}}\right]\) can be bounded by \(C\min\left(\frac{\epsilon^{2}}{t^{2}},\frac{1}{t}\right)\) by the second inequality of (A.1); taking square roots and multiplying gives the claim. If instead \(\frac{1}{|y|}\leq\frac{1}{|x|}\), we can go the other way around. To deal with the second inequality of (A.2), we instead use,
\[\mathbb{E}_{B}\left[\frac{\mathbb{1}(|B_{t}-x|\leq\epsilon)}{|B_{t }-x|^{2}|B_{t}-y|}\right] \leq\frac{1}{|x-y|}\mathbb{E}\left[\frac{\mathbb{1}(|B_{t}-x|\leq \epsilon)}{|B_{t}-x||B_{t}-y|}\right]+\frac{1}{|x-y|}\mathbb{E}\left[\frac{ \mathbb{1}(|B_{t}-x|\leq\epsilon)}{|B_{t}-x|^{2}}\right]\] \[\leq\frac{C}{|x-y||x|}\min\left(\frac{\epsilon}{t},\frac{1}{\sqrt {t}}\right).\]
The last line used the first inequality of (A.1) and the first inequality of (A.2).
_The main improvement in this lemma compared to [9, Lemma 4.2] is the change of the time bound to \(\min\left(\frac{\epsilon}{t},\frac{1}{\sqrt{t}}\right)\). We are now in good shape to bound the moments of \(\int_{0}^{\tau_{1}}\int_{0}^{\tau_{2}}\frac{\mathbb{1}(|B_{t}-B_{s}^{\prime}|\leq\epsilon)}{|B_{t}-B_{s}^{\prime}|^{2}}\mathrm{d}t\mathrm{d}s\)._
**Lemma A.3**.: _There is some universal constant \(C\), not dependent on \(\epsilon\) or \(n\), such that we have the following moment estimates:_
\[\mathbb{E}\left(\int_{0}^{\tau_{1}}\int_{0}^{\tau_{2}}\frac{\mathbbm{1}(|B_{t}-B^{\prime}_{s}|\leq\epsilon)}{|B_{t}-B^{\prime}_{s}|^{2}}\mathrm{d}t\mathrm{d}s\right)^{n}\leq C^{n}\sqrt{\epsilon}^{n}(n!)^{2}.\]
_Here, \(\tau_{1},\tau_{2}\) are two independent exponential random variables with rate 1 and \(B\), \(B^{\prime}\) are two independent Brownian motions._
Proof.: We start with bounding the more general quantity,
\[\mathbb{E}_{B}\prod_{i=1}^{n}\frac{\mathbbm{1}(|B_{t_{i}}-y_{i}| \leq\epsilon)}{|B_{t_{i}}-y_{i}|^{2}}\] \[=\mathbb{E}_{B}\prod_{i=1}^{n-1}\frac{\mathbbm{1}(|B_{t_{i}}-y_{ i}|\leq\epsilon)}{|B_{t_{i}}-y_{i}|^{2}}\frac{\mathbbm{1}(|B_{t_{n}}-B_{t_{n-1}}-(y _{n}-B_{t_{n-1}})|\leq\epsilon)}{|(B_{t_{n}}-B_{t_{n-1}})-(y_{n}-B_{t_{n-1}}) |^{2}}\] \[\leq C\mathbb{E}_{B}\prod_{i=1}^{n-1}\frac{\mathbbm{1}(|B_{t_{i}} -y_{i}|\leq\epsilon)}{|B_{t_{i}}-y_{i}|^{2}}\frac{1}{|B_{t_{n-1}}-y_{n}|}\min \left(\frac{\epsilon}{t_{n}-t_{n-1}},\frac{1}{\sqrt{t_{n}-t_{n-1}}}\right).\]
To get the last inequality, we used the fact that the difference \(B_{t_{n}}-B_{t_{n-1}}\) is independent of the Brownian walk up to time \(t_{n-1}\) and is distributed according to a Brownian motion at time \(t_{n}-t_{n-1}\). We then used the first inequality of (A.1). At this point, we can proceed in an inductive fashion. We have,
\[\mathbb{E}_{B}\prod_{i=1}^{n-1}\frac{\mathbbm{1}(|B_{t_{i}}-y_{i}| \leq\epsilon)}{|B_{t_{i}}-y_{i}|^{2}}\frac{1}{|B_{t_{n-1}}-y_{n}|}\min\left( \frac{\epsilon}{t_{n}-t_{n-1}},\frac{1}{\sqrt{t_{n}-t_{n-1}}}\right)\] \[=\mathbb{E}_{B}\prod_{i=1}^{n-2}\frac{\mathbbm{1}(|B_{t_{i}}-y_{ i}|\leq\epsilon)}{|B_{t_{i}}-y_{i}|^{2}}\frac{\mathbbm{1}(|(B_{t_{n-1}}-B_{t_{n-2}}) -(y_{n-1}-B_{t_{n-2}})|\leq\epsilon)}{|(B_{t_{n-1}}-B_{t_{n-2}})-(y_{n-1}-B_{t _{n-2}})|^{2}|(B_{t_{n-1}}-B_{t_{n-2}})-(y_{n}-B_{t_{n-2}})|}\] \[\leq C\mathbb{E}_{B}\prod_{i=1}^{n-2}\frac{\mathbbm{1}(|B_{t_{i}} -y_{i}|\leq\epsilon)}{|B_{t_{i}}-y_{i}|^{2}}\frac{1}{|B_{t_{n-1}}-y_{n}||y_{n} -y_{n-1}|}\min\left(\frac{\epsilon}{t_{n-1}-t_{n-2}},\frac{1}{\sqrt{t_{n-1}-t _{n-2}}}\right).\]
Combining these steps we see that,
(A.3) \[\mathbb{E}_{B}\prod_{i=1}^{n}\frac{\mathbbm{1}(|B_{t_{i}}-y_{i}|\leq\epsilon)}{|B_{t_{i}}-y_{i}|^{2}}\leq C^{n}\prod_{i=1}^{n}\frac{1}{|y_{i}-y_{i-1}|}\min\left(\frac{\epsilon}{t_{i}-t_{i-1}},\frac{1}{\sqrt{t_{i}-t_{i-1}}}\right),\] with the conventions \(t_{0}=0\) and \(y_{0}=0\), valid for ordered times \(t_{1}\leq t_{2}\leq\ldots\leq t_{n}\).
The \(n\)-th moment of \(\int_{0}^{\tau_{1}}\int_{0}^{\tau_{2}}\frac{1\left(\left|B_{t}-B_{s}^{\prime} \right|\leq\epsilon\right)}{\left|B_{t}-B_{s}^{\prime}\right|^{2}}\mathrm{d}t \mathrm{d}s\) can be expressed as,
(A.4) \[n!\,\mathbb{E}_{\tau}\int_{0\leq t_{1}\leq t_{2}\ldots\leq t_{n}\leq\tau_{1}}\mathrm{d}t_{1}\ldots\mathrm{d}t_{n}\int_{[0,\tau_{2}]^{n}}\mathrm{d}s_{1}\ldots\mathrm{d}s_{n}\mathbb{E}_{B,B^{\prime}}\left[\prod_{i=1}^{n}\frac{\mathbbm{1}(|B_{t_{i}}-B_{s_{i}}^{\prime}|\leq\epsilon)}{|B_{t_{i}}-B_{s_{i}}^{\prime}|^{2}}\right]\] \[\leq C^{n}n!\,\mathbb{E}_{\tau}\int_{0\leq t_{1}\leq t_{2}\ldots\leq t_{n}\leq\tau_{1}}\mathrm{d}t_{1}\ldots\mathrm{d}t_{n}\prod_{i=1}^{n}\min\left(\frac{\epsilon}{t_{i}-t_{i-1}},\frac{1}{\sqrt{t_{i}-t_{i-1}}}\right)\] \[\times\int_{[0,\tau_{2}]^{n}}\mathrm{d}s_{1}\ldots\mathrm{d}s_{n}\,\mathbb{E}_{B^{\prime}}\left[\prod_{i=1}^{n}\frac{1}{|B_{s_{i}}^{\prime}-B_{s_{i-1}}^{\prime}|}\right]\] \[\leq n!\,C^{n}\,\mathbb{E}_{\tau}\int_{0\leq t_{1}\leq t_{2}\ldots\leq t_{n}\leq\tau_{1}}\mathrm{d}t_{1}\ldots\mathrm{d}t_{n}\prod_{i=1}^{n}\min\left(\frac{\epsilon}{t_{i}-t_{i-1}},\frac{1}{\sqrt{t_{i}-t_{i-1}}}\right)\] \[\times n!\,C^{n}\int_{0\leq s_{1}\leq s_{2}\ldots\leq s_{n}\leq\tau_{2}}\mathrm{d}s_{1}\ldots\mathrm{d}s_{n}\prod_{i=1}^{n}\frac{1}{\sqrt{s_{i}-s_{i-1}}}.\]
To obtain the second inequality, we used (A.3).
To get the final inequality, we used equation [9, (4.18)]. By scaling one has
\[\int_{0\leq s_{1}\leq s_{2}\ldots\leq s_{n}\leq\tau_{2}}\mathrm{d }s_{1}\ldots\mathrm{d}s_{n}\prod_{i=1}^{n}\frac{1}{\sqrt{s_{i}-s_{i-1}}}\] \[= \tau_{2}^{n/2}\int_{0\leq s_{1}\leq s_{2}\ldots\leq s_{n}\leq 1} \mathrm{d}s_{1}\ldots\mathrm{d}s_{n}\prod_{i=1}^{n}\frac{1}{\sqrt{s_{i}-s_{i-1 }}}\leq\frac{C^{n}\tau_{2}^{n/2}}{(n!)^{1/2}}.\]
One can see equation [9, (4.21)] for a reference. The more important term to deal with is,
\[\int_{0\leq t_{1}\leq t_{2}\ldots\leq t_{n}\leq\tau_{1}}\mathrm{d}t_{1}\ldots\mathrm{d}t_{n}\prod_{i=1}^{n}\min\left(\frac{\epsilon}{t_{i}-t_{i-1}},\frac{1}{\sqrt{t_{i}-t_{i-1}}}\right).\]
Let \(I_{k}\) denote the value of the integral,
\[I_{k}:=\int_{0\leq\theta_{1}\leq\theta_{2}\ldots\leq\theta_{k}\leq 1}\mathrm{d} \theta_{1}\ldots\mathrm{d}\theta_{k}\prod_{i=1}^{k}\frac{1}{\sqrt{\theta_{i}- \theta_{i-1}}}.\]
Notice that \(I_{k}\) satisfies the relation,
\[I_{k}=\int_{0}^{1}\frac{I_{k-1}(1-\theta_{1})^{(k-1)/2}}{\sqrt{\theta_{1}}} \mathrm{d}\theta_{1},\]
and \(I_{0}\) is understood to be \(1\). By induction, we will prove the following inequality,
\[\int_{0\leq t_{1}\leq t_{2}\ldots\leq t_{n}\leq\tau_{1}}\mathrm{d}t_{1}\ldots\mathrm{d}t_{n}\prod_{i=1}^{n}\min\left(\frac{\epsilon}{t_{i}-t_{i-1}},\frac{1}{\sqrt{t_{i}-t_{i-1}}}\right)\leq 2^{n}\sqrt{\epsilon}^{n}\sum_{k=0}^{n}(\tau_{1})^{k/2}I_{k}\binom{n}{k}.\]
The base case \(n=1\) can be bounded from above by the integral
\[\int_{0}^{\tau_{1}}\min\left(\frac{1}{\sqrt{t}},\frac{\epsilon}{ t}\right)\mathrm{d}t\leq \int_{0}^{\epsilon}\frac{1}{\sqrt{t}}\mathrm{d}t+\int_{\epsilon}^{ \tau_{1}}\frac{\epsilon}{t}\mathrm{d}t\] \[\leq 2\sqrt{\epsilon}+\sqrt{\epsilon}\int_{\epsilon}^{\tau_{1}}\frac{1 }{\sqrt{t}}\mathrm{d}t\leq 2\sqrt{\epsilon}+\sqrt{\epsilon}\int_{0}^{\tau_{1}}\frac{1}{ \sqrt{t}}\mathrm{d}t=2\sqrt{\epsilon}I_{0}+\sqrt{\epsilon}\sqrt{\tau_{1}}I_{1}.\]
The second integral above is understood to be \(0\) if \(\tau_{1}\) is less than \(\epsilon\); in that case \(2\sqrt{\epsilon}\) is still an upper bound.
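The base-case bound can be checked numerically (a small sanity check of our own, not part of the proof): with \(I_{0}=1\) and \(I_{1}=2\), the claimed right-hand side is \(2\sqrt{\epsilon}+2\sqrt{\epsilon\tau_{1}}\), and a right-endpoint Riemann sum, which under-approximates the decreasing integrand, stays below it.

```python
import math

# Check numerically that int_0^tau min(1/sqrt(t), eps/t) dt <= 2*sqrt(eps) + 2*sqrt(eps*tau).
# Right-endpoint Riemann sums of a decreasing integrand give a lower bound on the integral.
def integral_min(eps, tau, n=50_000):
    h = tau / n
    return sum(min(1.0 / math.sqrt((i + 1) * h), eps / ((i + 1) * h)) * h for i in range(n))

def claimed_bound(eps, tau):
    return 2.0 * math.sqrt(eps) + 2.0 * math.sqrt(eps * tau)

checks = [(eps, tau, integral_min(eps, tau) <= claimed_bound(eps, tau))
          for eps in (0.01, 0.1, 1.0) for tau in (0.5, 2.0, 10.0)]
```

Every pair \((\epsilon,\tau_{1})\) on the grid satisfies the inequality.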
Now, we can proceed with our induction. We have,
\[\int_{0\leq t_{1}\leq t_{2}\ldots\leq t_{n}\leq\tau_{1}}\mathrm{d}t_{1}\ldots\mathrm{d}t_{n}\prod_{i=1}^{n}\min\left(\frac{\epsilon}{t_{i}-t_{i-1}},\frac{1}{\sqrt{t_{i}-t_{i-1}}}\right)\] \[=\int_{0}^{\tau_{1}}\mathrm{d}t_{1}\min\left(\frac{1}{\sqrt{t_{1}}},\frac{\epsilon}{t_{1}}\right)\int_{t_{1}\leq t_{2}\leq\ldots\leq t_{n}\leq\tau_{1}}\mathrm{d}t_{2}\ldots\mathrm{d}t_{n}\prod_{i=2}^{n}\min\left(\frac{1}{\sqrt{t_{i}-t_{i-1}}},\frac{\epsilon}{t_{i}-t_{i-1}}\right)\] \[\leq\int_{0}^{\tau_{1}}\mathrm{d}t_{1}\min\left(\frac{1}{\sqrt{t_{1}}},\frac{\epsilon}{t_{1}}\right)\int_{0\leq t_{1}^{\prime}\leq\ldots\leq t_{n-1}^{\prime}\leq\tau_{1}-t_{1}}\mathrm{d}t_{1}^{\prime}\ldots\mathrm{d}t_{n-1}^{\prime}\prod_{i=1}^{n-1}\min\left(\frac{1}{\sqrt{t_{i}^{\prime}-t_{i-1}^{\prime}}},\frac{\epsilon}{t_{i}^{\prime}-t_{i-1}^{\prime}}\right)\] \[\leq\int_{0}^{\tau_{1}}\mathrm{d}t_{1}\min\left(\frac{1}{\sqrt{t_{1}}},\frac{\epsilon}{t_{1}}\right)2^{n-1}\sqrt{\epsilon}^{n-1}\sum_{k=0}^{n-1}(\tau_{1}-t_{1})^{k/2}I_{k}\binom{n-1}{k}.\]
Splitting the \(t_{1}\) integral at \(\epsilon\) as in the base case, we obtain
\[\leq\int_{0}^{\epsilon}\frac{1}{\sqrt{t_{1}}}2^{n-1}\sqrt{\epsilon}^{n-1}\sum_{k=0}^{n-1}(\tau_{1})^{k/2}I_{k}\binom{n-1}{k}\mathrm{d}t_{1}+\int_{\epsilon}^{\tau_{1}}\frac{\epsilon}{t_{1}}2^{n-1}\sqrt{\epsilon}^{n-1}\sum_{k=0}^{n-1}(\tau_{1}-t_{1})^{k/2}I_{k}\binom{n-1}{k}\mathrm{d}t_{1}\] \[\leq 2^{n}\sqrt{\epsilon}^{n}\sum_{k=0}^{n-1}(\tau_{1})^{k/2}I_{k}\binom{n-1}{k}+\sqrt{\epsilon}\int_{0}^{\tau_{1}}\frac{1}{\sqrt{t_{1}}}2^{n-1}\sqrt{\epsilon}^{n-1}\sum_{k=0}^{n-1}(\tau_{1}-t_{1})^{k/2}I_{k}\binom{n-1}{k}\mathrm{d}t_{1}\] \[\leq 2^{n}\sqrt{\epsilon}^{n}\sum_{k=0}^{n}I_{k}(\tau_{1})^{k/2}\binom{n}{k}.\]
Thus, our expression for (A.4) can be bounded by
\[C^{n}(n!)^{2}2^{n}(\sqrt{\epsilon})^{n}\mathbb{E}_{\tau}\sum_{k=0 }^{n}\binom{n}{k}I_{k}I_{n}\tau_{1}^{k/2}\tau_{2}^{n/2} \leq C^{n}(n!)^{2}(\sqrt{\epsilon})^{n}\sum_{k=0}^{n}\binom{n}{k} \frac{1}{\sqrt{k!}}\frac{1}{\sqrt{n!}}\left(\frac{n}{2}\right)!\left(\frac{k}{ 2}\right)!\] \[\leq C^{n}(n!)^{2}(\sqrt{\epsilon})^{n}\sum_{k=0}^{n}\binom{n}{k} \leq C^{n}(n!)^{2}(\sqrt{\epsilon})^{n}.\]
Therefore, we obtain the desired result.
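The recursion for \(I_{k}\) used above can be evaluated in closed form: each step of the recurrence contributes a Beta factor \(B(1/2,(k+1)/2)\), and the product telescopes to \(I_{k}=\pi^{k/2}/\Gamma(k/2+1)\). This closed form is our own computation, not stated in the source; it is consistent with \(I_{0}=1\), \(I_{1}=2\), and with the bound \(I_{k}\leq C^{k}/\sqrt{k!}\) used in the final display of the proof. A short numerical check:

```python
import math

def I_recursive(k):
    # I_k = I_{k-1} * Beta(1/2, (k+1)/2), I_0 = 1, since
    # int_0^1 (1 - u)^{(k-1)/2} u^{-1/2} du = Beta(1/2, (k+1)/2).
    val = 1.0
    for j in range(1, k + 1):
        val *= math.gamma(0.5) * math.gamma((j + 1) / 2) / math.gamma(j / 2 + 1)
    return val

def I_closed(k):
    # Telescoped closed form (our observation): pi^(k/2) / Gamma(k/2 + 1).
    return math.pi ** (k / 2) / math.gamma(k / 2 + 1)
```

The two agree to floating-point precision for all small \(k\).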
## Appendix B Analysis of the Constrained Optimization Problem
Fix a function \(M\) that is bounded and with finite support. In this section, we will analyze optimization problems of the following form,
\[K_{\epsilon}:=\{k:\epsilon^{4}\sum_{z\in\epsilon\mathbb{Z}^{4}}k^{2}(z)=1\},\] \[F_{\epsilon,M}:=\{f:\epsilon^{4}\sum_{z\in\epsilon\mathbb{Z}^{4}}\int_{\mathbb{R}^{4}}f^{2}(z,e)M(e)\mathrm{d}e=1\}\]
and
\[O_{x_{1},x_{2},\epsilon,M}:=\sup_{k\in K_{\epsilon}}\sup_{f_{1},f _{2}\in F_{\epsilon,M}}\epsilon^{8}\sum_{z_{1}\in\epsilon\mathbb{Z}^{4},z_{2} \in\epsilon\mathbb{Z}^{4}}\] \[\times\int_{(\mathbb{R}^{4})^{2}}\sqrt{k}(z_{1})f_{1}(z_{1},e_{1} )M(e_{1})P_{\tau}(z_{1}+x_{1}+e_{1}-z_{2}-x_{2}-e_{2})M(e_{2})\sqrt{k}(z_{2})f_ {2}(z_{2},e_{2})\mathrm{d}e_{1}\mathrm{d}e_{2}.\]
Note that \(O_{x_{1},x_{2},\epsilon,M}\) can be understood as an upper bound for the norm of any operator of the following form on \(F_{\epsilon,M}\):
\[(T_{x_{1},x_{2},\epsilon,M,k}f)(z,e)=\sqrt{k}(z)\epsilon^{4}\sum_{\tilde{z}\in \epsilon\mathbb{Z}^{4}}\int_{\mathbb{R}^{4}}P_{\tau}(z+x_{1}+e-\tilde{z}-x_{2}- \tilde{e})M(\tilde{e})\sqrt{k}(\tilde{z})f(\tilde{z},\tilde{e})\mathrm{d} \tilde{e}.\]
We thus see that,
\[O_{x_{1},x_{2},\epsilon,M}=\sup_{k\in K_{\epsilon}}\sup_{f_{1},f_{2}\in F_{\epsilon,M}}\langle f_{1},T_{x_{1},x_{2},\epsilon,M,k}f_{2}\rangle.\]
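The identification of this double supremum with an operator norm is the standard bilinear characterization of the largest singular value, \(\sigma_{\max}(T)=\sup_{\|f_{1}\|=\|f_{2}\|=1}\langle f_{1},Tf_{2}\rangle\). A finite-dimensional sketch in pure Python (the matrix and the power-iteration routine are illustrative choices of ours, not from the source):

```python
import random

# For a finite matrix T, the sup over unit vectors f1, f2 of <f1, T f2> equals
# the largest singular value of T; power iteration on T^T T recovers it.
def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def norm(x):
    return sum(v * v for v in x) ** 0.5

def top_singular(A, iters=500, seed=1):
    rng = random.Random(seed)
    At = [list(col) for col in zip(*A)]
    v = [rng.random() for _ in range(len(A[0]))]
    for _ in range(iters):
        w = matvec(At, matvec(A, v))
        nw = norm(w)
        v = [wi / nw for wi in w]
    Av = matvec(A, v)
    sigma = norm(Av)
    u = [x / sigma for x in Av]  # matching left singular vector
    return sigma, u, v

rng = random.Random(7)
T = [[rng.gauss(0.0, 1.0) for _ in range(5)] for _ in range(5)]
sigma, u, v = top_singular(T)
# The bilinear form attains its supremum at the top singular pair (u, v).
bilinear = sum(u[i] * sum(T[i][j] * v[j] for j in range(5)) for i in range(5))
```

Random unit pairs give a strictly smaller value of the bilinear form than the singular pair, matching the supremum characterization.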
Here, \(x_{1}\) and \(x_{2}\) are two points found in \([-\epsilon,\epsilon]^{4}\). The continuous analogue of these quantities can be expressed as follows:
\[K:=\{k:\int_{\mathbb{R}^{4}}\mathrm{d}z\,k^{2}(z)=1\},\] \[F_{M}:=\{f:\int_{(\mathbb{R}^{4})^{2}}\mathrm{d}z\mathrm{d}e\,f^{2}(z,e)M(e)=1\}\]
and
\[O_{M} :=\sup_{k\in K,f_{1},f_{2}\in F_{M}}\int_{(\mathbb{R}^{4})^{4}}\sqrt{k}(z_{1})f_{1}(z_{1},e_{1})M(e_{1})P_{\tau}(z_{1}+e_{1}-z_{2}-e_{2})\] \[\times M(e_{2})\sqrt{k}(z_{2})f_{2}(z_{2},e_{2})\mathrm{d}z_{1}\mathrm{d}z_{2}\mathrm{d}e_{1}\mathrm{d}e_{2}.\]
We remark that this \(O_{M}\) corresponds to the symmetric operator on \(F_{M}\) given by,
\[(T_{M,k}f)(z,e)=\sqrt{k}(z)\int_{(\mathbb{R}^{4})^{2}}P_{\tau}(z+e-\tilde{z}-\tilde{e})\sqrt{k}(\tilde{z})M(\tilde{e})f(\tilde{z},\tilde{e})\mathrm{d}\tilde{z}\mathrm{d}\tilde{e}.\]
Thus, \(O_{M}\) would be the same whether we took the maximum over \(f_{1}\),\(f_{2}\) arbitrary or \(f_{1}=f_{2}\). Let \(C\) be a continuous function such that \(C(z)\leq P_{\tau}(z)\leq C(z)+P_{\tau}(z)\mathbb{1}(|z|\leq\delta)\). We let the quantities \(O_{M}^{C},O_{x_{1},x_{2},\epsilon,M}^{C}\) or \(O_{M}^{\delta}\), \(O_{x_{1},x_{2},\epsilon,M}^{\delta}\) denote the analogues of \(O_{x_{1},x_{2},\epsilon,M}\) or \(O_{M}\) with the function \(P_{\tau}\) in the definition replaced either by \(C\) or by \(P_{\tau}(z)\mathbb{1}(|z|\leq\delta).\) Clearly we have that,
\[O_{M}^{C}\leq O_{M}\leq O_{M}^{C}+O_{M}^{\delta},O_{x_{1},x_{2},\epsilon,M}^{C }\leq O_{x_{1},x_{2},\epsilon,M}\leq O_{x_{1},x_{2},\epsilon,M}^{C}+O_{x_{1},x _{2},\epsilon,M}^{\delta}.\]
We will first argue that, independently of \(x_{1},x_{2}\) and \(\epsilon\), \(O^{\delta}_{x_{1},x_{2},\epsilon,M}\) goes to \(0\) as \(\delta\) goes to \(0\).
One way to rewrite our maximization problems is as follows. We can consider the normalization
\[F(z)=\sqrt{\int_{\mathbb{R}^{4}}f^{2}(z,e)M(e)\mathrm{d}e},\quad N_{z}(e)= \frac{f(z,e)}{F(z)}.\]
We remark that \(\int_{\mathbb{R}^{4}}F(z)^{2}\mathrm{d}z=1\) and, for each \(z\), \(\int_{\mathbb{R}^{4}}(N_{z}(e))^{2}M(e)\mathrm{d}e=1\).
Next, we observe that the integral expression appearing in \(O_{M}\) (similar expressions hold for \(O_{x_{1},x_{2},\epsilon,M}\)) can be rewritten as follows:
(B.1) \[\int_{(\mathbb{R}^{4})^{2}}\mathrm{d}z\mathrm{d}\tilde{z}\sqrt{k}(z)F_{1}(z)F_{2}(\tilde{z})\sqrt{k}(\tilde{z})\int_{(\mathbb{R}^{4})^{2}}\mathrm{d}e\mathrm{d}\tilde{e}M(e)(N_{1})_{z}(e)P_{\tau}(z+e-\tilde{z}-\tilde{e})M(\tilde{e})(N_{2})_{\tilde{z}}(\tilde{e}).\]
We need a series of lemmas that analyze this expression by understanding the value of the inner integral. We start with a lemma that derives a bound on what we will consider the 'canonical' version of the problem.
**Lemma B.1**.: _Consider the following problem:_
(B.2) \[\mathfrak{I}:=\sup_{\begin{subarray}{c}N_{1},N_{2}:\\ \int_{[-1,1]^{4}}N_{i}(e)^{2}\,\text{d}e=1\end{subarray}}\int_{([-1,1]^{4})^{2 }}N_{1}(e_{1})\frac{1}{|e_{1}-e_{2}|^{2}}N_{2}(e_{2})\text{d}e_{1}\text{d}e_{2}.\]
_Then \(\mathfrak{I}\) is bounded._
Proof.: Assume for contradiction that \(\mathfrak{I}\) is not bounded. Then we can find a sequence of functions \(N_{1}^{B}\) and \(N_{2}^{B}\), supported on \([-1,1]^{4}\) and with \(L^{2}\) norm \(1\), such that
(B.3) \[\begin{split}\int_{([-1,1]^{4})^{2}}N_{1}^{B}(e_{1})P_{\tau}(e_{1}-e_{2})N_{2}^{B}(e_{2})\text{d}e_{1}\text{d}e_{2}\] \[\geq c\int_{([-1,1]^{4})^{2}}N_{1}^{B}(e_{1})\frac{1}{|e_{1}-e_{2}|^{2}}N_{2}^{B}(e_{2})\text{d}e_{1}\text{d}e_{2}\geq B,\end{split}\]
where \(c\) is a constant so that \(P_{\tau}(e_{1}-e_{2})\geq c\frac{1}{|e_{1}-e_{2}|^{2}}\) when \(e_{1},e_{2}\) lie in \([-1,1]^{4}\).
Now consider the large deviation of the following quantity. Let \(I\) denote the indicator function \(I(x):=\mathbb{1}\left(x\in[-1,1]^{4}\right)\) and
\[\mathcal{I}:=\int_{0}^{\tau_{1}}\int_{0}^{\tau_{2}}(I*I)(B_{t}-B_{s}^{\prime}) \text{d}t\text{d}s.\]
Since the convolution \(I*I\) is bounded, we can see that \(\limsup_{n\to\infty}\frac{1}{n}\log\frac{1}{(n!)^{2}}\mathbb{E}[(\mathcal{I})^{n}]<\infty\). However, similar to the proof given in Section 2.1, we can prove that
(B.4) \[\begin{split}&\liminf_{n\to\infty}\frac{1}{n}\log\frac{1}{(n!)^{ 2}}\mathbb{E}[(\mathcal{I})^{n}]\\ &\geq\sup_{\begin{subarray}{c}\int_{\mathbb{R}^{4}}k^{2}(z) \text{d}z=1\\ \int_{(\mathbb{R}^{4})^{2}}f_{1}^{2}(z,e)I(e)\text{d}z\text{d}e=1\end{subarray} }\int_{(\mathbb{R}^{4})^{4}}\sqrt{k(z_{1})}f_{1}(z_{1},e_{1})I(e_{1})P_{\tau} (z_{1}+e_{1}-z_{2}-e_{2})\\ &\qquad\qquad\qquad\qquad\qquad\qquad\times I(e_{2})f_{2}(z_{2},e_{2})\sqrt{k(z_{2})}\text{d}z_{1}\text{d}z_{2}\text{d}e_{1}\text{d}e_{2}. \end{split}\]
Now, we choose \(f_{i}(z_{i},e_{i})\) of the following form. If \(|z_{i}|\leq\frac{1}{2}\), then we write \(f_{i}(z_{i},e_{i})\) as \(F(z_{i})N_{i}^{B}(e_{i}+z_{i})\), where \(F\), supported on \(\left[-\frac{1}{2},\frac{1}{2}\right]^{4}\), is a function with norm \(1\), namely \(\int_{\mathbb{R}^{4}}F^{2}(z)\text{d}z=1\), and \(N_{i}^{B}\) is the function from (B.3). One can manifestly see that \(\int_{(\mathbb{R}^{4})^{2}}F^{2}(z)(N_{i}^{B}(e))^{2}I(e)\text{d}z\text{d}e=1\) by definition. We also fix \(k\) to be some function with \(L^{2}\) norm \(1\). With this choice of \(k\) and \(f_{1},f_{2}\), we see that
\[\int_{(\mathbb{R}^{4})^{2}}\text{d}z_{1}\text{d}z_{2}F(z_{1})\sqrt {k}(z_{1})F(z_{2})\sqrt{k}(z_{2})\] \[\times\int_{(\mathbb{R}^{4})^{2}}\text{d}e_{1}\text{d}e_{2}N_{1} ^{B}(e_{1}+z_{1})I(e_{1})P_{\tau}(e_{1}+z_{1}-e_{2}-z_{2})N_{2}^{B}(e_{2}+z_{2} )I(e_{2})\] \[\geq B\int_{(\mathbb{R}^{4})^{2}}\text{d}z_{1}\text{d}z_{2}F(z_{1 })\sqrt{k}(z_{1})F(z_{2})\sqrt{k}(z_{2}).\]
The last inequality merely follows from the condition on \(N_{i}^{B}\) from (B.3), once one uses the observation that \(|e_{1}+z_{1}|<2\) when \(|e_{1}|<1\) and \(|z_{1}|<\frac{1}{2}\); thus, in the domain of relevance, \(I(e_{i})\) is just \(1\). We can freely take \(B\) to \(\infty\) while keeping \(F\) and \(k\) fixed. Thus, the supremum in (B.4) will be \(\infty\). This contradicts the fact that said supremum is finite. Hence, the quantity of interest, \(\mathfrak{I}\), must be finite.
We can now proceed to relate the more general problem to a bound on the canonical problem.
**Lemma B.2**.: _Assume that \(M\) has support \([-S,S]^{4}\) and that \(M\) is bounded from above by \(B\) and from below by \(b\) on this support. Then we have the following estimates: If \(|z_{1}-z_{2}|\geq 4\sqrt{d}|S|\), we have that_
\[\sup_{\begin{subarray}{c}N_{1},N_{2}:\\ \int_{\mathbb{R}^{4}}N_{i}^{2}(e)M(e)\text{\rm{d}}e=1\end{subarray}}\int_{(\mathbb{R}^{4})^{2}}N_{1}(e_{1})M(e_{1})P_{\tau}(z_{1}+e_{1}-z_{2}-e_{2})N_{2}(e_{2})M(e_{2})\text{\rm{d}}e_{1}\text{\rm{d}}e_{2}\] \[\leq P_{\tau}(\frac{z_{1}-z_{2}}{2})B(2S)^{4}.\]
_If instead, we assume that \(|z_{1}-z_{2}|\leq 4\sqrt{d}S\), we have that,_
(B.5) \[\sup_{\begin{subarray}{c}N_{1},N_{2}:\\ \int_{\mathbb{R}^{4}}N_{i}^{2}(e)M(e)\text{\rm{d}}e=1\end{subarray}}\int_{(\mathbb{R}^{4})^{2}}N_{1}(e_{1})M(e_{1})P_{\tau}(z_{1}+e_{1}-z_{2}-e_{2})N_{2}(e_{2})M(e_{2})\text{\rm{d}}e_{1}\text{\rm{d}}e_{2}\leq\frac{B^{2}(3S)^{2}}{b}\mathfrak{I}.\]
\(\mathfrak{I}\) _is the quantity from (B.2)._
Proof.: Let us consider the case that \(|z_{1}-z_{2}|\geq 4\sqrt{d}|S|\). In this case, we can assert that for any \(e_{1},e_{2}\) in the support of \(M\), we have that \(|z_{1}-z_{2}+e_{1}-e_{2}|\geq\frac{|z_{1}-z_{2}|}{2}\). \(P_{\tau}\) depends only on the norm of its argument and is monotone decreasing in it; thus, \(P_{\tau}(z_{1}-z_{2}+e_{1}-e_{2})\leq P_{\tau}(\frac{z_{1}-z_{2}}{2})\) when \(e_{1}\) and \(e_{2}\) are in the support of \(M\).
Secondly, we also know that,
\[\int_{\mathbb{R}^{4}}N_{1}(e)M(e)\text{\rm{d}}e\leq[\int_{\mathbb{R}^{4}}(N_{ 1}(e))^{2}M(e)\text{\rm{d}}e]^{1/2}[\int_{\mathbb{R}^{4}}M(e)\text{\rm{d}}e]^{ 1/2}\leq\sqrt{B[2S]^{4}}.\]
Because \(M\) is bounded from above by \(B\), we can then assert that,
\[\sup_{\begin{subarray}{c}N_{1},N_{2}:\\ \int_{\mathbb{R}^{4}}N_{i}^{2}(e)M(e)\text{\rm{d}}e=1\end{subarray}}\int_{(\mathbb{R}^{4})^{2}}N_{1}(e_{1})M(e_{1})P_{\tau}(z_{1}+e_{1}-z_{2}-e_{2})M(e_{2})N_{2}(e_{2})\text{\rm{d}}e_{1}\text{\rm{d}}e_{2}\] \[\leq P_{\tau}(\frac{z_{1}-z_{2}}{2})\sup_{\begin{subarray}{c}N_{1},N_{2}:\\ \int_{\mathbb{R}^{4}}N_{i}^{2}(e)M(e)\text{\rm{d}}e=1\end{subarray}}\int_{\mathbb{R}^{4}}N_{1}(e_{1})M(e_{1})\text{\rm{d}}e_{1}\int_{\mathbb{R}^{4}}N_{2}(e_{2})M(e_{2})\text{\rm{d}}e_{2}\] \[\leq P_{\tau}(\frac{z_{1}-z_{2}}{2})B(2S)^{4}.\]
If instead, we assume that \(|z_{1}-z_{2}|\leq 4\sqrt{d}|S|\), then we instead know that \(|z_{1}-z_{2}+e_{1}-e_{2}|\leq 6\sqrt{d}|S|\) and that
\[\sup_{\begin{subarray}{c}N_{1},N_{2}:\\ \int_{\mathbb{R}^{4}}N_{i}^{2}(e)M(e)\text{d}e=1\end{subarray}}\int_{(\mathbb{R}^{4})^{2}}N_{1}(e_{1})M(e_{1})\times P_{\tau}(z_{1}+e_{1}-z_{2}-e_{2})M(e_{2})N_{2}(e_{2})\text{d}e_{1}\text{d}e_{2}\] \[=\sup_{\begin{subarray}{c}N_{1},N_{2}:\\ \int_{\mathbb{R}^{4}}N_{1}^{2}(e-z_{1}+z_{2})M(e-z_{1}+z_{2})\text{d}e=1\\ \int_{\mathbb{R}^{4}}N_{2}^{2}(e)M(e)\text{d}e=1\end{subarray}}\int_{(\mathbb{R}^{4})^{2}}N_{1}(e_{1}-z_{1}+z_{2})M(e_{1}-z_{1}+z_{2})\] \[\qquad\qquad\qquad\qquad\qquad\times P_{\tau}(e_{1}-e_{2})M(e_{2})N_{2}(e_{2})\text{d}e_{1}\text{d}e_{2}\] \[\leq B^{2}\sup_{\begin{subarray}{c}\tilde{N}_{1},\tilde{N}_{2}:\\ b\int_{[-3S,3S]^{4}}\tilde{N}_{i}^{2}(e)\text{d}e=1\end{subarray}}\int_{([-3S,3S]^{4})^{2}}\text{d}e_{1}\text{d}e_{2}\tilde{N}_{1}(e_{1})\times P_{\tau}(e_{1}-e_{2})\tilde{N}_{2}(e_{2})\] \[\leq\frac{B^{2}(3S)^{2}}{b}\sup_{\begin{subarray}{c}\tilde{N}_{1},\tilde{N}_{2}:\\ \int_{[-1,1]^{4}}\tilde{N}_{i}^{2}(e)\text{d}e=1\end{subarray}}\int_{([-1,1]^{4})^{2}}\text{d}e_{1}\text{d}e_{2}\tilde{N}_{1}(e_{1})\frac{1}{|e_{1}-e_{2}|^{2}}\tilde{N}_{2}(e_{2}).\]
To get the second line above, we changed variables \(e_{1}\to e_{1}-z_{1}+z_{2}\). A few observations yield the second-to-last line. Firstly, one uses the assumption that \(M\) is bounded by \(B\); this gives the factor of \(B^{2}\) outside. Secondly, the support of the shifted function \(N_{1}(e_{1}-z_{1}+z_{2})\) is contained in the domain \([-3S,3S]^{4}\), as is the support of \(N_{2}(e_{2})\); thus, we can enlarge the domain of integration to \([-3S,3S]^{4}\). Finally, since \(M\) is bounded below by \(b\), we also know that \(b\int_{[-S,S]^{4}}N_{i}^{2}(e)\text{d}e\leq 1\). By assumption, the functions \(N_{i}\) have support contained in \([-S,S]^{4}\); thus, \(b\int_{[-3S,3S]^{4}}N_{i}^{2}(e)\text{d}e\leq 1\). Hence, the value of the integral can only increase when we pass to functions like \(\tilde{N}_{i}\).
In addition, the final line is derived by bounding \(P_{\tau}(e_{1}-e_{2})\) by \(|e_{1}-e_{2}|^{-2}\) and using scaling; this gives the result.
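For completeness, the power counting behind the factor \(\frac{B^{2}(3S)^{2}}{b}\) is as follows (our expansion of the scaling step). Substituting \(e_{i}=3Su_{i}\),

\[\mathrm{d}e_{1}\mathrm{d}e_{2}=(3S)^{8}\,\mathrm{d}u_{1}\mathrm{d}u_{2},\qquad\frac{1}{|e_{1}-e_{2}|^{2}}=(3S)^{-2}\frac{1}{|u_{1}-u_{2}|^{2}},\]

and writing \(\tilde{N}_{i}(3Su)=b^{-1/2}(3S)^{-2}N_{i}(u)\) with \(\int_{[-1,1]^{4}}N_{i}^{2}(u)\,\mathrm{d}u\leq 1\), which is equivalent to \(b\int_{[-3S,3S]^{4}}\tilde{N}_{i}^{2}(e)\,\mathrm{d}e\leq 1\), the integral picks up the overall factor

\[(3S)^{8}\cdot(3S)^{-2}\cdot\left(b^{-1/2}(3S)^{-2}\right)^{2}=\frac{(3S)^{2}}{b}.\]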
By a very similar technique, we can also prove the following estimates,
**Lemma B.3**.: _Assume that \(M\) has support \([-S,S]^{4}\) and that \(M\) is bounded from above by \(B\) and from below by \(b\) on this support. Then we have the following estimates: if \(|z_{1}-z_{2}|\geq 4\sqrt{d}|S|+2\delta\), we have that_
\[\sup_{\begin{subarray}{c}N_{1},N_{2}:\\ \int_{\mathbb{R}^{4}}N_{i}^{2}(e)M(e)\text{d}e=1\end{subarray}}\int_{(\mathbb{R}^{4})^{2}}N_{1}(e_{1})M(e_{1})P_{\tau}(z_{1}+e_{1}-z_{2}-e_{2})\] \[\qquad\qquad\qquad\qquad\times\mathbb{1}(|z_{1}+e_{1}-z_{2}-e_{2}|\leq\delta)N_{2}(e_{2})M(e_{2})\text{d}e_{1}\text{d}e_{2}=0.\]
_If instead, we had that \(|z_{1}-z_{2}|\leq 4\sqrt{d}|S|+2\delta\), we can instead derive following estimates, for universal constant \(C\) (only depending on \(M\)):_
\[\sup_{\begin{subarray}{c}N_{1},N_{2}:\\ \int_{\mathbb{R}^{4}}N_{i}^{2}(e)M(e)\text{d}e=1\end{subarray}}\int_{(\mathbb{R}^{4})^{2}}N_{1}(e_{1})M(e_{1})P_{\tau}(z_{1}+e_{1}-z_{2}-e_{2})\] \[\qquad\qquad\qquad\times\mathbb{1}(|z_{1}+e_{1}-z_{2}-e_{2}|\leq\delta)N_{2}(e_{2})M(e_{2})\text{d}e_{1}\text{d}e_{2}\leq C\delta^{2}.\]
Proof.: When \(|z_{1}-z_{2}|\geq 4\sqrt{d}|S|+2\delta\), the assertion is clear, since then \(|z_{1}+e_{1}-z_{2}-e_{2}|>\delta\) for all \(e_{1},e_{2}\) in the support of \(M\). We now need to consider the case that \(|z_{1}-z_{2}|\leq 4\sqrt{d}|S|+2\delta\).
Consider the scaled integer lattice \(\mathbb{Z}_{\delta}^{4}:=\frac{\delta}{\sqrt{d}}\mathbb{Z}^{4}\). For each \(e_{1}\) in \([-S,S]^{4}\), let \(m\) be the closest lattice point in \(\mathbb{Z}_{\delta}^{4}\) to \(e_{1}\); hence, \(|m-e_{1}|\leq\delta\). If \(|z_{1}-z_{2}+e_{1}-e_{2}|\leq\delta\), then we must have that \(|m-(e_{2}+z_{2}-z_{1})|\leq 2\delta\). Thus, we can make the following integral bound,
(B.6) \[\begin{split}&\int_{(\mathbb{R}^{4})^{2}}N_{1}(e_{1})M(e_{1})P_{\tau}(z_{1}+e_{1}-z_{2}-e_{2})\\ &\qquad\qquad\qquad\mathbb{1}(|z_{1}+e_{1}-z_{2}-e_{2}|\leq\delta)N_{2}(e_{2})M(e_{2})\mathrm{d}e_{1}\mathrm{d}e_{2}\\ &\leq\sum_{\begin{subarray}{c}k\in\mathbb{Z}_{\delta}^{4}\\ |k|\leq 2\sqrt{d}|S|\end{subarray}}\int_{([-\frac{2\delta}{\sqrt{d}},\frac{2\delta}{\sqrt{d}}]^{4})^{2}}\mathrm{d}\hat{e}_{1}\mathrm{d}\hat{e}_{2}N_{1}(k+\hat{e}_{1})M(k+\hat{e}_{1})\\ &\times P_{\tau}(\hat{e}_{1}-\hat{e}_{2})N_{2}(z_{1}-z_{2}+k+\hat{e}_{2})M(z_{1}-z_{2}+k+\hat{e}_{2}).\end{split}\]
The second line uses the change of variables \(e_{1}\to k+\hat{e}_{1}\) and \(e_{2}\to z_{1}-z_{2}+k+\hat{e}_{2}\). The restriction that \(|z_{1}+e_{1}-z_{2}-e_{2}|\leq\delta\) ensures that any tuple \((e_{1},e_{2})\) corresponding to a non-zero term in the integral on the left-hand side belongs to one of the boxes on the right-hand side.
We now define,
\[V_{1}(k) :=\int_{[-\frac{2\delta}{\sqrt{d}},\frac{2\delta}{\sqrt{d}}]^{4}}N_{1}(k+\hat{e})^{2}M(k+\hat{e})\mathrm{d}\hat{e},\] \[V_{2}(k) :=\int_{[-\frac{2\delta}{\sqrt{d}},\frac{2\delta}{\sqrt{d}}]^{4}}N_{2}(z_{1}-z_{2}+k+\hat{e})^{2}M(z_{1}-z_{2}+k+\hat{e})\mathrm{d}\hat{e}.\]
One observation we will use is that \(\sum_{k\in\mathbb{Z}_{\delta}^{4}}V_{i}(k)\leq C\) for a universal constant \(C\), if we assume that \(\int_{\mathbb{R}^{4}}(N_{i}(e))^{2}M(e)\mathrm{d}e=1\); indeed, each point of \(\mathbb{R}^{4}\) lies in boundedly many of the boxes. Returning to the last line of (B.6) and using the inequality of (B.5), we can derive the desired bound,
\[\begin{split}&\int_{(\mathbb{R}^{4})^{2}}N_{1}(e_{1})M(e_{1})P_{\tau}(z_{1}+e_{1}-z_{2}-e_{2})\\ &\qquad\qquad\qquad\mathbb{1}(|z_{1}+e_{1}-z_{2}-e_{2}|\leq\delta)N_{2}(e_{2})M(e_{2})\mathrm{d}e_{1}\mathrm{d}e_{2}\\ \leq&\sum_{\begin{subarray}{c}k\in\mathbb{Z}_{\delta}^{4}\\ |k|\leq 2\sqrt{d}|S|\end{subarray}}\sqrt{V_{1}(k)}\sqrt{V_{2}(k)}\frac{B^{2}(3\delta)^{2}}{bd}\mathfrak{I}\leq\frac{B^{2}(3\delta)^{2}}{bd}\mathfrak{I}\Big{(}\sum_{k\in\mathbb{Z}_{\delta}^{4}}V_{1}(k)\Big{)}^{1/2}\Big{(}\sum_{k\in\mathbb{Z}_{\delta}^{4}}V_{2}(k)\Big{)}^{1/2}\leq\frac{CB^{2}\delta^{2}}{bd}\mathfrak{I}.\end{split}\]
As a corollary of this estimate, we can establish the following,
**Corollary B.4**.: _Let \(M\) be a function of finite support that is bounded from above and away from \(0\) on its support. Uniformly in \(x_{1},x_{2}\) and \(\epsilon\), we have that,_
\[O^{\delta}_{x_{1},x_{2},\epsilon,M}\leq C(\delta),\]
_where \(C(\delta)\) is a constant that depends on \(M\) but not on \(x_{1},x_{2}\) or \(\epsilon\), and goes to \(0\) as \(\delta\) goes to \(0\)._
Proof.: By the alternative expression found in (B.1) and the bounds on the interior expression found in Lemma B.3, we have the bound
(B.7) \[O^{\delta}_{x_{1},x_{2},\epsilon,M}\leq C\delta^{2}\quad\sup_{\begin{subarray}{c}k,F_{1},F_{2}\\ \epsilon^{4}\sum_{z\in\epsilon\mathbb{Z}^{4}}k^{2}(z)=1\\ \epsilon^{4}\sum_{z\in\epsilon\mathbb{Z}^{4}}(F_{i}(z))^{2}=1\end{subarray}}\epsilon^{8}\sum_{z_{1},z_{2}\in\epsilon\mathbb{Z}^{4}}\sqrt{k}(z_{1})F_{1}(z_{1})\] \[\times\mathbb{1}(|z_{1}-z_{2}|\leq 4\sqrt{d}S+2\delta)\sqrt{k}(z_{2})F_{2}(z_{2}).\]
Notice now that \(G_{i}(z)=\sqrt{k(z)}F_{i}(z)\) satisfies
\[\epsilon^{4}\sum_{z\in\epsilon\mathbb{Z}^{4}}G_{i}(z)^{4/3}\leq\left[\epsilon^{4}\sum_{z\in\epsilon\mathbb{Z}^{4}}k(z)^{2}\right]^{1/3}\left[\epsilon^{4}\sum_{z\in\epsilon\mathbb{Z}^{4}}(F_{i})^{2}(z)\right]^{2/3}\leq 1.\]
We thus see that,
\[\begin{split}&\epsilon^{8}\sum_{z_{1},z_{2}\in\epsilon\mathbb{Z}^{4}}G_{1}(z_{1})\mathbb{1}(|z_{1}-z_{2}|\leq 4\sqrt{d}S+2\delta)G_{2}(z_{2})\\ &\leq\left[\epsilon^{8}\sum_{z_{1},z_{2}\in\epsilon\mathbb{Z}^{4}}(G_{1}(z_{1}))^{4/3}(G_{2}(z_{2}))^{4/3}\right]^{1/2}\\ &\times\left[\epsilon^{4}\sum_{y\in\epsilon\mathbb{Z}^{4}}\mathbb{1}(|y|\leq 4\sqrt{d}S+2\delta)\,\epsilon^{4}\sum_{z_{1}\in\epsilon\mathbb{Z}^{4}}(G_{1}(z_{1}))^{2/3}(G_{2}(z_{1}+y))^{2/3}\right]^{1/2}\\ &\leq\left[\epsilon^{4}\sum_{y\in\epsilon\mathbb{Z}^{4}}\mathbb{1}(|y|\leq 4\sqrt{d}S+2\delta)\left\{\epsilon^{4}\sum_{z_{1}\in\epsilon\mathbb{Z}^{4}}(G_{1}(z_{1}))^{4/3}\right\}^{1/2}\left\{\epsilon^{4}\sum_{z_{1}\in\epsilon\mathbb{Z}^{4}}(G_{2}(z_{1}+y))^{4/3}\right\}^{1/2}\right]^{1/2}\\ &\leq C[4\sqrt{d}S+2\delta]^{2}.\end{split}\]
The sequence of steps follows from carefully applying the Cauchy-Schwarz inequality twice. Putting the above inequality into equation (B.7) will give us the desired bound on \(O^{\delta}_{x_{1},x_{2},\epsilon,M}\) uniformly in \(x_{1},x_{2}\) and \(\epsilon\).
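For clarity, the two applications can be isolated as follows. The first applies Cauchy-Schwarz to the factorization \(G_{1}(z_{1})\mathbb{1}G_{2}(z_{2})=\big{[}G_{1}^{2/3}(z_{1})G_{2}^{2/3}(z_{2})\big{]}\cdot\big{[}\mathbb{1}(|z_{1}-z_{2}|\leq 4\sqrt{d}S+2\delta)\,G_{1}^{1/3}(z_{1})G_{2}^{1/3}(z_{2})\big{]}\), using \(\mathbb{1}^{2}=\mathbb{1}\); the resulting first bracket factorizes and is bounded by \(1\), since \(\frac{1}{\epsilon^{4}}\sum_{z}G_{i}(z)^{4/3}\leq 1\). The second application, at fixed \(y=z_{2}-z_{1}\), bounds the inner sum as
\[\frac{1}{\epsilon^{4}}\sum_{z_{1}\in\epsilon\mathbb{Z}^{4}}(G_{1}(z_{1}))^{2/3}(G_{2}(z_{1}+y))^{2/3}\leq\left[\frac{1}{\epsilon^{4}}\sum_{z_{1}\in\epsilon\mathbb{Z}^{4}}(G_{1}(z_{1}))^{4/3}\right]^{1/2}\left[\frac{1}{\epsilon^{4}}\sum_{z_{1}\in\epsilon\mathbb{Z}^{4}}(G_{2}(z_{1}))^{4/3}\right]^{1/2}\leq 1,\]
leaving only the (discrete) volume of the ball of radius \(4\sqrt{d}S+2\delta\).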
At this point, we finally have enough tools to assert the following theorem.
**Theorem B.5**.: _Let \(M\) be a function of finite support that is bounded from above and bounded away from \(0\) on its support._
_Then, uniformly in \(x_{1}\) and \(x_{2}\),_
\[\lim_{\epsilon\to 0}O_{x_{1},x_{2},\epsilon,M}\leq O_{M}.\]
Proof.: It is manifestly clear that \(\lim_{\epsilon\to 0}O^{C}_{x_{1},x_{2},\epsilon,M}\leq O^{C}_{M}\), using the continuity of the function \(C\). Namely, from functions in \(K_{\epsilon}\) and \(F_{\epsilon,M}\) one can generate functions in \(K\) and \(F\), respectively, by setting \(k(z)=k_{\epsilon}\left(\epsilon\lfloor\frac{z}{\epsilon}\rfloor\right)\) with \(k\) in \(K\) and \(k_{\epsilon}\) in \(K_{\epsilon}\), and similarly for \(F\). For any value \(\kappa\), one can find \(\epsilon\) small enough so that
\(|C(z+\alpha)-C(z)|\leq\kappa C(z)\) for \(|\alpha|\leq\epsilon\). Thus,
\[(\epsilon)^{2d}\sum_{z_{1}\in\mathbb{Z}^{4},z_{2}\in\mathbb{Z}^{4}} \int_{(\mathbb{R}^{4})^{2}}\sqrt{k_{\epsilon}}(z_{1})(f_{1})_{\epsilon}(z_{1}, e_{1})M(e_{1})C(z_{1}+x_{1}+e_{1}-z_{2}-x_{2}-e_{2})M(e_{2})\] \[\times\sqrt{k_{\epsilon}}(z_{2})(f_{2})_{\epsilon}(z_{2},e_{2}) \mathrm{d}e_{1}\mathrm{d}e_{2}\] \[\leq[1+\kappa]\int_{(\mathbb{R}^{4})^{2}}\mathrm{d}z_{1}\mathrm{d }z_{2}\int_{(\mathbb{R}^{4})^{2}}\sqrt{k_{\epsilon}}(z_{1})(f_{1})_{\epsilon} (z_{1},e_{1})M(e_{1})C(z_{1}+e_{1}-z_{2}-e_{2})M(e_{2})\] \[\times\sqrt{k_{\epsilon}}(z_{2})(f_{2})_{\epsilon}(z_{2},e_{2}) \mathrm{d}e_{1}\mathrm{d}e_{2}.\]
This is true for all \(f_{1},f_{2}\) and \(k\). Taking the supremum over \(f_{1}\), \(f_{2}\) and \(k\) on the left-hand side, we see that \(\lim_{\epsilon\to 0}O^{C}_{x_{1},x_{2},\epsilon,M}\leq[1+\kappa]O^{C}_{M}\). Since \(\kappa\) can be taken to \(0\) as \(\epsilon\to 0\), this gives the claimed statement that \(\lim_{\epsilon\to 0}O^{C}_{x_{1},x_{2},\epsilon,M}\leq O^{C}_{M}\leq O_{M}\).
Finally, we observe that \(\lim_{\epsilon\to 0}O_{x_{1},x_{2},\epsilon,M}\leq\lim_{\epsilon\to 0}O^{C}_{x_{1},x_{2}, \epsilon,M}+\lim_{\epsilon\to 0}O^{\delta}_{x_{1},x_{2},\epsilon,M}\leq O_{M}+C(\delta)\). Here, we applied our earlier claim on \(O^{C}\) along with Corollary B.4. Since we can take \(\delta\to 0\) after all these steps, we have the desired inequality.
## Acknowledgment
The authors would like to thank Amir Dembo for his useful suggestions. The authors are also grateful to Makoto Nakamura for his helpful comments.
---

arXiv id: 2307.04233
title: The centaur-algebra of observables
authors: Sergio E. Aguilar-Gutierrez, Eyoab Bahiru, Ricardo Espíndola
published: 2023-07-09T17:24:12Z
link: http://arxiv.org/abs/2307.04233v5

# The centaur-algebra of observables
###### Abstract
This letter explores a transition in the type of von Neumann algebra for open universes from the implementations of the different gravitational constraints. We denote it as the _centaur-algebra_ of observables. In the first part of the letter, we employ a class of flow geometries interpolating between AdS\({}_{2}\) and dS\({}_{2}\) spaces, the centaur geometries. We study the type II\({}_{\infty}\) crossed product algebra describing the semiclassical gravity theory, and we explore the algebra of bounded sub-regions in the bulk theory following \(T\overline{T}\) deformations of the geometry and study the constraints with respect to the quasi-local Brown-York energy of the system at a finite cutoff. In the second part, we study arbitrary asymptotically AdS spacetimes, where we implement the boundary protocol of an infalling observer modeled as a probe black hole proposed by [1] to study modifications in the algebra. In both situations, we show how incorporating the constraints requires a type II\({}_{1}\) description.
## I Introduction
Recently, there has been interest in the formal description of perturbative quantum gravity in terms of the algebra of diffeomorphism invariant observables, which has allowed us to rigorously define density matrices and the associated notion of generalized entropies [2; 3; 4; 5]. Pioneering work developing bulk emergence in the language of von Neumann algebras can be found in [2; 6]; see [7; 8; 9; 10; 11] for reviews.
This procedure begins with a type III\({}_{1}\) algebra describing the quantum fluctuations on a curved spacetime background. One incorporates dynamical gravity perturbatively by requiring that time translations act as gauge redundancies, which we denote throughout the letter as gravitational constraints. In the several examples considered, once gravitational corrections are included (either perturbatively or as an addition of a gravitational mode [4]), it has been shown that the algebra of observables becomes type II\({}_{\infty}\) when the gravitational dressing of operators is performed with respect to the asymptotic boundary region of an open universe; while if the dressing is with respect to a worldline observer in a closed universe, the algebra is type II\({}_{1}\) [2; 3; 4; 5]. More recently, the importance of this construction has been recognized even in the absence of gravitational constraints [12].
Let us denote \(\mathcal{A}\) as the algebra of bulk fluctuations associated with a spacetime region, acting on a Hilbert space \(\mathcal{H}\), and let \(T\) be the generator of the automorphism group (for simplicity we take it to be \(\mathbb{R}\)), with the respective group elements \(\left\{U=\mathrm{e}^{\mathrm{i}sT},\ \forall s\in\mathbb{R}\right\}\), such that
\[U\,a\,U^{-1}\in\mathcal{A}\,,\quad\forall a\in\mathcal{A}. \tag{1}\]
Let \(X\) be the generator of the unitary representation of the automorphism group acting on \(L^{2}(\mathbb{R})\). Then, one denotes the crossed product \(\rtimes\) algebra of \(\mathcal{A}\) and \(\mathbb{R}\) as
\[\hat{\mathcal{A}}=\mathcal{A}\rtimes\mathbb{R}\, \tag{2}\]
which is produced by adjoining bounded functions of \(T+X\) to \(\mathcal{A}\), i.e.
\[a\mathrm{e}^{\mathrm{i}sT}\otimes\mathrm{e}^{\mathrm{i}sX}\in\hat{\mathcal{A}},\quad\forall a\in\mathcal{A}\, \tag{3}\]
acting on a Hilbert space \(\hat{\mathcal{H}}\equiv\mathcal{H}\otimes L^{2}(\mathbb{R})\). When the automorphism is outer, i.e. \(U\notin\mathcal{A}\), and \(\mathcal{A}\) is a type III\({}_{1}\) algebra, the crossed product algebra results in a type II algebra [13]. Trace-class elements in type II algebras are defined as those with a well-defined trace. We can associate a density matrix \(\rho_{\Phi}\) to each state \(\ket{\Phi}\in\hat{\mathcal{H}}\),
\[\mathrm{Tr}(\rho_{\Phi}\hat{a})=\bra{\Phi}\hat{a}\ket{\Phi}\,\quad\forall\hat{a}\in\hat{ \mathcal{A}}. \tag{4}\]
Thus, the von Neumann entropy of these states can be defined as,
\[S=-\,\mathrm{Tr}(\rho_{\Phi}\mathrm{log}\rho_{\Phi}). \tag{5}\]
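As a finite-dimensional illustration of definition (5) — a toy only, since the type II trace above is not a matrix trace, and the density matrices here are arbitrary choices — the von Neumann entropy can be evaluated on the spectrum of \(\rho\):

```python
import numpy as np

def von_neumann_entropy(rho):
    """S = -Tr(rho log rho), evaluated on the eigenvalues of rho."""
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]                 # 0 * log(0) -> 0 by convention
    return float(-np.sum(p * np.log(p)))

# Maximally mixed qubit: S = log 2; pure state: S = 0.
assert abs(von_neumann_entropy(np.eye(2) / 2) - np.log(2)) < 1e-12
assert abs(von_neumann_entropy(np.diag([1.0, 0.0]))) < 1e-12
```

In the type II setting it is entropy differences (cf. footnote 1), rather than this absolute normalization, that carry physical meaning.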
For semiclassical states in \(\hat{\mathcal{H}}\), this entropy was shown to match the generalized entropy\({}^{1}\) [4]. In the context of the eternal AdS black hole, the generator of the automorphism group, \(T\), is proportional to the time translation generator on both of the asymptotic boundaries.
Footnote 1: More precisely, what is matched are entropy differences, since the von Neumann entropy is defined up to a state-independent additive constant.
One needs different regularization procedures for \(T\) depending on whether the system is described by a canonical [2] or micro-canonical ensemble [4]. More precisely, in the canonical ensemble one should divide the generator by \(N\), in contrast with the micro-canonical ensemble, and carry out the construction as a perturbative series in
\(\sqrt{G_{N}}\sim 1/N\) (where \(N\) is the rank of the gauge group of the boundary CFT). The reason is that the states in the canonical ensemble have \(O(N)\) variance in the energy, which diverges in the large \(N\) limit. The operator \(T+X\) is taken to be the Hamiltonian of the CFT, and thus the crossed product algebra \(\hat{\mathcal{A}}\) actually describes the physical theory. These methods have also been developed for subregion algebras [12; 14; 15; 16].\({}^{2}\) See [18; 19; 20; 21] for related developments in this area.
Footnote 2: Meanwhile, there are some expectations that non-perturbative corrections in quantum gravity might modify the algebra to type I once string theory corrections and black hole microstates are included in the algebra [2; 17].
Physical observables in perturbative quantum gravity are required to be diffeomorphism invariant. For open universes, this is naturally implemented by dressing the operators with respect to the boundary [22; 23; 24]. The reader is referred to [25; 26] for an alternative dressing of the operators with respect to the features of the state itself. Since in a gravitational theory the Hamiltonian is a boundary quantity, this dressing implies that the operators will not commute with the ADM Hamiltonian in general. On the other hand, for closed universes like dS space and subregions in an open universe, it was proposed in [3; 5] that one should perform the dressing with respect to the world-line of an observer. Thus, the dressed observables will translate under the action of the world-line energy of the observer. Both of these facts are encoded in the non-trivial action of \(T+X\) on the elements of \(\mathcal{A}\).\({}^{3}\) Most of the previous works assume that the observer can be minimally modeled as a clock [3; 5; 27; 28; 29; 30; 31].
Footnote 3: Both in [3] and [5] the dressed observables are not given in terms of the elements in (3), but rather in an equivalent description where the elements are \(\mathrm{e}^{\mathrm{i}TP}a\,\mathrm{e}^{-\mathrm{i}TP}\) and \(\mathrm{e}^{\mathrm{i}sX}\), where \(P\) is the conjugate variable to \(X\), which is taken to be the energy of the observer. This description can be related to (3) by a conjugation with \(\mathrm{e}^{-\mathrm{i}PT}\).
In this work, we explore _modifications in the algebra of observables for the semiclassical spacetime depending on how the gravitational constraints are implemented. We do so first without having to add an observer by hand, but rather by considering the \(T\overline{T}\) deformation of the theory to study subregions in the bulk; and then with respect to the experience of an observer infalling from an asymptotic boundary_. In the former, we adopt a well-known setting for holography in dS space (see [32] for a review), referred to as interpolating geometries [33; 34]. In the latter case, we study the modifications in the algebra for general asymptotically AdS spacetimes.
The interpolating geometries are dilaton-gravity models that adopt near-AdS\({}_{2}\) space boundary conditions [35], while the interior is a near dS\({}_{2}\) space. They avoid a no-go theorem [36] forbidding any dS\({}_{D}\) region to reside in a causally accessible part of AdS\({}_{D}\) for \(D>2\). We expect that a better understanding of the algebra of observables in these kinds of backgrounds will lead to new insights on their holographic dual theory [37], and that of dS\({}_{2}\) JT gravity [38; 39; 40; 41; 42; 43; 44; 45].
JT gravity has been a productive test ground for the study of von Neumann algebras in gravity beyond the semiclassical regime [46; 47], revealing the importance of different topologies in the description of the algebra of observables. However, our use of the centaur model is not aimed at extending the discussion about the role of topologies, but rather at deriving the gravitational constraints imposed on the algebra _from first principles_, which is the main novelty of our work.
After reviewing the semiclassical centaur geometry model and its crossed product algebra enlargement, we perform a \(T\overline{T}\) deformation of the theory, where the gravitational constraints are imposed by the quasi-local Brown-York energy, resulting in modifications of the algebra. Later, we study the experience of an observer infalling from the asymptotic boundary to the interior universe in the undeformed theory, using the boundary-theoretic protocol of [1] in asymptotically AdS space of arbitrary dimensions. In particular, the latter argument is also valid for the centaur geometries. In the previous two cases, we focus on how the description of the observer changes the algebra from type II\({}_{\infty}\) to type II\({}_{1}\) and the conditions required for such a modification. We conclude with a brief summary of our main results and some future directions.
## II Setting
The first part of the letter is focused on the 2-dimensional flow models [33; 34; 37; 48; 49] which interpolate between an AdS\({}_{2}\) space and some internal space. They can be expressed in a unified way by the action
\[\begin{split} I=& I_{0}+\tfrac{1}{16\pi G_{N}}\int_{ \mathcal{M}}\mathrm{d}^{2}x\sqrt{|g|}(\Phi R-V(\Phi))\\ &+\tfrac{1}{8\pi G_{N}}\int_{\partial\mathcal{M}}\mathrm{d}x \sqrt{|h|}\,K\Phi_{b}+I_{m}[g,\,\chi]\,\end{split} \tag{6}\]
where \(I_{0}\) represents a topological term, \(\Phi\) is the dilaton field, \(\Phi_{b}\) is the asymptotic boundary value of the dilaton; and \(\chi\) represents the matter content of the theory, which is taken to be a generic quantum field theory (QFT). The resulting equations of motion are given by:
\[\nabla_{\mu}\nabla_{\nu}\Phi-g_{\mu\nu}\nabla^{2}\Phi-\frac{1}{2} g_{\mu\nu}V(\Phi) =-8\pi G_{N}\left\langle t_{\mu\nu}\right\rangle, \tag{7}\] \[R =V^{\prime}(\Phi)\, \tag{8}\]
where \(\left\langle t_{\mu\nu}\right\rangle\) is the expectation value of the stress tensor for the matter fields, and the primes, \({}^{\prime}\), indicate differentiation with respect to the argument of the function. In the absence of such fields, \(\epsilon^{\mu\nu}\partial_{\nu}\Phi\) is a Killing vector. Moreover, one can absorb the topological term of the action (6) in the definition of the dilaton \(\Phi\) and expand the solution about \(\Phi=\phi_{0}+\phi\). In the following, we work in the semiclassical limit \(\phi_{0}\gg\phi\), since the dilaton represents the area of the transverse \(\mathrm{S}^{2}\) of the higher-dimensional near-Nariai black hole geometry.
To describe the geometry, we also employ a particular dilaton potential, \(V(\Phi)=2\Phi\tanh(\frac{\Phi}{\epsilon})\), where the limit \(\epsilon\to 0\) represents a "sharp" transition between AdS space and the interior geometry, which can be AdS\({}_{2}\) or dS\({}_{2}\) space depending on the sign of the renormalized dilaton. For concreteness, we focus on the case where \(\Phi_{b}>0\), to obtain a transition between spacetimes of opposite-sign curvature. In that case, the potential becomes,
\[V_{\mathrm{cent}}(\Phi)=2\eta\Phi+\tilde{\phi}\ ; \tag{9}\]
where
\[\eta=\begin{cases}+1&\mathrm{AdS}_{2}\,\\ -1&\mathrm{dS}_{2}\.\end{cases} \tag{10}\]
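As a quick check of the labeling in (10): inserting the sharp potential (9) into the dilaton equation of motion (8) gives, away from the interface,
\[R=V_{\mathrm{cent}}^{\prime}(\Phi)=2\eta\,\]
so each branch is a patch of constant curvature, with the sign flip occurring at the locus where the renormalized dilaton changes sign (in the curvature conventions of (6)-(8)).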
This construction is a double-sided geometry, i.e. two boundary particles are required to describe the bulk geometry. It becomes convenient to introduce the conformal metric \(\mathrm{d}s^{2}=\mathrm{e}^{2\omega(\rho,\,\tau)}(\mathrm{d}\tau^{2}+\mathrm{d }\rho^{2})\), with \(\omega(\rho,\,\tau)\) the conformal factor. A curve of the form \(\mathcal{C}=\{\tau(u),\,\rho(u)\}\) can parametrize the embedding of one of the boundary particles (say R). We impose Dirichlet boundary conditions in \(\mathcal{C}\), by scaling \(\Phi_{b}(u)\) and \(h(u)\) with \(\Lambda\gg 1\) as \(\left[\sqrt{h},\,\Phi_{b}\right]\rightarrow\Lambda[1,\,\Phi_{r}]\). The resulting on-shell action of (6), \(I_{\mathrm{on}}\), is given by [34]:
\[I_{\mathrm{on}}=\tfrac{1}{8\pi G_{N}}\int\mathrm{d}u\,\Phi_{r}(u)\big{(} \Lambda^{2}+\tfrac{1}{2}(\tau^{\prime}(u))^{2}-\{\tau(u),\,u\}\big{)}\, \tag{11}\]
where
\[\{\tau(u),\,u\}=\tfrac{\tau^{\prime\prime\prime}(u)}{\tau^{\prime}(u)}-\tfrac {3}{2}\Big{(}\tfrac{\tau^{\prime\prime}(u)}{\tau^{\prime}(u)}\Big{)}^{2} \tag{12}\]
is the Schwarzian derivative, and the term \(\Lambda^{2}\) can be eliminated with standard holographic renormalization [50; 51]. Notice that the \((\tau^{\prime}(u))^{2}\) term breaks the symmetry that we would have found in JT gravity; the symmetry is now broken as:
\[\mathbb{SL}(2,\,\mathbb{R})\to U(1)\, \tag{13}\]
such that under boundary time periodicity
\[u\sim u+2\pi/\ell. \tag{14}\]
the corresponding time on \(\mathcal{C}\) is periodic
\[\tau(u+2\pi/\ell)=\tau(u)+2\pi. \tag{15}\]
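The symmetry-breaking pattern (13) can be made concrete: the Schwarzian term in (11) is invariant under Möbius (i.e. \(\mathbb{SL}(2,\,\mathbb{R})\)) maps of \(\tau\), while the additional \((\tau^{\prime}(u))^{2}\) term is invariant only under shifts \(\tau\rightarrow\tau+s\), leaving the \(U(1)\) of translations of \(\tau\). A quick symbolic check (the test function \(\tau(u)=e^{u}\) and the Möbius coefficients are arbitrary choices):

```python
import sympy as sp

u, s = sp.symbols('u s')

def schwarzian(f, u):
    """Schwarzian derivative {f, u} = f'''/f' - (3/2)(f''/f')^2, cf. (12)."""
    f1, f2, f3 = sp.diff(f, u), sp.diff(f, u, 2), sp.diff(f, u, 3)
    return f3 / f1 - sp.Rational(3, 2) * (f2 / f1) ** 2

tau = sp.exp(u)                            # arbitrary test reparametrization
mobius = (2 * tau + 1) / (3 * tau + 5)     # arbitrary Mobius map, ad - bc = 7

# The Schwarzian term is invariant under the Mobius map ...
delta_schw = schwarzian(mobius, u) - schwarzian(tau, u)
assert abs(float(delta_schw.subs(u, sp.Rational(1, 3)))) < 1e-12

# ... while the (tau')^2 term is invariant only under shifts tau -> tau + s:
assert sp.simplify(sp.diff(tau + s, u) ** 2 - sp.diff(tau, u) ** 2) == 0
assert abs(float((sp.diff(mobius, u) ** 2 - sp.diff(tau, u) ** 2).subs(u, sp.Rational(1, 3)))) > 0.1
```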
The one-sided Hamiltonian, \(H_{\mathrm{cent}}\), for each boundary particle corresponding to the boundary action (11) can be deduced as [52; 53]
\[H_{\mathrm{cent}}=\tfrac{\phi_{r}(u)}{8\pi G_{N}}\bigg{(}\tfrac{\tau^{\prime} (u)^{2}}{2}-\tfrac{\tau^{\prime\prime\prime}(u)}{\tau^{\prime}(u)}+\tfrac{3}{ 2}\bigg{(}\tfrac{\tau^{\prime\prime}(u)}{\tau^{\prime}(u)}\bigg{)}^{2}\bigg{)}. \tag{16}\]
The algebra of the \(L\) or \(R\) boundary theory without matter consists of bounded functions of \(H_{\mathrm{cent,\,L}}\) or \(H_{\mathrm{cent,\,R}}\), respectively. It also suffers from a factorisation puzzle similar to that of JT gravity [46; 47; 54], namely that the algebras \(\mathcal{A}_{L}\) and \(\mathcal{A}_{R}\) commute with each other, since they share the same generator, \(H_{\mathrm{JT,\,L}}=H_{\mathrm{JT,\,R}}\). In the case of the centaur geometry, the \(\mathrm{U}(1)\) charge corresponds to the modular Hamiltonian \(H_{\mathrm{mod}}=H_{\mathrm{cent,\,L}}-H_{\mathrm{cent,\,R}}\), and the Hamiltonian constraint on physical states \(|\psi\rangle\) which are invariant under this symmetry reads \(H_{\mathrm{mod}}\,|\psi\rangle=0\), which expresses that \(H_{\mathrm{cent,\,L}}=H_{\mathrm{cent,\,R}}\) for physical states. This issue is no longer present once matter is introduced, as follows. We define local matter operators, \(\chi\), with the appropriate canonical quantization relations respecting \(U(1)\) gauge invariance. The smearing over \(u\) allows us to define bounded operators \(\mathcal{B}(\chi)\). We can then express the time translation generators along the \(L/R\) boundary particles as
\[H_{\mathrm{L/R}}=H_{\mathrm{cent,\,L/R}}+H_{\mathrm{matter,\,L/R}}\, \tag{17}\]
where the generator of \(\mathrm{U}(1)\) transformations corresponds to the modular Hamiltonian,
\[H=H_{R}-H_{L}. \tag{18}\]
Once we add matter to the theory, we can employ a generalized free-field approximation for constructing the total Hilbert space \(\mathcal{H}_{\mathrm{tot}}\),
\[\mathcal{H}_{\mathrm{tot}}=\mathcal{H}_{\mathrm{matt}}\otimes\mathcal{H}_{ \mathrm{grav}}\, \tag{19}\]
where the operators quantizing the metric and the dilaton \(\big{\{}h_{\mu\nu}^{\mathrm{grav}},\,\phi\big{\}}\) can be used to construct the states in \(\mathcal{H}_{\mathrm{grav}}\); meanwhile, \(\mathcal{H}_{\mathrm{matt}}\) can be constructed from strings of Fourier modes \(\big{\{}a,\,a^{\dagger}\big{\}}\), i.e. any matter field \(\chi\) can adopt a decomposition
\[\chi(\tau(u))=\int\mathrm{d}\omega\,(f_{\omega}(\tau(u))a_{\omega}+f_{\omega}^{ \ast}(\tau(u))a_{\omega}^{\dagger}). \tag{20}\]
## III Algebra for the Centaur Geometry
The full boundary algebra for a given side, such as \(R\), is generated by \(H_{R}\) and \(\chi\). This determines a type \(\mathrm{III}_{1}\) algebra of operators, constructed from finite strings of the modes \(\big{\{}a,\,a^{\dagger}\big{\}}\) and bounded functions of \(H_{R}\). Let us denote by \(\hat{\mathcal{H}}\) the gauge-constrained Hilbert space, spanned by all \(U(1)\)-invariant states constructed from the operators above, together with its Hilbert space completion. Let \(\mathcal{A}_{R}\) be the von Neumann algebra consisting of the set of operators \(\Big{\{}\hat{\mathcal{O}}_{R}\Big{\}}\) in \(R\) which time evolve non-trivially along the asymptotic boundary under the modular flow (18), according to (1) with \(U=\mathrm{e}^{iH\tau}\), describing the \(U(1)\) isometry group of the centaur geometry. We employ the Tomita-Takesaki construction of modular automorphisms [55] for type III von Neumann algebras. We start from a thermofield-double state, \(\left|\psi_{\text{TFD}}\right>\), which is a cyclic and separating vacuum state obeying the constraint equation
\[H\left|\psi_{\text{TFD}}\right>=0. \tag{21}\]
Then, we can generate the crossed product algebra following (3) with \(T=H\), i.e. the modular time translation generator of the crossed product algebra; and \(X=H_{L}\). However, given the \(U(1)\) invariance, the automorphism group is an interval, rather than \(\mathbb{R}\) as in (2). Consider now
\[\hat{a}_{R}\in\hat{\mathcal{A}}_{\text{R}}\ni H_{\text{R}}\,. \tag{22}\]
Since \(H\left|\psi_{\text{TFD}}\right>=0\), cf. (21), one can evaluate the trace of a generic element \(\hat{a}\in\hat{\mathcal{A}}_{\text{R}}\) [2],
\[\text{Tr}\,\hat{a}=\beta_{\text{TFD}}\int_{X_{\text{min}}}^{X_{\text{max}}} \mathrm{d}X\,\mathrm{e}^{\beta_{\text{TFD}}X}\left<\psi_{\text{TFD}}\right|a(X )\left|\psi_{\text{TFD}}\right>\,. \tag{23}\]
We have introduced the integration limits \(X_{\text{min}}\) and \(X_{\text{max}}\) to indicate constraints on the one-sided modular Hamiltonian. In the present case, although there is a U(1) symmetry, which would bound the allowed range of (16), we are considering the Hamiltonian \(H_{L}\) (17) in the presence of matter. Given that matter excitations are arbitrary, the allowed range becomes \(X\in(-\infty,\,\infty)\). Physically, the presence of the asymptotic boundary does not allow the modification to a type II\({}_{1}\) algebra, where maximally entangled states can be defined.
The definition (23) obeys the properties
\[\begin{split}\text{Tr}\,\hat{a}\hat{b}=\text{Tr}\,\hat{b}\hat{a} &\hat{a},\ \hat{b}\in\hat{\mathcal{A}}\,\\ \text{Tr}\,\hat{a}^{\dagger}\hat{a}>0&\forall\,\hat{a }\neq 0\.\end{split} \tag{24}\]
As mentioned in the introduction, the crossed product will result in a type II algebra. Moreover, since the trace of the identity operator diverges, \(\text{Tr}\,\mathbb{1}\rightarrow\infty\), the algebra is type II\({}_{\infty}\). The trace is nonetheless finite for a dense set of operators in the algebra.
## IV \(T\overline{T}\) deformed theory
Our goal in this section is to address the algebra of observables of a bounded subregion [12; 14; 5; 16] in the centaur geometry by implementing a \(T\overline{T}\) deformation [56; 57; 58; 59; 60; 53].
We follow the conventions of [53] to express \(T\overline{T}\) deformations parametrized by \(\lambda\in\mathbb{R}\) as
\[\frac{\mathrm{d}I}{\mathrm{d}\lambda}=\pi\int\mathrm{d}^{2}x\,\sqrt{-g}\big{(} T^{ij}T_{ij}-(T_{i}^{i})^{2}\big{)}\, \tag{25}\]
where \(T_{ij}\) is the Brown-York quasilocal stress tensor [61; 62] along a boundary surface \(r=\frac{1}{\sqrt{\alpha\lambda}}\), with \(\alpha\equiv 1/(2\pi G_{N})\), in static patch coordinates
\[\mathrm{d}s^{2}=-N(r)\mathrm{d}\tau^{2}+\frac{\mathrm{d}r^{2}}{N(r)}\,\quad\Phi=\Phi(r)\, \tag{26}\]
in the absence of matter; while equation (25) and the relation between the cutoff and \(\lambda\) is modified in presence of matter [56; 57]. Alternatively, the deformation can be interpreted as the result of introducing mixed boundary conditions for the undeformed theory [63]. In the former interpretation, the time translation generator along the left or right-sided cutoff surface is given by the quasi-local Brown-York Hamiltonian, \(H_{T\overline{T}}\). For a general dilaton-gravity theory with matter of the form (6) under the \(T\overline{T}\) deformation (25), the quasi-local Brown-York Hamiltonian obeys the relation [53]
\[\frac{\partial H_{T\overline{T}}}{\partial\lambda}=\frac{H_{T\overline{T}}^{2}-\frac{1}{16\lambda^{2}}\big{(}1+\frac{\sqrt{\alpha}}{2\Phi_{r}}V\big{(}\frac{\Phi_{r}}{\sqrt{\alpha\lambda}}\big{)}\big{)}-\frac{t_{r}^{r}}{4\alpha^{1/2}\lambda^{1/2}}}{1/2-2\lambda H_{T\overline{T}}}\, \tag{27}\]
where \(t_{r}^{r}\) is the radial-radial component of the bulk matter stress tensor at the cutoff surface, and \(H_{T\overline{T}}(\lambda=0)=H_{\text{cent}}\).
The precise deformation of the energy spectrum will depend on the matter stress tensor, the dilaton potential, and the location of the cutoff. However, the dependence on \(\tau(u)\) is only encoded in \(H_{\text{cent}}\), which is a bounded function \(\mathcal{B}(\mathbb{R})\). As long as the cutoff, parametrized by \(\lambda\), is _finite_ in (27), as well as the radial-radial component of the matter stress tensor \(t_{r}^{r}\), we then have that \(X_{\text{min}}\) and \(X_{\text{max}}\) will be bounded in (23). We then conclude that the trace of any element in the crossed product algebra with \(X=H_{T\overline{T},\,L}\) and \(T=H_{T\overline{T},\,R}-H_{T\overline{T},\,L}\) will also be a bounded function. This means that the \(T\overline{T}\) deformed theory is described by a type II\({}_{1}\) algebra, where the observables are dressed with respect to the cut-off surface; whereas the undeformed theory has a type II\({}_{\infty}\) algebra structure. For example, consider the centaur theory with the potential (9) and \(t_{r}^{r}=\)const in (27),
\[H_{\text{cent}}^{\lambda}\approx\tfrac{1}{4\lambda}\Bigg{(}1-\sqrt{\eta-8\sqrt {\tfrac{\lambda}{\alpha}}t_{r}^{r}-8\lambda H_{\text{cent}}}\Bigg{)}\, \tag{28}\]
where \(H_{\text{cent}}\) is the one-sided Hamiltonian of the undeformed theory (16). Given the \(U(1)\) restriction on the modular Hamiltonian, it is clear that \(X\) will have to be bounded in (23). Moreover, notice that (28) would produce the spectrum of a \(T\overline{T}+\Lambda_{2}\) deformation [64; 65; 66] directly in the interpolating geometry.
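As a consistency check of (28), one can expand it at small \(\lambda\); here we set \(\eta=+1\) and \(t_{r}^{r}=0\) purely to keep the expression compact. The deformed energy reduces to the undeformed \(H_{\text{cent}}\) as \(\lambda\to 0\), with leading correction \(2\lambda H_{\text{cent}}^{2}\), as expected of a \(T\overline{T}\)-type flow:

```python
import sympy as sp

lam, H = sp.symbols('lambda H', positive=True)

# Eq. (28) with eta = +1 and t^r_r = 0 (pure AdS_2 branch, no matter)
H_def = (1 - sp.sqrt(1 - 8 * lam * H)) / (4 * lam)

# Undeformed limit and the leading small-lambda corrections
assert sp.limit(H_def, lam, 0) == H
expansion = sp.series(H_def, lam, 0, 3).removeO()
assert sp.simplify(expansion - (H + 2 * H**2 * lam + 8 * H**3 * lam**2)) == 0
```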
Notice that our result is consistent with the prediction of [5] for closed universes. In this derivation, we worked under the assumption that the stress tensor is a bounded function; it would be interesting to study whether this restriction can be relaxed.
## V The experience of an infalling observer
We employ the protocol of [67; 68; 1] that describes the experience of an infalling observer, modeled as a probe black hole, from the boundary of a generic asymptotically AdS\({}_{d+1}\) spacetime, including the centaur geometry.
We prepare a microcanonical TFD configuration dual to a black hole geometry and a copy of it, which we refer to as the reference system, with energy eigenstates \(\left|E_{n}\right\rangle_{\text{sys}}\) and \(\left|\overline{E}_{n}\right\rangle_{\text{ref}}\) respectively. We employ a conformal transformation \(\mathrm{e}^{\mathrm{i}P\rho}\) to shift the black hole into the asymptotic boundary, with \(P\) the momentum operator, and \(\rho\gg 1\) a parameter controlling the shift. Let \(\left|\psi\right\rangle\) denote the CFT state dual to a semiclassical asymptotically AdS space with a probe black hole. Defining the state [1]
\[\left|\psi\right\rangle=Z^{-1/2}\sum_{n} f(E_{n}|E_{0},\,\sigma)\,\times \tag{29}\] \[\left[V_{\text{sys}}\mathrm{e}^{-\delta\ell^{2}P^{2}-\mathrm{i}P \rho}\left|E_{n}\right\rangle_{\text{sys}}\right]\left|\overline{E}_{n}\right \rangle_{\text{ref}}\,,\]
where \(V_{\text{sys}}\) is some arbitrary operation in the interior geometry; \(f(E_{n}|E_{0},\,\sigma)\) is an appropriate enveloping function for \(E_{n}\) to be summed over a microcanonical window of width \(\sigma\) around \(E_{0}\); \(\delta\ell\) is the wavepacket localization; and \(Z\) the microcanonical partition function.
The set of normalizable states \(\left|\psi\right\rangle=\left\{\left|\psi_{\text{eq}}\right\rangle\right\}\) are called local equilibrium states, which by definition obey the KMS condition for the two-point functions of the set of operators available to the atmosphere around the observer, denoted by \(O=\left\{\phi_{\text{atm}}\right\}\):
\[\left\langle\psi_{\text{eq}}\right|O_{1}^{\dagger}\exp\!\left[-2\pi K_{\rho_{\text{eq}}}\right]\!O_{2}\left|\psi_{\text{eq}}\right\rangle=\left\langle\psi_{\text{eq}}\right|O_{2}O_{1}^{\dagger}\left|\psi_{\text{eq}}\right\rangle\, \tag{30}\]
where \(K_{\rho_{\text{eq}}}\) is the modular Hamiltonian,
\[K_{\rho_{\text{eq}}}=-\tfrac{1}{2\pi}\log\left[\rho_{\text{eq}}\right]\,, \tag{31}\]
with \(\rho_{\text{eq}}=\left|\psi_{\text{eq}}\right\rangle\left\langle\psi_{\text{eq}}\right|\). Then, the generator of Schwarzschild time translations in the proper time of these states is obtained by tracing out the reference system, \(K_{\rho_{\text{eq}}}^{\text{sys}}=\mathrm{Tr}_{\text{ref}}\,K_{\rho_{\text{eq}}}\).
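A finite-dimensional toy analogue of the KMS condition (30) can be checked explicitly (this is only an illustration — the algebras above are not matrix algebras, and the state below is an arbitrary choice). For \(\left|\psi\right\rangle=\sum_{i}\sqrt{p_{i}}\left|i\right\rangle_{\text{sys}}\left|i\right\rangle_{\text{ref}}\), standard finite-dimensional Tomita-Takesaki theory gives the modular operator \(\mathrm{e}^{-2\pi K}=\rho\otimes\rho^{-1}\) with \(\rho=\mathrm{diag}(p)\), and (30) holds for arbitrary system operators:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3

# Entangled "equilibrium" state |psi> = sum_i sqrt(p_i) |i>_sys |i>_ref
p = rng.random(n); p /= p.sum()
psi = np.zeros(n * n)
for i in range(n):
    e = np.zeros(n); e[i] = 1.0
    psi += np.sqrt(p[i]) * np.kron(e, e)

rho = np.diag(p)                             # reduced state on either factor
Delta = np.kron(rho, np.linalg.inv(rho))     # modular operator exp(-2 pi K)

def op_sys(m):
    """Lift a system operator m to the full sys (x) ref Hilbert space."""
    return np.kron(m, np.eye(n))

O1 = rng.random((n, n)) + 1j * rng.random((n, n))
O2 = rng.random((n, n)) + 1j * rng.random((n, n))

# KMS condition (30): <psi| O1^dag Delta O2 |psi> = <psi| O2 O1^dag |psi>
lhs = psi.conj() @ op_sys(O1).conj().T @ Delta @ op_sys(O2) @ psi
rhs = psi.conj() @ op_sys(O2) @ op_sys(O1).conj().T @ psi
assert np.allclose(lhs, rhs)
```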
On the other hand, since the reference system is entangled by construction with the infalling observer, it is natural to employ \(K_{\rho_{\text{eq}}}^{\text{ref}}=\mathrm{Tr}_{\text{sys}}\,K_{\rho_{\text{eq}}}\) as the generator of time automorphisms for the infalling observer. With this identification, we employ \(T=K_{\rho_{\text{eq}}}\) as the generator of the time automorphism of \(\hat{\mathcal{A}}\), and \(X=K_{\rho_{\text{eq}}}^{\text{ref}}\). This allows us to define the traces of the crossed product algebra in (23).
The nature of the algebra will be determined by the states in \(\hat{\mathcal{H}}\). In general, the presence of matter in the background geometry can introduce non-equilibrium states. Such states are in principle non-normalizable, leading to ill-defined traces for some elements of \(\mathcal{A}\). Thus, the experience of an infalling observer might still generically be described by type II\({}_{\infty}\) algebras, although symmetries of the system might result in a type II\({}_{1}\) description.
We focus on the case where the infalling probe black hole does not encounter bulk matter fields along its worldline. Its experienced Hilbert space is then always described by equilibrium states. In such a case, we must also account for the constraint that the reference-system energy is bounded from below, which follows from the construction of the generator (31). Given that the \(\left|\psi_{\text{eq}}\right\rangle\) obey the KMS relation (30) for all elements in the algebra \(\mathcal{A}\), they are normalizable states. It is then clear that the range of integration \(\left[X_{\text{min}},\,X_{\text{max}}\right]\) in (23) is a bounded interval, and as such the trace is finite \(\forall\hat{a}\in\hat{\mathcal{A}}\); the von Neumann algebra is thus type II\({}_{1}\). Notice that no particular use of two-dimensional gravity was made in these arguments, so the construction works for general asymptotically AdS\({}_{d+1}\) spacetimes without matter, and in particular for the centaur geometry. Moreover, the transition occurs as soon as we exchange the ADM Hamiltonian for the reference-system generator \(K_{\rho_{\text{eq}}}^{\text{ref}}\).
## VI Conclusions and outlook
In this work, we have uncovered the transition in the type II algebra of observables by considering (i) a \(T\overline{T}\) deformation of the centaur geometries, and (ii) the experience of an infalling observer from the boundary to the interior geometry of asymptotically AdS spacetimes. In both cases, the transition to a type II\({}_{1}\) algebra allows us to construct a maximally mixed state and a notion of generalized entropies based on the Tomita-Takesaki theory. The _main novelty_ of our work has been to deduce the gravitational constraints that need to be implemented in (i) from first principles, which has been the reason to pick a particular class of models; while in (ii) we studied a natural choice of constraints that can be employed in a wider family of spacetimes. We expect that the general lessons can be carried out in more generic systems. In (i), we can generalize the lesson broadly to dilaton-gravity theories of the form (6) where the spectrum of quasi-local energies at the cutoff surface in (27) remains bounded, which in particular we showed for a \(U(1)\) symmetric boundary theory. Perhaps, the simplest explicit generalizations would involve the AdS\({}_{2}\) interpolating geometries with a different cosmological constant; the \(\gamma\)-centaur [34], and the double interpolating geometry [49]; where the change in the algebra is also suggested by [5]. Meanwhile, in (ii), we have shown that if the infalling observer from the asymptotic boundary does not cross bulk matter fields, the transition of the algebra type II\({}_{\infty}\) to type II\({}_{1}\) does not depend on the interior geometry, as long as the protocol [1] can be employed.
Let us proceed by pointing out some future directions. First, as we have indicated, after the crossed-product enlargement of the algebra (2), the definition of traces in (24) allows us to define reduced density matrices and rigorous notions of generalized entropies. Interesting progress towards formulating the Page curve in the language of von Neumann algebras was initiated in [31; 69]. Perhaps such notions can put the island formula on solid ground for de Sitter space, as pioneered by [70]. However, it has been argued that the appearance of islands close to the cosmological horizon violates entanglement wedge nesting [71] unless large backreaction is induced [72; 73]. We hope the algebraic techniques can bring a better understanding of these features.
Second, we can think of the infalling observer as a little diary falling into a black hole. Certain protocols can be used to recover the information after the scrambling time [74; 75]. In the context of the Page curve, the information encoded in the island can be recovered by applying explicit teleportation protocols [76; 77]; see upcoming work in this area by [78; 79]. It would be interesting to understand how information recovery works in the algebraic language. These ideas could shed light on the microscopic origin of the island formula.
Third, although the centaur geometries provide a natural background to study de Sitter space holography and a rich algebraic structure, these theories are known to be thermodynamically unstable [33], which motivated the construction of the double interpolating geometries in [49]. It would be interesting to study the thermodynamic properties of the \(T\overline{T}\)-deformed centaur geometry, as they have not received much attention since the original work of [53]. However, the energy spectrum is generically complex under these deformations. Although one can restrict the energy eigenstates to describe a unitary theory, a new perspective arises with Cauchy slice holography [80; 81], where the notion of a complex stress tensor plays a crucial role. It would be interesting to study this explicitly for the centaur geometry, and to ask whether the restriction on the finiteness of the radial-radial component of the matter stress tensor \(t_{r}^{r}\) can be lifted while still recovering the transition in the algebra.
Fourth, as we have emphasized, our result for the infalling observer does not depend on the specific interior geometry. However, the notion of equilibrium states used to define the reduced density matrices with respect to the observer in the boundary-theory protocol of [1] relies on the same original assumptions, in particular that the equilibrium states minimize a notion of circuit complexity in the boundary theory, which has not yet been developed explicitly. We hope that the algebraic techniques uncovered in this work can catalyze progress on rigorously defining complexity proposals from the boundary perspective and their bulk realization, initiated in [82]. In that case, the centaur geometry could be a productive test ground for the different proposals for holographic complexity [48] in stretched horizon holography [83; 84; 85; 86; 87; 88], and possibly for incorporating quantum corrections in such proposals, as recently studied in [89].
Finally, it would be interesting to incorporate non-equilibrium states in the protocol of the infalling observer to study modifications to the algebra of observables for the probe black hole. The probe will absorb particles along its worldline, so the crossed-product algebra could remain type II\({}_{\infty}\) instead of II\({}_{1}\), as the trace might then include non-normalizable states. Regardless, the evolution of the atmosphere operators in the algebra will be determined by scrambling modes of the modular Hamiltonian [67]. Moreover, these modes produce null shifts along the horizon of the background black hole [1]. This could allow for a wormhole teleportation protocol for the probe black hole, seen as a diary. It might be worth studying the algebraic structure of such a protocol explicitly with an SYK model dual to a near-AdS\({}_{2}\) space, as first proposed in [68].
###### Acknowledgements.
We would like to thank Shadi Ali Ahmad, Dio Anninos, Damian Galante, Stefan Hollands, Ro Jefferson, Andrew Rolph, Sirui Shuai, Andrew Svesko, Eleanor Harris, and Yixu Wang for useful discussions on centaur spacetimes and von Neumann algebras, and especially Manus Visser for early collaboration. SEAG thanks the University of Amsterdam and the Delta Institute for Theoretical Physics for their hospitality and support during different stages of this project. EB also wants to thank the CERN-TH for their hospitality during the preparation of this paper. The work of SEAG is partially supported by the FWO Research Project G0H9318N and the inter-university project iBOF/21/084. The work of EB is partially supported by the Erasmus+ Traineeship programme and the INFN Iniziativa Specifica String Theory and Fundamental Interactions. RE is supported by the Dushi Zhanxxiang Fellowship and acknowledges a Shuimu Scholarship as part of the "Shuimu Tsinghua Scholar" Program.